SemiLayerDocs

Query — Pagination & streaming

query() returns at most limit rows per call. For anything bigger you have three options: offset pagination (fine for admin UIs), cursor pagination (right for deep scroll), and stream.query() (right for exports).

offset / limit

The obvious shape. Pick a page size, advance the offset.

const pageIndex = 2   // zero-based page to fetch

const page = await beam.orders.query({
  where:   { status: 'shipped' },
  orderBy: { field: 'placed_at', dir: 'desc' },
  limit:   20,
  offset:  pageIndex * 20,
})

When to use it: admin tables with a page picker, where the user is unlikely to scroll past page 5.

When NOT to: data that changes between requests (rows shift under you; some rows appear twice or get skipped), or pages ≥ ~1000 (offset gets slower with depth — the bridge still has to count-and-discard everything above).

Cursor

Opaque, stable, deep-scroll-safe.

// First page
const first = await beam.orders.query({
  where:   { status: 'shipped' },
  orderBy: { field: 'placed_at', dir: 'desc' },
  limit:   20,
})

// Next page — pass the cursor from meta
const next = await beam.orders.query({
  where:   { status: 'shipped' },
  orderBy: { field: 'placed_at', dir: 'desc' },
  limit:   20,
  cursor:  first.meta.nextCursor,
})

// Terminate when meta.nextCursor is undefined

When to use it: any page-2-and-beyond scroll, infinite scroll lists, backend jobs that walk a whole table.

The cursor encodes the current sort position — so keep orderBy and where identical across calls, or the cursor is meaningless.
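The terminate-on-undefined loop above can be sketched generically. `walkAll` and `fetchPage` are illustrative names, not part of the beam API: `fetchPage` stands in for a beam.orders.query call whose where/orderBy stay fixed while only cursor varies between calls.

```typescript
// Shape of one page result, matching the query()/meta.nextCursor
// convention described above.
type Page<T> = { rows: T[]; meta: { nextCursor?: string } }

// Walk every page: start with no cursor, pass meta.nextCursor forward,
// stop when it comes back undefined.
async function walkAll<T>(
  fetchPage: (cursor?: string) => Promise<Page<T>>,
): Promise<T[]> {
  const all: T[] = []
  let cursor: string | undefined
  do {
    const page = await fetchPage(cursor)
    all.push(...page.rows)
    cursor = page.meta.nextCursor
  } while (cursor !== undefined)
  return all
}
```

Note that the helper never re-derives the position itself; the cursor is opaque and the only thing carried between calls.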

stream.query for large reads

For a full export or a multi-page walk, stream.query() is the simplest: one call, one WebSocket, N rows. The lens must grant streaming explicitly:

orders: {
  source: 'main-db',
  table: 'public.orders',
  fields: {
    id:          { type: 'number', primaryKey: true },
    customer_id: { type: 'number' },
    status:      { type: 'enum',   values: ['pending', 'shipped', 'delivered', 'cancelled'] },
    total_cents: { type: 'number' },
  },
  grants: {
    query:  'authenticated',
    stream: { query: 'authenticated' },   // streaming opts in separately
  },
}

When to use it: full-table walks, CSV exports, batch jobs that fan out to another system.

Protocol: one row frame per matching row, terminated by one done frame carrying { count, durationMs }. Errors arrive as error frames with a code — see HTTP & WebSocket for the full table.
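A minimal sketch of consuming that frame sequence. The frame kinds and the done payload follow the protocol described above, but the exact wire shape and client API are assumptions; `collectRows` is an illustrative name.

```typescript
// Assumed frame shapes: one 'row' per matching row, one terminal 'done'
// with { count, durationMs }, errors as 'error' with a code.
type Frame<T> =
  | { kind: 'row'; data: T }
  | { kind: 'done'; count: number; durationMs: number }
  | { kind: 'error'; code: string }

// Accumulate rows until the done frame; surface error frames as throws.
function collectRows<T>(frames: Iterable<Frame<T>>): { rows: T[]; count: number } {
  const rows: T[] = []
  for (const frame of frames) {
    if (frame.kind === 'row') rows.push(frame.data)
    else if (frame.kind === 'done') return { rows, count: frame.count }
    else throw new Error(`stream error: ${frame.code}`)
  }
  throw new Error('stream ended without a done frame')
}
```

A stream that closes without a done frame is treated as an error here, since done is the only legitimate terminator.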

Choosing

Need                                    Use
Admin UI with page links                offset + limit
Infinite scroll / deep pages            cursor
Export or batch job                     stream.query
Server-sent updates as rows change      stream.subscribe — different primitive, same lens

Gotchas

  • limit in stream.query is the total cap, not the per-batch size. batchSize is a hint to the server for how big each row frame burst should be — the server may ignore it.
  • The bridge closes the cursor when the stream ends. If you abort mid-stream, the bridge cleans up on its side — no leaked transactions.
  • grants.stream.query gates streaming separately from grants.query. A lens can expose one without the other.
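To make the first gotcha concrete, a toy sketch of how a total limit decomposes into row-frame bursts when the server happens to honor the batchSize hint. `planBursts` is illustrative, not part of the beam API.

```typescript
// limit caps the total rows across the whole stream; batchSize only
// shapes the bursts (and is merely a hint the server may ignore).
function planBursts(limit: number, batchSize: number): number[] {
  const bursts: number[] = []
  let remaining = limit
  while (remaining > 0) {
    const n = Math.min(batchSize, remaining)
    bursts.push(n)
    remaining -= n
  }
  return bursts
}
```

For example, limit 50 with batchSize 20 yields bursts of 20, 20, and 10: fifty rows in total, never sixty.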

Next: Recipes.