SemiLayerDocs

Bridges Without batchRead

Joins require every target bridge to implement batchRead. The capability is declared by the bridge itself:

interface BridgeCapabilities {
  batchRead: boolean      // required for joins; default: false
  // ... other flags
}

A bridge that sets batchRead: false (or omits the capability entirely) opts out of being joined to. It can still serve primary reads for search, query, and similar operations — it just won't appear as a target lens in anyone else's include.
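
As a concrete sketch of the opt-in/opt-out distinction (the two bridge names below are hypothetical, and other capability flags are elided):

```typescript
// Shape from the declaration above; other flags elided.
interface BridgeCapabilities {
  batchRead: boolean; // required for joins; default: false
}

// Hypothetical write-only analytics bridge: it accepts writes but
// opts out of being a join target by leaving batchRead false.
const analyticsSinkCapabilities: BridgeCapabilities = {
  batchRead: false,
};

// Hypothetical Postgres-backed bridge: declares batchRead, so its
// lenses can appear as join targets in other callers' includes.
const postgresCapabilities: BridgeCapabilities = {
  batchRead: true,
};
```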

What happens at call time

When a caller includes a relation whose target lens lives on a batchRead: false bridge:

  • Primary read runs normally — the owning lens's bridge does its thing.
  • That specific relation returns empty on every parent row.
  • meta.includeErrors picks up one entry: { relation: 'X', reason: 'capabilities.batchRead' }.
  • Other relations still work. Fail-partial, same as access-rule denials.

const { rows, meta } = await beam.recipes.query({
  include: { reviews: true, ingredientRows: true },
})
// rows[i].reviews        → populated (reviews lens has batchRead)
// rows[i].ingredientRows → [] if that bridge lacks batchRead
// meta.includeErrors     → [{ relation: 'ingredientRows', reason: 'capabilities.batchRead' }]

No per-row fallback. No silent degradation. The UI layer sees an empty array and a clear error reason — it can decide whether to warn the user, hide the panel, or fail loudly.
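
A caller-side check along these lines is one way to act on that error reason (the helper and types below are a hypothetical sketch mirroring the meta shape shown above, not part of the SDK):

```typescript
// Shape mirroring the meta.includeErrors entries shown above.
interface IncludeError {
  relation: string;
  reason: string;
}

interface QueryMeta {
  includeErrors?: IncludeError[];
}

// Hypothetical helper: returns the relations that could not be
// joined, so the UI can warn, hide the panel, or fail loudly.
function unjoinableRelations(meta: QueryMeta): string[] {
  return (meta.includeErrors ?? [])
    .filter((e) => e.reason === "capabilities.batchRead")
    .map((e) => e.relation);
}

const meta: QueryMeta = {
  includeErrors: [{ relation: "ingredientRows", reason: "capabilities.batchRead" }],
};
// unjoinableRelations(meta) → ["ingredientRows"]
```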

Why not fall back to per-row SELECTs?

Because "seems slow" beats "mysteriously wrong" every time.

A silent N+1 per-row fallback would:

  • Make it impossible to reason about latency — "why is this 200ms today and 8s tomorrow?"
  • Hide capacity problems until the bridge melts under load.
  • Trick integrators into shipping features that'll break under production row counts.

The explicit opt-out + fail-partial surface keeps the performance contract visible. Bridge authors who can implement efficient batchRead should; those who genuinely can't should declare that and let callers plan accordingly.

When batchRead is impossible

Some stores genuinely can't serve an efficient { [fk]: { $in: [ids] } } batch lookup:

  • Log-structured / append-only event sinks — batch lookup implies a random-access index the store doesn't maintain.
  • Derived read models — some sources only expose aggregated views, not per-row primary-key access.
  • Write-only bridges — analytics sinks, observability platforms. These can take writes but aren't meant to be read from in production paths.

For these, batchRead: false is the honest answer. If a downstream user needs to "join to" one of these, the recommended pattern is a materialized mirror: ingest the useful subset into a bridge that does support batchRead (Postgres is the common landing spot), and point the relation at the mirror lens.
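
The mirror pattern can be sketched in miniature: ingest the useful subset keyed by the FK so a batched lookup becomes cheap. Here an in-memory Map stands in for the Postgres mirror, and all names are hypothetical:

```typescript
// A row from a hypothetical append-only event stream.
interface EventRow {
  recipeId: string;
  payload: string;
}

// The mirror: FK → rows, maintained as events are ingested.
const mirror = new Map<string, EventRow[]>();

function ingest(row: EventRow): void {
  const bucket = mirror.get(row.recipeId) ?? [];
  bucket.push(row);
  mirror.set(row.recipeId, bucket);
}

// The batched lookup a join needs: one call covers many parent ids,
// which the source store could not serve efficiently.
function batchLookup(ids: string[]): Map<string, EventRow[]> {
  return new Map(ids.map((id) => [id, mirror.get(id) ?? []]));
}

ingest({ recipeId: "r1", payload: "baked" });
ingest({ recipeId: "r1", payload: "served" });
// batchLookup(["r1", "r2"]).get("r1")!.length → 2
```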

Guidance for callers

If meta.includeErrors tells you a relation couldn't be joined:

  1. Check the bridge. The Console's source detail page shows each bridge's declared capabilities. If batchRead: false, that's by design.
  2. Consider a separate query. If you can narrow the child-side query by parent ids (e.g. by known FKs from the primary result), a second beam.<child>.query({ where: { fk: { $in: ids } } }) call is explicit and cheap.
  3. Ask the bridge author. Most bridges can support batchRead — if yours can't, it's usually a deliberate choice worth understanding.
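
The two-query pattern from step 2 can be sketched as follows. The real second call would be something like beam.ingredientRows.query({ where: { recipeId: { $in: ids } } }); here an in-memory stub stands in for it so the grouping step is concrete, and the row shapes are hypothetical:

```typescript
interface Recipe { id: string }
interface IngredientRow { recipeId: string; name: string }

const childStore: IngredientRow[] = [
  { recipeId: "r1", name: "flour" },
  { recipeId: "r2", name: "salt" },
  { recipeId: "r1", name: "water" },
];

// Stand-in for the child-side query with an $in predicate on the FK.
function queryChildren(ids: string[]): IngredientRow[] {
  return childStore.filter((row) => ids.includes(row.recipeId));
}

// Group the second result set back onto the parents by FK.
function attach(parents: Recipe[]): Map<string, IngredientRow[]> {
  const ids = parents.map((p) => p.id);
  const grouped = new Map<string, IngredientRow[]>(ids.map((id) => [id, []]));
  for (const row of queryChildren(ids)) grouped.get(row.recipeId)!.push(row);
  return grouped;
}

const byParent = attach([{ id: "r1" }, { id: "r2" }]);
// byParent.get("r1") → the flour and water rows
```

The key property is that it stays one batched round trip for the whole parent set, so the explicit pattern keeps the same performance shape a real join would have.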

Guidance for bridge authors

See the Bridge SDK for the full batchRead contract. Short version:

async batchRead(opts: BatchReadOptions): Promise<BatchReadResult> {
  // opts.where includes the FK predicate the planner added — handle $in.
  // opts.select, opts.orderBy, opts.limit — honor them if you can; the
  //   planner falls back to client-side slicing if you don't.
  // Return { rows, cursor? }.
}

If you implement it, declare capabilities.batchRead = true and you're instantly joinable.
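
A minimal sketch of what an implementation over an in-memory store might look like. The option and result shapes below are simplified stand-ins for the Bridge SDK types (see the SDK for the real contract), and only the $in predicate and limit are handled:

```typescript
interface Row { recipeId: string; name: string }

// Simplified stand-ins for the SDK's BatchReadOptions/BatchReadResult.
interface BatchReadOptions {
  where: { [field: string]: { $in: string[] } };
  limit?: number;
}

interface BatchReadResult { rows: Row[] }

const store: Row[] = [
  { recipeId: "r1", name: "flour" },
  { recipeId: "r2", name: "salt" },
];

async function batchRead(opts: BatchReadOptions): Promise<BatchReadResult> {
  // Handle the FK $in predicate the planner added.
  const [field, pred] = Object.entries(opts.where)[0];
  const wanted = new Set(pred.$in);
  let rows = store.filter((row) => wanted.has(row[field as keyof Row]));
  // Honor limit if given; a real bridge should push this into the store.
  if (opts.limit !== undefined) rows = rows.slice(0, opts.limit);
  return { rows };
}
```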