# Cross-Source Joins
Relations don't care which database each lens lives in. A recipes
lens on Postgres can include reviews living in a separate Postgres
instance, in MySQL, Mongo, or DynamoDB — the planner hash-joins the
pieces in memory and stitches the result.
## How the planner picks a strategy
Every include triggers the same planner. The planner:
- Runs the primary read against the owning lens's bridge.
- Collects the FK values (e.g. all `recipes.id` values in the result).
- Resolves the target lens's bridge and calls its `batchRead` with `{ [fk]: { $in: [ids] } }` + any caller-supplied `where`/`orderBy`/`limit`/`select`.
- Groups the target rows by FK in-process, slices to `limit` per parent, attaches them.
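The group-and-attach steps above can be sketched in TypeScript. This is an illustrative re-implementation, not the planner's actual code; the `stitch` helper name and the assumption that parent rows carry an `id` column are ours.

```typescript
type Row = Record<string, unknown>;

// Hash-join sketch: group child rows by FK, then attach a slice to each parent.
function stitch(
  parents: Row[],
  children: Row[],
  fk: string,    // child column pointing back at the parent, e.g. 'recipe_id'
  as: string,    // property name the relation is attached under
  limit?: number,
): Row[] {
  // Build the hash side: one pass over the batched child read.
  const groups = new Map<unknown, Row[]>();
  for (const child of children) {
    const bucket = groups.get(child[fk]) ?? [];
    bucket.push(child);
    groups.set(child[fk], bucket);
  }
  // Probe side: attach each parent's group, sliced to `limit` if given.
  return parents.map((p) => ({
    ...p,
    [as]: (groups.get(p.id) ?? []).slice(0, limit),
  }));
}

const recipes = [{ id: 1, name: 'soup' }, { id: 2, name: 'stew' }];
const reviews = [
  { id: 10, recipe_id: 1, stars: 5 },
  { id: 11, recipe_id: 1, stars: 3 },
  { id: 12, recipe_id: 2, stars: 4 },
];
const joined = stitch(recipes, reviews, 'recipe_id', 'reviews', 1);
// Each recipe carries at most 1 review; recipes with no reviews get [].
```

Note that nothing here depends on where `children` came from: the same stitch runs whether both reads hit one Postgres pool or two different stores.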
There's no relational JOIN pushdown to the source — every join is a
hash-join, same-source or cross-source. This keeps the same predictable
two-round-trip cost model regardless of topology.
## Bridge caching per request
If two relations target lenses on the same source, the planner
reuses one bridge connection for both `batchRead` calls — not two.
That means including three relations that all happen to live on
`postgres-main` is three `batchRead`s but one opened pool.
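A minimal sketch of that reuse, assuming a hypothetical `makeBridgeCache` helper; the doc does not show the planner's internals, so the names and shapes here are illustrative.

```typescript
type Bridge = { source: string; batchRead: (q: object) => Promise<unknown[]> };

// Per-request memoization: each source opens at most one bridge.
function makeBridgeCache(open: (source: string) => Bridge) {
  const cache = new Map<string, Bridge>();
  let opened = 0;
  return {
    resolve(source: string): Bridge {
      let bridge = cache.get(source);
      if (!bridge) {
        bridge = open(source);
        cache.set(source, bridge);
        opened++;
      }
      return bridge;
    },
    get openedCount() {
      return opened;
    },
  };
}

// Three relations all living on postgres-main: three batchRead calls,
// but the pool is opened exactly once.
const bridges = makeBridgeCache((source) => ({
  source,
  batchRead: async () => [],
}));
['reviews', 'ratings', 'photos'].forEach(() => {
  void bridges.resolve('postgres-main').batchRead({});
});
```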
## The `batchRead` requirement
Every bridge that wants to serve joined reads implements `batchRead`.
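A sketch of the shape such an implementation might take; the argument types and the in-memory bridge below are assumptions for illustration, not the documented contract.

```typescript
// Assumed filter grammar: the planner always passes the FK $in set,
// plus any caller-supplied orderBy/limit/select.
interface BatchReadArgs {
  where: { [column: string]: { $in: unknown[] } };
  orderBy?: Record<string, 'asc' | 'desc'>;
  limit?: number;
  select?: string[];
}

// A toy in-memory bridge implementing it, for illustration only.
function makeMemoryBridge(rows: Record<string, unknown>[]) {
  return {
    async batchRead({ where }: BatchReadArgs) {
      // Apply the FK $in filter natively; the planner handles grouping.
      const [[col, { $in }]] = Object.entries(where);
      const allowed = new Set($in);
      return rows.filter((r) => allowed.has(r[col]));
    },
  };
}
```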
When `capabilities.batchRead === false`:
- The primary query still runs normally.
- Each relation targeting that bridge returns empty on every parent row.
- `meta.includeErrors` picks up one entry: `{ relation: 'X', reason: 'capabilities.batchRead' }`.
No silent degradation, no client-side per-row SELECT fallback. Bridge
authors can opt out explicitly when the underlying store makes
batchRead impractical (log-structured sinks, append-only event
stores). See Bridges Without Support.
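The opt-out path might look roughly like this. The `planInclude` name and surrounding types are invented; only the `includeErrors` entry shape comes from the behavior described above.

```typescript
type IncludeError = { relation: string; reason: string };

function planInclude(
  bridge: { capabilities: { batchRead: boolean } },
  relation: string,
  parents: { id: unknown }[],
): { rows: unknown[][]; includeErrors: IncludeError[] } {
  if (!bridge.capabilities.batchRead) {
    // No per-row SELECT fallback: every parent gets an empty relation,
    // and meta records exactly one entry for the relation.
    return {
      rows: parents.map(() => []),
      includeErrors: [{ relation, reason: 'capabilities.batchRead' }],
    };
  }
  // ...normal batched path elided in this sketch...
  return { rows: parents.map(() => []), includeErrors: [] };
}

const out = planInclude(
  { capabilities: { batchRead: false } },
  'reviews',
  [{ id: 1 }, { id: 2 }],
);
// out.includeErrors carries the single explicit entry; no silent degradation.
```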
## Performance notes
- Two round-trips per relation — one for primary, one for the batched child read. Not N+1.
- 1000-row system ceiling per relation per parent. The planner clamps silently if `limit` exceeds this.
- Client-side slicing for `orderBy`/`limit` inside each parent group. The bridge returns all matching rows within the FK `$in` set; the planner sorts + slices per group. Predictable across bridges with different `ORDER BY` semantics.
- Memory sized for the result set, not the source cardinality. The planner holds at most `parents × limit` rows in process at any moment.
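The clamp and per-group slicing can be sketched as follows. The 1000-row value is from the note above; `SYSTEM_CEILING` and `slicePerGroup` are illustrative names, not the real internals.

```typescript
const SYSTEM_CEILING = 1000;

// Sort + slice one parent's group in process, clamping to the system ceiling,
// so bridges with different ORDER BY semantics behave identically.
function slicePerGroup<T>(
  group: T[],
  orderBy: (a: T, b: T) => number,
  limit: number,
): T[] {
  const effective = Math.min(limit, SYSTEM_CEILING); // silent clamp
  return [...group].sort(orderBy).slice(0, effective);
}

const group = [{ stars: 3 }, { stars: 5 }, { stars: 4 }];
const top2 = slicePerGroup(group, (a, b) => b.stars - a.stars, 2);
// → [{ stars: 5 }, { stars: 4 }]
```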
## Topologies
### Same DB, same source
Normal case. One bridge, two `batchRead`s on the same pool. Cheapest.
### Same DB type, different instances
Two pools, two round-trips on different connections. Same `batchRead`
code path — the planner treats them as independent bridges.
### Cross-bridge (Postgres → Mongo, MySQL → DynamoDB, etc.)
Works identically, as long as both bridges report `batchRead: true`.
Each runs its own native filter on the FK `$in` set; the planner stitches.
## Example — verify in the Console
The example stack at github.com/semilayer/example-stack
ships a working cross-bridge setup: recipes lives on one Postgres,
reviews lives on another. Query with `include: { reviews: ... }` and
the response carries stitched rows. Try it locally to see two bridge
connections light up in the Console's ingest-jobs view.
Next: Access Rules.