How We Eliminated Redundant DB Queries on Every tRPC Request
Release: v1.0.45 · Performance
The Problem
Every tRPC route in the application is protected by orgProcedure, a base procedure that verifies the caller is a member of the relevant organisation before executing any business logic. This is correct and necessary — but the implementation had a hidden cost.
On each invocation, orgProcedure ran the following query unconditionally:
```typescript
// src/lib/trpc/trpc.ts (before fix)
db.select().from(orgMembers).where(
  and(
    eq(orgMembers.userId, ctx.session.user.id),
    eq(orgMembers.orgId, input.orgId)
  )
)
```
This meant that for every tRPC call — including simple, non-mutating reads like dashboard.stats, transactions.list, and availableTaxYears — the application paid a full database round-trip just to confirm something that was already true when the user signed in.
Why This Matters
The dashboard page fires 3 or more tRPC calls simultaneously on load. Each of those calls independently queried the orgMembers table. There was no caching, no memoization, and no deduplication at any layer. The session already contained the user's identity, but org membership was re-verified from scratch every single time.
On a page with 5 concurrent tRPC calls, that's 5 unnecessary DB queries per page load — queries that return the same result every time.
The Fix
The solution is to cache the org membership lookup so that repeated calls within the same session (or same request) reuse a previously confirmed result rather than hitting the database again.
Three viable strategies were identified:
Option 1 — Short-lived In-Memory Cache (Recommended for most deployments)
A Map keyed by userId:orgId with a configurable TTL (e.g. 60 seconds) is maintained in the tRPC context layer. When orgProcedure runs:
- It checks the cache for an entry matching `userId:orgId`.
- If a fresh entry exists, it is used immediately — no DB query.
- If no entry exists or the entry has expired, the DB is queried and the result is written to the cache.
```typescript
// Conceptual example
const CACHE_TTL_MS = 60_000;
const membershipCache = new Map<string, { result: boolean; expiresAt: number }>();

function getCachedMembership(userId: string, orgId: string): boolean | null {
  const key = `${userId}:${orgId}`;
  const cached = membershipCache.get(key);
  if (cached && cached.expiresAt > Date.now()) return cached.result; // fresh hit
  return null; // cache miss — query the DB, then call setCachedMembership
}

function setCachedMembership(userId: string, orgId: string, result: boolean): void {
  membershipCache.set(`${userId}:${orgId}`, {
    result,
    expiresAt: Date.now() + CACHE_TTL_MS,
  });
}
```
Even a 5-second TTL eliminates the bulk of redundant queries during a single page load.
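Wired into the membership check, the lookup becomes a cache check with a DB fallback. A minimal self-contained sketch follows; `queryMembershipFromDb` is a stub standing in for the real async Drizzle query (kept synchronous here for brevity), and the counter exists only to demonstrate the deduplication:

```typescript
// Illustrative sketch: a TTL cache in front of the membership query.
const TTL_MS = 60_000;
const ttlCache = new Map<string, { result: boolean; expiresAt: number }>();

let dbQueries = 0; // counter just to demonstrate deduplication
function queryMembershipFromDb(userId: string, orgId: string): boolean {
  dbQueries++;
  return true; // pretend the user is a member
}

function checkMembership(userId: string, orgId: string): boolean {
  const key = `${userId}:${orgId}`;
  const hit = ttlCache.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.result; // no DB round-trip
  const result = queryMembershipFromDb(userId, orgId);
  ttlCache.set(key, { result, expiresAt: Date.now() + TTL_MS });
  return result;
}

// Five tRPC calls on one page load: only the first touches the DB.
for (let i = 0; i < 5; i++) checkMembership('user-1', 'org-1');
console.log(dbQueries); // 1
```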
Option 2 — Per-Request Memoization with React's `cache`
React's `cache` function (available in the Next.js App Router) deduplicates identical calls within a single server render pass. This is a zero-TTL approach: the memo lives only for the duration of the current request, so it can never serve stale membership data. (Next.js `unstable_cache` is the wrong tool here — it persists results in the shared Data Cache across requests rather than scoping them to one request.)

```typescript
import { cache } from 'react';

const getOrgMembership = cache(
  async (userId: string, orgId: string) =>
    db.select().from(orgMembers).where(
      and(eq(orgMembers.userId, userId), eq(orgMembers.orgId, orgId))
    )
);
```
Option 3 — Embed Org Membership in the JWT Session
For teams willing to accept slightly larger session tokens, org membership can be written into the JWT at sign-in time and validated without any DB lookup whatsoever. This is the highest-performance option and removes the DB dependency from the hot path entirely.
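A minimal sketch of the idea (the token shape and helper names are hypothetical; in an Auth.js-style setup this logic would live in the jwt and session callbacks):

```typescript
// Hypothetical sketch: membership baked into the session token at sign-in.
interface SessionToken {
  userId: string;
  orgIds: string[]; // written once at sign-in, then trusted until expiry
}

// At sign-in: one DB query to fetch memberships, then never again this session.
function buildToken(userId: string, orgIdsFromDb: string[]): SessionToken {
  return { userId, orgIds: orgIdsFromDb };
}

// In orgProcedure: a pure in-memory check, no DB round-trip.
function isMember(token: SessionToken, orgId: string): boolean {
  return token.orgIds.includes(orgId);
}

const token = buildToken('user-1', ['org-1', 'org-2']);
console.log(isMember(token, 'org-1')); // true
console.log(isMember(token, 'org-9')); // false
```

The trade-off: revoking a membership only takes effect when the token is refreshed or expires, so this option pairs best with short token lifetimes or an explicit invalidation path if memberships change often.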
Measured Impact
| Page / Scenario | DB membership queries (before) | DB membership queries (after) |
|---|---|---|
| Dashboard (3 tRPC calls) | 3 | 1 (in-memory) / 0 (JWT) |
| Transaction list + stats (5 tRPC calls) | 5 | 1 (in-memory) / 0 (JWT) |
| Single API call | 1 | 1 (first call) / 0 (subsequent) |
The saving compounds with usage — heavier pages and power users (who trigger more tRPC calls per session) benefit the most.
Affected File
src/lib/trpc/trpc.ts
Summary
This change is a pure performance improvement with no user-visible behaviour change. Org membership enforcement remains in place on every tRPC route — it is simply no longer re-queried from the database on every single call. The result is fewer DB connections under load, lower latency on data-heavy pages, and better headroom for scaling.