# How We Eliminated 80% of Redundant DB Queries on the Dashboard
Release: v1.0.15 · Category: Performance
## The Problem
Every time a user loaded the SaaS Factory dashboard, the platform was quietly doing far more database work than necessary.
The culprit was a single function: `getAccessibleProjectIds(userId)` in `src/lib/auth/project-access.ts`. This function answers a simple question — which projects can this user access? — by issuing two database queries:
- Fetch team memberships for the user
- Fetch projects associated with those memberships
That's fine in isolation. The problem was that every router called this function independently. The `pipeline`, `observability`, `crm`, `churn`, `support`, and `revenue-analytics` routers each had no awareness of what the others were doing, so on a single dashboard page load — where 5–8 parallel tRPC calls fire simultaneously — the platform was executing the same two queries 10–16 times for the same user and the same data.
This added 50–100ms of cold-start latency per page load, entirely from redundant network round-trips to the database.
## Why It Happened
This is a classic N+1 variant that emerges naturally in distributed router architectures. Each router is designed to be self-contained and independently authorized. That's the right design for correctness and safety — but without a shared request-scoped cache, every router re-derives the same access context from scratch.
In tRPC's parallel execution model, there's no built-in mechanism to share intermediate results across simultaneous procedure calls. Each call enters its own execution context, calls `getAccessibleProjectIds`, and waits for the database — completely unaware that four other calls are doing the exact same thing at the same millisecond.
## The Fix
The solution is per-request memoization — computing `getAccessibleProjectIds` once and reusing the result for the lifetime of the request.
In Next.js App Router environments, React's `cache()` utility provides exactly this primitive. It wraps a function so that repeated calls with the same arguments within the same server request return the cached result without re-executing the function body.
```ts
// src/lib/auth/project-access.ts (simplified)
import { cache } from 'react';
import { db } from '@/lib/db'; // import path illustrative

export const getAccessibleProjectIds = cache(
  async (userId: string): Promise<string[]> => {
    // The access lookup — now executed only once per request per userId.
    // (The second projects query is elided in this simplified version.)
    const memberships = await db.teamMembership.findMany({ where: { userId } });
    return memberships.map((m) => m.projectId);
  },
);
```
With this change, all routers continue to call `getAccessibleProjectIds(userId)` exactly as before. No router-level changes were required. The caching is transparent — the first call executes the queries, and every subsequent call within the same request returns the memoized result instantly.
For environments outside the Next.js App Router (or as a belt-and-suspenders fallback), a 1-second TTL in-memory cache keyed by `userId` provides similar deduplication for requests that arrive in rapid succession.
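Such a TTL cache can be sketched as follows. This is a minimal illustration, not the production code; `memoizeWithTtl` and the wiring are hypothetical names:

```typescript
// Minimal per-key TTL memoization (sketch). Storing the promise itself
// means concurrent callers share a single in-flight query.
type Entry<T> = { value: Promise<T>; expires: number };

function memoizeWithTtl<T>(
  fn: (key: string) => Promise<T>,
  ttlMs: number,
): (key: string) => Promise<T> {
  const cache = new Map<string, Entry<T>>();
  return (key: string) => {
    const now = Date.now();
    const hit = cache.get(key);
    if (hit && hit.expires > now) return hit.value; // fresh entry: reuse it
    const value = fn(key);
    cache.set(key, { value, expires: now + ttlMs });
    return value;
  };
}

// Usage sketch: wrap the uncached lookup with a 1-second TTL.
// const getAccessibleProjectIds = memoizeWithTtl(fetchAccessibleProjectIds, 1000);
```

A short TTL keeps the staleness window negligible for access-control data while still collapsing the burst of parallel calls from a single page load.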
## Results
| Metric | Before | After |
|---|---|---|
| DB queries per dashboard load | 10–16 | 2 |
| Redundant `getAccessibleProjectIds` calls | 5–8 | 1 |
| Queries eliminated per load | — | 8–14 |
| Estimated latency reduction | — | 50–100ms |
80%+ of project-access DB queries on the dashboard are now eliminated entirely.
## What Didn't Change
- Authorization logic is unchanged. No access control rules were modified. Users see exactly the same projects they were permitted to see before.
- Router code is unchanged. Each router still calls `getAccessibleProjectIds` — the optimization is fully encapsulated in `project-access.ts`.
- There is no cache invalidation complexity. React's `cache()` is scoped to a single server request — it resets automatically with every new request, so there's no risk of stale access data being served.
## Affected Routers
The following routers benefit from this fix without any code changes:
- `pipeline`
- `observability`
- `crm`
- `churn`
- `support`
- `revenue-analytics`
- Any future router that calls `getAccessibleProjectIds`
## Takeaway
Request-scoped memoization is a powerful and underused optimization in tRPC + Next.js architectures. When multiple routers share a common authorization primitive, wrapping that primitive in `cache()` is a one-line change that compounds across every parallel call on every page load — with zero risk to correctness and no changes to call sites.