# 10–50× Faster Queue Queries: How We Fixed an N+1 in v1.0.13
- Release: v1.0.13
- Category: Performance / Database
- File affected: src/lib/routers/queue.ts
## The Problem
Every call to the queue.getQueue API endpoint was silently hammering the database with a number of queries that grew multiplicatively with project size.
The original implementation worked like this:
```typescript
// Simplified pseudocode of the old pattern
const epics = await db.query('SELECT * FROM epics WHERE ...');

for (const epic of epics) {
  const features = await db.query(
    'SELECT * FROM features WHERE epic_id = ?', [epic.id]
  );

  for (const feature of features) {
    const subTasks = await db.query(
      'SELECT * FROM sub_tasks WHERE feature_id = ? AND status = "pending"',
      [feature.id]
    );
    const blocked = await isFeatureBlocked(feature); // another query inside
  }
}
```
For a project with 10 epics, 5 features each, and 5 sub-tasks per feature, this means:
- 1 query for epics
- 10 queries for features (one per epic)
- 50 queries for sub-tasks (one per feature)
- 50 calls to isFeatureBlocked(), each issuing its own query
Total: 111–151+ database round-trips for a single API call (1 + 10 + 50 + 50 = 111 at minimum, more when isFeatureBlocked() issues multiple queries per call). On a hosted Postgres instance with ~1 ms network latency per query, that's 150+ ms of pure latency tax before any business logic runs.
The isFeatureBlocked() helper in pipeline-helpers.ts compounded the problem: it deserialised a blockedByFeatureIds JSONB column and issued a fresh SELECT for each feature to check whether its blockers were resolved.
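To make the per-feature cost concrete, here is a hypothetical sketch of what that pattern looks like; the signature, row shapes, and db.query client are illustrative assumptions, not the actual code in pipeline-helpers.ts:

```typescript
// Hypothetical sketch of the old per-feature blocker check (the N+1 half
// of the problem). `db.query` and the row shapes are assumptions.
interface FeatureRow {
  id: number;
  blockedByFeatureIds: number[] | null; // deserialised from the JSONB column
}

type Db = { query: (sql: string, params: unknown[]) => Promise<{ id: number }[]> };

async function isFeatureBlocked(db: Db, feature: FeatureRow): Promise<boolean> {
  const blockerIds = feature.blockedByFeatureIds ?? [];
  if (blockerIds.length === 0) return false;

  // One fresh SELECT per feature: this is the query that multiplied.
  const done = await db.query(
    "SELECT id FROM features WHERE id = ANY($1) AND status = 'done'",
    [blockerIds],
  );
  return done.length < blockerIds.length; // blocked if any blocker is unresolved
}
```

Called inside the nested loop above, this helper alone adds one query per feature on every request.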
## The Fix
The solution is straightforward once the pattern is identified: push the work into the database, not the application loop.
### 1. Single JOIN query for the core data
Epics, features, and pending sub-tasks are now fetched in one shot using a SQL JOIN:
```sql
SELECT
  e.id AS epic_id,
  e.title AS epic_title,
  f.id AS feature_id,
  f.title AS feature_title,
  f.blocked_by_feature_ids,
  st.id AS sub_task_id,
  st.title AS sub_task_title
FROM epics e
JOIN features f ON f.epic_id = e.id
JOIN sub_tasks st ON st.feature_id = f.id
  AND st.status = 'pending'
WHERE e.project_id = $1
ORDER BY e.id, f.id, st.id;
```
This replaces the entire nested loop with a single round-trip.
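One consequence of a flat JOIN result is that the rows must be regrouped into the nested epic → feature → sub-task shape in application code. A minimal sketch of that regrouping, with row fields mirroring the SELECT aliases above (the output types are assumptions):

```typescript
// Regroup flat JOIN rows into nested epics → features → sub-tasks.
// The ORDER BY in the query guarantees rows for the same epic/feature
// arrive contiguously, but this grouping works regardless of order.
interface JoinRow {
  epic_id: number; epic_title: string;
  feature_id: number; feature_title: string;
  sub_task_id: number; sub_task_title: string;
}
interface SubTask { id: number; title: string }
interface Feature { id: number; title: string; subTasks: SubTask[] }
interface Epic { id: number; title: string; features: Feature[] }

function groupRows(rows: JoinRow[]): Epic[] {
  const epics = new Map<number, Epic>();
  for (const r of rows) {
    let epic = epics.get(r.epic_id);
    if (!epic) {
      epic = { id: r.epic_id, title: r.epic_title, features: [] };
      epics.set(r.epic_id, epic);
    }
    let feature = epic.features.find(f => f.id === r.feature_id);
    if (!feature) {
      feature = { id: r.feature_id, title: r.feature_title, subTasks: [] };
      epic.features.push(feature);
    }
    feature.subTasks.push({ id: r.sub_task_id, title: r.sub_task_title });
  }
  return [...epics.values()];
}
```

This pass is O(rows) in memory and CPU, which is cheap compared to the round-trips it replaces.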
### 2. Batched blocker resolution
Rather than calling isFeatureBlocked() once per feature, all blockedByFeatureIds values are collected from the result set and resolved together:
```typescript
// Collect all blocker IDs across every feature in one pass
const allBlockerIds = [...new Set(
  features.flatMap(f => f.blockedByFeatureIds ?? [])
)];

// One IN-clause query to check resolution status
const resolvedBlockers = allBlockerIds.length
  ? await db.query(
      'SELECT id FROM features WHERE id = ANY($1) AND status = \'done\'',
      [allBlockerIds]
    )
  : [];
const resolvedSet = new Set(resolvedBlockers.map(r => r.id));

// Apply blocker status without any further DB calls
const enriched = features.map(f => ({
  ...f,
  isBlocked: (f.blockedByFeatureIds ?? []).some(id => !resolvedSet.has(id)),
}));
```
This reduces blocker resolution from N queries (one per feature) to 1 query regardless of project size.
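The Set-based step at the end can be exercised in isolation. Here is a self-contained version with in-memory stand-ins for the feature rows and the resolved-blocker set (the stand-ins are illustrative, not the real query results):

```typescript
// Self-contained check of the Set-based blocker resolution logic:
// a feature is blocked if any of its blocker IDs is missing from
// the set of resolved ("done") feature IDs.
interface Feature { id: number; blockedByFeatureIds: number[] | null }

function markBlocked(features: Feature[], resolvedSet: Set<number>) {
  return features.map(f => ({
    ...f,
    isBlocked: (f.blockedByFeatureIds ?? []).some(id => !resolvedSet.has(id)),
  }));
}
```

Because membership checks on a Set are O(1), the enrichment pass stays linear in the number of features no matter how many blockers each one declares.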
## Results
| Project Size | Queries Before | Queries After | Speedup |
|---|---|---|---|
| 10 epics × 5 features × 5 sub-tasks | 151+ | 2–3 | ~50× |
| 5 epics × 3 features × 3 sub-tasks | 46+ | 2–3 | ~15× |
| 2 epics × 2 features × 2 sub-tasks | 13+ | 2–3 | ~5× |
The improvement scales with project size: the bigger the project, the greater the benefit.
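The scaling follows directly from the query-count arithmetic: before the fix, round-trips grow with the number of epics and features, while after the fix they stay constant. A small model of that arithmetic (counting one blocker query per feature, the minimum; the table's "+" figures include the extra queries isFeatureBlocked() could issue):

```typescript
// Model the minimum query counts before and after the fix.
// "Before": 1 epics query + E feature queries + E*F sub-task queries
//           + E*F isFeatureBlocked() queries (at least one each).
function queriesBefore(epics: number, featuresPerEpic: number): number {
  const features = epics * featuresPerEpic;
  return 1 + epics + features + features;
}

// "After": 1 JOIN query + at most 1 batched blocker query,
// regardless of project size.
function queriesAfter(): number {
  return 2;
}
```

For the 10 × 5 × 5 project this gives the 111-query floor quoted above, against a constant 2 after the fix.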
## Files Changed
| File | Change |
|---|---|
| src/lib/routers/queue.ts | Replaced nested for-loop with a single SQL JOIN; removed per-iteration DB calls |
| src/lib/pipeline-helpers.ts | Refactored isFeatureBlocked() to support batch resolution via IN-clause |
## Upgrading
This is a drop-in performance improvement. No API contract changes, no schema migrations required. Projects of all sizes benefit automatically upon deployment.