Features · SaaS Factory · Updated February 19, 2026

100× Faster Feature Reordering: How We Fixed an N+1 Query Bug

Release: v1.0.23 · Category: Performance / Database

The Problem

Every time you dragged and dropped features to reorder your backlog, the platform was quietly doing something expensive behind the scenes.

The feature.bulkUpdateOrder tRPC mutation — responsible for saving the new sort order after every drag-and-drop — was implemented with a straightforward for-loop:

// Before — one UPDATE query fired per feature
for (const [index, id] of orderedIds.entries()) {
  await db.update(features)
    .set({ sort_order: index })
    .where(eq(features.id, id));
}

This is the classic N+1 query problem. For a project with 100 features, a single reorder triggered 100 sequential round-trips to Postgres. Each round-trip carries roughly 5 ms of network + execution overhead, so reordering a large backlog could block the HTTP request for 500 ms or more — and that's before factoring in connection pool contention under concurrent load.
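The arithmetic above can be sketched as a simple latency model. The ~5 ms per-round-trip figure is the source's estimate; the helper names here are purely illustrative:

```typescript
// Rough latency model for the two strategies (assumed ~5 ms per round-trip).
const ROUND_TRIP_MS = 5;

// Looped UPDATEs: one round-trip per feature, so latency grows as O(N).
function loopedLatencyMs(featureCount: number): number {
  return featureCount * ROUND_TRIP_MS;
}

// Single batched statement: one round-trip regardless of N, so O(1).
function batchedLatencyMs(_featureCount: number): number {
  return ROUND_TRIP_MS;
}

console.log(loopedLatencyMs(100));  // 500
console.log(batchedLatencyMs(100)); // 5
```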

Why It Matters

Feature reordering is a high-frequency, interactive operation. Users expect drag-and-drop to feel instant. A 500 ms round-trip is far above the threshold of perceived responsiveness (~100 ms), and at scale — where multiple team members may be triaging simultaneously — the resulting pressure on the connection pool can degrade other database operations across the entire application.

The Fix

The solution is to collapse all N updates into a single SQL statement using a VALUES list joined back to the target table:

UPDATE features
SET sort_order = v.sort_order
FROM (
  VALUES
    ('feature-id-1', 0),
    ('feature-id-2', 1),
    ('feature-id-3', 2)
    -- ... all N rows in one shot
) AS v(id, sort_order)
WHERE features.id = v.id
  AND features.project_id = $projectId;

Postgres receives the entire payload in a single round-trip, resolves the join internally, and writes all rows atomically. The database does exactly the same amount of work, but the network overhead drops from N × ~5 ms to a flat ~5 ms — a constant cost no matter how many features are reordered.
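In application code, such a statement might be assembled roughly like the sketch below. The helper name and parameter layout are assumptions for illustration, not the actual implementation in src/lib/routers/feature.ts:

```typescript
// Build one parameterized UPDATE ... FROM (VALUES ...) statement covering all ids.
// $1 is reserved for projectId; each feature contributes one (id, sort_order) row,
// with the sort order inlined as a literal since it is just the array index.
// Assumes features.id is text; with a uuid column, v.id would need a ::uuid cast.
function buildBulkOrderUpdate(projectId: string, orderedIds: string[]) {
  const values = orderedIds
    .map((_, i) => `($${i + 2}, ${i})`)
    .join(", ");
  const text = `UPDATE features
SET sort_order = v.sort_order
FROM (VALUES ${values}) AS v(id, sort_order)
WHERE features.id = v.id
  AND features.project_id = $1`;
  return { text, params: [projectId, ...orderedIds] };
}

const { text, params } = buildBulkOrderUpdate("proj-1", ["a", "b", "c"]);
console.log(text);
console.log(params); // ["proj-1", "a", "b", "c"]
```

Keeping the ids as bind parameters rather than string-interpolating them preserves protection against SQL injection while still sending everything in one round-trip.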

The change is isolated to src/lib/routers/feature.ts and requires no schema migrations or API changes.

Results

| Features Reordered | Latency Before | Latency After | Speedup |
|---|---|---|---|
| 10 | ~50 ms | ~5 ms | 10× |
| 100 | ~500 ms | ~5 ms | 100× |
| 500 | ~2,500 ms | ~5 ms | 500× |

The mutation now has O(1) database latency with respect to the number of features — reordering a 500-feature backlog is just as fast as reordering 5.

Takeaway

N+1 query bugs are easy to introduce and easy to miss in code review because the loop logic looks natural and works fine at small scale. The pattern to watch for: any time you call a database write inside a for / forEach / map over user-supplied data, ask whether it can be expressed as a single batched statement instead. In most relational databases — and especially in Postgres — the answer is yes.
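The same review heuristic applies to other write shapes, not just updates. As one hypothetical example (table and column names are illustrative, not from this codebase), a loop of single-row INSERTs collapses into one multi-row INSERT the same way:

```typescript
// N single-row INSERTs in a loop -> one multi-row INSERT in a single round-trip.
// Each tag contributes one ($1, $n) row sharing the same feature_id parameter.
function buildBulkTagInsert(featureId: string, tags: string[]) {
  const rows = tags.map((_, i) => `($1, $${i + 2})`).join(", ");
  return {
    text: `INSERT INTO feature_tags (feature_id, tag) VALUES ${rows}`,
    params: [featureId, ...tags],
  };
}

console.log(buildBulkTagInsert("f1", ["ui", "backend"]).text);
```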

This fix shipped in v1.0.23 and is live for all projects on the platform.