Calmony Pay · Updated March 15, 2026

Security Update: Rate Limiting for tRPC & Redis-Backed Pay API — v1.0.30

Version: 1.0.30 · Security Control: SEC-15 · Severity: High


Background

A security review identified that two separate surfaces in the Calmony Pay API stack lacked effective rate limiting:

  1. The tRPC endpoint (/api/trpc) — handling all internal mutations and queries — had no rate limiting at all. An authenticated attacker could enumerate data or repeatedly trigger expensive operations without restriction.

  2. The pay API (/v1/*) — while it had a 100 req/s per-key limit — used an in-memory store. On Vercel's serverless infrastructure, each function instance maintains its own isolated memory, meaning the limit was applied per-instance rather than globally. Under load, many instances can run concurrently, effectively multiplying the true limit by the number of active instances.


What Changed

1. tRPC Handler — Per-User & Per-IP Rate Limiting

The tRPC route handler at src/app/api/trpc/[trpc]/route.ts now enforces rate limits on every incoming request before the tRPC procedure executes.

Limits are evaluated against two dimensions:

  • Per authenticated user — limits abuse from compromised or malicious accounts.
  • Per IP address — limits unauthenticated or pre-authentication probing.

Requests that exceed the limit receive a 429 Too Many Requests response before any tRPC logic runs.
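The keying logic described above can be sketched as follows. This is an illustrative stand-in, not the shipped implementation: the production limiter is Redis-backed, and the names here (`FixedWindowLimiter`, `checkRateLimit`) are assumptions for the example. The point it demonstrates is that an authenticated request is checked against both its user key and its IP key, and a failure on either key short-circuits to a 429 before any tRPC logic runs.

```typescript
type LimitResult = { success: boolean };

// Simplified in-memory fixed-window counter, used here only so the
// example is self-contained. The real limiter is backed by Redis.
class FixedWindowLimiter {
  private hits = new Map<string, { count: number; resetAt: number }>();
  constructor(private max: number, private windowMs: number) {}

  limit(key: string): LimitResult {
    const now = Date.now();
    const entry = this.hits.get(key);
    if (!entry || now >= entry.resetAt) {
      // New window for this key: reset the counter.
      this.hits.set(key, { count: 1, resetAt: now + this.windowMs });
      return { success: true };
    }
    entry.count += 1;
    return { success: entry.count <= this.max };
  }
}

const limiter = new FixedWindowLimiter(100, 1_000);

// Evaluate both dimensions: the user key when authenticated, always the IP key.
function checkRateLimit(userId: string | null, ip: string): { status: number } {
  const keys = userId ? [`user:${userId}`, `ip:${ip}`] : [`ip:${ip}`];
  for (const key of keys) {
    if (!limiter.limit(key).success) {
      return { status: 429 }; // rejected before any tRPC procedure executes
    }
  }
  return { status: 200 };
}
```

Checking the per-user key first means a single compromised account is throttled even when its requests arrive from many different IP addresses.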

2. Pay API (/v1/*) — Migrated from In-Memory to Upstash Redis

The rate limiter backing the pay API has been replaced with a Redis-backed implementation using Upstash.

Upstash provides a serverless-compatible Redis API over HTTPS, making it well-suited to Vercel deployments. All instances of the pay API now share a single, consistent rate limit counter per API key — the 100 req/s limit is now enforced globally rather than per-instance.
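A minimal sketch of the wiring, assuming the `@upstash/ratelimit` and `@upstash/redis` packages (the limiter configuration and function names in the actual codebase may differ):

```typescript
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

// Redis.fromEnv() reads UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN.
const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(100, "1 s"), // 100 req/s, shared globally
  prefix: "pay", // namespace the counters in Redis (name is an assumption)
});

// Hypothetical helper: every serverless instance that calls this with the
// same API key reads and increments the same counter in Redis.
export async function checkPayApiLimit(apiKey: string): Promise<boolean> {
  const { success } = await ratelimit.limit(apiKey);
  return success;
}
```

Because the counter lives in Redis rather than in process memory, spinning up additional function instances no longer multiplies the effective limit.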

Property      | Before                   | After
Storage       | In-process memory        | Upstash Redis
Scope         | Per serverless instance  | Global across all instances
Limit         | 100 req/s (per instance) | 100 req/s (enforced globally)
tRPC coverage | None                     | Per-user + per-IP

Configuration

The Redis-backed rate limiter requires two environment variables to be set in your deployment environment:

UPSTASH_REDIS_REST_URL=https://<your-upstash-endpoint>.upstash.io
UPSTASH_REDIS_REST_TOKEN=<your-upstash-token>

These values are available from your Upstash console after creating a Redis database.

Important: Without these variables set, the application will not be able to enforce distributed rate limits. Ensure they are configured before deploying v1.0.30 or later.
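One way to make the requirement above fail fast is a startup guard along these lines (a hedged sketch; the helper name `assertUpstashConfig` is an assumption, not part of the codebase):

```typescript
// Throws at startup if the Upstash credentials are missing, so distributed
// rate limiting is never silently disabled in a deployment.
function assertUpstashConfig(env: Record<string, string | undefined>): void {
  const required = ["UPSTASH_REDIS_REST_URL", "UPSTASH_REDIS_REST_TOKEN"];
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
}

// Typical call site: assertUpstashConfig(process.env);
```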


Known Limitations

  • The previous in-memory rate limiter provided no cross-instance protection in serverless environments. This was a known architectural limitation documented in rate-limit.ts.
  • Redis-backed rate limiting introduces a network call per request to Upstash. Upstash's HTTPS REST API is low-latency by design, but this dependency should be accounted for in p99 latency budgets.
  • Rate limit counters in Redis use a sliding window strategy. Because each request's check and increment involve a network round trip, a burst of concurrent requests arriving near a window boundary may all be admitted before any of their increments are visible, so short spikes can briefly exceed the nominal limit.
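The sliding-window behaviour noted above can be illustrated with a simplified in-memory sliding-window log (an assumption for illustration; Upstash's implementation runs inside Redis and differs in detail):

```typescript
// Sliding-window log: a request is allowed if fewer than `max` requests
// for the same key fall inside the trailing `windowMs` milliseconds.
class SlidingWindowLimiter {
  private timestamps = new Map<string, number[]>();
  constructor(private max: number, private windowMs: number) {}

  allow(key: string, now = Date.now()): boolean {
    const cutoff = now - this.windowMs;
    // Drop entries that have slid out of the trailing window.
    const log = (this.timestamps.get(key) ?? []).filter((t) => t > cutoff);
    if (log.length >= this.max) {
      this.timestamps.set(key, log);
      return false;
    }
    log.push(now);
    this.timestamps.set(key, log);
    return true;
  }
}
```

Unlike a fixed window, the limit here tracks a trailing interval, so a burst cannot double the rate by straddling a window reset; the residual risk in production is only the read-then-increment race described in the bullet above.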
