Making Tax Digital · Updated March 11, 2026

ERR-24: Closing the Silent Failure Gap in Sentry Error Alerting

Version: 1.0.419
Category: Error Monitoring

Background

The MTD ITSA platform uses Sentry for error tracking and a Slack webhook (SLACK_INCIDENT_WEBHOOK_URL) for on-call incident notification. The existing captureError() helper function explicitly sends P0 and P1 severity events to the Slack on-call channel, providing immediate notification for known, instrumented error paths.
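To illustrate the existing path, the following is a hypothetical sketch of a captureError() helper along these lines. The severity names, function shape, and Slack payload are assumptions for illustration, not the platform's actual code; only the environment variable name comes from the source.

```typescript
// Hypothetical sketch of a captureError() helper (illustrative only).
type Severity = 'P0' | 'P1' | 'P2' | 'P3';

// Only P0/P1 events page the on-call channel.
function shouldNotifyOnCall(severity: Severity): boolean {
  return severity === 'P0' || severity === 'P1';
}

async function captureError(err: Error, severity: Severity): Promise<void> {
  // Always record the event in Sentry, e.g.:
  // Sentry.captureException(err, { tags: { severity } });

  // Additionally notify the on-call Slack channel for high-severity events.
  const webhookUrl = process.env.SLACK_INCIDENT_WEBHOOK_URL;
  if (shouldNotifyOnCall(severity) && webhookUrl) {
    await fetch(webhookUrl, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ text: `[${severity}] ${err.message}` }),
    });
  }
}
```

The key property is that the Slack notification happens only on this explicit code path; any error that never reaches captureError() never reaches Slack.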

However, an audit identified a meaningful gap: unhandled exceptions that never pass through captureError() were captured silently in the Sentry dashboard with no corresponding alert. This meant an on-call engineer would discover these failures only by manually checking Sentry, rather than through a proactive notification.

What Was Missing

No Sentry Alert Rules

There were no Sentry alert rule definitions in the codebase — neither a sentry.yml configuration file nor programmatic alert rule setup. As a result, the following error categories produced no on-call notification:

Error Type                              Example
React render errors                     Unhandled exception in a component tree
Next.js route handler crashes           Uncaught error in an API route or server action
tRPC framework-level errors             Middleware or procedure-level failures not caught by app code
Unhandled promise rejections (client)   Async operations without a .catch() handler

No Unhandled Rejection Hooks

sentry.server.config.ts contained no OnUnhandledRejection integration and no beforeSendTransaction hook, leaving server-side promise rejections and transaction-level failures outside the alerting perimeter. Client-side unhandled rejections had the same gap: nothing routed them into an alerting path.
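For context, such a hook wraps Node's process-level `unhandledRejection` event. The sketch below is illustrative of the mechanism only; it is not the Sentry SDK's actual implementation, and both function names are invented for this example.

```typescript
// Illustrative sketch of what an unhandled-rejection hook does under the
// hood: listen on Node's process-level event and turn the rejection
// reason into a reportable message. NOT the Sentry SDK's internals.

// Normalize a rejection reason (which may be any value) into a message.
function formatRejectionReason(reason: unknown): string {
  return reason instanceof Error ? reason.message : String(reason);
}

// Register a listener that forwards every unhandled rejection to `report`
// (in a real setup, `report` would call Sentry.captureException).
function registerRejectionHook(report: (message: string) => void): void {
  process.on('unhandledRejection', (reason) => {
    report(formatRejectionReason(reason));
  });
}
```

Without such a listener, a rejected promise with no .catch() handler produces, at most, a process warning and never enters the error-tracking pipeline.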

Split Alerting Surface

Because Sentry's Slack integration was not pointed at the same channel as SLACK_INCIDENT_WEBHOOK_URL, teams were required to monitor two separate notification surfaces to achieve complete incident visibility.

Recommended Configuration

1. Sentry Alert Rules

Configure the following alert rules in the Sentry UI or programmatically via the Sentry API (sentry.properties configures the Sentry CLI, not alert rules):

New Issue Alert

Condition:  A new issue is created
Environment: production
Action:     Notify via Slack → #on-call

Error Rate Alert

Condition:  Number of events > 10 in 1 minute
Environment: production
Action:     Notify via Slack → #on-call

Transaction Failure Rate Alert

Condition:  Failure rate > 5% for [critical transactions]
Environment: production
Action:     Notify via Slack → #on-call

2. Sentry–Slack Integration

Connect the Sentry Slack integration to the same channel used by SLACK_INCIDENT_WEBHOOK_URL. This ensures:

  • captureError()-driven P0/P1 alerts and Sentry rule-based alerts appear in the same channel.
  • On-call engineers have a single place to triage production incidents regardless of how the error was captured.

3. Unhandled Rejection Hooks in sentry.server.config.ts

Consider adding the following hooks to improve coverage (shown with the v7-style class integration; newer SDK versions expose the functional equivalent, Sentry.onUnhandledRejectionIntegration({ mode: 'strict' })):

// sentry.server.config.ts
import * as Sentry from '@sentry/nextjs';

Sentry.init({
  // ... existing config (DSN, environment, etc.) ...
  beforeSend(event) {
    // Filter or enrich error events before they are sent
    return event;
  },
  beforeSendTransaction(event) {
    // Same opportunity for transaction events
    return event;
  },
  integrations: [
    // Captures unhandled promise rejections on the server;
    // 'strict' mode also surfaces the rejection as a hard failure
    new Sentry.Integrations.OnUnhandledRejection({ mode: 'strict' }),
  ],
});

Impact

Scenario                           Before ERR-24         After ERR-24 Remediation
captureError() called explicitly   ✅ Slack alert fired   ✅ Slack alert fired
React render error (unhandled)     ❌ Dashboard only      ✅ Sentry rule alert → Slack
Next.js route handler crash        ❌ Dashboard only      ✅ Sentry rule alert → Slack
tRPC framework error               ❌ Dashboard only      ✅ Sentry rule alert → Slack
Unhandled promise rejection        ❌ Not captured        ✅ Hook captures + alerts
Error rate spike (>10/min)         ❌ No notification     ✅ Rate-based alert → Slack

Related

  • Control: ERR-24
  • Affected file: sentry.server.config.ts
  • Environment variable: SLACK_INCIDENT_WEBHOOK_URL