# Agent Audit Log & Explainability Dashboard
The Agent Audit Log surfaces the complete decision history of every AI agent that has run on the platform. It answers the enterprise compliance question: "Why did the AI make this decision, what did it consider, and what did it change?"
Unlike general observability surfaces that show what ran, the Audit Log is focused on why — exposing reasoning chains, causal attribution, tool use, and error context in a single queryable interface.
## Accessing the Audit Log
There are two entry points:
| URL | Scope |
|---|---|
| `/dashboard/agent-audit` | All products — fleet-wide view |
| `/dashboard/products/[id]/agent-audit` | Single product — scoped to one project |
The global view is also reachable from the account-level sidebar under Audit Log (between Agent Health and ROI Dashboard).
## Dashboard Overview
### Summary Cards

The top of the dashboard displays four fleet-level metrics for the selected time range:

- Total Entries — All log entries in the period
- Tool Calls — Count of `tool_call`-level entries
- Reasoning Logs — Count of `thinking`-level entries
- Error Rate — Errors as a percentage of total entries (highlighted red if above 10%)
Below the cards, an Agent Activity Distribution bar shows the proportional share of log volume by agent type (up to 8 agent types displayed).
### Time Range Filter
All data is scoped to a selected time range: 1h, 6h, 24h, 7d, or 30d.
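As a minimal sketch of how these tokens could translate into a query cutoff (the helper name and mapping are illustrative, not the platform's actual implementation):

```ts
// Sketch: mapping the dashboard's time-range tokens to a cutoff timestamp.
// The token set comes from the docs; the helper itself is an assumption.
type TimeRange = "1h" | "6h" | "24h" | "7d" | "30d";

const RANGE_MS: Record<TimeRange, number> = {
  "1h": 60 * 60 * 1000,
  "6h": 6 * 60 * 60 * 1000,
  "24h": 24 * 60 * 60 * 1000,
  "7d": 7 * 24 * 60 * 60 * 1000,
  "30d": 30 * 24 * 60 * 60 * 1000,
};

// Returns the earliest timestamp included in the selected range.
function rangeStart(range: TimeRange, now: Date = new Date()): Date {
  return new Date(now.getTime() - RANGE_MS[range]);
}
```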
### Tabs
#### Audit Feed
A paginated, searchable stream of all agent log entries.
- Search — Full-text filter across log messages. Matching terms are highlighted inline.
- Level filter — Narrow to a specific log level: `info`, `tool_call`, `tool_result`, `thinking`, `error`, or `metric`.
- Agent filter — Narrow to a specific agent type.
- Expand rows — Click any row to expand inline metadata.
- Trace button — Any entry associated with an agent job shows a Trace button. Clicking it opens the Job Trace Side Sheet for that job.
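The inline term highlighting can be sketched roughly as follows (the function is illustrative; the dashboard's actual rendering may differ):

```ts
// Sketch: case-insensitive inline highlighting of search matches,
// as the Audit Feed does for log messages. Wrapper tag is an assumption.
function highlight(message: string, term: string): string {
  if (!term) return message;
  // Escape regex metacharacters so the search term is matched literally.
  const escaped = term.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  return message.replace(new RegExp(`(${escaped})`, "gi"), "<mark>$1</mark>");
}
```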
#### Log Levels
| Level | Colour | Meaning |
|---|---|---|
| `info` | Grey | General informational message |
| `tool_call` | Blue | Agent invoked a tool |
| `tool_result` | Green | Tool returned a result |
| `thinking` | Purple | Agent inner reasoning/monologue |
| `error` | Red | An error occurred |
| `metric` | Orange | A measured outcome or performance metric |
#### Error Audit
Filtered to error-level entries only. Each row can be expanded to show:
- Stack trace
- Causal context: the pipeline run that triggered the error, the feature title, and the project
Use this tab for incident investigation and compliance reporting.
#### Reasoning Chain
A chronological timeline of all thinking-level logs — the AI's inner monologue as it worked through decisions.
- Filterable by agent type
- Ordered oldest-to-newest to reconstruct a causal narrative
- Useful for auditing why a particular code change or decision was made
## Job Trace Side Sheet
Opened by clicking the Trace button on any log entry in the Audit Feed.
The side sheet (non-modal, scrollable) shows the complete decision trace for a single agent job, grouped into four phases:
| Phase | Contents |
|---|---|
| Reasoning | thinking-level log entries — the agent's step-by-step deliberation |
| Actions | tool_call / tool_result pairs with measured durations |
| Decisions | metric-level entries — outcomes and measured results |
| Errors | error-level entries with full context |
The trace also lists files written during the job, including line counts and causal attribution back to the log entries that produced each file.
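The phase grouping can be sketched as a filter over a flat entry list; the `LogEntry` shape here is assumed, and the router's real grouping (e.g. pairing `tool_call` with `tool_result`) may be more involved:

```ts
// Sketch: grouping a job's flat log entries into the four trace phases.
type LogLevel = "info" | "tool_call" | "tool_result" | "thinking" | "error" | "metric";

interface LogEntry {
  level: LogLevel;
  message: string;
  createdAt: string; // ISO timestamp
}

function groupTrace(entries: LogEntry[]) {
  return {
    reasoning: entries.filter((e) => e.level === "thinking"),
    actions: entries.filter((e) => e.level === "tool_call" || e.level === "tool_result"),
    decisions: entries.filter((e) => e.level === "metric"),
    errors: entries.filter((e) => e.level === "error"),
  };
}
```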
## tRPC API
The dashboard is powered by the `agentAudit` router. All procedures are available under `trpc.agentAudit.*`.
### `agentAudit.listAuditFeed`
Returns a paginated feed of log entries.
```ts
trpc.agentAudit.listAuditFeed.useQuery({
  projectId?: string,   // Omit for cross-product
  level?: LogLevel,     // 'info' | 'tool_call' | 'tool_result' | 'thinking' | 'error' | 'metric'
  agentType?: string,
  timeRange: '1h' | '6h' | '24h' | '7d' | '30d',
  cursor?: string,      // ISO timestamp cursor for next page
  limit?: number,       // Default 50
})
```

Returns: `{ items: LogEntry[], nextCursor: string | null }`
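A client can drain the feed by following `nextCursor` until it is `null`. A minimal framework-agnostic sketch, where `fetchPage` stands in for a call to the procedure (the helper and its names are assumptions):

```ts
// Sketch: consuming a cursor-paginated feed until exhausted.
interface Page<T> {
  items: T[];
  nextCursor: string | null;
}

async function fetchAll<T>(
  fetchPage: (cursor?: string) => Promise<Page<T>>
): Promise<T[]> {
  const all: T[] = [];
  let cursor: string | undefined;
  do {
    const page = await fetchPage(cursor); // first call passes no cursor
    all.push(...page.items);
    cursor = page.nextCursor ?? undefined;
  } while (cursor);
  return all;
}
```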
### `agentAudit.getJobTrace`
Returns the full grouped decision trace for a single agent job.
```ts
trpc.agentAudit.getJobTrace.useQuery({
  agentJobId: string,
})
```

Returns: `{ reasoning: LogEntry[], actions: ActionPair[], decisions: LogEntry[], errors: LogEntry[], filesWritten: FileWrite[] }`
### `agentAudit.getAuditSummary`
Returns fleet-level or project-level aggregate stats.
```ts
trpc.agentAudit.getAuditSummary.useQuery({
  projectId?: string,
  timeRange: TimeRange,
})
```

Returns: `{ totalEntries: number, toolCalls: number, thinkingLogs: number, errorRate: number, agentDistribution: { agentType: string, count: number, pct: number }[] }`
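The summary-card figures, including the 10% error-rate highlight threshold mentioned earlier, can be derived from raw counts roughly like this (field names are illustrative assumptions):

```ts
// Sketch: computing summary-card values from raw per-level counts.
interface LevelCounts {
  total: number;
  toolCalls: number;
  thinkingLogs: number;
  errors: number;
}

function summarize(c: LevelCounts) {
  // Error rate as a percentage of all entries; guard against an empty period.
  const errorRate = c.total === 0 ? 0 : (c.errors / c.total) * 100;
  return {
    totalEntries: c.total,
    toolCalls: c.toolCalls,
    thinkingLogs: c.thinkingLogs,
    errorRate,
    highlightError: errorRate > 10, // red highlight threshold from the docs
  };
}
```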
### `agentAudit.listErrorAudit`
Returns error-only entries enriched with causal context.
```ts
trpc.agentAudit.listErrorAudit.useQuery({
  projectId?: string,
  timeRange: TimeRange,
  cursor?: string,
  limit?: number,
})
```
### `agentAudit.listReasoningChain`
Returns all thinking-level logs in chronological order.
```ts
trpc.agentAudit.listReasoningChain.useQuery({
  projectId?: string,
  agentType?: string,
  timeRange: TimeRange,
  cursor?: string,
  limit?: number,
})
```
## Implementation Notes
- No new database tables. All data is read from the existing `agentLogs` table — the audit log is a new view over data that was always captured.
- Cursor-based pagination uses ISO timestamp values as cursors to avoid offset drift as new entries are written to a live feed.
- The Job Trace side sheet is implemented as a `Sheet` (not a `Dialog`) so users can scroll long decision traces without a blocking modal overlay.
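To illustrate why a timestamp cursor avoids offset drift: each page asks for entries strictly older than the last timestamp already seen, so new rows inserted at the head of a live feed cannot shift the window the way an `OFFSET` would. A minimal in-memory sketch (shapes and names are assumptions, not the router's actual query):

```ts
// Sketch: timestamp-cursor pagination over a newest-first feed.
interface Entry {
  createdAt: string; // ISO timestamp; lexicographic order matches time order
}

function pageAfter<T extends Entry>(rows: T[], cursor: string | undefined, limit: number) {
  const sorted = [...rows].sort((a, b) => b.createdAt.localeCompare(a.createdAt)); // newest first
  const windowed = cursor ? sorted.filter((r) => r.createdAt < cursor) : sorted;
  const items = windowed.slice(0, limit);
  // A full page implies more may remain; hand back the oldest timestamp seen.
  const nextCursor = items.length === limit ? items[items.length - 1].createdAt : null;
  return { items, nextCursor };
}
```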