Agent Audit Log & Explainability Dashboard
The Agent Audit Log & Explainability Dashboard answers the core enterprise compliance question: "Why did the AI make this decision, what did it consider, and what did it change?"
Existing observability surfaces show what ran. This dashboard surfaces why decisions were made — every reasoning step, every tool call, every error — in a queryable, traceable UI.
Accessing the Dashboard
Global (cross-product)
Navigate to Audit Log in the account-level sidebar, or go directly to:
/dashboard/agent-audit
Product-scoped
To see audit data for a single product, navigate to that product and select Audit Log, or go to:
/dashboard/products/[id]/agent-audit
The product-scoped view pre-filters all queries to the specified product ID.
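As a sketch of how the scoping could work, a helper might derive the filter from the current route. The `AuditScope` shape and `scopeFromRoute` helper below are illustrative assumptions, not the dashboard's actual code; only the two route patterns come from this page:

```typescript
// Hypothetical filter object applied to every audit query.
interface AuditScope {
  productId?: string; // undefined → global (cross-product) scope
}

// Derives the audit scope from the pathname: the product-scoped route
// pre-filters to its [id] segment, the global route applies no filter.
function scopeFromRoute(pathname: string): AuditScope {
  const match = pathname.match(/^\/dashboard\/products\/([^/]+)\/agent-audit$/);
  return match ? { productId: match[1] } : {};
}
```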
Dashboard Layout
Summary Stat Cards
Four cards at the top of the page provide fleet-level (or product-level) at-a-glance metrics:
| Card | What it shows |
|---|---|
| Total Entries | Total log entries in the selected scope/time range |
| Tool Calls | Number of tool_call-level log entries |
| Reasoning Logs | Number of thinking-level log entries |
| Error Rate | Percentage of entries that are error-level |
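The four cards reduce to simple counts over the entries in scope. A minimal sketch, assuming a hypothetical `AuditEntry` shape (the real server-side aggregation runs in the database, not over an in-memory array):

```typescript
type LogLevel = "thinking" | "tool_call" | "tool_result" | "error" | "metric" | "info";

interface AuditEntry {
  level: LogLevel;
}

interface SummaryStats {
  totalEntries: number;
  toolCalls: number;
  reasoningLogs: number;
  errorRate: number; // percentage of entries that are error-level, 0–100
}

function summarise(entries: AuditEntry[]): SummaryStats {
  const count = (level: LogLevel) => entries.filter((e) => e.level === level).length;
  const total = entries.length;
  return {
    totalEntries: total,
    toolCalls: count("tool_call"),
    reasoningLogs: count("thinking"),
    errorRate: total === 0 ? 0 : (count("error") / total) * 100,
  };
}
```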
Agent Activity Distribution
A colour-coded proportional bar below the stat cards shows how log volume is distributed across agent types (e.g. Implementation, Research, Testing). Hover over a segment to see the agent name and exact count. The legend below the bar lists each agent type with its count and error count.
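The segment widths and legend data can be derived in a single pass over the log rows. The `LogRow` and `Segment` shapes below are illustrative assumptions, not the dashboard's actual types:

```typescript
interface LogRow {
  agentType: string;
  level: string;
}

interface Segment {
  agentType: string;
  count: number; // shown in the legend and on hover
  errorCount: number; // shown in the legend
  percent: number; // share of total volume → segment width in the bar
}

function distribution(rows: LogRow[]): Segment[] {
  const byAgent = new Map<string, { count: number; errorCount: number }>();
  for (const row of rows) {
    const s = byAgent.get(row.agentType) ?? { count: 0, errorCount: 0 };
    s.count += 1;
    if (row.level === "error") s.errorCount += 1;
    byAgent.set(row.agentType, s);
  }
  const total = rows.length || 1; // avoid division by zero on an empty scope
  return [...byAgent.entries()].map(([agentType, s]) => ({
    agentType,
    ...s,
    percent: (s.count / total) * 100,
  }));
}
```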
Three-Tab Main View
Audit Feed
A live, paginated log stream across all agents and products.
Filtering options:
- Search — full-text search across log messages (matches are highlighted inline)
- Level — filter by log level: `thinking`, `tool_call`, `tool_result`, `error`, `metric`, `info`
- Agent type — filter to a specific agent (e.g. Implementation, Research)
Each row shows:
- Level badge with icon and colour coding
- Log message (with search term highlighting)
- Tool name badge (for `tool_call`/`tool_result` entries)
- Agent type, project name, and timestamp
- Expand arrow to reveal full metadata inline
- Trace button — opens the Job Trace sheet for the associated agent job
Pagination is cursor-based (ISO timestamp), so the feed remains stable as new logs arrive.
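A sketch of why the timestamp cursor keeps the feed stable: each page returns entries strictly older than the cursor, and the last entry's timestamp becomes the next cursor, so newly arriving (newer) logs never shift the pages already fetched. The `FeedPage` shape and `fetchPage` callback are hypothetical, and a real client would make `fetchPage` an async network call; this sketch keeps it synchronous for clarity:

```typescript
// Hypothetical page shape; the real API may differ.
interface FeedPage {
  entries: { id: string; createdAt: string; message: string }[];
  nextCursor: string | null; // ISO timestamp of the oldest entry, or null when exhausted
}

// Walks the feed page by page. `fetchPage` is assumed to return entries
// strictly older than `cursor` (newest first), capped at `limit`.
function* paginateFeed(
  fetchPage: (cursor: string | null, limit: number) => FeedPage,
  limit = 50,
): Generator<FeedPage["entries"]> {
  let cursor: string | null = null;
  do {
    const page = fetchPage(cursor, limit);
    if (page.entries.length > 0) yield page.entries;
    cursor = page.nextCursor;
  } while (cursor !== null);
}
```

ISO-8601 timestamps compare correctly as plain strings, which is what makes them convenient cursor values.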
Error Audit
Filtered to error-level entries only, enriched with causal context:
- Stack trace (expandable)
- Pipeline trigger that caused the run
- Feature title being implemented
- Project the error occurred in
Use this tab to quickly identify which pipeline runs failed and trace the cause back to a specific feature or trigger.
Reasoning Chain
A chronological timeline of all thinking-level logs — the AI's inner monologue as it deliberated on a task.
- Filterable by agent type
- Shows the full thought text, agent type, project, and timestamp for each entry
- Useful for auditing why the AI chose a particular approach or rejected an alternative
Job Trace Sheet
Clicking Trace on any audit feed row (or any entry linked to an agent job) opens a side sheet that slides in from the right. The sheet displays the complete decision trace for that single agent job, organised into four phases:
| Phase | Log levels included | What it shows |
|---|---|---|
| Reasoning | thinking | Every deliberation step — why the agent was considering an action |
| Actions | tool_call + tool_result | Paired tool invocations with tool name, input, output, and duration |
| Decisions | metric, info | Conclusions reached, metrics recorded, and informational checkpoints |
| Errors | error | All errors with full stack traces |
The sheet also shows:
- Job summary — agent type, pipeline run ID, total duration, entry count
- Files written — list of files the agent wrote during this job, with line counts
- Pipeline run link — direct link to the associated pipeline run for further investigation
The sheet can be scrolled independently of the main dashboard and dismissed without losing your current feed position.
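The four-phase grouping is a pure function of log level. A minimal client-side sketch (the `Phase` names and `groupTrace` helper are hypothetical; the level-to-phase mapping follows the table above):

```typescript
type Phase = "reasoning" | "actions" | "decisions" | "errors";

// Level → phase, per the Job Trace phase table.
const PHASE_BY_LEVEL: Record<string, Phase> = {
  thinking: "reasoning",
  tool_call: "actions",
  tool_result: "actions",
  metric: "decisions",
  info: "decisions",
  error: "errors",
};

function groupTrace<T extends { level: string }>(entries: T[]): Record<Phase, T[]> {
  const groups: Record<Phase, T[]> = { reasoning: [], actions: [], decisions: [], errors: [] };
  for (const entry of entries) {
    const phase = PHASE_BY_LEVEL[entry.level];
    if (phase) groups[phase].push(entry); // unknown levels are skipped
  }
  return groups;
}
```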
Log Levels
| Level | Colour | Description |
|---|---|---|
| `thinking` | Purple | Agent reasoning / deliberation |
| `tool_call` | Blue | Agent invoking a tool |
| `tool_result` | Green | Result returned from a tool |
| `error` | Red | An error occurred |
| `metric` | Amber | A metric or measurement was recorded |
| `info` | Grey | General informational log |
Supported Agent Types
The dashboard recognises and labels the following agent types:
research · design · implementation · testing · release · marketing · documentation · security · uiux · dependency · migration · branch_sync · performance · revenue · compliance · curation · billing_audit · architect · mission_alignment · onboarding
Technical Notes
- No schema changes required. The dashboard reads entirely from the existing `agentLogs` table.
- Cursor-based pagination is used throughout the feed to prevent offset drift on high-volume live data.
- Server-side aggregation is used for summary stats; trace phase grouping is performed client-side on already-paginated data.
- The Job Trace sheet uses a `Sheet` (slide-over) rather than a modal dialog so that long traces can be scrolled without blocking the main view.