The Traces page is the main debugging surface — every trace your instrumented app sends lands here within a few seconds of ingest.
## List view

Columns:

- Timestamp — when the trace started
- Name — the trace name you passed to `trace()`
- Session — `session_id` if set, clickable to jump to the session view
- Duration — wall-clock span from start to last-span-end
- Tokens — prompt + completion totals across all `llm` spans
- Cost — computed from tokens × model price
- Status — `ok` / `error`
- Feedback — thumbs-up/down icon if feedback is attached
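The Cost column is derived from the token counts and the model's price. A minimal sketch of that arithmetic — the per-1K-token prices below are illustrative placeholders, not TruLayer's actual price table:

```python
# Illustrative per-1K-token prices in USD (placeholder values,
# not TruLayer's real pricing data).
MODEL_PRICES = {
    "gpt-4o": {"prompt": 0.0025, "completion": 0.01},
}

def trace_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Cost = tokens × model price, summed over prompt and completion."""
    price = MODEL_PRICES[model]
    return (prompt_tokens / 1000) * price["prompt"] + (
        completion_tokens / 1000
    ) * price["completion"]

# 2,000 prompt + 500 completion tokens on gpt-4o:
# 2 × 0.0025 + 0.5 × 0.01 = $0.01
cost = trace_cost("gpt-4o", 2000, 500)
```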
## Filters

All filters compose. Combine freely.

- Time range — inherits from the global picker; override here for fine-grained queries.
- Project — scope to a single project ID.
- Model — any `llm` span in the trace used this model.
- Error only — traces that ended with `setError()` or an uncaught exception.
- Metadata — any key-value filter; use `metadata.tier = "pro"` syntax.
- Tags — if you tag traces via `metadata`, filter by tag value.
- Search — full-text over trace names and input/output content (respects your scrub function — redacted fields aren’t searchable).
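Composed filters behave as a logical AND. A rough sketch of how the Error only and Metadata filters might combine over trace records (the record shape and helper are hypothetical, assumed here for illustration):

```python
def matches(trace: dict, *, model=None, error_only=False, metadata=None) -> bool:
    """Every supplied filter must hold — filters compose with AND."""
    if model is not None and trace.get("model") != model:
        return False
    if error_only and trace.get("status") != "error":
        return False
    # Metadata filter: each key-value pair must match, e.g. tier = "pro".
    for key, value in (metadata or {}).items():
        if trace.get("metadata", {}).get(key) != value:
            return False
    return True

traces = [
    {"model": "gpt-4o", "status": "error", "metadata": {"tier": "pro"}},
    {"model": "gpt-4o", "status": "ok", "metadata": {"tier": "free"}},
]
# Error only + metadata.tier = "pro" keeps only the first trace.
hits = [t for t in traces if matches(t, error_only=True, metadata={"tier": "pro"})]
```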
## Export

The Export button at the top of the list downloads the current filtered set as CSV or JSONL:

- Download as CSV — `traces-YYYY-MM-DD.csv` with columns `id`, `project_id`, `model`, `environment`, `status`, `duration_ms`, `token_count`, `cost_usd`, `created_at`.
- Download as JSONL — one JSON object per line, same fields.
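Because the JSONL export is one object per line with the same fields as the CSV, it's easy to post-process with a few lines of script — for example, totalling cost per model. The sample rows below are made up; only the field names follow the export schema:

```python
import json
from collections import defaultdict

# Stand-in for the downloaded .jsonl file; the values are invented.
sample_export = """\
{"id": "tr_1", "model": "gpt-4o", "status": "ok", "duration_ms": 812, "cost_usd": 0.012}
{"id": "tr_2", "model": "gpt-4o", "status": "error", "duration_ms": 95, "cost_usd": 0.001}
{"id": "tr_3", "model": "gpt-4o-mini", "status": "ok", "duration_ms": 230, "cost_usd": 0.0004}
"""

cost_by_model = defaultdict(float)
for line in sample_export.splitlines():
    row = json.loads(line)  # one trace per line
    cost_by_model[row["model"]] += row["cost_usd"]
```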
## Trace detail

Click any row to open the detail view. You’ll see:

### Header

- Trace ID (copy icon — useful for support requests)
- Start time, duration, session link
- Total tokens and cost
- Attached feedback (if any)
- Attached eval results (if any)
- Control loop depth — shown when one or more retry control actions have executed on this trace. The integer counts retry actions run across all policy executions for the trace; escalation actions are excluded. This field appears only in the detail view, not in the list.
## Span waterfall

A Gantt-style visualisation of every span. Hover any span to see:

- Input and output (JSON-pretty-printed, syntax-highlighted)
- Latency and token counts (for `llm` spans)
- Model, metadata, errors
## Raw JSON

The tab in the top-right shows the full trace payload as JSON — useful for debugging the SDK itself or replaying elsewhere.

## Session replay

If the trace has a `session_id`, a Replay session button in the header opens the session view — every trace emitted under that session, ordered by timestamp, with the conversation reconstructed on the left and the span waterfall for the selected trace on the right. Great for “what was the user doing when it broke?”
## Add to dataset

The toolbar’s Add to dataset action pushes the trace into any existing eval dataset (or creates a new one) with one click. The trace’s inputs and outputs become the dataset row; if the trace already has feedback or eval results, those become the expected outputs. See Evals → Datasets for how to run evaluators against the dataset.

## Keyboard
| Key | Action |
|---|---|
| `j` / `k` | Next / previous trace in list |
| `Enter` | Open selected trace |
| `Esc` | Close detail |
| `/` | Focus search |
## Common workflows

- Debug a production error — filter by Error only and the last hour. Open any failing trace; the red span in the waterfall is the culprit.
- Check latency regressions — filter by Model = your production model, sort by Duration desc. Outliers rise to the top.
- Investigate a user complaint — if they gave you a trace ID (or session ID), paste it into search. Filter by their user identifier (e.g. `metadata.user_id = "u_42"`) to see every trace from that user.