This page documents the public surface of trulayer. For narrative usage, see the tutorial. For configuration details, see configuration. For runnable end-to-end demos, see examples.
Source and issues
Module functions
trulayer.init()
Initialise the global client. Call once at app startup.
trulayer.init(
    api_key: str,
    project_name: str,                               # human-readable project name
    project_id: str | None = None,                   # deprecated — use project_name. Removed in 0.3.x.
    endpoint: str = "https://api.trulayer.ai",
    batch_size: int = 50,
    flush_interval: float = 2.0,
    sample_rate: float = 1.0,
    scrub_fn: Callable[[str], str] | None = None,
    metadata_validator: Callable[[dict], None] | None = None,
    redactor: Redactor | None = None,
) -> TruLayerClient
Returns the created TruLayerClient. Stored as the process-wide global — retrievable via get_client().
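The scrub_fn hook receives each string payload before upload and returns the scrubbed version. A minimal sketch of one possible scrub_fn, assuming e-mail redaction is the goal (the pattern and the replacement token are illustrative, not part of the SDK):

```python
import re

# Rough e-mail pattern; tighten it for your own data.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def scrub_emails(text: str) -> str:
    """Replace anything that looks like an e-mail address before upload."""
    return EMAIL_RE.sub("[REDACTED]", text)

# trulayer.init(api_key="...", project_name="demo", scrub_fn=scrub_emails)
```

The same shape works for API keys, phone numbers, or any other pattern you need to keep out of stored traces.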
trulayer.get_client()
trulayer.get_client() -> TruLayerClient
Return the global client. Raises RuntimeError if init() has not been called.
trulayer.trace()
Context manager that starts a new trace.
trulayer.trace(
    name: str,
    session_id: str | None = None,
    metadata: dict | None = None,
) -> TraceContext
Usage:
with trulayer.trace("answer_question") as trace:
    trace.set_input({"question": question})
    ...
    trace.set_output({"answer": answer})
trulayer.atrace()
Async variant of trace(). Same signature; used with async with.
async with trulayer.atrace("answer_question") as trace:
    trace.set_input({"question": question})
trulayer.current_trace()
trulayer.current_trace() -> TraceContext | None
Return the trace in the current async-local context, or None if no trace is active.
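A trace held in an async-local slot is typically built on contextvars. The sketch below shows that pattern; the class and variable names are hypothetical, not the SDK's internals:

```python
import contextvars
from contextlib import contextmanager

# Async-local slot holding the active trace, or None outside any trace.
_current: contextvars.ContextVar["Trace | None"] = contextvars.ContextVar(
    "current_trace", default=None
)

class Trace:
    def __init__(self, name: str):
        self.name = name

@contextmanager
def trace(name: str):
    token = _current.set(Trace(name))   # activate for this context
    try:
        yield _current.get()
    finally:
        _current.reset(token)           # restore the previous trace (or None)

def current_trace() -> "Trace | None":
    return _current.get()
```

Because ContextVar values are scoped per asyncio task, concurrent tasks each see their own active trace rather than a shared global.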
trulayer.shutdown()
Flush buffered spans and stop the background worker. Call on process exit.
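The batch_size and flush_interval parameters on init() imply a buffered background shipper. A minimal sketch of that pattern with a thread-based worker; the SpanBuffer class and the ship callable are hypothetical, not the SDK's implementation:

```python
import queue
import threading

class SpanBuffer:
    """Buffer spans and ship them in batches from a background thread."""

    def __init__(self, ship, batch_size=50, flush_interval=2.0):
        self._q = queue.Queue()
        self._ship = ship                 # callable taking a list of spans
        self._batch_size = batch_size
        self._stop = threading.Event()
        self._flush_interval = flush_interval
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def add(self, span):
        self._q.put(span)

    def _drain(self):
        batch = []
        while len(batch) < self._batch_size:
            try:
                batch.append(self._q.get_nowait())
            except queue.Empty:
                break
        if batch:
            self._ship(batch)

    def _run(self):
        # Wake every flush_interval seconds until shutdown is requested.
        while not self._stop.wait(self._flush_interval):
            self._drain()

    def shutdown(self):
        """Stop the worker, then flush everything still buffered."""
        self._stop.set()
        self._worker.join()
        while not self._q.empty():
            self._drain()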
Instrumentation helpers
trulayer.instrument_openai()
trulayer.instrument_openai(client: openai.OpenAI | openai.AsyncOpenAI) -> None
Patch client.chat.completions.create and client.embeddings.create to emit spans automatically. Reversible via uninstrument_openai(client).
trulayer.instrument_anthropic()
trulayer.instrument_anthropic(client: anthropic.Anthropic | anthropic.AsyncAnthropic) -> None
Patch client.messages.create to emit spans. Reversible via uninstrument_anthropic(client).
trulayer.instrument_langchain()
trulayer.instrument_langchain() -> BaseCallbackHandler
Return a LangChain callback handler. Pass it to any ChatModel, Chain, or Retriever via the callbacks=[...] argument.
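All three helpers share a wrap-and-restore pattern: replace the target method with a wrapper that records each call, and keep the original around so it can be put back. A generic sketch against a stand-in object; instrument/uninstrument here are illustrative, not the SDK's actual functions:

```python
_ORIG_ATTR = "_trulayer_original"

def instrument(obj, method_name, on_call):
    """Wrap obj.method_name so on_call sees every invocation's result."""
    original = getattr(obj, method_name)

    def wrapper(*args, **kwargs):
        result = original(*args, **kwargs)
        on_call(args, kwargs, result)   # the real SDK would emit a span here
        return result

    setattr(wrapper, _ORIG_ATTR, original)
    setattr(obj, method_name, wrapper)

def uninstrument(obj, method_name):
    """Restore the unwrapped method, if it was instrumented."""
    wrapped = getattr(obj, method_name)
    original = getattr(wrapped, _ORIG_ATTR, None)
    if original is not None:
        setattr(obj, method_name, original)
```

Stashing the original on the wrapper itself is what makes the patch reversible without extra bookkeeping.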
Classes
TruLayerClient
Explicit client for multi-tenant apps or tests. Prefer init() + get_client() for single-client apps.
from trulayer import TruLayerClient

client = TruLayerClient(api_key="...", project_name="...")
with client.trace("work") as trace:
    ...
Key methods:
| Method | Signature |
|---|---|
| trace(name, session_id=None, metadata=None) | → TraceContext |
| atrace(name, ...) | → TraceContext (used with async with) |
| submit_feedback(trace_id, label=None, score=None, comment=None, metadata=None) | → None |
| run_eval(trace_id, evaluator_type, metric_name) | → EvalResult |
| get_metrics(project_id, from_time, to_time, ...) | → Metrics |
| flush() | → None (blocks until shipped) |
| shutdown() | → None |
TraceContext
Returned by trace() and atrace(). Key methods:
| Method / property | Purpose |
|---|---|
| set_input(value) | Set the trace’s input payload |
| set_output(value) | Set the trace’s output payload |
| set_metadata(**kwargs) | Attach key-value metadata |
| set_error(exc) | Mark the trace as errored with an exception |
| span(name, span_type="other", ...) | → SpanContext |
| .id | Trace UUID (read-only) |
| .session_id | Session identifier (read-only) |
SpanContext
Returned by TraceContext.span(). Key methods:
| Method / property | Purpose |
|---|---|
| set_input(value) | Span input |
| set_output(value) | Span output |
| set_metadata(**kwargs) | Key-value metadata |
| set_model(model) | Model name (for llm spans) |
| set_tokens(prompt_tokens, completion_tokens) | Token counts |
| set_cost(usd) | Cost of this span in USD (float) |
| set_error(exc) | Mark errored |
| .id | Span UUID (read-only) |
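set_cost() takes a plain USD float, so the value usually comes from the span's token counts and the model's price sheet. A small helper sketch, assuming per-million-token pricing; the rates in the example are made up, not real prices:

```python
def span_cost(prompt_tokens: int, completion_tokens: int,
              prompt_price: float, completion_price: float) -> float:
    """Cost in USD, given per-million-token prices for each direction."""
    return (prompt_tokens * prompt_price
            + completion_tokens * completion_price) / 1_000_000

# e.g. span.set_tokens(1200, 300); span.set_cost(span_cost(1200, 300, 2.50, 10.00))
```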
Models (Pydantic)
TraceData
class TraceData(BaseModel):
    id: UUID
    project_id: str
    session_id: str | None
    external_id: str | None
    name: str | None
    input: Any
    output: Any
    model: str | None
    latency_ms: int | None
    cost: float | None
    error: str | None
    tags: list[str]
    tag_map: dict[str, str] | None  # structured key→value tags; takes precedence over tags on the wire
    metadata: dict[str, Any]
    spans: list[SpanData]
    started_at: datetime
    ended_at: datetime | None
SpanData
class SpanData(BaseModel):
    id: UUID
    trace_id: UUID
    name: str
    span_type: Literal["llm", "retrieval", "tool", "other"]
    input: Any
    output: Any
    error: str | None
    latency_ms: int
    model: str | None
    prompt_tokens: int | None
    completion_tokens: int | None
    metadata: dict[str, Any]
    started_at: datetime
    ended_at: datetime | None
FeedbackData
class FeedbackData(BaseModel):
    trace_id: UUID
    label: Literal["good", "bad", "neutral"] | None
    score: float | None
    comment: str | None
    metadata: dict[str, Any]
EventData
class EventData(BaseModel):
    id: UUID
    trace_id: UUID
    span_id: UUID | None
    level: Literal["debug", "info", "warn", "error"]
    message: str
    metadata: dict[str, Any]
    timestamp: datetime
Exceptions
| Exception | Raised when |
|---|---|
| TruLayerError | Base class for all SDK errors |
| AuthenticationError | Invalid or revoked API key |
| RateLimitError | Plan limit hit; trace dropped |
| ValidationError | Trace/span payload failed validation |
The SDK never raises these to your call site — they’re logged and the trace is dropped. Subscribe via trulayer.on_error(fn) if you need to handle them yourself.