TruLayer integrates with the major LLM SDKs and orchestration frameworks via auto-instrumentation. Once instrumented, every call these frameworks make becomes a span in the active trace, with no further code changes.
Tier 1 (supported at V1 Phase 1)
| Framework | Language(s) | Helper |
|---|---|---|
| OpenAI | Python, TypeScript | `instrument_openai(client)` / `instrumentOpenAI(client)` |
| Anthropic | Python, TypeScript | `instrument_anthropic(client)` / `instrumentAnthropic(client)` |
| Vercel AI SDK | TypeScript | `instrumentVercelAI(tl, { generateText, streamText })` |
| LlamaIndex | Python | `instrument_llamaindex()` |
| PydanticAI | Python | `instrument_pydanticai(agent, trace)` |
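The Tier 1 helpers all follow the same shape: pass in your existing client, and calls on it are recorded as spans. A toy sketch of that pattern in plain Python, where the fake client, the span list, and `instrument_fake_openai` are illustrative stand-ins rather than the real TruLayer SDK:

```python
class FakeOpenAI:
    """Stand-in for a real SDK client (illustration only)."""
    def chat(self, prompt: str) -> str:
        return f"echo: {prompt}"

spans = []  # stand-in for the active trace's span buffer

def instrument_fake_openai(client):
    """Wrap client.chat so every call is recorded as a span (sketch only)."""
    original = client.chat
    def traced_chat(prompt: str) -> str:
        spans.append({"name": "chat", "input": prompt})
        result = original(prompt)
        spans[-1]["output"] = result  # record the result after the call
        return result
    client.chat = traced_chat
    return client

client = instrument_fake_openai(FakeOpenAI())
client.chat("hello")  # recorded as a span in `spans`
```

The real helpers do the same wrapping against the SDK's actual request methods, attaching each span to the active trace instead of a module-level list.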
Tier 2 (supported at V1 Phase 2)
| Framework | Language(s) | Helper |
|---|---|---|
| LangChain | Python, TypeScript | `instrument_langchain(client)` / `TruLayerCallbackHandler` |
| CrewAI | Python | `instrument_crewai(crew, trace)` |
| Mastra | TypeScript | Generic `trace()` / `span()` API |
| DSPy | Python | `instrument_dspy(trace)` |
| Haystack | Python | `instrument_haystack(pipeline, trace)` |
| AutoGen | Python | `instrument_autogen(agent, trace)` |
Not supported? Use manual instrumentation.
If your framework isn’t listed, you can still get full tracing: wrap calls with trace() and span() manually. See Traces and spans.
Or open a feature request against the relevant SDK (Python / TypeScript) — we prioritise based on demand.
How auto-instrumentation works
Each helper monkey-patches the framework’s client methods to emit a span before each call and record the result after. The patch is:
- Reversible: call uninstrument_*() / uninstrumentLangChain() to restore the original methods
- Idempotent: calling instrument_*() twice is a no-op
- Thread/async-safe: spans attach to the active trace via async-local context
- Non-blocking: span emission is buffered; no added latency on the hot path