Feedback is a human-provided label on a trace. Typical sources: a thumbs-up / thumbs-down button in your UI, a star rating, or an internal review.

Shape

Field      Type                          Required
trace_id   string                        yes
label      "good" | "bad" | "neutral"    one of label, score, or comment
score      float                         optional; 0.0–1.0 or any numeric rating
comment    string                        optional; free text
metadata   object                        optional; e.g. user_id, source
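The Required column implies one invariant worth enforcing client-side: every record needs a trace_id plus at least one of label, score, or comment. A minimal pre-submit check might look like this (validate_feedback is a hypothetical helper, not part of the TruLayer SDK):

```python
VALID_LABELS = {"good", "bad", "neutral"}

def validate_feedback(trace_id, label=None, score=None, comment=None, metadata=None):
    """Check a feedback payload against the shape above before sending it."""
    if not trace_id:
        raise ValueError("trace_id is required")
    if label is None and score is None and comment is None:
        raise ValueError("provide at least one of label, score, or comment")
    if label is not None and label not in VALID_LABELS:
        raise ValueError(f"label must be one of {sorted(VALID_LABELS)}")
    return True
```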

Submitting feedback

import trulayer

client = trulayer.get_client()
client.submit_feedback(
    trace_id=trace_id,
    label="good",
    score=0.95,
    comment="Exactly what I needed.",
    metadata={"user_id": "u_123", "source": "thumbs_button"},
)

Why feedback matters

Feedback is the source of truth that evaluators are measured against. When you publish an eval, TruLayer compares its scores to the feedback on those traces and reports correlation — so you can tell whether your automated evals actually track what humans think is good. Feedback also flows into datasets: traces with consistent human labels are candidates for your regression test set.

UX patterns

  • Thumbs up / down on response: submit immediately on click with label: "good" or "bad".
  • Edit/regenerate: submit label: "bad" when the user regenerates — it’s a strong implicit signal.
  • Explicit rating: expose a 1–5 star widget for low-friction quantitative feedback.
  • Free-text: always provide an optional comment field — a handful of qualitative comments are worth more than thousands of thumbs.

Never require feedback. Collection rate is a quality signal too.
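Two of the patterns above are easy to get slightly wrong: star widgets need mapping onto the 0.0–1.0 score range, and regenerate clicks should be recorded with a source so they can be distinguished from explicit thumbs. A sketch, where stars_to_score and on_regenerate are hypothetical helpers and client is assumed to be the object returned by trulayer.get_client():

```python
def stars_to_score(stars: int) -> float:
    """Map a 1-5 star rating onto the 0.0-1.0 score range from the shape above."""
    if not 1 <= stars <= 5:
        raise ValueError("stars must be between 1 and 5")
    return (stars - 1) / 4  # 1 star -> 0.0, 5 stars -> 1.0

def on_regenerate(client, trace_id, user_id):
    """Regeneration is a strong implicit 'bad' signal; record its source."""
    client.submit_feedback(
        trace_id=trace_id,
        label="bad",
        metadata={"user_id": user_id, "source": "regenerate"},
    )
```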