
Data Model

This page describes the data model for evaluation-related objects in Langfuse. For an overview of how these objects work together, see the Concepts page.

For a detailed reference, please refer to the SDK and API documentation.

The following objects are covered in this page:

| Object / Function definition | Description |
| --- | --- |
| Dataset | A collection of dataset items to run experiments on. |
| Dataset Item | An individual item in a dataset. |
| Dataset Run | Also called experiment run; the object that links together the results of an experiment. |
| Dataset Run Item | Also called experiment run item; links a dataset item to the trace produced for it during a run. |
| Score | The output of an evaluator. |
| Score Config | Configuration defining how a score is calculated and interpreted. |
| Task Function | Function definition of the task to run on dataset items for a specific experiment. |
| Evaluator Function | Function definition of an evaluator that scores task outputs. |

Objects

Datasets

A Dataset is a collection of DatasetItems: inputs and, optionally, expected outputs that can be used during dataset runs.

Dataset object

| Attribute | Type | Required | Description |
| --- | --- | --- | --- |
| id | string | Yes | Unique identifier for the dataset |
| name | string | Yes | Name of the dataset |
| description | string | No | Description of the dataset |
| metadata | object | No | Additional metadata for the dataset |
| remoteExperimentUrl | string | No | Webhook endpoint for triggering experiments |
| remoteExperimentPayload | object | No | Payload for triggering experiments |
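
Datasets can be created in the Langfuse UI or programmatically. A minimal sketch using the Python SDK (a hedged example; parameter names mirror the attributes above, see the SDK reference for the exact signature):

```python
from langfuse import Langfuse

langfuse = Langfuse()  # reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY from the environment

# Create a dataset; the name identifies it within the project.
dataset = langfuse.create_dataset(
    name="capital-cities",
    description="Country -> capital lookups",
    metadata={"owner": "evals-team"},
)
```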

DatasetItem object

| Attribute | Type | Required | Description |
| --- | --- | --- | --- |
| id | string | Yes | Unique identifier for the dataset item. Dataset items are upserted on their id; the id must be unique within a project and cannot be reused across datasets. |
| datasetId | string | Yes | ID of the dataset this item belongs to |
| input | object | No | Input data for the dataset item |
| expectedOutput | object | No | Expected output data for the dataset item |
| metadata | object | No | Additional metadata for the dataset item |
| sourceTraceId | string | No | ID of the source trace to link this dataset item to |
| sourceObservationId | string | No | ID of the source observation to link this dataset item to |
| status | DatasetStatus | No | Status of the dataset item. Defaults to ACTIVE for newly created items. Possible values: ACTIVE, ARCHIVED |
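
Items are added to a dataset by name and upserted on their id. A minimal sketch with the Python SDK (a hedged example; parameter names mirror the attributes above, see the SDK reference for the exact signature):

```python
from langfuse import Langfuse

langfuse = Langfuse()

# Upsert a dataset item; passing the same id again updates the existing item.
langfuse.create_dataset_item(
    dataset_name="capital-cities",
    input={"country": "France"},
    expected_output={"capital": "Paris"},
    metadata={"difficulty": "easy"},
    # id="item-france",  # optional stable id to enable upserts
)
```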

DatasetRun (Experiment Run)

Dataset runs are used to run a dataset through your LLM application and optionally apply evaluation methods to the results. This is often referred to as an experiment run.


DatasetRun object

| Attribute | Type | Required | Description |
| --- | --- | --- | --- |
| id | string | Yes | Unique identifier for the dataset run |
| name | string | Yes | Name of the dataset run |
| description | string | No | Description of the dataset run |
| metadata | object | No | Additional metadata for the dataset run |
| datasetId | string | Yes | ID of the dataset this run belongs to |

DatasetRunItem object

| Attribute | Type | Required | Description |
| --- | --- | --- | --- |
| id | string | Yes | Unique identifier for the dataset run item |
| datasetRunId | string | Yes | ID of the dataset run this item belongs to |
| datasetItemId | string | Yes | ID of the dataset item to link to this run |
| traceId | string | Yes | ID of the trace to link to this run |
| observationId | string | No | ID of the observation to link to this run |

Most of the time, we recommend that DatasetRunItems reference TraceIDs directly. The reference to ObservationID exists for backwards compatibility with older SDK versions.
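
When looping over a Langfuse dataset with the SDK, the DatasetRun and its DatasetRunItems are created for you and linked to the trace of each execution. A minimal sketch assuming a v3-style Python SDK where each item exposes a run context manager (method names differ between SDK versions; see the SDK reference):

```python
from langfuse import Langfuse

langfuse = Langfuse()

def my_app(item_input):
    # Placeholder for your LLM application.
    return {"capital": "Paris"}

dataset = langfuse.get_dataset("capital-cities")

for item in dataset.items:
    # Opens a trace and creates a DatasetRunItem linking item, run, and trace.
    with item.run(run_name="prompt-v2") as root_span:
        output = my_app(item.input)
        root_span.update_trace(input=item.input, output=output)

langfuse.flush()  # make sure all events are sent before the script exits
```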

Scores

Scores are the data objects used to store evaluation results. They are used to assign evaluation scores to traces, observations, sessions, or dataset runs. Scores can be added manually via annotations, programmatically via the SDK/API, or automatically via LLM-as-a-Judge evaluators.


Scores have the following properties:

  • Each Score references exactly one of Trace, Observation, Session, or DatasetRun
  • Scores are either numeric, categorical, or boolean
  • Scores can optionally be linked to a ScoreConfig to ensure they comply with a specific schema

Score object

| Attribute | Type | Required | Description |
| --- | --- | --- | --- |
| id | string | Yes | Unique identifier of the score. Auto-generated by SDKs. Optionally can also be used as an idempotency key to update scores. |
| name | string | Yes | Name of the score, e.g. user_feedback, hallucination_eval |
| value | number | No | Numeric value of the score. Always defined for numeric and boolean scores. Optional for categorical scores. |
| stringValue | string | No | String equivalent of the score's numeric value for boolean and categorical data types. Automatically set for categorical scores based on the config if the configId is provided. |
| dataType | string | No | Automatically set based on the config data type when the configId is provided. Otherwise can be defined manually as NUMERIC, CATEGORICAL or BOOLEAN |
| source | string | Yes | Automatically set based on the source of the score. Can be either API, EVAL, or ANNOTATION |
| comment | string | No | Evaluation comment, commonly used for user feedback, eval reasoning output or internal notes |
| traceId | string | No | Id of the trace the score relates to |
| observationId | string | No | Id of the observation (e.g. LLM call) the score relates to |
| sessionId | string | No | Id of the session the score relates to |
| datasetRunId | string | No | Id of the dataset run the score relates to |
| configId | string | No | Score config id to ensure that the score follows a specific schema. Can be defined in the Langfuse UI or via API. |
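
Scores can be added programmatically. A minimal sketch with the Python SDK (a hedged example assuming a `create_score`-style method; older SDK versions expose this as `score()`, see the SDK reference):

```python
from langfuse import Langfuse

langfuse = Langfuse()

# Attach a numeric score to an existing trace.
langfuse.create_score(
    trace_id="1234abcd",          # hypothetical trace id
    name="hallucination_eval",
    value=0.0,
    data_type="NUMERIC",
    comment="No unsupported claims found",
    # config_id="cfg_123",        # optional: validate against a ScoreConfig
)
```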

Common Use Cases

| Level | Description |
| --- | --- |
| Trace | Used for evaluation of a single interaction (most common). |
| Observation | Used for evaluation of a single observation below the trace level. |
| Session | Used for comprehensive evaluation of outputs across multiple interactions. |
| Dataset Run | Used for performance scores of a dataset run. |

Score Config

Score configs are used to ensure that your scores follow a specific schema. Using score configs allows you to standardize your scoring schema across your team and ensure that scores are consistent and comparable for future analysis.

You can define a ScoreConfig in the Langfuse UI or via our API. Configs are immutable but can be archived (and restored anytime).

A score config includes:

  • Score name
  • Data type: NUMERIC, CATEGORICAL, BOOLEAN
  • Constraints on the score value range (min/max for numeric scores, custom categories for categorical scores)

ScoreConfig object

| Attribute | Type | Required | Description |
| --- | --- | --- | --- |
| id | string | Yes | Unique identifier of the score config. |
| name | string | Yes | Name of the score config, e.g. user_feedback, hallucination_eval |
| dataType | string | Yes | Can be either NUMERIC, CATEGORICAL or BOOLEAN |
| isArchived | boolean | No | Whether the score config is archived. Defaults to false |
| minValue | number | No | Sets minimum value for numerical scores. If not set, the minimum value defaults to -∞ |
| maxValue | number | No | Sets maximum value for numerical scores. If not set, the maximum value defaults to +∞ |
| categories | list | No | Defines categories for categorical scores. List of objects with label/value pairs |
| description | string | No | Provides further description of the score configuration |
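
Score configs can be created in the Langfuse UI or via the public API. A hedged sketch against the API (endpoint path and field names are assumed from the attributes above; check the API reference):

```python
import os
import requests

# The public API uses basic auth: public key as username, secret key as password.
response = requests.post(
    "https://cloud.langfuse.com/api/public/score-configs",
    auth=(os.environ["LANGFUSE_PUBLIC_KEY"], os.environ["LANGFUSE_SECRET_KEY"]),
    json={
        "name": "user_feedback",
        "dataType": "CATEGORICAL",
        "categories": [
            {"label": "positive", "value": 1},
            {"label": "negative", "value": 0},
        ],
        "description": "Thumbs up/down collected in the app",
    },
)
response.raise_for_status()
```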

End to end data relations

An experiment combines several Langfuse objects:

  • DatasetRuns (or experiment runs) are created by looping through all or selected DatasetItems of a Dataset with your LLM application.
  • For each DatasetItem passed into the LLM application as input, a DatasetRunItem and a Trace are created.
  • Optionally, Scores can be added to the Traces to evaluate the output of the LLM application during the DatasetRun.

See the Concepts page for more information on how these objects work together conceptually. See the observability core concepts page for more details on traces and observations.

Function Definitions

When running experiments via the SDK, you define task and evaluator functions. These are user-defined functions that the experiment runner calls for each dataset item. For more information on how experiments work conceptually, see the Concepts page.

Task

A task is a function that takes a dataset item and returns an output during an experiment run.

See the SDK references for function signatures and parameters.
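
A minimal sketch of a task function for the Python SDK experiment runner (the keyword-only item parameter and **kwargs are assumptions about the runner's calling convention; check the SDK reference):

```python
def my_llm_app(question: str) -> str:
    # Placeholder for your actual LLM application.
    return "Paris"

# Task: run the application on one dataset item and return its output.
def capital_city_task(*, item, **kwargs):
    question = f"What is the capital of {item.input['country']}?"
    return my_llm_app(question)
```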

Evaluator

An evaluator is a function that scores the output of a task for a single dataset item. Evaluators receive the input, output, expected output, and metadata, and return an Evaluation object that becomes a Score in Langfuse.

See the SDK references for function signatures and parameters.
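
A minimal sketch of an item-level evaluator (parameter names and the Evaluation import path are assumptions; check the SDK reference for your SDK version):

```python
from langfuse import Evaluation  # import path may differ by SDK version

# Evaluator: compare the task output against the expected output of one item.
def exact_match(*, input, output, expected_output, metadata=None, **kwargs):
    return Evaluation(
        name="exact_match",
        value=1.0 if output == expected_output else 0.0,
        comment=f"expected={expected_output!r}, got={output!r}",
    )
```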

Run Evaluator

A run evaluator is a function that assesses the full experiment results and computes aggregate metrics. When run on Langfuse datasets, the resulting scores are attached to the dataset run.

See the SDK references for function signatures and parameters.
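
A minimal sketch of a run evaluator that aggregates item-level scores into a single score for the whole run (the item_results parameter and the shape of the result objects are assumptions; check the SDK reference):

```python
from langfuse import Evaluation  # import path may differ by SDK version

# Run evaluator: compute an aggregate metric over all item results of the run.
def average_exact_match(*, item_results, **kwargs):
    values = [
        evaluation.value
        for result in item_results
        for evaluation in result.evaluations  # assumed attribute on each item result
        if evaluation.name == "exact_match"
    ]
    return Evaluation(
        name="avg_exact_match",
        value=sum(values) / len(values) if values else 0.0,
    )
```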

For detailed usage examples of tasks and evaluators, see Experiments via SDK.
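
Putting the pieces together, a hedged sketch of how the task and evaluators above might be passed to the experiment runner (the run_experiment method name and its parameters are assumptions; see Experiments via SDK for the authoritative API):

```python
from langfuse import Langfuse

langfuse = Langfuse()
dataset = langfuse.get_dataset("capital-cities")

# Runs capital_city_task on every item, applies the evaluators, and records
# the results as a DatasetRun with Scores (function sketches defined above).
result = dataset.run_experiment(
    name="prompt-v2-experiment",
    task=capital_city_task,
    evaluators=[exact_match],              # item-level scores on each trace
    run_evaluators=[average_exact_match],  # aggregate score on the dataset run
)
```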

Local Datasets

Currently, when experiments are run via the SDK on local datasets, only traces are created in Langfuse; no dataset runs are generated. Each task execution creates an individual trace for observability and debugging.

We have improvements on our roadmap to bring the functionality available for experiments on Langfuse datasets, such as run overviews and comparison views, to experiments on local datasets as well.
