About Evaluation Studio¶
Evaluation Studio is a unified workspace for evaluating AI system performance across two main areas: Model Evaluation and Agentic Evaluation. It enables users to systematically assess both the quality of large language model (LLM) outputs and the behavior of agentic applications in real-world scenarios.
Together, these capabilities give you a foundation for improving both LLM output quality and agentic application behavior. Whether you're validating prompt effectiveness, debugging tool behavior, or auditing full workflows, Evaluation Studio supports scalable, data-driven iteration, helping you build safer, more reliable, and higher-performing AI systems.
Model Evaluation¶
Model Evaluation enables you to assess the performance of large language models (LLMs) using configurable quality and safety metrics. You can:
- Upload datasets with input-output pairs
- Apply built-in or custom evaluators
- Analyze model effectiveness through visual scoring, thresholds, and collaborative projects
This evaluation is ideal for fine-tuning, comparing, and validating models before or after deployment.
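To make the dataset requirement concrete, the snippet below sketches what a dataset of input-output pairs might look like as JSON Lines. The field names (`input`, `output`, `reference`) and the file format are assumptions for illustration only; consult the dataset upload documentation for the schema Evaluation Studio actually accepts.

```python
import json

# Illustrative only: the exact schema and file format accepted by
# Evaluation Studio may differ. Each record pairs a model input (prompt)
# with the model's output, plus an optional reference answer that
# evaluators can score against.
records = [
    {
        "input": "Summarize the refund policy in one sentence.",
        "output": "Customers can request a full refund within 30 days of purchase.",
        "reference": "Refunds are available for 30 days after purchase.",
    },
    {
        "input": "Translate 'good morning' to French.",
        "output": "Bonjour.",
        "reference": "Bonjour.",
    },
]

# Write the records as JSON Lines, a common format for evaluation datasets.
with open("model_eval_dataset.jsonl", "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```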
Agentic Evaluation¶
Agentic Evaluation is designed to assess how effectively an agentic application performs in production. You can:
- Import app sessions and trace data
- Run multi-level evaluations to see how well the app achieves goals, follows workflows, and uses tools
- Analyze inputs and outputs across supervisors, agents, and tools
By evaluating sessions and traces at multiple levels, Agentic Evaluation offers deep insight into how supervisors, agents, and tools operate in production, helping you uncover coordination issues, workflow failures, and opportunities for optimization.
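For illustration, the sketch below shows one way an app session and its traces could be represented before import, with separate spans for the supervisor, an agent, and a tool call. All field names here are hypothetical rather than the actual trace schema; they simply show the levels at which Agentic Evaluation inspects inputs and outputs.

```python
import json

# Illustrative only: field names are assumptions, not Evaluation Studio's
# actual trace schema. A single app session is broken into trace spans for
# the supervisor, an agent, and a tool call, which is the granularity at
# which agentic evaluations examine inputs and outputs.
session = {
    "session_id": "session-001",
    "goal": "Book a flight from Boston to Denver for next Friday",
    "traces": [
        {
            "span": "supervisor",
            "input": "Book a flight from Boston to Denver for next Friday",
            "output": "Routing request to the travel-booking agent",
        },
        {
            "span": "agent",
            "name": "travel_booking_agent",
            "input": "Find flights BOS -> DEN, next Friday",
            "output": "Found 3 options; selecting the cheapest nonstop flight",
        },
        {
            "span": "tool",
            "name": "flight_search_api",
            "input": {"origin": "BOS", "destination": "DEN", "date": "2025-06-20"},
            "output": {"flights_found": 3},
        },
    ],
}

print(json.dumps(session, indent=2))
```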
Accessing Evaluation Studio¶
- Log in to your Agent Platform account.
- Go to the Agent Platform Modules menu and select Evaluation Studio.
- On the Evaluation page, select Model evaluation or Agent evaluation to begin.