Basic example: Trace Llama 3.1 8B with Weave
This example shows how to send a prompt to the Llama 3.1 8B model and trace the call with Weave. Tracing captures the full input and output of the LLM call, monitors performance, and lets you analyze results in the Weave UI. Learn more about tracing in Weave.
- You define a `@weave.op()`-decorated function that makes a chat completion request
- Your traces are recorded and linked to your W&B entity and project
- The function is automatically traced, logging inputs, outputs, latency, and metadata
- The result prints in the terminal, and the trace appears in your Traces tab at https://wandb.ai
You can view the trace by either:
- Clicking the link printed in the terminal (for example: `https://wandb.ai/<your-team>/<your-project>/r/call/01977f8f-839d-7dda-b0c2-27292ef0e04g`)
- Navigating to https://wandb.ai and selecting the Traces tab
Advanced example: Use Weave Evaluations and Leaderboards
Besides tracing model calls, you can also evaluate model performance and publish leaderboards. This example compares two models on a question-answer dataset. Before running this example, complete the prerequisites. After the run finishes:
- Select the Traces tab to view your traces
- Select the Evals tab to view your model evaluations
- Select the Leaders tab to view the generated leaderboard


Next steps
- Explore the API reference for all available methods
- Try models in the UI