How to create a W&B Experiment
Create a W&B Experiment in four steps:
- Initialize a W&B Run
- Capture a dictionary of hyperparameters
- Log metrics inside your training loop
- Log an artifact to W&B
Initialize a W&B run
Use wandb.init() to create a W&B Run.
The following snippet creates a run in a W&B project named “cat-classification” with the description “My first experiment” to help identify this run. Tags “baseline” and “paper1” are included to remind us that this run is a baseline experiment intended for a future paper publication.
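A minimal sketch of that call, using the project name, description, and tags described above:

```python
import wandb

# Start a run in the "cat-classification" project with a short description
# and tags marking it as a baseline experiment for a future paper.
run = wandb.init(
    project="cat-classification",
    notes="My first experiment",
    tags=["baseline", "paper1"],
)
```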
wandb.init() returns a Run object.
Note: Runs are added to pre-existing projects if that project already exists when you call wandb.init(). For example, if you already have a project called “cat-classification”, that project will continue to exist and not be deleted. Instead, a new run is added to that project.
Capture a dictionary of hyperparameters
Save a dictionary of hyperparameters such as learning rate or model type. The model settings you capture in config are useful later to organize and query your results.
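For example, hyperparameters can be passed to wandb.init() through the config argument; the learning-rate, epoch, and model-type values below are illustrative:

```python
import wandb

# Illustrative hyperparameter values; replace them with your own settings.
config = {
    "learning_rate": 0.01,
    "epochs": 10,
    "model_type": "CNN",
}

run = wandb.init(
    project="cat-classification",
    notes="My first experiment",
    tags=["baseline", "paper1"],
    config=config,
)

# Values are available later through run.config, e.g. run.config["epochs"].
```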
Log metrics inside your training loop
Call run.log() to log metrics about each training step such as accuracy and loss.
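A sketch of logging inside a training loop; the random values stand in for metrics produced by your own training step:

```python
import random

# Illustrative training loop: replace the placeholder values below with
# metrics computed by your own training code.
for epoch in range(run.config["epochs"]):
    accuracy = random.random()  # placeholder metric
    loss = 1.0 - accuracy       # placeholder metric
    run.log({"accuracy": accuracy, "loss": loss})
```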
Log an artifact to W&B
Optionally log a W&B Artifact. Artifacts make it easy to version datasets and models.
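For example, a trained model checkpoint can be versioned as an artifact. The artifact name “cat-model” and the path “model.pt” below are illustrative; point add_file() at a file your training code actually produced:

```python
# "model.pt" is an illustrative path to a checkpoint written by your training
# code; "cat-model" is an illustrative artifact name.
artifact = wandb.Artifact(name="cat-model", type="model")
artifact.add_file("model.pt")
run.log_artifact(artifact)
```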
Putting it all together
The full script with the preceding code snippets is found below:
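A sketch of the full flow under the same assumptions as the snippets above (the hyperparameters, metric values, and “model.pt” checkpoint are illustrative placeholders):

```python
import wandb

# Hyperparameter values below are illustrative.
with wandb.init(
    project="cat-classification",
    notes="My first experiment",
    tags=["baseline", "paper1"],
    config={"learning_rate": 0.01, "epochs": 10},
) as run:
    for epoch in range(run.config["epochs"]):
        # Placeholder metrics; substitute the results of your own training step.
        accuracy, loss = 0.9, 0.1
        run.log({"accuracy": accuracy, "loss": loss})

    # "model.pt" is an illustrative checkpoint path; write your real model here.
    with open("model.pt", "wb") as f:
        f.write(b"placeholder model bytes")

    artifact = wandb.Artifact(name="cat-model", type="model")
    artifact.add_file("model.pt")
    run.log_artifact(artifact)
```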
Next steps: Visualize your experiment
Use the W&B Dashboard as a central place to organize and visualize results from your machine learning models. With just a few clicks, construct rich, interactive charts like parallel coordinates plots, parameter importance analyses, and additional chart types.
Best practices
The following are some suggested guidelines to consider when you create experiments:
- Finish your runs: Use wandb.init() in a with statement to automatically mark the run as finished when the code completes or raises an exception.
- In Jupyter notebooks, it may be more convenient to manage the Run object yourself. In this case, you can explicitly call finish() on the Run object to mark it complete; both patterns are shown in the sketch after this list.
- Config: Track hyperparameters, architecture, dataset, and anything else you’d like to use to reproduce your model. These will show up in columns; use config columns to group, sort, and filter runs dynamically in the app.
- Project: A project is a set of experiments you can compare together. Each project gets a dedicated dashboard page, and you can easily turn on and off different groups of runs to compare different model versions.
- Notes: Set a quick commit message directly from your script. Edit and access notes in the Overview section of a run in the W&B App.
- Tags: Identify baseline runs and favorite runs. You can filter runs using tags. You can edit tags at a later time on the Overview section of your project’s dashboard on the W&B App.
- Create multiple run sets to compare experiments: When comparing experiments, create multiple run sets to make metrics easy to compare. You can toggle run sets on or off on the same chart or group of charts.
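As referenced in the first two guidelines above, a sketch of both ways to end a run (the project name and metric value are illustrative):

```python
import wandb

# Pattern 1: a with statement marks the run finished automatically,
# even if the code raises an exception.
with wandb.init(project="cat-classification") as run:
    run.log({"accuracy": 0.9})  # illustrative metric

# Pattern 2 (e.g. in a Jupyter notebook): manage the Run object yourself
# and call finish() explicitly when you are done.
run = wandb.init(project="cat-classification")
run.log({"accuracy": 0.9})  # illustrative metric
run.finish()
```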
See the wandb.init() API docs in the API Reference Guide.