Log data with `wandb.Run.log()`. Data logged from your script is saved locally to your machine in a directory called `wandb`, then synced to the W&B cloud or your private server.

Key-value pairs are stored in one unified dictionary only if you pass the same value for `step` each time you log. W&B writes all of the collected keys and values to memory if you log a different value for `step`.

Each call to `wandb.Run.log()` is a new step by default. W&B uses steps as the default x-axis when it creates charts and panels. You can optionally create and use a custom x-axis or capture a custom summary metric. For more information, see Customize log axes.
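For example, a minimal sketch of per-step logging; the project name and metric values are placeholders:

```python
import random

import wandb

run = wandb.init(project="my-project")  # hypothetical project name

for epoch in range(10):
    # Placeholder metrics; substitute values from your own training loop.
    run.log({"accuracy": random.random(), "loss": random.random()})

run.finish()
```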
Use `wandb.Run.log()` to log consecutive values for each step: 0, 1, 2, and so on. It is not possible to write to a specific history step; W&B only writes to the "current" and "next" step.

## Automatically logged data
W&B automatically logs the following information during a W&B Experiment:

- System metrics: CPU and GPU utilization, network, and so on. For the GPU, these are fetched with `nvidia-smi`.
- Command line: The stdout and stderr are picked up and shown in the logs tab on the run page.
- Git commit: The latest git commit is picked up and shown on the overview tab of the run page, along with a `diff.patch` file if there are any uncommitted changes.
- Dependencies: The `requirements.txt` file is uploaded and shown on the files tab of the run page, along with any files you save to the `wandb` directory for the run.
## What data is logged with specific W&B API calls?

With W&B, you can decide exactly what you want to log. The following lists some commonly logged objects (a combined sketch follows the list):

- Datasets: You have to specifically log images or other dataset samples for them to stream to W&B.
- Plots: Use `wandb.plot()` with `wandb.Run.log()` to track charts. See Log Plots for more information.
- Tables: Use `wandb.Table` to log data to visualize and query with W&B. See Log Tables for more information.
- PyTorch gradients: Add `wandb.Run.watch(model)` to see gradients of the weights as histograms in the UI.
- Configuration information: Log hyperparameters, a link to your dataset, or the name of the architecture you're using as config parameters, passed in like this: `wandb.init(config=your_config_dictionary)`. See the PyTorch Integrations page for more information.
- Metrics: Use `wandb.Run.log()` to see metrics from your model. If you log metrics like accuracy and loss from inside your training loop, you'll get live-updating graphs in the UI.
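A combined sketch of several of these calls in one script; the project name, hyperparameters, and model are placeholders:

```python
import torch
import torch.nn as nn
import wandb

# Hypothetical hyperparameters passed as config.
run = wandb.init(project="my-project", config={"lr": 0.01, "architecture": "linear"})

model = nn.Linear(10, 1)
run.watch(model)  # log weight gradients as histograms

optimizer = torch.optim.SGD(model.parameters(), lr=run.config["lr"])

table = wandb.Table(columns=["step", "loss"])
for step in range(3):
    x = torch.randn(4, 10)
    loss = model(x).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    run.log({"loss": loss.item()})  # metrics: live-updating chart in the UI
    table.add_data(step, loss.item())

run.log({"training_table": table})  # tables: visualize and query in the UI
run.finish()
```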
## Metric naming constraints

Due to GraphQL limitations, metric names in W&B must follow specific naming rules:

- Allowed characters: letters (A-Z, a-z), digits (0-9), and underscores (_)
- Starting character: names must start with a letter or underscore
- Pattern: metric names should match `/^[_a-zA-Z][_a-zA-Z0-9]*$/`

Avoid naming metrics with invalid characters (such as commas, spaces, or special symbols), which may cause problems with sorting, querying, or display in the W&B UI.
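A sketch for checking names against that pattern before you log them; the helper below is illustrative, not part of the W&B SDK:

```python
import re

# The pattern W&B expects for metric names.
METRIC_NAME_PATTERN = re.compile(r"^[_a-zA-Z][_a-zA-Z0-9]*$")

def is_valid_metric_name(name: str) -> bool:
    """Return True if `name` follows W&B's metric naming rules."""
    return METRIC_NAME_PATTERN.fullmatch(name) is not None

assert is_valid_metric_name("val_accuracy")
assert not is_valid_metric_name("val accuracy")  # spaces are not allowed
```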
## Common workflows

- Compare the best accuracy: To compare the best value of a metric across runs, set the summary value for that metric. By default, summary is set to the last value you logged for each key. Summary metrics appear in the runs table in the UI, where you can sort and filter runs, so you can compare runs in a table or bar chart based on their best accuracy instead of their final accuracy. For example:

  ```python
  wandb.run.summary["best_accuracy"] = best_accuracy
  ```
- View multiple metrics on one chart: Log multiple metrics in the same `wandb.Run.log()` call. For example:
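  ```python
  # Placeholder values, assuming run = wandb.init(...); log any metrics
  # from your own training loop in a single call.
  run.log({"accuracy": 0.9, "loss": 0.1})
  ```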
  You can then plot both metrics in the UI.
- Customize the x-axis: Add a custom x-axis to the same log call to visualize your metrics against a different axis in the W&B dashboard. For example:
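  ```python
  # "epoch" is a hypothetical custom x-axis key logged alongside the metric,
  # assuming run = wandb.init(...).
  run.log({"accuracy": 0.9, "epoch": 3})
  ```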
  To set the default x-axis for a given metric, use `Run.define_metric()`.
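  A sketch, assuming `run = wandb.init(...)` and that you log `epoch` with each call:

  ```python
  # Plot "accuracy" against "epoch" by default in the UI.
  run.define_metric("accuracy", step_metric="epoch")
  ```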
- Log rich media and charts: `wandb.Run.log()` supports the logging of a wide variety of data types, from media like images and videos to tables and charts.
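A minimal sketch logging an image; the project name and pixel data are placeholders:

```python
import numpy as np
import wandb

run = wandb.init(project="my-project")  # hypothetical project name

# Log a random RGB image; substitute your own media here.
pixels = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
run.log({"example_image": wandb.Image(pixels, caption="random pixels")})
run.finish()
```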