- Track a single process: Track a rank 0 process (also known as a “leader” or “coordinator”) with W&B. This is a common solution for logging distributed training experiments with the PyTorch Distributed Data Parallel (DDP) class.
- Track multiple processes: For multiple processes, you can either:
- Track each process separately using one run per process. You can optionally group them together in the W&B App UI.
- Track all processes to a single run.
Track a single process
This section describes how to track values and metrics available to your rank 0 process. Use this approach to track only metrics that are available from a single process. Typical metrics include GPU/CPU utilization, behavior on a shared validation set, gradients and parameters, and loss values on representative data examples. Within the rank 0 process, initialize a W&B run with `wandb.init()` and log experiments to that run with `wandb.log()`.
The following sample Python script (`log-ddp.py`) demonstrates one way to track metrics on two GPUs on a single machine using PyTorch DDP. PyTorch DDP (`DistributedDataParallel` in `torch.nn`) is a popular library for distributed training. The basic principles apply to any distributed training setup, but the implementation may differ.
The Python script:
- Starts multiple processes with `torch.distributed.launch`.
- Checks the rank with the `--local_rank` command line argument.
- If the rank is set to 0, sets up `wandb` logging conditionally in the `train()` function.
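As a rough illustration of the pattern (not the full `log-ddp.py` script), the following minimal sketch assumes two GPUs on a single machine; the project name `ddp-example` and the toy model and data are placeholders:
```python
import argparse
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

import wandb


def train(args):
    # torch.distributed.launch sets the environment variables that
    # init_process_group reads (MASTER_ADDR, MASTER_PORT, RANK, WORLD_SIZE).
    dist.init_process_group(backend="nccl")
    device = torch.device("cuda", args.local_rank)

    # Initialize a W&B run only in the rank 0 process.
    run = wandb.init(project="ddp-example") if args.local_rank == 0 else None

    # Placeholder model and data; substitute your own.
    model = DDP(nn.Linear(10, 1).to(device), device_ids=[args.local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for step in range(100):
        x = torch.randn(32, 10, device=device)
        y = torch.zeros(32, 1, device=device)
        loss = loss_fn(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if run is not None:
            run.log({"loss": loss.item()})  # log from rank 0 only

    if run is not None:
        run.finish()
    dist.destroy_process_group()


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    # torch.distributed.launch passes --local_rank; newer launchers set
    # the LOCAL_RANK environment variable instead.
    parser.add_argument("--local_rank", type=int,
                        default=int(os.environ.get("LOCAL_RANK", 0)))
    train(parser.parse_args())
```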


Track multiple processes
Track multiple processes with W&B using one of the following approaches:
- Track each process separately by creating a run for each process.
- Track all processes to a single run.
Track each process separately
This section describes how to track each process separately by creating a run for each process. Within each run, you log metrics, artifacts, and so forth to their respective run. Call `wandb.Run.finish()` at the end of training to mark the run as completed so that all processes exit properly.
You might find it difficult to keep track of runs across multiple experiments. To mitigate this, provide a value to the `group` parameter when you initialize W&B (`wandb.init(group='group-name')`) to keep track of which run belongs to a given experiment. For more information about how to keep track of training and evaluation W&B Runs in experiments, see Group Runs.
Use this approach if you want to track metrics from individual processes. Typical examples include the data and predictions on each node (for debugging data distribution) and metrics on individual batches outside of the main node. This approach is not necessary to get system metrics from all nodes nor to get summary statistics available on the main node.
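For example, each process might create its own grouped run with a sketch like the following (the project and group names are placeholders, and the rank is assumed to be available through a `RANK` environment variable set by your launcher):
```python
import os

import wandb

# Assumes the launcher exposes this process's rank via the RANK env var.
rank = int(os.environ.get("RANK", 0))

# One run per process, all grouped under the same experiment.
run = wandb.init(
    project="distributed-demo",  # placeholder project name
    group="experiment-1",        # same group value for every process in the experiment
    name=f"rank-{rank}",         # optional: identify each process's run
)

run.log({"batch_loss": 0.25})  # per-process metrics go to this process's run

# Mark this process's run as finished so all processes exit properly.
run.finish()
```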

Organize distributed runs
Set the `job_type` parameter when you initialize W&B (`wandb.init(job_type='type-name')`) to categorize your nodes based on their function. For example, you might have a main coordinating node and several reporting worker nodes. You can set `job_type` to `main` for the main coordinating node and `worker` for the reporting worker nodes.
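For example, a minimal sketch (the project name and the `init_run()` helper are illustrative, not part of the W&B API):
```python
import wandb


def init_run(is_main_node):
    # Hypothetical helper: pass True on the main coordinating node,
    # False on each reporting worker node.
    job_type = "main" if is_main_node else "worker"
    return wandb.init(project="distributed-demo", job_type=job_type)  # placeholder project
```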
After you set `job_type` for your nodes, you can create saved views in your workspace to organize your runs. Click the … action menu at the top right and click Save as new view.
For example, you could create the following saved views:
- Default view: Filter out worker nodes to reduce noise
  - Click Filter, then set Job Type `!=` `worker`
  - Shows only your main coordinating nodes
- Debug view: Focus on worker nodes for troubleshooting
  - Click Filter, then set Job Type `==` `worker` and set State `IN` `crashed`
  - Shows only worker nodes that have crashed or are in error states
- All nodes view: See everything together
  - No filter
  - Useful for comprehensive monitoring
Track all processes to a single run
Parameters prefixed by `x_` (such as `x_label`) are in public preview. Create a GitHub issue in the W&B repository to provide feedback.
Requirements
To track multiple processes to a single run, you must have:
- W&B Python SDK version v0.19.9 or newer.
- W&B Server v0.68 or newer.
On your primary node, initialize a run with `wandb.init()`. Pass in a `wandb.Settings` object to the `settings` parameter (`wandb.init(settings=wandb.Settings())`) with the following:
- The `mode` parameter set to `"shared"` to enable shared mode.
- A unique label for `x_label`. You use the value you specify for `x_label` to identify which node the data is coming from in logs and system metrics in the W&B App UI. If left unspecified, W&B creates a label for you using the hostname and a random hash.
- The `x_primary` parameter set to `True` to indicate that this is the primary node.
- Optionally, a list of GPU indexes (such as `[0, 1, 2]`) for `x_stats_gpu_device_ids` to specify which GPUs W&B tracks metrics for. If you do not provide a list, W&B tracks metrics for all GPUs on the machine.
Setting `x_primary=True` distinguishes a primary node from worker nodes. Primary nodes are the only nodes that upload files shared across nodes, such as configuration files and telemetry. Worker nodes do not upload these files.
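For example, initialization on the primary node might look like the following sketch (the entity, project, and label values are placeholders):
```python
import wandb

# Primary node: creates the shared run and uploads shared files.
run = wandb.init(
    entity="my-entity",    # placeholder; use the same entity on every node
    project="my-project",  # placeholder; use the same project on every node
    settings=wandb.Settings(
        mode="shared",                  # enable shared mode
        x_label="rank_0",               # identifies this node in logs and system metrics
        x_primary=True,                 # mark this node as the primary node
        x_stats_gpu_device_ids=[0, 1],  # optional: track metrics for these GPUs only
    ),
)
```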
On each worker node, initialize a run with `wandb.init()` and provide the following (see the sketch after this list):
- A `wandb.Settings` object to the `settings` parameter (`wandb.init(settings=wandb.Settings())`) with:
  - The `mode` parameter set to `"shared"` to enable shared mode.
  - A unique label for `x_label`. You use the value you specify for `x_label` to identify which node the data is coming from in logs and system metrics in the W&B App UI. If left unspecified, W&B creates a label for you using the hostname and a random hash.
  - The `x_primary` parameter set to `False` to indicate that this is a worker node.
- Pass the run ID used by the primary node to the `id` parameter.
- Optionally, set `x_update_finish_state` to `False`. This prevents non-primary nodes from updating the run’s state to `finished` prematurely, ensuring the run state remains consistent and managed by the primary node.
- Use the same entity and project for all nodes. This helps ensure the correct run ID is found.
- Consider defining an environment variable on each worker node to set the run ID of the primary node.
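Putting the worker-node settings and tips together, initialization on a worker node might look like the following sketch (the entity, project, and label values are placeholders, and the primary node's run ID is assumed to be provided through a `WANDB_RUN_ID` environment variable):
```python
import os

import wandb

# Worker node: attaches to the run created by the primary node.
run = wandb.init(
    entity="my-entity",             # same entity as the primary node
    project="my-project",           # same project as the primary node
    id=os.environ["WANDB_RUN_ID"],  # run ID of the primary node's run
    settings=wandb.Settings(
        mode="shared",                # enable shared mode
        x_label="rank_1",             # unique label for this node
        x_primary=False,              # mark this node as a worker node
        x_update_finish_state=False,  # let the primary node manage the run state
    ),
)
```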
See the Distributed Training with Shared Mode report for an end-to-end example on how to train a model on a multi-node and multi-GPU Kubernetes cluster in GKE.
To view the console logs from each node in the W&B App UI:
- Navigate to the project that contains the run.
- Click the Runs tab in the left sidebar.
- Click the run you want to view.
- Click the Logs tab in the left sidebar.
Filter the console log by the value you provide to `x_label` using the UI search bar located at the top of the console log page. For example, the following image shows which options are available to filter the console log by if the values `rank0`, `rank1`, `rank2`, `rank3`, `rank4`, `rank5`, and `rank6` are provided to `x_label`.

(Image: run data labeled with the values `rank_0`, `rank_1`, and `rank_2` that you specify in the `x_label` parameter.)

Example use cases
The following code snippets demonstrate common scenarios for advanced distributed use cases.
Spawn process
Use the `wandb.setup()` method in your main function if you initiate a run in a spawned process.
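A minimal sketch of this pattern (the project name, pool size, and logged values are placeholders):
```python
import multiprocessing as mp

import wandb


def do_work(n):
    # Each spawned process initializes and finishes its own run.
    run = wandb.init(project="spawn-demo", config={"n": n})  # placeholder project name
    run.log({"n_squared": n * n})
    run.finish()


def main():
    # Prime the W&B library in the parent process before spawning workers.
    wandb.setup()
    with mp.Pool(processes=4) as pool:
        pool.map(do_work, range(4))


if __name__ == "__main__":
    main()
```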
Share a run
Pass a run object as an argument to share runs between processes.
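A minimal sketch of this pattern (the project name and logged value are placeholders):
```python
import multiprocessing as mp

import wandb


def do_work(run):
    # The child process logs through the run object it received.
    run.log({"child_metric": 1})


def main():
    run = wandb.init(project="share-run-demo")  # placeholder project name
    p = mp.Process(target=do_work, kwargs=dict(run=run))
    p.start()
    p.join()
    run.finish()


if __name__ == "__main__":
    main()
```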
Troubleshooting
There are two common issues you might encounter when using W&B and distributed training:
- Hanging at the beginning of training: A `wandb` process can hang if the `wandb` multiprocessing interferes with the multiprocessing from distributed training.
- Hanging at the end of training: A training job might hang if the `wandb` process does not know when it needs to exit. Call the `wandb.Run.finish()` API at the end of your Python script to tell W&B that the run finished. The `wandb.Run.finish()` API finishes uploading data and causes W&B to exit.

W&B recommends using W&B Service to improve the reliability of your distributed jobs. Both of the preceding training issues are commonly found in versions of the W&B SDK where W&B Service is unavailable.
Enable W&B Service
Depending on your version of the W&B SDK, you might already have W&B Service enabled by default.
W&B SDK 0.13.0 and above
W&B Service is enabled by default for versions of the W&B SDK 0.13.0 and above.
W&B SDK 0.12.5 and above
Modify your Python script to enable W&B Service for W&B SDK version 0.12.5 and above. Use the `wandb.require` method and pass the string `"service"` within your main function.
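For example (the project name and logged value are placeholders):
```python
import wandb


def main():
    # Opt in to W&B Service on SDK versions 0.12.5 through 0.12.x.
    wandb.require("service")
    run = wandb.init(project="service-demo")  # placeholder project name
    run.log({"loss": 0.1})
    run.finish()


if __name__ == "__main__":
    main()
```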
If you use a W&B SDK version 0.12.4 or below, set the `WANDB_START_METHOD` environment variable to `"thread"` to use multithreading instead.
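For example, from Python (assuming you set the variable before `wandb.init()` runs):
```python
import os

# W&B SDK 0.12.4 and below: start wandb with threads instead of processes.
os.environ["WANDB_START_METHOD"] = "thread"
```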