For details, see the Data types reference.
For more details, check out a demo report about visualizing model predictions or watch a video walkthrough.
Pre-requisites
In order to log media objects with the W&B SDK, you may need to install additional dependencies. You can install these dependencies by running the command shown below.
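A minimal sketch of the install step, assuming the optional media dependencies are packaged as a `media` pip extra:

```shell
pip install wandb[media]
```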
Images
Log images to track inputs, outputs, filter weights, activations, and more.
It’s recommended to log fewer than 50 images per step to prevent logging from becoming a bottleneck during training and image loading from becoming a bottleneck when viewing results.
- Logging arrays as Images
- Logging PIL Images
- Logging Images from Files
Provide arrays directly when constructing images manually, such as by using `make_grid` from `torchvision`. Arrays are converted to png using Pillow. We assume the image is gray scale if the last dimension is 1, RGB if it’s 3, and RGBA if it’s 4. If the array contains floats, we convert them to integers between 0 and 255. If you want to normalize your images differently, you can specify the `mode` manually or just supply a `PIL.Image`, as described in the “Logging PIL Images” tab of this panel.
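A minimal sketch of logging a batch of float arrays as images; the project name and random data are placeholders:

```python
import numpy as np
import wandb

with wandb.init(project="image-logging-demo") as run:
    # A batch of 8 random RGB images as floats in [0, 1]; wandb.Image scales them to 0-255.
    images = np.random.uniform(size=(8, 32, 32, 3))

    run.log(
        {"examples": [wandb.Image(img, caption=f"random example {i}") for i, img in enumerate(images)]}
    )
```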
Image overlays
- Segmentation Masks
- Bounding Boxes
Log semantic segmentation masks and interact with them (altering opacity, viewing changes over time, and more) via the W&B UI.
To log an overlay, provide a dictionary with the following keys and values to the `masks` keyword argument of `wandb.Image`:
- one of two keys representing the image mask:
  - `"mask_data"`: a 2D NumPy array containing an integer class label for each pixel
  - `"path"`: (string) a path to a saved image mask file
- `"class_labels"`: (optional) a dictionary mapping the integer class labels in the image mask to their readable class names

Segmentation masks for a key are defined at each step (each call to `run.log()`).
- If steps provide different values for the same mask key, only the most recent value for the key is applied to the image.
- If steps provide different mask keys, all values for each key are shown, but only those defined in the step being viewed are applied to the image. Toggling the visibility of masks not defined in the step does not change the image.
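A minimal sketch of logging a segmentation overlay with the `masks` argument; the class labels, mask key name (`"predictions"`), and random data are placeholders:

```python
import numpy as np
import wandb

# Placeholder mapping from integer class labels to readable names.
class_labels = {0: "background", 1: "road", 2: "car"}

with wandb.init(project="segmentation-demo") as run:
    image = np.random.randint(0, 255, size=(128, 128, 3), dtype=np.uint8)
    mask_data = np.random.randint(0, 3, size=(128, 128))

    masked_image = wandb.Image(
        image,
        masks={
            "predictions": {
                "mask_data": mask_data,
                "class_labels": class_labels,
            },
        },
    )
    run.log({"segmentation_example": masked_image})
```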
Image overlays in Tables
- Segmentation Masks
- Bounding Boxes

To log image overlays in tables, provide a `wandb.Image` object for each row in the table. An example is provided in the code snippet below:
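A minimal sketch, assuming a small batch of images and per-pixel labels; the column names and class mapping are placeholders:

```python
import numpy as np
import wandb

class_labels = {0: "background", 1: "road", 2: "car"}

with wandb.init(project="segmentation-table-demo") as run:
    table = wandb.Table(columns=["ID", "Image"])

    for sample_id in range(4):
        image = np.random.randint(0, 255, size=(128, 128, 3), dtype=np.uint8)
        mask_data = np.random.randint(0, 3, size=(128, 128))

        masked_image = wandb.Image(
            image,
            masks={"ground_truth": {"mask_data": mask_data, "class_labels": class_labels}},
        )
        table.add_data(sample_id, masked_image)

    run.log({"segmentation_table": table})
```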
Histograms
- Basic Histogram Logging
- Flexible Histogram Logging
If a sequence of numbers, such as a list, array, or tensor, is provided as the first argument, we will construct the histogram automatically by calling `np.histogram`. All arrays/tensors are flattened. You can use the optional `num_bins` keyword argument to override the default of 64 bins. The maximum number of bins supported is 512.

In the UI, histograms are plotted with the training step on the x-axis, the metric value on the y-axis, and the count represented by color, to ease comparison of histograms logged throughout training. See the “Histograms in Summary” tab of this panel for details on logging one-off histograms.
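A minimal sketch of logging a histogram from a sequence, with `num_bins` overriding the default; the values are placeholders:

```python
import numpy as np
import wandb

with wandb.init(project="histogram-demo") as run:
    for step in range(10):
        # Placeholder values; in practice these might be gradients or activations.
        values = np.random.randn(10000) * (step + 1)

        # The sequence is flattened and binned via np.histogram; num_bins overrides the default of 64.
        run.log({"values": wandb.Histogram(values, num_bins=128)})
```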
3D visualizations
Log 3D point clouds and Lidar scenes with bounding boxes. Pass in a NumPy array containing coordinates and colors for the points to render. The W&B UI truncates the data at 300,000 points.
NumPy array formats
Three different formats of NumPy arrays are supported for flexible color schemes.
- `[[x, y, z], ...]` nx3
- `[[x, y, z, c], ...]` nx4 | `c` is a category in the range `[1, 14]` (Useful for segmentation)
- `[[x, y, z, r, g, b], ...]` nx6 | `r`, `g`, `b` are values in the range `[0, 255]` for the red, green, and blue color channels
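A minimal sketch of logging a point cloud in the nx6 format above; the positions and colors are random placeholders:

```python
import numpy as np
import wandb

with wandb.init(project="point-cloud-demo") as run:
    # 1,000 points in [[x, y, z, r, g, b], ...] format: random positions with random colors.
    positions = np.random.uniform(-5, 5, size=(1000, 3))
    colors = np.random.randint(0, 256, size=(1000, 3))
    point_cloud = np.hstack([positions, colors])

    run.log({"point_cloud": wandb.Object3D(point_cloud)})
```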
Python object
Using this schema, you can define a Python object and pass it in to the `from_point_cloud` method.
- `points` is a NumPy array containing coordinates and colors for the points to render, using the same formats as the simple point cloud renderer shown above.
- `boxes` is a NumPy array of python dictionaries with the following attributes:
  - `corners` - a list of eight corners
  - `label` - a string representing the label to be rendered on the box (Optional)
  - `color` - rgb values representing the color of the box
  - `score` - a numeric value that will be displayed on the bounding box that can be used to filter the bounding boxes shown (for example, to only show bounding boxes where `score` > 0.75). (Optional)
- `type` is a string representing the scene type to render. Currently the only supported value is `lidar/beta`.
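A sketch of the schema above, assuming the method accepts keyword arguments `points`, `boxes`, `vectors`, and `point_cloud_type` matching the attributes described here; the box geometry and colors are placeholders:

```python
import numpy as np
import wandb

with wandb.init(project="point-cloud-demo") as run:
    # Points in [[x, y, z, c], ...] format, where c is a category in the range [1, 14].
    points = np.array([[0.0, 0.0, 0.0, 1], [1.0, 1.0, 1.0, 2], [1.2, 2.0, 1.3, 3]])

    # One bounding box described by its eight corners, with an optional label and color.
    boxes = np.array(
        [
            {
                "corners": [
                    [0, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0],
                    [1, 1, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1],
                ],
                "label": "example box",
                "color": [123, 210, 111],
            }
        ]
    )

    scene = wandb.Object3D.from_point_cloud(
        points=points,
        boxes=boxes,
        vectors=[],
        point_cloud_type="lidar/beta",
    )
    run.log({"point_scene": scene})
```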
Point cloud files
You can use the `from_file` method to load in a JSON file full of point cloud data.
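A one-line sketch, assuming `from_file` accepts a path to a JSON point cloud file; the file name is a placeholder:

```python
import wandb

with wandb.init(project="point-cloud-demo") as run:
    # "point_cloud.pts.json" is a placeholder path to a JSON file of point cloud data.
    run.log({"point_cloud": wandb.Object3D.from_file("point_cloud.pts.json")})
```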
NumPy arrays
Using the same array formats defined above, you can use `numpy` arrays directly with the `from_numpy` method to define a point cloud.
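A minimal sketch using the nx3 format with `from_numpy`; the random coordinates are placeholders:

```python
import numpy as np
import wandb

with wandb.init(project="point-cloud-demo") as run:
    # nx3 format: [[x, y, z], ...] coordinates only.
    points = np.random.uniform(-5, 5, size=(500, 3))
    run.log({"point_cloud": wandb.Object3D.from_numpy(points)})
```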
Molecules
Log molecular data in any of 10 file types: `pdb`, `pqr`, `mmcif`, `mcif`, `cif`, `sdf`, `sd`, `gro`, `mol2`, or `mmtf`.
W&B also supports logging molecular data from SMILES strings, rdkit `mol` files, and `rdkit.Chem.rdchem.Mol` objects.
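A minimal sketch of logging a molecule from a supported file type and from a SMILES string; the file path is a placeholder:

```python
import wandb

with wandb.init(project="molecule-demo") as run:
    # Log a molecule from a supported file type ("protein.pdb" is a placeholder path).
    run.log({"protein": wandb.Molecule("protein.pdb")})

    # Log a molecule from a SMILES string (caffeine).
    run.log({"caffeine": wandb.Molecule.from_smiles("CN1C=NC2=C1C(=O)N(C(=O)N2C)C")})
```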

PNG image
`wandb.Image` converts `numpy` arrays or instances of `PILImage` to PNGs by default.
Video
Videos are logged using the `wandb.Video` data type:
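A minimal sketch of logging a video from a file; the path is a placeholder:

```python
import wandb

with wandb.init(project="video-demo") as run:
    # "rollout.mp4" is a placeholder path to an existing, supported video file.
    run.log({"rollout": wandb.Video("rollout.mp4", format="mp4")})
```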
2D view of a molecule
You can log a 2D view of a molecule using the `wandb.Image` data type and `rdkit`:
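A minimal sketch using RDKit to render a SMILES string to a PIL image and log it; the molecule choice is a placeholder:

```python
import wandb
from rdkit import Chem
from rdkit.Chem import Draw

with wandb.init(project="molecule-demo") as run:
    # Render caffeine from a SMILES string to a PIL image with RDKit.
    molecule = Chem.MolFromSmiles("CN1C=NC2=C1C(=O)N(C(=O)N2C)C")
    pil_image = Draw.MolToImage(molecule, size=(300, 300))

    run.log({"caffeine_2d": wandb.Image(pil_image)})
```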
Other media
W&B also supports logging of a variety of other media types.
Audio
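A minimal sketch of logging audio from a raw waveform with `wandb.Audio`; the synthetic sine wave and sample rate are placeholders:

```python
import numpy as np
import wandb

with wandb.init(project="audio-demo") as run:
    # One second of a 440 Hz sine wave sampled at 44.1 kHz.
    sample_rate = 44100
    t = np.linspace(0, 1, sample_rate, endpoint=False)
    waveform = np.sin(2 * np.pi * 440 * t)

    run.log({"tone": wandb.Audio(waveform, sample_rate=sample_rate, caption="440 Hz sine")})
```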
For more usage information, see audio-file.
Video
`ffmpeg` and the `moviepy` Python library are required when passing numpy objects. Supported formats are `"gif"`, `"mp4"`, `"webm"`, and `"ogg"`. If you pass a string to `wandb.Video`, we assert the file exists and is a supported format before uploading to wandb. Passing a `BytesIO` object will create a temporary file with the specified format as the extension.
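A minimal sketch of logging a video from a numpy array (requires `ffmpeg` and `moviepy`); the random frames are placeholders and the array shape is `(time, channels, height, width)`:

```python
import numpy as np
import wandb

with wandb.init(project="video-demo") as run:
    # 16 frames of random 3-channel 64x64 images.
    frames = np.random.randint(0, 255, size=(16, 3, 64, 64), dtype=np.uint8)
    run.log({"noise_video": wandb.Video(frames, fps=4, format="gif")})
```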
On the W&B Run and Project Pages, you will see your videos in the Media section.
For more usage information, see video-file.
Text
Use `wandb.Table` to log text in tables to show up in the UI. By default, the column headers are `["Input", "Output", "Expected"]`. To ensure optimal UI performance, the default maximum number of rows is set to 10,000. However, users can explicitly override the maximum with `wandb.Table.MAX_ROWS = {DESIRED_MAX}`.
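A minimal sketch of logging text rows with the default-style columns; the example rows are placeholders:

```python
import wandb

with wandb.init(project="text-demo") as run:
    table = wandb.Table(columns=["Input", "Output", "Expected"])
    table.add_data("The movie was great", "positive", "positive")
    table.add_data("The movie was dull", "positive", "negative")

    run.log({"text_samples": table})
```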
You can also pass a pandas `DataFrame` object.
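A minimal sketch of building the same table from a pandas DataFrame; the data is a placeholder:

```python
import pandas as pd
import wandb

with wandb.init(project="text-demo") as run:
    df = pd.DataFrame(
        {
            "Input": ["The movie was great", "The movie was dull"],
            "Output": ["positive", "positive"],
            "Expected": ["positive", "negative"],
        }
    )
    run.log({"text_samples_df": wandb.Table(dataframe=df)})
```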
HTML
By default, W&B injects default styles into logged HTML. You can turn off the default styles by passing `inject=False`.
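A minimal sketch of logging HTML from a string, with and without the injected default styles; the markup is a placeholder:

```python
import wandb

with wandb.init(project="html-demo") as run:
    # Default styles are injected unless inject=False is passed.
    run.log({"styled_html": wandb.Html("<h3>Run summary</h3><p>All good.</p>")})
    run.log({"raw_html": wandb.Html("<h3>Run summary</h3>", inject=False)})
```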
For more usage information, see html-file.