
Try in Colab
Why should I use W&B?

- Unified dashboard: Central repository for all your model metrics and predictions
- Lightweight: No code changes required to integrate with Hugging Face
- Accessible: Free for individuals and academic teams
- Secure: All projects are private by default
- Trusted: Used by machine learning teams at OpenAI, Toyota, Lyft and more
Install, import, and log in
Install the Hugging Face and W&B libraries, and download the GLUE dataset and training script for this tutorial.

- Hugging Face Transformers: Natural language models and datasets
- W&B: Experiment tracking and visualization
- GLUE dataset: A language understanding benchmark dataset
- GLUE script: Model training script for sequence classification
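As a sketch, the setup above usually amounts to a couple of shell commands. The package names are the standard ones; the script URL reflects the current layout of the `transformers` repository and may change, so treat it as illustrative.

```shell
# Install the libraries used in this tutorial.
pip install transformers datasets wandb

# Fetch the GLUE fine-tuning script (path within the transformers repo may change).
wget https://raw.githubusercontent.com/huggingface/transformers/main/examples/pytorch/text-classification/run_glue.py
```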
Put in your API key
Once you’ve signed up, run the next cell and click on the link to get your API key and authenticate this notebook.

Train the model
Next, call the downloaded training script run_glue.py and watch training get tracked automatically in the W&B dashboard. This script fine-tunes BERT on the Microsoft Research Paraphrase Corpus: pairs of sentences with human annotations indicating whether they are semantically equivalent.

Visualize results in dashboard
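For reference, the training step above is typically launched with a command along these lines. The hyperparameters and output path here are illustrative, not taken from the original tutorial; check the script's `--help` output for the full set of flags.

```shell
# Fine-tune BERT on MRPC; W&B picks up the run via the Transformers integration.
python run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name MRPC \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir ./mrpc-output
```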
Click the link printed out above, or go to wandb.ai, to see your results stream in live. The link to view your run in the browser appears once all the dependencies are loaded; look for the following output: “wandb: View run at [URL to your unique run]”

Visualize Model Performance

It’s easy to look across dozens of experiments, zoom in on interesting findings, and visualize high-dimensional data.

Track key information effortlessly by default
W&B saves a new run for each experiment. Here’s the information that gets saved by default:

- Hyperparameters: Settings for your model are saved in Config
- Model Metrics: Time series data of metrics streaming in are saved in Log
- Terminal Logs: Command line outputs are saved and available in a tab
- System Metrics: GPU and CPU utilization, memory, temperature, etc.