How can I watch layer activations?

It looks like wandb.watch only logs gradients and weights? I'm coming from TensorFlow and TensorBoard, where this was well integrated; now I'm on PyTorch + W&B.

Hi @pcoady00 ,

Thank you for reaching out. Let me go ahead and double-check this for you, and I will get back to you with any updates.

Hi @pcoady00,

Weights & Biases can log more than just gradients and weights. With wandb.watch you can log gradients of the weights as histograms in the UI, but W&B also lets you log a variety of other data types. Here are some examples (with a short sketch after the list):

  • Datasets: You can log images or other dataset samples.
  • Plots: You can use wandb.plot with wandb.log to track charts.
  • Tables: You can use wandb.Table to log data to visualize and query with W&B.
  • Configuration information: You can log hyperparameters, a link to your dataset, or the name of the architecture you’re using as config parameters, passed in like this: wandb.init(config=your_config_dictionary).
  • Metrics: You can use wandb.log to see metrics from your model. If you log metrics like accuracy and loss from inside your training loop, you’ll get live updating graphs in the UI.
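
For instance, here is a minimal sketch covering those log types; the project name, config values, and sample data below are made up for illustration:

import numpy as np
import wandb

# Configuration information: hyperparameters and other metadata go in config
run = wandb.init(project="my-project", config={"lr": 1e-3, "architecture": "CNN"})

# Datasets: log sample images (a random array stands in for real data here)
run.log({"examples": [wandb.Image(np.random.rand(28, 28), caption="sample")]})

# Tables: tabular data you can visualize and query in the UI
table = wandb.Table(columns=["id", "prediction", "label"])
table.add_data(0, 0.91, 1)
table.add_data(1, 0.07, 0)
run.log({"predictions": table})

# Plots: chart helpers built on top of a Table
run.log({"pred_vs_label": wandb.plot.scatter(table, "prediction", "label", title="Predictions vs. labels")})

# Metrics: scalars logged over time become live-updating charts
run.log({"accuracy": 0.87, "loss": 0.42})

run.finish()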

Here’s an example of how to log gradients and weights in PyTorch:

import torch.nn.functional as F
import wandb

# Initialize a new run
wandb.init(project="my-project")

# Define and set up your model (plus optimizer, data loader, etc.)
model = ...
wandb.watch(model, log="all")

# Train your model
for batch_idx, (data, target) in enumerate(train_loader):
    optimizer.zero_grad()
    output = model(data)
    loss = F.nll_loss(output, target)
    loss.backward()
    optimizer.step()
    if batch_idx % args.log_interval == 0:
        wandb.log({"loss": loss.item()})

In this example, wandb.watch(model, log="all") logs both the gradients and the weights of the model. The log parameter can be set to "gradients" (gradients of the model parameters), "parameters" (the model parameters, i.e. the weights), "all" (both), or False (log neither).
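
As an illustration of those options (a sketch; pick one call per model — log_freq is an additional wandb.watch argument that controls how often histograms are logged, in training steps):

wandb.watch(model, log="gradients", log_freq=100)   # gradient histograms only
wandb.watch(model, log="parameters", log_freq=100)  # weight (parameter) histograms only
wandb.watch(model, log="all", log_freq=100)         # both gradients and weights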

Hi @pcoady00 ,

I just want to follow up to see if this helped and whether you still need more assistance.

Regards,
Carlo Argel

Hi @pcoady00 , since we have not heard back from you, we are going to close this request. If you would like to re-open the conversation, please let us know!

I would also like to know how to watch layer activations.

Hi @carlo-catimbang - your code doesn’t track activations. But I was able to figure it out.

You need to use a forward hook in Lightning, run a batch through, and then disable the hook. Here are the key code snippets.

# add to the __init__ method of your LightningModule:
# give every submodule a name attribute so the hook can label its histogram
for name, module in self.named_modules():
    module.name = name

# forward hook callback - a static method on my LightningModule
@staticmethod
def tb_hook(writer, step):
    """TensorBoard activation histogram hook."""
    def hook(module, input, output):
        writer.add_histogram('act/' + str(module.name), output, global_step=step)

    return hook

And, finally, …

# add to the training_step method in your LightningModule
if self.global_step % 250 == 0:
    with torch.nn.modules.module.register_module_forward_hook(
            LitCNN.tb_hook(tb_logger, self.global_step)):
        self(x)  # run one forward pass; the global hook is removed when the with-block exits
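
For context, here is a sketch of how those snippets might fit together in one LightningModule. The placeholder model, the loss, and the self.logger.experiment access are my assumptions; LitCNN, tb_hook, and the 250-step interval come from the snippets above:

import torch
import torch.nn.functional as F
import pytorch_lightning as pl


class LitCNN(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # placeholder model; substitute your own layers
        self.net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
        # give every submodule a name attribute so the hook can label its histogram
        for name, module in self.named_modules():
            module.name = name

    @staticmethod
    def tb_hook(writer, step):
        """TensorBoard activation histogram hook."""
        def hook(module, input, output):
            writer.add_histogram('act/' + str(module.name), output, global_step=step)
        return hook

    def forward(self, x):
        return self.net(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        if self.global_step % 250 == 0:
            writer = self.logger.experiment  # assumes a TensorBoardLogger is attached
            with torch.nn.modules.module.register_module_forward_hook(
                    LitCNN.tb_hook(writer, self.global_step)):
                self(x)  # extra forward pass just to record activation histograms
        return F.cross_entropy(self(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)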

Edit: Oh, the above is for TensorBoard, but basically the same code works for W&B. Here is the hook code for W&B …

def wandb_hook(run, step):
    """Weights & Biases activation histogram hook."""
    def hook(module, input, output):
        # move to CPU before building the histogram so this also works on GPU
        run.log({module.name: wandb.Histogram(output.detach().cpu())}, step=step)

    return hook
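
And a sketch of wiring this W&B hook into training_step, mirroring the TensorBoard snippet above (wandb.run is the active run returned by wandb.init; the 250-step interval and the extra forward pass are carried over from that snippet):

# add to the training_step method in your LightningModule
if self.global_step % 250 == 0:
    with torch.nn.modules.module.register_module_forward_hook(
            wandb_hook(wandb.run, self.global_step)):
        self(x)  # the global forward hook is removed when the with-block exits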

Hi @cstein06 - I finally got around to putting an example up on GitHub. It logs activations, gradients, parameters, and metrics to both TensorBoard and W&B:
