How to create a copy of wandb plots online as well as offline

Hi Everyone,

How can I create a copy of a run's wandb logs? For example, I trained a model for 100 epochs and now I would like to try several different configurations on top of it. wandb should show me the previous logs and the new logs on the same figure.
Suppose I have “base_experiment” and, on top of it, I want to run 4 different experiments. So I will have the following 5 experiments:

  1. base_experiment
  2. base_experiment_followed_by_configuration_1
  3. base_experiment_followed_by_configuration_2
  4. base_experiment_followed_by_configuration_3
  5. base_experiment_followed_by_configuration_4

How can I create a copy of the base experiment and use it for the other 4 experiments?

Hi Vishal, when you say wandb logs, are you referring to the console output logs in this case?

I am talking about the wandb logs that get pushed to the online wandb API. Suppose I ran “experiment_A” and now I need to run “experiment_B”, where “experiment_B” starts from where “experiment_A” left off. One way to do this is to resume that training, but I want to have two separate experiments: the original A, and a second one, A+B.

Hi Vishal, apologies that it took so long to get back to you; I had to check a few things internally. As a follow-up question: I am assuming that when you talk about different experiments, the main difference between them will be the datasets you are training/testing the models on?

Hi there, I wanted to follow up on this request. Please let us know if we can be of further assistance or if your issue has been resolved.

Hey @artsiom,

Thanks for following up with me. Just wanted to let you know that I’ll be using the same dataset for my project. Basically, I’m working on a problem of continual learning. The idea is to train the model on a set of “base classes” and then, at a later stage, train it on a set of “other classes” using different training configurations. This could be as simple as adjusting the learning rate or using knowledge distillation methods.

The training process for the base classes takes around 3 days, so I’m planning to save the logs for that run and then use them to try out different hyperparameters for the other classes. That way, I can start the wandb_logs directly from the base class logs for multiple runs.

I hope this is clear. I have added a sample plot as well; please see below.

Thanks!

Gotcha!

Thank you so much for the graph and the explanation. Is there a specific Python library you are using for your training?

Hi Vishal,

We wanted to follow up with you regarding your support request as we have not heard back from you. Is there a specific Python library you are using for your training?

Hi @artsiom, thank you for your reply. I am using PyTorch Lightning for my experiments.

Perfect! It's not a perfect workaround, and it's pretty hacky, but I think what you could do is run the first part of your run in offline mode. Then sync it to wandb 4 different times using `wandb sync --id 12345678 path/to/dir`, but with 4 different ids, which should turn it into 4 different runs in the UI. Then you can resume those runs with the configs you would like.
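As a concrete sketch of that workaround (the run ids, script name, and offline run directory below are placeholders, not taken from an actual project):

```shell
# 1. Train the base experiment in offline mode so nothing is pushed to wandb yet.
export WANDB_MODE=offline
python train_base.py          # placeholder for your 3-day base-class training script

# 2. Sync the same offline run directory several times, each with a distinct
#    run id (placeholder ids shown), producing separate runs in the UI.
wandb sync --id base_cfg_1 ./wandb/offline-run-xxxx
wandb sync --id base_cfg_2 ./wandb/offline-run-xxxx
wandb sync --id base_cfg_3 ./wandb/offline-run-xxxx
wandb sync --id base_cfg_4 ./wandb/offline-run-xxxx

# 3. Resume each synced run from its follow-up script, e.g. by calling
#    wandb.init(id="base_cfg_1", resume="must") with the new configuration.
```

Each synced copy then carries the base-experiment history, so the follow-up runs should continue the curves from where the base run left off.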

We do have a known bug that is being worked on: although logging all of the metrics works with this method, the console logs do get overwritten when you resume a run in that way.
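Since you mentioned PyTorch Lightning, the resume step could look roughly like this. This is only a sketch; the project name, run id, epoch count, and the model/datamodule are placeholder assumptions:

```python
import pytorch_lightning as pl
from pytorch_lightning.loggers import WandbLogger

# Attach the logger to one of the synced run copies. The `id` should match an
# id used with `wandb sync --id`; `resume="must"` is forwarded to wandb.init()
# and fails loudly if that run does not already exist.
logger = WandbLogger(
    project="continual-learning",  # placeholder project name
    id="base_cfg_1",               # placeholder run id
    resume="must",
)

trainer = pl.Trainer(logger=logger, max_epochs=50)
# trainer.fit(model_with_config_1, datamodule)  # placeholder model/datamodule
```

Repeating this with each of the other ids gives you the four follow-up experiments, all sharing the base-experiment curves.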

Hey Vishal,

Following up on this thread to see if the suggestion has helped you out at all?

Hi Vishal, since we have not heard back from you we are going to close this request. If you would like to re-open the conversation, please let us know!

If you would like to reopen the conversation, please go ahead and open a new thread and link this one in there.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.