Hello! I’m currently investigating SimCLR, which consists of a pretraining step and a fine-tuning/linear evaluation step. I can log the pretraining loss and linear-eval accuracy metrics to the same W&B run by running eval after every pretraining epoch, but then the pretraining script has to wait for eval to finish before starting the next epoch. Is there any way to run linear eval after pretraining is done (e.g., in a separate eval.py script) and log the results to the same run_id?
Hi @goh,
You can resume a run from another script using the resume argument in wandb.init. Here is a link to our docs describing how to do this.
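As a rough illustration, here is a minimal sketch of that pattern, assuming a project called "simclr"; the metric names and placeholder values are just for illustration, and you would substitute the actual run ID printed by the pretraining script:

```python
# pretrain.py -- log pretraining loss and note the run ID (hypothetical sketch)
import wandb

run = wandb.init(project="simclr")
print("run id:", run.id)  # save this ID so eval.py can resume the same run
for epoch in range(100):
    loss = 1.0 / (epoch + 1)  # placeholder for your SimCLR pretraining step
    wandb.log({"pretrain/loss": loss, "epoch": epoch})
wandb.finish()
```

```python
# eval.py -- reopen the same run later and add the linear-eval metric (hypothetical sketch)
import wandb

run = wandb.init(project="simclr", id="YOUR_RUN_ID", resume="must")
accuracy = 0.0  # placeholder for your linear evaluation
run.log({"linear_eval/accuracy": accuracy})
wandb.finish()
```

With resume="must", the second script attaches to the existing run rather than creating a new one, so the eval metrics show up alongside the pretraining loss in the same run.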
Does this work for your use case?
Thanks,
Ramit
Hi Edwin,
We wanted to follow up with you regarding your support request as we have not heard back from you. Please let us know if we can be of further assistance or if your issue has been resolved.
Best,
Weights & Biases
Hi Edwin, since we have not heard back from you we are going to close this request. If you would like to re-open the conversation, please let us know!