I am having trouble resuming my wandb logging from a specific step.
- I trained my network for 200K iterations and logged everything up to that point to wandb, but without explicitly passing the current step to the logging calls.
- I then found a misconfiguration in my scheduler.
- So I resumed training from the 150K checkpoint.
In this case, since I never passed an explicit step to `wandb_run.log()`, wandb's internal step counter continues from 200K while the network is actually training at iteration 150K. As a result, the steps in the logs and the steps of the actual training are inconsistent.
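To make the mismatch concrete, here is a toy model of the implicit step counter (an illustration only, not wandb's real internals): each `log()` call without an explicit `step` records at the run's internal counter and advances it by one, and that counter is persisted across resumes.

```python
# Toy model of wandb's implicit step counter (illustration only,
# not the actual wandb implementation).
class FakeRun:
    def __init__(self, start_step=0):
        self._step = start_step  # persisted across resumes, like a wandb run

    def log(self, metrics, step=None):
        """Record metrics; without an explicit step, use the internal counter."""
        if step is None:
            step = self._step
        self._step = step + 1
        return step  # the step the metrics were actually recorded at

# First session: train and log for 5 iterations (standing in for 200K).
run = FakeRun()
for it in range(5):
    run.log({"loss": 1.0 / (it + 1)})

# Resume training from iteration 3 (standing in for 150K). The run's
# counter is already at 5, so the next implicit log lands at step 5,
# not at the trainer's current iteration 3.
resumed = FakeRun(start_step=run._step)
logged_at = resumed.log({"loss": 0.2})
print(logged_at)  # → 5
```

The counter never rewinds on its own, which is why the logs keep running ahead of the checkpointed iteration.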
[What I want to do]
I want to manually delete the logs from 150K to 200K, so that logging resumes at the appropriate step without my having to pass the current step to `wandb_run.log()` explicitly.
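For context on what I have tried to find: recent wandb versions (0.17.1 and later) ship a beta "rewind" feature, where passing `resume_from="<run_id>?_step=<step>"` to `wandb.init()` is supposed to resume the run with its history truncated after that step. A minimal sketch, assuming that feature; `rewind_run` is a hypothetical helper name of mine, and I have not been able to verify this fits my case:

```python
def rewind_run(run_id: str, step: int, project: str):
    """Hypothetical helper: resume `run_id` with history rewound to `step`.

    Relies on wandb's beta rewind feature (wandb >= 0.17.1): the
    resume_from argument takes the form "{run_id}?_step={step}", and
    subsequent log() calls without an explicit step should continue
    from `step` instead of the old counter.
    """
    import wandb  # imported inside the helper so the sketch stays self-contained

    return wandb.init(project=project, resume_from=f"{run_id}?_step={step}")

# e.g. rewind_run("abc123", 150_000, "my-project")  # requires a wandb login
```

If rewind is not applicable here, is there another supported way to delete a step range from a run's history?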