SOLVED: Runs are not logged separately

Hello, I’m trying to run several studies and have each one log to a separate run, with all of them grouped under the name “Scenario Eval” in the project “Scenario”. I have one Python script running a for-loop with multiple calls of the following lines:

for STUDY_NAME, study in ...:
    wandb_logger = WandbLogger(
        project="Scenario",
        name=STUDY_NAME + '_EVAL_' + str(study.best_params['lr']),
        group='Scenario Eval',
        log_model="all",
        reinit=True,
    )
    trainer = pl.Trainer(
        stochastic_weight_avg=False,
        logger=wandb_logger,
        checkpoint_callback=False,
        log_every_n_steps=10,
        default_root_dir=cache_path,
        max_epochs=N_EPOCHS_EVAL,
        gpus=1 if torch.cuda.is_available() else None,
    )
    wandb_logger.watch(model)
    trainer.fit(model, datamodule=datamodule)
    trainer.logger.log_hyperparams(hyperparameters)

In W&B everything shows up as a single run, even though a new logger is created on every iteration with an explicit reinit=True.


Do I need to do something else to separate these runs?


Fix mentioned here:

It seems to just be weird behaviour.


Thanks for updating this.
Did you solve your issue? Were you able to get different runs for your different studies?


Yup, I’m now getting a new run for every training, as expected.


This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.