Many "Currently logged in" messages

Hi,

I am currently using:

  • Visual Studio Code
  • Python 3.8.17
  • PyTorch 2.0.1
  • wandb 0.15.12

I am using a script similar to the Jupyter Notebook example Simple_PyTorch_Integration.ipynb; however, I am running my script directly as a Python file, not as a notebook. In my pipeline loop, I get a message every few iterations telling me that I am logged in. Could someone explain why this happens, and is there a way to turn it off?

e.g.

wandb: Currently logged in as: jobdvogel. Use `wandb login --relogin` to force relogin
  0%|                                                                                                                                                                                                      | 0/5 [00:00<?, ?it/s]Start training epoch 0 with 10 batches...
wandb: Currently logged in as: jobdvogel. Use `wandb login --relogin` to force relogin
wandb: Currently logged in as: jobdvogel. Use `wandb login --relogin` to force relogin
wandb: Currently logged in as: jobdvogel. Use `wandb login --relogin` to force relogin
wandb: Currently logged in as: jobdvogel. Use `wandb login --relogin` to force relogin
[Epoch 0: it00000/10] train loss: 0.322444
wandb: Currently logged in as: jobdvogel. Use `wandb login --relogin` to force relogin
wandb: Currently logged in as: jobdvogel. Use `wandb login --relogin` to force relogin
wandb: Currently logged in as: jobdvogel. Use `wandb login --relogin` to force relogin
wandb: Currently logged in as: jobdvogel. Use `wandb login --relogin` to force relogin
[Epoch0: it00000/0] test loss: 0.126046
[Epoch 0: it00002/10] train loss: 0.483436
[Epoch 0: it00004/10] train loss: 0.306451
wandb: Currently logged in as: jobdvogel. Use `wandb login --relogin` to force relogin
wandb: Currently logged in as: jobdvogel. Use `wandb login --relogin` to force relogin
wandb: Currently logged in as: jobdvogel. Use `wandb login --relogin` to force relogin
wandb: Currently logged in as: jobdvogel. Use `wandb login --relogin` to force relogin
[Epoch0: it00005/0] test loss: 0.183672
[Epoch 0: it00006/10] train loss: 0.214047
[Epoch 0: it00008/10] train loss: 0.208517
 20%|██████████████████████████████████████                                                                                                                                                        | 1/5 [01:16<05:05, 76.50s/it]Start training epoch 1 with 10 batches...
wandb: Currently logged in as: jobdvogel. Use `wandb login --relogin` to force relogin
wandb: Currently logged in as: jobdvogel. Use `wandb login --relogin` to force relogin
wandb: Currently logged in as: jobdvogel. Use `wandb login --relogin` to force relogin
wandb: Currently logged in as: jobdvogel. Use `wandb login --relogin` to force relogin

Thank you for your time!

It seems to be caused by my train and test loaders being called. Since my loaders use 4 CPUs, this line is printed 4 times. The message seems to be emitted whenever a loader is invoked.
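
The same pattern can be reproduced without wandb. Here is a minimal sketch (hypothetical, not my actual pipeline) in which a module-level print stands in for the login message:

import torch
from torch.utils.data import DataLoader, TensorDataset

print('module-level code executed')  # stands in for the wandb login line

if __name__ == '__main__':
    dataset = TensorDataset(torch.arange(10.0))
    loader = DataLoader(dataset, batch_size=2, num_workers=4)
    for _ in loader:
        pass
    # On Windows and macOS (spawn start method), the print above appears
    # once in the main process and once per spawned worker.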

Hi @jobdvogel, happy to help. Could you please provide the script you are executing, or a simple toy example, to help us reproduce this? Thanks!

Hi @jobdvogel,

We would like to follow up on our previous request: could you please share a toy example of the code you are using so we can reproduce the issue?

Thank you.

Hi, I will try to make one in the upcoming days and share it with you ASAP!

After making a toy example, I found the solution to this problem. Similarly to how the example Colab was built, I had placed the wandb.login() statement outside an if __name__ == '__main__' guard. Since my dataloader uses multiple CPU workers, I think wandb.login() is executed again each time the loader spawns its worker processes: every spawned worker re-imports the main script, re-running any module-level code.
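
A minimal sketch of the problematic layout (pipeline() stands in for the actual training code):

import wandb

wandb.login()  # module level: re-executed by every spawned DataLoader worker

def pipeline():
    ...  # training loop that creates the multi-worker DataLoader

if __name__ == '__main__':
    pipeline()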

When I put it inside the guard like this, the problem was resolved:

if __name__ == '__main__':
    wandb.login()
    pipeline()

For reference, my dataloader:

dataloader = torch.utils.data.DataLoader(
    dataset,
    batch_size=batch_size,
    shuffle=True,
    num_workers=4,
    pin_memory=True,
)
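
Putting it together, a toy version of the fixed script might look like this (the project name and pipeline body are illustrative, not my real code):

import torch
import wandb

def pipeline():
    dataset = torch.utils.data.TensorDataset(torch.randn(64, 3))
    dataloader = torch.utils.data.DataLoader(
        dataset,
        batch_size=8,
        shuffle=True,
        num_workers=4,
        pin_memory=True,
    )
    run = wandb.init(project='toy-example')
    for (batch,) in dataloader:
        run.log({'batch_mean': batch.mean().item()})
    run.finish()

if __name__ == '__main__':
    wandb.login()  # now runs only in the main process
    pipeline()

With the login inside the guard, the spawned workers still re-import the module, but wandb.login() is no longer executed in them, so the message is printed only once.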

Thank you for your time!
