Are PyTorch Lightning and wandb.init() compatible?

My question is whether it's possible to call wandb.init() before using WandbLogger() with a PyTorch Lightning trainer.
So like this:

import lightning.pytorch as pl
from lightning.pytorch.loggers import WandbLogger
import wandb

run = wandb.init(...)
logger = WandbLogger(...)
trainer = pl.Trainer(logger=logger)

The reason I would like to do this is that wandb.init() returns a run object on which run.log_code() can be called. I don't think that's possible with the logger returned by WandbLogger(), is it?

I suspect that this combination isn't working, because when I run my code I get an error message saying that a session is already started (wandb.init() vs. WandbLogger(), I would guess). On the other hand, in this official best-practice guide about model checkpoints, the two are combined like this:

run = wandb.init(config={"model_name": "resnet",
                         "batch_size": 16})
# Logs all checkpoints.
wandb_logger = WandbLogger(log_model='all', checkpoint_name=f'nature-{}')
checkpoint_callback = ModelCheckpoint(every_n_epochs=1)
# Access hyperparameters downstream to instantiate models/datasets
model = NatureLitModule(model_name=wandb.config['model_name'])
trainer = Trainer(logger=wandb_logger,  # W&B integration
                  log_every_n_steps=5)
trainer.fit(model, datamodule=nature_module)

So my question is whether this combination is possible, or whether it breaks the logging?

Hi @veltin-gieseke! Thank you for writing in. In theory this should work.
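As a side note: if the main goal is calling run.log_code(), a separate wandb.init() may not be needed at all. WandbLogger exposes the underlying wandb run through its .experiment attribute, so the run's methods are reachable from the logger itself. A minimal sketch (the project name here is illustrative):

import lightning.pytorch as pl
from lightning.pytorch.loggers import WandbLogger

# WandbLogger creates (or attaches to) a wandb run lazily;
# .experiment returns that underlying run object.
logger = WandbLogger(project="nature")   # project name is an example
logger.experiment.log_code(".")          # log source files in the current directory

trainer = pl.Trainer(logger=logger)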

Could you send me your full error stack trace, as well as a minimal reproduction of your code (unless it's exactly the same as the example you've sent above)?

Hi there, I wanted to follow up on this request. Please let us know if we can be of further assistance or if your issue has been resolved.

Hi Veltin, since we have not heard back from you we are going to close this request. If you would like to re-open the conversation, please let us know!
