I am using PyTorch Lightning, but it is not logging my metrics to wandb.

This is my code:

```
def training_step(self, batch, batch_idx):
    x, y = batch['video'], batch['label'].squeeze(1).to(torch.int64)
    logits = self(x)
    loss = self.loss_fn(logits, y)
    acc = self.metric(logits, y)
    f1 = self.f1(logits, y)
    wandb.log({"train_step_loss": loss})
    self.training_step_outputs.append({"loss": loss.detach().cpu().numpy(),
                                       "acc": acc.detach().cpu().numpy(),
                                       "f1": f1.detach().cpu().numpy()})
    return loss

def on_train_epoch_end(self):
    avg_loss = np.mean([x['loss'] for x in self.training_step_outputs])
    avg_acc = np.mean([x['acc'] for x in self.training_step_outputs])
    avg_f1 = np.mean([x['f1'] for x in self.training_step_outputs])
    wandb.log({"train/loss": avg_loss, "train/acc": avg_acc, "train/f1": avg_f1})
    self.training_step_outputs.clear()
```

This is how I initialized the logger:

```
import wandb
from pytorch_lightning.loggers import WandbLogger

wandb.login()
wandb_logger = WandbLogger(project="video")
```

Here is the trainer:

```
trainer = Trainer(max_epochs=30,
                  accelerator='gpu',
                  devices=-1,
                  precision=16,
                  accumulate_grad_batches=4,
                  enable_progress_bar=True,
                  num_sanity_val_steps=2,
                  limit_train_batches=15,
                  log_every_n_steps=1,
                  limit_val_batches=5,
                  logger=wandb_logger)
```

I have even tried `self.log()`, but that also did not work.
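For reference, this is roughly how I called `self.log()` inside `training_step` (a sketch of my attempt; the metric names just mirror the ones I used above):

```python
def training_step(self, batch, batch_idx):
    x, y = batch['video'], batch['label'].squeeze(1).to(torch.int64)
    logits = self(x)
    loss = self.loss_fn(logits, y)
    # self.log should route through the Trainer's attached WandbLogger;
    # on_step/on_epoch control whether values are written per step,
    # aggregated per epoch, or both
    self.log("train_step_loss", loss, on_step=True, on_epoch=False)
    self.log("train/acc", self.metric(logits, y), on_step=False, on_epoch=True)
    return loss
```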

This is what it looks like: