Ah, I think I found the magic combination that makes my training script work. For posterity, in case anyone else hits the same issue and stumbles on this post: this is a raw PyTorch trainer.
(e.g. log_folder = "logs/projectname20230325_124523", which contains the events.out.tfevents... file)
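As a side note, a timestamped folder name like the one above can be built with the standard library; this is just a sketch, and the "logs/" prefix plus project-name layout is an assumption based on the example path:

```python
from datetime import datetime

project_name = "projectname"  # hypothetical; would come from args.project_name
# e.g. "logs/projectname20230325_124523"
log_folder = f"logs/{project_name}{datetime.now().strftime('%Y%m%d_%H%M%S')}"
print(log_folder)
```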
import wandb
from torch.utils.tensorboard import SummaryWriter

# Patch TensorBoard *before* wandb.init and before creating the writer
wandb.tensorboard.patch(root_logdir=log_folder, pytorch=False, tensorboard_x=False, save=False)
wandb_run = wandb.init(
    project=args.project_name,
    config={"main_cfg": vars(args), "optimizer_cfg": optimizer_config},
    name=args.run_name,
)
log_writer = SummaryWriter(log_dir=log_folder, ...)
log_writer.add_scalar(...)
Versions: tensorboard 2.12.0, wandb 0.14.0