Hi there! I was wondering: how do I handle logging multiple variables when one of them should only be logged every 100 timesteps? The wandb docs seem to suggest that I need to collect all my metrics into a single log call, but in my scenario, where I want to track one variable every step and another variable only every 100 steps, I would need multiple log calls. I saw the docs for the define_metric function, but I'm not sure whether that's the right way to handle this. How do I approach this in PyTorch? Thanks!
As an example, here is the TensorBoard logging I'm currently trying to convert to wandb:
print(f"global_step={global_step}, episodic_return={info['episode']['r']}")
writer.add_scalar("charts/episodic_return", info["episode"]["r"], global_step)
if global_step % 100 == 0:
    writer.add_scalar("losses/qf1_values", qf1_a_values.mean().item(), global_step)