Wandb.log inconsistent behavior with step parameter

Why do these two code snippets produce different results?

Snippet 1 (separate loops):

for i in range(100):
    wandb.log({"train/loss": i}, step=i)

for i in range(100):
    wandb.log({"val/loss": i**2}, step=i)

Snippet 2 (single loop):

for i in range(100):
    wandb.log({"train/loss": i}, step=i)
    wandb.log({"val/loss": i**2}, step=i)

Hey @dminn, thanks for flagging this. I’ll check internally on what’s causing this issue and get back to you.

As far as I understand the step parameter, once the step is incremented, the values already written to it become immutable. So if, chronologically, you execute:


wandb.log({'potato': 1}, step=0)
wandb.log({'tomato': 1}, step=0)
wandb.log({'potato': 2}, step=1)
wandb.log({'tomato': 2}, step=1)

it’ll work fine, but if instead you execute:


wandb.log({'potato': 1}, step=0)
wandb.log({'potato': 2}, step=1)
wandb.log({'tomato': 1}, step=0)
wandb.log({'tomato': 2}, step=1)

the third call (tomato = 1 at step 0) will be silently dropped, since the logger has already moved past step 0.
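To make this concrete, here is a toy model of the behavior described above (an illustration only, not wandb's actual implementation): values are buffered per step, the buffer is flushed to history whenever the step advances, and any log call targeting an already-flushed step is dropped.

```python
# Toy model of step-based logging (illustration only, NOT wandb's real
# implementation): values are buffered for the current step, flushed to
# history when the step advances, and dropped if they target a past step.
class ToyLogger:
    def __init__(self):
        self.history = []       # flushed rows, one dict per step
        self.current_step = 0
        self.buffer = {}

    def log(self, data, step):
        if step < self.current_step:
            return              # step already flushed: data silently dropped
        if step > self.current_step:
            self.history.append({"step": self.current_step, **self.buffer})
            self.buffer = {}
            self.current_step = step
        self.buffer.update(data)

    def finish(self):
        self.history.append({"step": self.current_step, **self.buffer})

logger = ToyLogger()
logger.log({"potato": 1}, step=0)
logger.log({"potato": 2}, step=1)
logger.log({"tomato": 1}, step=0)   # dropped: step 0 was already flushed
logger.log({"tomato": 2}, step=1)
logger.finish()
print(logger.history)
# → [{'step': 0, 'potato': 1}, {'step': 1, 'potato': 2, 'tomato': 2}]
```

Note how the row for step 0 contains only "potato"; the out-of-order "tomato" value for step 0 is lost, matching the behavior in the second ordering above.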


Thanks, that makes sense. I like to keep my training and validation steps separate. Are there any solutions or plans to accommodate metrics logged at different steps, aside from storing the step as a regular key in the logged dictionary?

e.g.

for i in range(100):
    wandb.log({"potato": i, "step": i})
    
for i in range(100):
    wandb.log({"tomato": i**2, "step": i})

Hey @dminn, as @geraltofrivia783 correctly pointed out, as soon as you call wandb.log() with a different value for step than the previous one, W&B writes all the collected keys and values to the history and starts collecting again. Therefore, wandb.log doesn’t let you write to any history step you’d like, only the “current” one and the “next” one.
As a potential workaround, you can make use of define_metric, which should meet your needs for this use case.

For example, you can update your script to:

# define your custom x axis metric
wandb.define_metric("custom_step")

# define which metrics will be plotted against it
wandb.define_metric("potato", step_metric="custom_step")
wandb.define_metric("tomato", step_metric="custom_step")

...

for i in range(100):
  log_dict = {
      "custom_step": i,
      "potato": i,
  }
  wandb.log(log_dict)

...

for i in range(100):
  log_dict = {
      "custom_step": i,
      "tomato": i**2,
  }
  wandb.log(log_dict)
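The reason this works (a sketch of the idea, not of wandb internals): when you omit step=, each wandb.log call writes its own history row at an auto-incremented internal step, and "custom_step" travels along as an ordinary logged value, so the two loops never compete for the same step and the UI can plot each metric against custom_step instead.

```python
# Sketch of why the custom-step approach avoids collisions (toy model,
# not wandb internals): with no explicit step=, every log call gets its
# own auto-incremented internal step, and "custom_step" is just another
# logged value that charts can later use as the x axis.
history = []
internal_step = 0

def toy_log(data):
    """Append one history row per call, tagged with an auto step."""
    global internal_step
    history.append({"_step": internal_step, **data})
    internal_step += 1

for i in range(3):
    toy_log({"custom_step": i, "potato": i})
for i in range(3):
    toy_log({"custom_step": i, "tomato": i ** 2})

# Six rows total; the "tomato" rows reuse custom_step values 0..2
# without clobbering the earlier "potato" rows.
print(history)
```

Because every call lands in its own row, repeating custom_step values across the two loops is harmless, unlike repeating an explicit step= value.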

Hope this helps.

Hi @dminn , I wanted to follow up on this request. Please let us know if we can be of further assistance or if your issue has been resolved.

Hi @dminn , since we have not heard back from you we are going to close this request. If you would like to re-open the conversation, please let us know!
