New version of existing run not showing updated charts

Hello there!

I’m new to WandB, so this may be a beginner’s issue; I’ll try to describe the problem as accurately as possible.

I’m re-running an experiment with the same run_id in online mode, and I’m experiencing an update issue described as follows:

  • charts on the WandB platform are not updated with the latest values, which makes me think the wandb file wasn’t synced.
  • logs, metadata, and summary were updated with the latest values.

I tried the following workarounds, with no results:

  • wandb sync folder_path
  • wandb sync --append folder_path

Some useful information about the environment and changes:

  • Currently using WandB version 0.13.11 with Python 3.11.0.
  • The main changes relative to the previous run are an increase in the number of logged metrics (from 7 to 31) and the disabling of system-metrics logging.

Thank you all for your time and help!

Hello, this is a reply from our support bot designed to assist you with your Weights & Biases related queries. To reach a human please reply to this message.

It sounds like you’re encountering an issue where charts on the Weights & Biases platform are not updating with the latest values for a run, despite logs, metadata, and summary being updated. This can be particularly frustrating, especially when you’re making significant changes to your experiment setup, such as increasing the number of logged metrics. Here are a few steps and considerations that might help resolve the issue:

  1. Ensure Proper Initialization: Make sure that each run is properly initialized with wandb.init() and that you’re specifying the id parameter with the resume='allow' option if you’re intending to update an existing run. This tells W&B to resume the run with the given ID instead of creating a new one.

    import wandb

    # Resume the existing run instead of creating a new one
    wandb.init(project="your_project_name", id="your_run_id", resume="allow")
  2. Check for Errors or Warnings: Sometimes, W&B might log warnings or errors that could give clues as to why the charts are not updating. Check your console output or W&B logs for any messages that might indicate issues with logging or syncing data.

  3. Verify Data Logging: Ensure that the metrics are being logged correctly in your script. When you increase the number of metrics, make sure each wandb.log() call includes the correct key-value pairs for the metrics you’re tracking.

    wandb.log({"metric_name": value})
  4. Syncing Issues: Since you’ve already tried using wandb sync without success, consider checking if there’s an issue with the specific version of the W&B client you’re using. Although you’re on version 0.13.11, it might be worth trying to update to the latest version of the W&B client, as updates often include bug fixes and improvements:

    pip install --upgrade wandb
  5. Disabling System Metrics: Disabling system metrics logging should not affect the updating of charts for your custom metrics. However, it’s good to double-check if any changes in the configuration might have inadvertently affected the logging of your metrics.

  6. Contact Support: If after trying these steps you’re still facing issues, it might be helpful to reach out to W&B support with details of your issue, or post on the W&B Community Forum for further assistance.

Remember, when logging a large number of metrics, it’s also a good practice to log them less frequently or aggregate them over steps if real-time updating is not critical. This can help with performance and ensure that your charts remain responsive.
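As a rough illustration of that batching idea, the sketch below (a hypothetical helper, not part of the wandb API; all names are made up) buffers metric values and returns an averaged dict every N steps, which the caller could then pass to `wandb.log()`:

```python
# Hypothetical sketch: aggregate metrics over N steps before logging,
# so 31 metrics don't each trigger a chart update on every step.
from collections import defaultdict


class MetricAggregator:
    """Buffers metric values and flushes an averaged dict every N steps."""

    def __init__(self, every_n_steps=10):
        self.every = every_n_steps
        self.buffer = defaultdict(list)
        self.step = 0

    def add(self, metrics):
        for key, value in metrics.items():
            self.buffer[key].append(value)
        self.step += 1
        if self.step % self.every == 0:
            averaged = {k: sum(vs) / len(vs) for k, vs in self.buffer.items()}
            self.buffer.clear()
            return averaged  # caller passes this to wandb.log(averaged)
        return None


agg = MetricAggregator(every_n_steps=5)
for i in range(10):
    out = agg.add({"loss": float(i)})
    if out is not None:
        print(out)  # in a real run: wandb.log(out)
```

This keeps each chart point an average over a window rather than a raw per-step value, which trades some resolution for fewer, cheaper logging calls.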

For more detailed information on using wandb.init() and wandb.log(), as well as troubleshooting tips, please refer to the Weights & Biases Documentation.


-WandBot :robot:

I missed the resume='allow'; I’ve just updated it in the project’s default configuration for future runs. Beyond that, is there a way to force the update of the existing runs?

Unfortunately, upgrading WandB is not an option due to conflicting dependencies with other packages in the project.

Metrics are being logged correctly; indeed, in new runs they show up as expected.