Last year I used WandB for my thesis project and was very happy with how it worked. Because there is talk of publishing this work, I wanted to look up the data and tracked experiments again, only to find that my dashboard is completely broken.
- Only one step is shown in the graphs instead of all steps during training, effectively rendering the graphs useless, since each is a single data point.
- Most of my projects included cross-fold validation. It seems all my cross-fold validation runs are completely gone from the dashboard; I can only see summaries and am unable to locate the individual folds anywhere.
- Code artifacts are not accessible in the dashboard.
My account storage is as full as the day I finished my thesis, but I cannot see any of that data in the dashboard. I can see that the code artifacts (as well as other custom image artifacts) are still in storage, but they are not accessible from the dashboard. I hope the same is true for all the logged data and cross-fold runs.
I am on a free account, but I was not aware that the data would get deleted at some point. Besides, it seems the data has not actually been deleted from my storage; only the visualization is completely broken. Is there a way to restore my dashboard to its former glory?
The link to the workspace is: Weights & Biases
Hi @spruijssers ,
Thank you for reaching out. I’ll be glad to assist you with this. Can you provide the following for me please?
- Link to the affected projects
- Highlight or describe the graphs that are missing
- What metrics were you logging, and do you remember against which x-axis parameter?
- You mentioned you were unable to locate cross-folds; were you referring to sweeps here? If yes, which project is affected?
You can also see specific usage statistics per project at https://wandb.ai/usage/spruijsser
Essentially, it concerns all my public projects on https://wandb.ai/spruijssers, i.e., all projects whose names start with “DRS” or “US”.
Essentially all graphs are missing or faulty. On the y-axis I used to have the validation/train loss, validation/train IoU, and other custom metrics such as validation/train rmseTop, plotted against every step/epoch of training on the x-axis. For classification projects I also logged confusion matrices. While these graphs are still there with the correct parameters on the x- and y-axes, only a single data point is shown instead of one data point per step/epoch of training.
When referring to cross-folds, I was not referring to sweeps. Cross-fold validation was set up by grouping the different folds with the “group” parameter in wandb.init(). I used to be able to see them per fold, along with the summary value for each group.
I also had sweeps (which are gone as well). These were created after all runs were logged, since I used a separate approach for hyperparameter tuning (KerasTuner with Bayesian optimization). The sweeps were subsequently created by simply selecting all runs in a project and adding them to a sweep in order to get the importance metrics.
Looking at the specific usage per project, the storage still appears to be filled, even though it is no longer displayed on the dashboard. However, I noticed that the hours-tracked statistic is ~20 minutes for every project, which is way off, as training for each project spanned several days.
I would really like to see this fixed, as it is disheartening to find my projects completely broken. I hope you can help me restore them.
With kind regards,
Hi @spruijssers , I’ll be assisting my colleague Carlo further on this. As I’ll be sharing information related to your projects, I will follow up with you via email to keep this info private and finalize this investigation.
This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.