Is it the “best” accuracy ever measured (during training) versus the accuracy of the “best” (validation) model? I understand that wandb does not care what metric I log, but what is the intended use?
For each metric logged, there is a summary metric that condenses the logged values into a single value per run. By default, W&B uses the latest logged value, but you can override it with wandb.summary['acc'] = best_acc or with the two define_metric calls you show.
This is then used to decide which value is displayed in plots that only use one value for each run (e.g. Scatter plots).
The two calls are functionally the same; one will appear as acc.best and the other as acc.max in your run's summary metrics. Both will hold the maximum value you log for acc, e.g. wandb.log({'acc': acc}), during a run.
You can see the summary metrics of each run by clicking the icon in the top left nav bar in a run.
Hi @_scott, thank you very much for the swift and in-depth response, which is always good as confirmation of one's understanding and assumptions. I am glad that the only difference is the name of the summary metric (.best vs .max). I worried that I had missed something.