Very Large Numbers in Logged Metrics in Runs Table

I’m running physics experiments and tracking them with WandB.

I’m tracking both the logarithm of a very large number (logmetric) and the very large number itself (metric).

The logarithm has a numerical value of logmetric=140, but when I track the exponentiated metric itself, metric=1.3e+65, the dashboard shows ‘Infinity’.
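For reference, this is roughly how I’m logging the two values (the project name is illustrative; the numbers are the ones quoted above):

import wandb

run = wandb.init(project="physics-experiments")  # illustrative project name
run.log({
    "logmetric": 140.0,   # logarithm of the quantity, renders fine in the runs table
    "metric": 1.3e+65,    # the quantity itself, displayed as 'Infinity' in the dashboard
})
run.finish()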

Is my assumption correct that the dashboard simply translates very large numbers to ‘Infinity’ (which makes sense)?
Is there a way to enable scientific number representation?

Hi @ludwigwinkler , that is correct: very large numbers are represented as Infinity, and there isn’t currently a way to prevent this in the workspace. I’ll be happy to submit a feature request to give users the option to choose how very large numbers are represented.

Thank you for your reply.
The distinction from real infinity values such as np.inf is currently not clear.
You’d quickly get the impression that your algorithm/prediction/metric has diverged, when in fact it’s still valid, just very large.
‘(Infinity)’ in brackets, or some other visual hint that it’s only WandB portraying the value as ‘Infinity’, would be helpful.

I’d be glad if you’d submit a feature request for some visual distinction between a displayed ‘Infinity’ and a real np.inf.

Hi @ludwigwinkler , following back on this. Do you have an example workspace or a brief code snippet to reproduce this behavior? I was able to log the following without wandb modifying the representation of the data.

wandb.init(project=project, entity=entity, config={"np.inf": np.inf, "exp-num": 2.5e35, "very-lrg-num": 1.3e+65, "null-col": None})
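For completeness, the full script I ran looks roughly like this (entity and project names are placeholders), and none of these config values were converted to Infinity:

import numpy as np
import wandb

run = wandb.init(
    project="large-number-repro",   # placeholder project name
    config={
        "np.inf": np.inf,           # a genuine infinity
        "exp-num": 2.5e35,
        "very-lrg-num": 1.3e+65,    # the value from your post
        "null-col": None,
    },
)
run.finish()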

When you logged your large values, were they logged in scientific notation?

Hi @mohammadbakir ,

I could successfully log the quantities as shown in the attached screenshot.
I also logged the values as metrics rather than as configs.
I think the representation issue actually lies with PyTorch, as the tensor already evaluates to infinity internally in PyTorch.

torch.scalar_tensor(141.).exp()=tensor(inf)
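A quick way to see the overflow, sketched below, is to compare float32 with float64 (the float64 output is approximate):

import torch

# float32 overflows for exponents above ~88.7, since float32 max is about 3.4e38
print(torch.scalar_tensor(141.).exp())                        # tensor(inf)

# the same value fits comfortably in float64 (max about 1.8e308)
print(torch.scalar_tensor(141., dtype=torch.float64).exp())   # roughly 1.72e+61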

So I guess WandB is doing everything correctly and, surprisingly, it’s PyTorch that’s ‘cutting corners’ due to numerical overflow. :slight_smile:

Hi @ludwigwinkler , thank you for confirming your findings! Please do write in again anytime we could be of help.
