Logged value available in graph panel, but not in columns

I log values whose names have the form test/temp_top-k.---1 (I want the dashes for sorting reasons). I can create graph panels with these values, but they do not show up in the column view. When I open Manage Columns they are not listed under Hidden Columns, and searching for them returns an empty result. Even when I select Show All they don’t show up in the column view. Is this a bug?

Hi Stephan,

I wasn’t able to reproduce this bug on my end when using the code:
wandb.log({"test/temp_top-k.---1": metric})

Can you tell me how you are logging this data? I was also able to find the column train/temp_top-k_mean in the run table of your most recent project. When you created the column, were the quotation marks placed properly around the ---1?

Warmly,
Leslie

Hi Stephan,

We wanted to follow up with you regarding your support request as we have not heard back from you. Please let us know if we can be of further assistance or if your issue has been resolved.

Best,
Weights & Biases

Hi Leslie,

I apologize for not responding earlier. I assumed I would be notified by e-mail when this thread was updated. I probably need to check my settings, or “watch” this thread.

I log via PyTorch Lightning:

from pytorch_lightning.loggers import WandbLogger

wandb_logger = WandbLogger(project=settings.project_name, log_model=True)
wandb_logger.watch(model, log='gradients', log_freq=50, log_graph=True)

The actual code for the logging is this:

temp_accs_top_k = {f'{k:->4d}': v for k, v in zip(settings.ks, temp_accs)}
lightning_module.log(f'{split}/temp_top-k', temp_accs_top_k, batch_size=lightning_module.batch_size)

That looks a bit odd, I suppose. The code lives in a function that I call from several different pl.LightningModule subclasses; the variable lightning_module refers to the module in question. The parameter temp_accs_top_k evaluates to (straight from the debugger):

{'---1': 0.00019996000628452748, '---2': 0.00019996000628452748, '---3': 0.00039992001256905496, '---5': 0.0005998800043016672, '--10': 0.0005998800043016672, '--20': 0.001399720087647438, '--50': 0.004199160262942314, '-100': 0.007598480209708214, '1000': 0.08318336308002472}

Which is wrong. But I am still seeing the values in the graph panels (see the attached screenshot).
[Screenshot: Screen Shot 2022-03-22 at 21.52.37]
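For anyone puzzled by the dash-padded keys, they come straight from Python’s format-spec mini-language: in `f'{k:->4d}'`, `-` is the fill character, `>` right-aligns, and `4` is the field width. A quick check reproduces the keys seen in the debugger:

```python
# '{k:->4d}': format integer k right-aligned in a 4-character field,
# padding with '-' on the left, so the keys sort lexicographically.
ks = [1, 2, 3, 5, 10, 20, 50, 100, 1000]
keys = [f'{k:->4d}' for k in ks]
print(keys)
# ['---1', '---2', '---3', '---5', '--10', '--20', '--50', '-100', '1000']
```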

I changed the code so that temp_accs_top_k now contains {'test/temp_top-k.---1': 0.2963850498199463, 'test/temp_top-k.---2': 0.3962452709674835, 'test/temp_top-k.---3': 0.44557619094848633, 'test/temp_top-k.---5': 0.5052925944328308, 'test/temp_top-k.--10': 0.5733972191810608, 'test/temp_top-k.--20': 0.6277211904525757, 'test/temp_top-k.--50': 0.6810465455055237, 'test/temp_top-k.-100': 0.716596782207489, 'test/temp_top-k.1000': 0.802676260471344}.

I log in a loop, since PyTorch Lightning can’t log a dict (I believe). I know that wandb can, but I need the batch_size parameter (I have two dataloaders with different sizes/lengths and need to make sure that PyTorch Lightning does not get confused about steps/epochs).

for k, v in temp_accs_top_k.items():
    lightning_module.log(k, v, batch_size=lightning_module.batch_size)

Update: I just realized that PyTorch Lightning has a log_dict method, which lets me get rid of the awkward for loop.
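For anyone reading along, the per-key loop can be collapsed into one log_dict call. A minimal sketch (the helper name log_top_k_accuracies is mine, not from the project; it assumes lightning_module is a pl.LightningModule, whose log_dict accepts the same batch_size parameter as log):

```python
def log_top_k_accuracies(lightning_module, split, ks, temp_accs):
    """Log top-k accuracies under dash-padded, prefixed keys in one call.

    Sketch only: assumes `lightning_module` behaves like a
    pl.LightningModule with log_dict(metrics, batch_size=...).
    """
    # Build keys like 'test/temp_top-k.---1' directly, prefix included.
    metrics = {f'{split}/temp_top-k.{k:->4d}': v
               for k, v in zip(ks, temp_accs)}
    # One call replaces the per-key loop; batch_size is forwarded as in log().
    lightning_module.log_dict(metrics, batch_size=lightning_module.batch_size)
    return metrics
```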

So the “bug” is more like “why did it work in the first place (in the graph panels)?”

I hope that’s not too much to digest and that the issue is traceable.

Best,
Stephan

Hi Stephan, the values that you first pointed to as wrong are not the values shown on the graph. You can find the values that were actually logged by creating a Weave table: click ‘+ Add Panel’, then navigate to Weave. At the top of the panel, type runs.history.concat["train/temp_top-k\.---1"], then click the gear icon on the right and change ‘Render As’ to ‘Table’, as shown in the attached image. Let me know if you have any problems pulling up this table!

Hi Stephan, I’m just checking in to see if you were able to get everything working properly?

Hi Stephan, since we have not heard back from you we are going to close this request. If you would like to re-open the conversation, please let us know!