Logging a table line by line

Is there an option to continuously log a table?

i.e. not only to add_data to the table line by line sequentially, but also to have this data appear online line by line

Thanks!
P.S. It's my very first thread here, sorry if this question is inappropriate.

Hi @tankwell Good day and thank you for reaching out to us. Happy to help you on this!

May I know your exact use case here? I would like to understand how exactly you want your table to appear when you say "continuously logging a table." If you can explain further what you are trying to build in wandb, we would really appreciate it, as it will help us identify a recommendation that best suits your requirement.

Hey,
First of all, thanks for the answer, and sorry for the delay :slight_smile:
I would like each row to appear immediately when it is logged.
Right now I couldn't achieve that while logging the table on every step (it just overrode the data on every step and eventually showed only the last one).

What I am doing now is presenting the predictions (the table holds the model's predictions of generated text and an image column) at the end of the execution, but that does take a while, and I would like to see at least some of them during the epoch.
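
For context, this is roughly what my logging looks like at the moment (a simplified sketch; the project name, column names, and dummy data are placeholders):

```python
import numpy as np
import wandb

run = wandb.init(project="my-project")  # placeholder project name

# Placeholder predictions: pairs of (generated image, generated text).
predictions = [(np.random.rand(64, 64, 3), f"generated text {i}") for i in range(4)]

# The table is built row by row during the epoch...
table = wandb.Table(columns=["image", "generated_text"])
for image, text in predictions:
    table.add_data(wandb.Image(image), text)

# ...but it only appears in the UI after this single log call at the end.
run.log({"predictions": table})
run.finish()
```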

BTW, right now I am doing a single epoch with logging, so each image is logged only once.

Thanks!
Idan

Hi @tankwell Good day and thank you for sharing that information. I have reviewed this with my team. Unfortunately, the feature that you are looking for is not supported at this time.

If you are currently okay with using a single epoch as a workaround, you may continue with this approach as long as it meets your requirements.

I would also like to share an alternative workaround, which is to use partitioned tables. With partitioned tables, you can log the table parts incrementally as Artifacts and then use the following Weave query in the W&B dashboard to load the combined table. Here's an example query that you can try:

project.artifactVersion("incremental_table", "latest").file("incremental_table.partitioned-table.json")
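
On the logging side, here is a minimal sketch of how the parts could be written incrementally as Artifact versions. The project name, part names, and columns are placeholders, and the `incremental=True` flag (where supported by your wandb version) is intended to extend the previous artifact version rather than replace it:

```python
import wandb

run = wandb.init(project="my-project")  # placeholder project name

# First version: register a PartitionedTable that points at a "parts" folder.
# Adding it under the name "incremental_table" produces the
# incremental_table.partitioned-table.json file referenced by the Weave query above.
artifact = wandb.Artifact("incremental_table", type="table")
partitioned = wandb.data_types.PartitionedTable(parts_path="parts")
artifact.add(partitioned, "incremental_table")
run.log_artifact(artifact)

# Later, whenever a new batch of rows is ready, log another part as a new version.
for step in range(3):
    # incremental=True is assumed to build on the previous version's files
    # instead of starting the artifact from scratch.
    artifact = wandb.Artifact("incremental_table", type="table", incremental=True)
    part = wandb.Table(
        columns=["step", "prediction"],
        data=[[step, f"prediction at step {step}"]],
    )
    artifact.add(part, f"parts/part_{step}")
    run.log_artifact(artifact)

run.finish()
```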

Let me know if this can also help you.

Hi @tankwell ,

We wanted to follow up with you regarding your support request as we have not heard back from you. Please let us know if we can be of further assistance or if your issue has been resolved.

Best Regards,
Weights & Biases

I am not the original poster, but I could also use this functionality.

I'm working on an audio generation task, so the table format showing spectrograms, and being able to see and play the audio in the table, is important.

I log summary statistics and an example prediction during the validation loop of training. I don't need to log the entire validation batch, as that is too costly; just an example to see how training is going. Having one table with one example for every 100k iterations would also provide a good snapshot of how the model improves over time. The way tables work now, I have three options:

  1. Create a new table 1 row long each time I run the validation summary
  2. Create 1 table that is only visible once training has completed
  3. Create an incremental table

Option 1 is what I am using, but it is not ideal, since I end up with hundreds of one-row tables with the same information. Option 2 defeats the purpose of the table, which is partly monitoring. Also, sometimes time limits on computing resources or other factors will stop a run before it reaches that logging statement, and then the run will have been pointless. I tried option 3, but it doesn't actually solve this issue: you can't log the table artifact more than once in the same run, so it has the same problem as option 2, where you have to log everything at the end.
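
For reference, here is a sketch of what option 1 looks like in my setup, with a fresh one-row table logged under a step-specific key at each validation checkpoint (the project name, key, columns, and dummy data are illustrative):

```python
import numpy as np
import wandb

run = wandb.init(project="my-project")  # placeholder project name

def validation_example(step):
    """Placeholder: return one example (spectrogram, audio waveform, sample rate)."""
    spectrogram = np.random.rand(128, 128)
    audio = np.random.uniform(-1.0, 1.0, 16000)
    return spectrogram, audio, 16000

for step in range(0, 300_000, 100_000):
    spec, audio, sr = validation_example(step)
    # Option 1: a brand-new one-row table per checkpoint. It shows up immediately,
    # at the cost of hundreds of tiny tables that all share the same columns.
    table = wandb.Table(columns=["iteration", "spectrogram", "audio"])
    table.add_data(step, wandb.Image(spec), wandb.Audio(audio, sample_rate=sr))
    run.log({f"val_example_{step}": table}, step=step)

run.finish()
```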

Let me know if you have any good solutions that meet these requirements.


Hi @bryn Thank you for reaching out to us about this. We have raised a feature request for it and will follow up once we receive feedback from our team.