What causes this phenomenon?

Hello friends, I used Colab to run the “PyTorch | Weights & Biases Documentation” sample code. To evaluate the test set multiple times, I modified part of the code as shown below. What puzzles me is that the results printed to the console are exactly the same on every run. What is the reason? Thanks in advance!
The top picture shows the part I modified, and the bottom picture shows the console output.

Hi @15039929632 ,

Thank you for reporting what you have encountered. We will look into this and get back to you.

I’m looking forward to an answer to this question. Thanks in advance!

Hello @15039929632 Ading,

Actually, this is the expected behavior: at the very beginning of the code the random seed is set, so every run is reproducible. This isn’t wandb related. W&B is only responsible for tracking the experiment, not for producing the results themselves; the results depend on the model and on the data used to train and test.
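
As a minimal sketch (not the actual tutorial code; the model, data, and metric names here are made up for illustration), the pattern below shows why repeated passes over the same test set print identical numbers once the seed is fixed and the model is in eval mode:

```python
import torch
import torch.nn as nn

# The tutorial fixes the seed once at the top, so weight init,
# data splits, and any shuffling are identical on every run.
torch.manual_seed(42)

# Hypothetical stand-ins for the tutorial's trained model and test set.
model = nn.Linear(10, 2)
test_x = torch.randn(100, 10)
test_y = torch.randint(0, 2, (100,))
loss_fn = nn.CrossEntropyLoss()

model.eval()  # disables dropout / batch-norm updates, so the forward pass is deterministic
with torch.no_grad():
    for run in range(3):
        # Neither the weights nor the test data change between iterations,
        # so every pass produces exactly the same loss and accuracy.
        logits = model(test_x)
        loss = loss_fn(logits, test_y)
        acc = (logits.argmax(dim=1) == test_y).float().mean()
        print(f"run {run}: loss={loss.item():.6f}, acc={acc.item():.6f}")
```

Logging these numbers to W&B (e.g. with `wandb.log`) would simply record the same values each time; it does not change how they are computed.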

Hope that helps clarify things.

Hi @15039929632 , since we have not heard back from you, we are going to close this request. If you would like to re-open the conversation, please let us know!

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.