#1 PyTorch Book Thread: Sunday, 29 Aug 8 AM PT

You just need to set up a seed so that your results are reproducible.

If you are not setting a seed and are doing augmentation on the fly, you may not be able to reproduce your results, because the augmentations change on every batch/epoch: each batch applies a different set of augmentations to the same image.
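A minimal sketch of seeding for reproducibility (the helper name `seed_everything` is my own; the calls themselves are standard PyTorch/NumPy/stdlib):

```python
import random

import numpy as np
import torch

def seed_everything(seed: int = 42) -> None:
    """Seed Python, NumPy, and PyTorch RNGs so runs are repeatable."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # no-op when no GPU is present

seed_everything(42)
a = torch.rand(3)
seed_everything(42)
b = torch.rand(3)
# a and b are identical because the RNG state was reset between the calls
```

Note that full determinism on GPU may additionally require flags like `torch.backends.cudnn.deterministic = True`; see the PyTorch reproducibility notes.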

One mantra a mentor of mine taught me about software engineering in general is: Make it work, make it right, then make it fast. I imagine it extends to deep learning as well. :slight_smile:

5 Likes

Saw this yesterday about an upcoming PyTorch Profiling session from W&B https://twitter.com/charles_irl/status/1431658731557715968

4 Likes

In the fast.ai course, Jeremy shared a case study where the authors created a spectrogram from the signal data, then applied transfer learning and treated the problem as a computer vision problem. You can find it on forums.fast.ai.

1 Like

If you want to work on the raw signal, use a network with 1D convolutions.

If you convert your signal into mel spectrograms, then you can use a network with 2D convolutions.
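A quick sketch of the two options (shapes are illustrative, e.g. one second of 16 kHz mono audio and a hypothetical 64-mel spectrogram):

```python
import torch
import torch.nn as nn

# Raw waveform: (batch, channels, time) -> 1D convolutions
wave = torch.randn(8, 1, 16000)
conv1d = nn.Conv1d(in_channels=1, out_channels=16, kernel_size=9)
out1d = conv1d(wave)   # (8, 16, 15992): time axis shrinks by kernel_size - 1

# Mel spectrogram: (batch, channels, mels, frames) -> 2D convolutions
spec = torch.randn(8, 1, 64, 101)
conv2d = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=3)
out2d = conv2d(spec)   # (8, 16, 62, 99): both spatial axes shrink by 2
```

In the 2D case the spectrogram is treated like a one-channel image, which is what lets the transfer-learning trick above work.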

1 Like

wandb.ai/site/webinars

Sorry, I think I might have missed it: did you call out something as our homework before the next class?

Thank you for the session @bhutanisanyam1, I will revisit the concepts and post queries if any.

1 Like

Sanyam said he will share some exercises in this thread by next week’s session, if he is feeling better.

2 Likes

Could someone share beginner resources for getting started with PyTorch, apart from this book?

Official PyTorch Tutorials are really good.

2 Likes

Thanks for joining everyone!
I’ll look at the unanswered Qs tomorrow, need to rest now :tea:

For homework, also consider writing notebooks (And Publishing them!) or blogposts about what we have learned. I’ll speak a bit more about how this changed my life and why this is useful in the next session. :slight_smile:

I would also point you to my colleague @charlesfrye’s event happening this week: livecoding on “What is torch.nn really?”. Maybe Charles coordinated this just in time for our study group :smiley:

7 Likes

Never Worked before!

@harveenchadha, why do image augmentation and preprocessing in memory when we can do it once and then read from disk the next time? The preprocessed images could then be inspected by a human eye (which does not seem possible if done on the fly). Would that not save time/CPU/GPU on the next training iterations?

One point of augmentation is to simulate what it would look like if we had a much bigger dataset – infinitely big, in the case of most random augmentations – so the augmented dataset doesn’t generally fit on disk, just like the original dataset doesn’t fit in memory.

Also, thanks to multiprocessing-capable dataloaders (num_workers>0), we can do at least some of these transformations for the next batch “on the side” while the forward and backward passes happen. See here.
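A minimal sketch of that overlap, using a toy tensor dataset in place of a real augmented image dataset (all sizes are hypothetical):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in for an image dataset: 100 "images" of shape (3, 32, 32)
data = torch.randn(100, 3, 32, 32)
labels = torch.randint(0, 10, (100,))
dataset = TensorDataset(data, labels)

# num_workers > 0 spawns worker processes that prepare (and, with a real
# Dataset, augment) upcoming batches in the background while the main
# process runs the forward/backward pass on the current batch
loader = DataLoader(dataset, batch_size=16, shuffle=True, num_workers=2)

for images, targets in loader:
    pass  # the training step would go here
```

With `num_workers=0` everything runs in the main process and data preparation blocks training instead of overlapping with it.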

> preprocessed images can then be inspected by a human eye (which does not seem possible if done on-the-fly)

W&B can actually help here – we stay as much out of your way while logging as possible, using multiprocessing. You’d just need to add some logging code to your data pipeline. It’s an experiment I’ve always wanted to try with PyTorch/Lightning!

I know our integration with YOLOv5 does this (example workspace, see Input/Output Visualizations section). HMU if you want the code.

2 Likes

This tutorial from Jeremy Howard is very good, especially if you’re already a little bit comfortable with doing some light array math with e.g. MATLAB and NumPy.

As Sanyam said, I’ll be doing a livestream on it this week!

4 Likes

@aadil and @harveenchadha

I wrote a paper with some neuroscientists who used the spectrogram+ConvNet approach to build a classifier for determining what stage of sleep mice were in, based on their brain waves!

https://charlesfrye.github.io/external/2019/12/13/sleepscore.html

5 Likes

This is awesome!! I’ll definitely read the paper.

1 Like

I was wondering the same thing.
Turns out it’s for changing the output value range from [-1, 1] to [0, 1].
Removing this transform produces a funny image :smiley: I think transforms.ToPILImage expects the input to be in the [0, 1] range.
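The mapping from [-1, 1] back to [0, 1] is just an affine rescale; a tiny sketch (assuming the tensor was normalized with something like `Normalize(mean=0.5, std=0.5)`):

```python
import torch

# A tensor with values spanning [-1, 1]
img = torch.linspace(-1.0, 1.0, steps=5)

# x * 0.5 + 0.5 maps [-1, 1] onto [0, 1], the range
# that float inputs to transforms.ToPILImage should be in
restored = img * 0.5 + 0.5
# restored is tensor([0.00, 0.25, 0.50, 0.75, 1.00])
```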

5 Likes

2 months of learning put to use. Thank you!

1 Like