Thank you for the PyTorch series covering the Deep Learning with PyTorch book.
I wanted to know how to coordinate between fastbook (Jeremy Howard's material) and this series.
Also, should we pick fundamental papers like those by LeCun and Hinton, or stick with interpretations of their concepts in recent works?
While implementing papers using the submissions on Papers with Code, I find that many repositories are in TensorFlow. How should I deal with that?
I try to learn from Jeremy's fastbook by keeping the book and the code-only Colab notebook side by side, trying to modify and learn from the code, and then watching the lecture (since watching the lectures takes time without knowing the code and the approach beforehand).
How do you recommend tackling PyTorch together with fast.ai (part 1)?
This is not working, as I learned that this dataset has been removed. Is there any way to load this dataset in PyTorch, or more generally to load a dataset from a URL in PyTorch? I searched for it but couldn't understand the solutions I found.
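In case it helps anyone with the same question, here is a minimal sketch of one common pattern for loading a dataset from a URL in PyTorch: download the file once to disk, then wrap it in a map-style `Dataset`. The URL, file names, and CSV layout here are hypothetical placeholders, and the sketch assumes the data is a simple numeric CSV with the label in the last column.

```python
import os
import urllib.request

import torch
from torch.utils.data import Dataset


def download_if_missing(url, dest):
    """Download url to dest once; skip if the file already exists."""
    if not os.path.exists(dest):
        urllib.request.urlretrieve(url, dest)
    return dest


class CSVDataset(Dataset):
    """Map-style dataset over a numeric CSV whose last column is the label."""

    def __init__(self, path):
        rows = []
        with open(path) as f:
            for line in f:
                rows.append([float(v) for v in line.strip().split(",")])
        # Split each row into features (all but last column) and label (last).
        self.x = torch.tensor([r[:-1] for r in rows])
        self.y = torch.tensor([r[-1] for r in rows])

    def __len__(self):
        return len(self.x)

    def __getitem__(self, i):
        return self.x[i], self.y[i]


# With a real source you would do something like:
#   path = download_if_missing("https://example.com/data.csv", "data.csv")
# Here a small local file stands in for the downloaded CSV:
with open("demo.csv", "w") as f:
    f.write("1.0,2.0,0\n3.0,4.0,1\n")
ds = CSVDataset("demo.csv")
```

A dataset like this can then be fed to `torch.utils.data.DataLoader` for batching and shuffling as usual.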
In one session, could you cover the updated approach of using EC2 as an alternative to Google Colab (for example, when data augmentation has to be done for a huge dataset that can't be handled locally)?