#2 PyTorch Book Thread: Sunday, 5th Sept 8AM PT

Note: This is a wiki, please edit it & add resources!

YouTube Link: PyTorch Book Reading - 2. Tensors, Autograd & NNs in PyTorch - YouTube

Hey Everyone!
This thread is for discussing, Q&A, and everything else for the #2 meetup of the book reading group.

We’ll continue reading Ch-3 and make it to Ch-6. The goal is to learn about Tensors, modelling and autograd.

Link to sign up

Resources:

Homework suggestions:

  • Explain the different slicing operations on line 55 of the tensors notebook
  • Read about imageio and torchvision
  • Understand what the torch.unsqueeze function in PyTorch does
  • Understand what the tensor.permute method does and consider contributing to the PyTorch documentation
  • Understand the bike share notebook with time series data
  • Experiment with loading different datasets into PyTorch. See this blog on getting different datasets
  • Create a cheat sheet of the PyTorch functions we’ve learned about so far
  • Read about the h5py python package
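
For the unsqueeze and permute homework items, here's a minimal sketch of what each does (the shapes are made up for illustration):

```python
import torch

# A fake 2D grayscale image: height x width
img = torch.arange(12.0).reshape(3, 4)   # shape (3, 4)

# torch.unsqueeze inserts a new dimension of size 1 at the given
# position, e.g. to add the channel axis that conv layers expect.
chw = img.unsqueeze(0)                   # shape (1, 3, 4)
print(chw.shape)                         # torch.Size([1, 3, 4])

# tensor.permute reorders existing dimensions without copying data,
# e.g. converting H x W x C (how imageio loads images) to
# C x H x W (what PyTorch expects).
hwc = torch.rand(4, 5, 3)                # H x W x C
chw2 = hwc.permute(2, 0, 1)              # C x H x W
print(chw2.shape)                        # torch.Size([3, 4, 5])
```

Note that permute returns a non-contiguous view, which is also a nice lead-in to the stride discussion from the session.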

<<<< Previous Session Thread

9 Likes

What are torch tensors?

2 Likes

The basic data structure in PyTorch!!

3 Likes

What is the difference between transfer learning and fine-tuning?

2 Likes

They are the core data structure used for building neural networks. Basically, it's NumPy on GPU (and a little more).

2 Likes

A torch.Tensor is a multi-dimensional matrix containing elements of a single data type.
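
A quick sketch of the "single data type" part of that definition:

```python
import torch

# Every tensor has one dtype shared by all of its elements.
t = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
print(t.dtype)        # torch.float32
print(t.shape)        # torch.Size([2, 2])

# You can't mix element types inside one tensor; the data is
# promoted to a common dtype at construction time.
mixed = torch.tensor([1, 2.5])   # the int is promoted to float
print(mixed.dtype)    # torch.float32
```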

2 Likes

Deep learning requires less feature engineering (FE) - does this apply in general to tabular data too?

1 Like

Transfer learning is basically the idea of reusing a model originally trained for one task on a different task - say, a model trained on ImageNet for classification being used for object detection, or for classifying a class that wasn't in the original dataset.

Fine-tuning is basically the same idea in practice: its implementation includes things such as adding a new final layer and then training the model for the new task. [Many other techniques are involved. You can check Jeremy Howard’s lectures on fast.ai, they’re really nice.]
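
A minimal sketch of the "add a new final layer" idea. The backbone here is a stand-in Sequential rather than a real pretrained network, and freezing the backbone is just one common variant (the post mentions there are many other techniques):

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained backbone (in practice, e.g. a
# torchvision model with loaded weights).
backbone = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
)

# Freeze the "pretrained" weights so only the new head is trained.
for p in backbone.parameters():
    p.requires_grad = False

# New final layer for the new task (say, 10 classes).
head = nn.Linear(64, 10)
model = nn.Sequential(backbone, head)

# Only the trainable (head) parameters go to the optimizer.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-2
)

x = torch.randn(4, 128)
loss = model(x).sum()
loss.backward()

# The head received gradients; the frozen backbone did not.
print(head.weight.grad is not None)   # True
print(backbone[0].weight.grad)        # None
```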

2 Likes

It depends. It can often happen that deep learning doesn’t give good results while ML algos like random forest, XGBoost, etc. perform awesomely on the same data.

But where deep learning does work, FE is significantly less than in traditional ML. Again, it depends on the dataset.

1 Like

A small analogy we can think of: fine-tuning is when an organization hires a graduate and teaches them the stuff that will make them effective for the current deliverable.
So the organization uses transfer learning (the graduate’s prior knowledge) and teaches them just the stuff that is most important for the current task at hand.

1 Like

Where can I get the notebooks …any links?

What is the difference between .cpu() and .detach()?

2 Likes

We can use stride, maybe?

Reshape or extend the dim

1 Like

using stride I guess

I guess we can reshape it

Reshape the image in required format

Can also use OpenCV, I guess

skip reading the column header
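
For skipping the column header when loading CSV data (as in the bike-share notebook), a minimal sketch - assuming NumPy's loadtxt, with a made-up inline CSV standing in for the real file:

```python
import io
import numpy as np
import torch

# Stand-in for the bike-share CSV file: a header row plus numeric rows.
csv_data = "temp,humidity,count\n0.3,0.8,16\n0.2,0.7,40\n"

# skiprows=1 skips the header line so every remaining row is numeric.
arr = np.loadtxt(io.StringIO(csv_data), delimiter=",", skiprows=1)
bikes = torch.from_numpy(arr)
print(bikes.shape)    # torch.Size([2, 3])
```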

1 Like