#6 PyTorch Book Thread: Sunday, 3rd Oct 8AM PT

I think you’re explicitly casting it to a NumPy array in the next line: cta = np.array(...).
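In case it helps, here is a tiny, self-contained illustration of what that call does (the tensor and variable names below are mine, not from the notebook being discussed):

```python
import numpy as np
import torch

t = torch.ones(3)        # a CPU tensor
a = np.array(t)          # explicit cast: builds a NumPy array holding a copy of the data
print(type(t), type(a))  # <class 'torch.Tensor'> <class 'numpy.ndarray'>
```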

1 Like

The functools module in Python deals with higher-order functions, that is, functions that operate on (take as arguments) or return other functions and similar callable objects.
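A quick, made-up example of what "higher-order" means in practice, using two functools helpers:

```python
from functools import partial, reduce

def power(base, exponent):
    return base ** exponent

# partial takes a function and returns a new function with some arguments pre-filled.
square = partial(power, exponent=2)
print(square(5))                                  # 25

# reduce takes a function as an argument and folds it over a sequence.
print(reduce(lambda a, b: a + b, [1, 2, 3, 4]))   # 10
```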

The lru_cache decorator helps reduce a function's execution time by using the memoization technique. In simple words, memoization stores the function's result before returning it to the caller. When another caller asks for the result of the same call, the stored (cached) result is returned from memory, and the function is not executed over and over again.

Memoization gives a drastic improvement in performance when you are dealing with huge files that require I/O operations from disks such as hard drives or SSDs (which are far slower than RAM).
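A minimal sketch of lru_cache in action (the Fibonacci function is just an illustration; the same idea applies to any expensive call, such as a slow disk read):

```python
from functools import lru_cache

@lru_cache(maxsize=None)   # cache the result for every distinct argument
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(35))             # fast: each fib(k) is computed only once
print(fib.cache_info())    # hits/misses show the memoization at work
```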

2 Likes


I already posted this in the 3rd session thread, but it may not reach everyone there, so I'm reposting the reply:

I read Chapter 5, The Mechanics of Learning, earlier, but I recently read it again, and it is basically simple high-school stuff: differentiating functions and getting slopes, or gradients as they are called here.
But the whole point is that the developer has to come down from a far higher level of dealing with complex problems to basic statistics, make the reader see the loss function, then differentiate it with respect to the weights and biases, and then optimize it all with code as plain as pen and paper. It makes us realize, as users of torch.nn or even sklearn, how close we get to the truth and yet how far away we are with our implementations.
The chapter starts with a simple "mx + c" and beautifully fits and optimizes everything in our world, and you can only appreciate the calmness if you truly try to forget everything you have learnt about ML or DL. The only way to enjoy this chapter is to know that the derivative of x^2 with respect to x is 2x, and nothing more!
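To make that concrete, here is a toy sketch in the spirit of that chapter (the data and hyperparameters are made up, and this is not the book's code): fit "mx + c" with hand-derived gradients and plain gradient descent, no autograd and no optimizer.

```python
import torch

# Made-up data that roughly follows a line (target ≈ 2 * x)
x = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0])
t = torch.tensor([2.1, 3.9, 6.2, 8.1, 9.8])

def model(x, w, b):
    return w * x + b                      # the chapter's "mx + c"

def loss_fn(pred, target):
    return ((pred - target) ** 2).mean()  # mean squared error

# Gradients derived by hand with nothing more than "d(x^2)/dx = 2x" and the chain rule
def grad_fn(x, target, pred):
    dloss_dpred = 2 * (pred - target) / pred.numel()
    dloss_dw = (dloss_dpred * x).sum()
    dloss_db = dloss_dpred.sum()
    return dloss_dw, dloss_db

w, b = torch.tensor(1.0), torch.tensor(0.0)
lr = 0.05
for epoch in range(1000):
    pred = model(x, w, b)
    loss = loss_fn(pred, t)
    dw, db = grad_fn(x, t, pred)
    w, b = w - lr * dw, b - lr * db       # plain gradient descent step

print(f"w={w.item():.3f}, b={b.item():.3f}, loss={loss.item():.4f}")  # w ends up close to 2
```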