Collect Data > Clean Data > Train Model > Evaluate > Deploy > Monitor
Do you use FastAPI to create APIs for users?
When is it not around NN? Can you give some examples, please?
Thank You
This doc by Jeremy Howard made it very simple to understand the PyTorch classes we discussed. It's available in the official PyTorch docs.
Started about a year ago with fast.ai; I have used it on and off since then.
Why do we need to re-train an already pretrained model?
We usually retrain a pretrained model (also known as “fine-tuning”) if the model is trained on a different “task” than what we intend to use it for.
For example, a model that is trained on ImageNet is capable of identifying objects in that dataset. But if we want to use it to detect some other kinds of objects then we need to fine-tune it.
Earlier layers of the model learn very primitive representations of images (edges, gradients, textures, etc.) and that is what we want to re-use from a pre-trained model, but we then want to use it to predict something else that is of interest to us.
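The reuse described above can be sketched in code. A minimal, hypothetical example: a stand-in backbone plays the role of a pretrained model (in practice you would load, e.g., a torchvision ResNet with pretrained weights); its parameters are frozen and only a fresh head is trained for the new task.

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained backbone; the layers and sizes here are
# illustrative, not a real pretrained network.
backbone = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),  # early layers: edges, textures, etc.
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)
head = nn.Linear(8, 10)  # new head for a hypothetical 10-class target task

# Freeze the pretrained representations so only the head is updated.
for p in backbone.parameters():
    p.requires_grad = False

model = nn.Sequential(backbone, head)
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
```

Only the head's weight and bias remain trainable; an optimizer built over `model.parameters()` will then leave the frozen backbone untouched.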
- Other than ImageNet, what other data might the pretrained models be trained on?
- Is there any function or way to calculate the normalization values (mean & std) that have been used for models?
Watching a session drinking tea
Most classification baselines are trained on ImageNet, as it is a one-of-its-kind dataset (samples for a large number of classes, with enough training images to train a bulky model like ResNet without heavy overfitting). For other tasks, other datasets are used; for 3D data, for example, there is an ImageNet equivalent named ShapeNet.
Is a GAN part of transfer learning?
When doing preprocessing, what is the best practice? Should we do it on the fly while training, or save the results and load them from disk during training? If I recollect correctly, fast.ai does it on the fly, but I want reproducibility and the ability to peek into the images, so I prefer saving to disk before loading. Is that not recommended, and why do some do it on the fly?
A simple min-max normalization scheme might work in most cases. For ImageNet, people have experimented and found the particular values that give the best performance, and those are what's used with PyTorch's normalization transform.
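For the earlier question about calculating the normalization values yourself, a minimal sketch, assuming the images are already stacked into one float tensor (the random data here is just a placeholder):

```python
import torch

# Hypothetical batch of images: (N, C, H, W), values in [0, 1].
images = torch.rand(100, 3, 32, 32)

# Per-channel mean and std over all images and pixels; these are the
# kinds of statistics passed to transforms.Normalize(mean, std).
mean = images.mean(dim=(0, 2, 3))
std = images.std(dim=(0, 2, 3))
```

On a real dataset you would compute these once over the training set (streaming batch by batch if it doesn't fit in memory) and reuse them for validation and test data.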
Why are we adding 1 to the output tensor and dividing by 2? Is it some kind of reverse z-score to convert it back to an image?
out_t = (batch_out.data.squeeze()+1)/2
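One possible reading of that line (an assumption, since the model isn't shown in the chat): if the network's final activation is tanh, its outputs lie in [-1, 1], and `(x + 1) / 2` is a linear map back to [0, 1] so the tensor can be displayed as an image.

```python
import torch

# Assumption: tanh-style outputs in [-1, 1]; the actual model from the
# chat is not shown, so this stands in for batch_out.
batch_out = torch.tanh(torch.randn(1, 3, 8, 8))

out_t = (batch_out.squeeze() + 1) / 2  # linear map [-1, 1] -> [0, 1]
```

So it is not a reverse z-score (no per-channel mean/std involved), just a fixed affine rescale of the output range.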
Thank you!
Sorry, I have used an ImageNet-trained ResNet once to build a satellite imagery classification model; it worked fairly well.
Are you going to talk about training on the sounds, or about the digital signal processing?
If the preprocessing is so CPU intensive that your GPU has idle time in the training process, then I think it makes sense to save the intermediate results.
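A minimal sketch of that save-then-reuse pattern, with an illustrative cache path and a stand-in for the expensive transform (both are assumptions, not anyone's actual pipeline):

```python
import os
import torch

cache_dir = "preprocessed"  # illustrative path
os.makedirs(cache_dir, exist_ok=True)

def preprocess(x):
    # Stand-in for an expensive CPU-side transform.
    return (x - x.mean()) / (x.std() + 1e-8)

def load_or_preprocess(idx, raw):
    path = os.path.join(cache_dir, f"{idx}.pt")
    if os.path.exists(path):
        return torch.load(path)  # reuse the cached result
    out = preprocess(raw)
    torch.save(out, path)        # pay the cost once, reload on later epochs
    return out

sample = load_or_preprocess(0, torch.randn(3, 32, 32))
```

This also gives the reproducibility and inspectability mentioned earlier: the saved `.pt` files can be reloaded and looked at independently of training.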
Can you suggest anything about signal processing using PyTorch?