Transfer learning is basically the idea of reusing a model originally trained for one task, say a model trained on ImageNet for classification, for a different task, such as object detection or classification into a class that was not in the original dataset.
Fine-tuning is basically the implementation of that same idea: it includes steps such as adding a new final layer and then training the model for the new task. [Many other techniques are involved. You can check Jeremy Howard’s lectures on fast.ai, they’re really nice.]
Depends. Quite often deep learning may not give good results while ML algos like Random Forest, XGBoost, etc. perform awesomely on the same data. But where deep learning does work, the feature engineering needed is significantly less than in traditional ML. Again, it depends on the dataset.
A small analogy that we can think of:
Fine-tuning is when an organization hires a graduate and teaches them the things that will make them effective on the current deliverable.
So the organization was able to use transfer learning (using the prior knowledge of the graduate) and have them learn just the stuff that is most important for the current task at hand.