What is Transfer Learning?


Transfer learning is a technique for reusing the features learned by a previously trained model rather than building a new model from scratch. A pre-trained model is typically developed on a large dataset such as ImageNet, and the weights acquired from it can be used to initialise your own neural network for a similar application. These freshly constructed models can then be used to make predictions on new tasks or to train models for related applications.
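As a hedged illustration of what a pre-trained model looks like in practice, the snippet below loads a ResNet-18 with ImageNet weights; PyTorch and torchvision are assumed here, since the article does not name a specific library:

```python
from torchvision import models

# Load a ResNet-18 whose weights were learned on ImageNet; these
# weights are the "knowledge" that transfer learning reuses.
model = models.resnet18(weights="IMAGENET1K_V1")

# The final layer maps 512 features to the 1000 ImageNet classes;
# for a new application this head is the part that gets replaced.
print(model.fc)  # Linear(in_features=512, out_features=1000, bias=True)
```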


Machine learning algorithms make predictions and generate new output values using past data as input, and they are usually built to do one thing at a time. In transfer learning, the source task is the one used to convey information, and the target task is the one that receives it. When information from a source task is transferred to a target task, learning on the target task is enhanced.


During transfer learning, the knowledge gained and the rapid progress achieved on a previous task are applied to the learning of a new target task. How that knowledge is applied depends on the traits and characteristics of the source task, which are then mapped onto the target task.


Negative transfer, on the other hand, occurs when the transfer mechanism causes a decline in performance on the new target task. One of the most difficult issues when working with transfer learning methods is ensuring positive transfer between related tasks while preventing negative transfer between less related ones.


Types of Transfer Learning:

Inductive Transfer Learning: In this sort of transfer learning, the source and target domains are the same, but the source and target tasks are distinct from one another. The model uses inductive biases from the source task to boost performance on the target task. The source task may or may not include labelled data, leading to models based on multitask learning or self-taught learning respectively.


Unsupervised Transfer Learning: Unsupervised learning is when an algorithm is challenged to find patterns in datasets that haven't been labelled or categorised. The source and target domains are comparable in this scenario, but the tasks are different, and the data in both the source and the target are unlabelled. Dimensionality reduction and clustering are well-known unsupervised learning techniques.


Transductive Transfer Learning: In this sort of transfer learning, the source and target tasks are the same, but the domains are distinct. The source domain has a lot of labelled data while the target domain has none, leading to the domain adaptation setting.
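As one illustrative, non-canonical sketch of working with unlabelled target data, the snippet below clusters features produced by a frozen ImageNet backbone instead of raw pixels; torchvision and scikit-learn are assumed, and the random tensors stand in for a real unlabelled dataset:

```python
import torch
from torchvision import models
from sklearn.cluster import KMeans

# Frozen ImageNet backbone used purely as a feature extractor.
backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()  # drop the classification head
backbone.eval()

# Dummy unlabelled images standing in for a real target dataset.
images = torch.randn(16, 3, 224, 224)
with torch.no_grad():
    features = backbone(images)  # shape: (16, 512)

# Cluster the transferred features rather than the raw pixels.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(features.numpy())
print(clusters)
```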


How does Transfer Learning work?

In computer vision, neural networks typically learn to detect edges in the early layers, shapes in the middle layers, and task-specific features in the later layers. In transfer learning, the early and middle layers are kept as they are, while only the later layers are retrained; the model thereby makes use of the labelled data from the task it was originally trained on.
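As a minimal sketch of this freeze-and-retrain pattern (again assuming PyTorch and torchvision, which the article itself does not name), the pre-trained layers are frozen and only a fresh final layer is left trainable; NUM_CLASSES is a placeholder for the new task:

```python
import torch.nn as nn
from torchvision import models

# Load the pre-trained network; its early layers detect edges and
# its middle layers detect shapes, as described above.
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze all pre-trained weights so only the new head will train.
for param in model.parameters():
    param.requires_grad = False

# Swap the 1000-class ImageNet head for a task-specific one.
NUM_CLASSES = 2  # placeholder for the new task's label count
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
# The new head's parameters are created fresh, so they stay trainable.
```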


Let's return to the example of a model that has been trained to recognise a backpack in an image and will now be used to recognise sunglasses. Because the early layers have already learned to detect generic objects, we merely retrain the later layers so the model learns what distinguishes sunglasses from other objects.
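Continuing the sketch above, a minimal fine-tuning loop for this hypothetical backpack-to-sunglasses scenario might look as follows; the random tensors merely stand in for a real labelled image set:

```python
import torch
import torch.nn as nn
from torch.optim import Adam
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

# Frozen backbone with a fresh two-class head ("sunglasses" / "other").
model = models.resnet18(weights="IMAGENET1K_V1")
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)

# Random tensors standing in for a real labelled image dataset.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
loader = DataLoader(TensorDataset(images, labels), batch_size=4)

# Only the new head's parameters go to the optimiser, so the
# pre-trained object detectors in the earlier layers stay intact.
optimizer = Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for batch_images, batch_labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(batch_images), batch_labels)
    loss.backward()
    optimizer.step()
```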


In transfer learning, we strive to transfer as much knowledge as possible from the previous task the model was trained on to the new task at hand. Depending on the problem and the data, this knowledge can take many different forms. It may, for example, be how the model composes learned features, which makes it easier to recognise new objects.



Why Transfer Learning?

Transfer learning provides a number of advantages, the most important of which are reduced training time, improved neural network performance (in most circumstances), and not needing a large amount of training data.


Training a neural network from scratch usually requires a lot of data, and access to that data isn't always possible; this is where transfer learning comes in useful. Because the model has already been pre-trained, a good machine learning model may be built with relatively little training data. This is especially useful in natural language processing, where creating huge labelled datasets requires a lot of expert knowledge. Training time is also minimised, because building a deep neural network from scratch for a challenging task can take days or even weeks.

  • Prince Kumar
  • Apr, 01 2022
