Day 3: Transfer learning 1 - Introduction
An update on yesterday's data augmentation questions:
After discussion with @Arka, @Anna Scott and @George Christopoulus, my understanding is that the transforms we apply to the dataset do not increase its size. Data augmentation is a process of varying the dataset with applied transformations such as random resize, random crop, random horizontal flip, and many other transform functions. The effective increase in data comes from iterating over the dataset: on each iteration, the transform functions return a differently transformed copy of the data. I still need to find out how this iteration process is actually done. Let's see if I can figure it out today while working through transfer learning.
Today, I want to learn about transfer learning. What is transfer learning? From this article: "Transfer learning is a machine learning technique where a model trained on one task is re-purposed on a second related task." Or, put simply, I would say transfer learning is a technique for reusing the knowledge of a pre-trained network on a similar problem of our own.
Transfer learning is mostly used for analysing visual imagery (computer vision), typically with Convolutional Neural Networks (CNNs). PyTorch has built-in network models in torchvision.models, such as DenseNet, ResNet, VGG, and AlexNet.
A few weeks ago, @Arka helped me understand how to use these pre-trained networks. However, that conversation is gone from Slack at the moment. I remember him saying that we need to check the model architecture: for example, in densenet we just need to redefine the classifier, but resnet can be different, since there we need to redefine the fc layer.
However, in any case we need to take care of the input and output sizes. For example, in densenet121 the classifier takes 1024 inputs and can have up to 1000 outputs, while in resnet101 the fc layer takes 2048 inputs for up to 1000 outputs. Since the dog-and-cat problem has only 2 outputs, we can use either pre-trained network, but we must not forget to set the output size to 2.
That is it for today. Tomorrow I will try to run the transfer learning process, since I do not have my computer with me today.