Sudha Subramanian

What is Transfer Learning?

One of the advanced techniques you hear about in any image classification or NLP task is “Transfer Learning”. So what is it? Let’s find out.

Well, as the name implies, “Transfer Learning” refers to learning gained in one task or use case that can be repurposed for another. In any neural network or Deep Learning task, performance comes down to how well the weights, obtained through gradient descent and backpropagation, capture the non-linearities in the data the network sees. And the deeper the architecture, the better it has been shown to perform. Some of the architectures that have produced great results on the ImageNet dataset are AlexNet, LeNet, ResNet, and VGG. All of these architectures are many layers deep, so training them from scratch takes many hours. One common Transfer Learning approach is to use a pre-trained model: we freeze the network up to a certain layer, keep the pre-trained weights at that point as a feature extractor, and train only the final layers on our own dataset.
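
As a rough sketch of this freeze-and-retrain idea, here is what it could look like with the keras package in R. The choice of VGG16, the layer sizes, and the two-class dataset are all illustrative assumptions on my part, not something from a specific project:

    library(keras)

    # Load VGG16 pre-trained on ImageNet, dropping its original classifier head
    base_model <- application_vgg16(weights = "imagenet", include_top = FALSE,
                                    input_shape = c(224, 224, 3))

    # Freeze the convolutional base so its pre-trained weights stay fixed
    freeze_weights(base_model)

    # Attach a small trainable head for our own (hypothetical) 2-class dataset
    outputs <- base_model$output %>%
      layer_flatten() %>%
      layer_dense(units = 256, activation = "relu") %>%
      layer_dense(units = 2, activation = "softmax")

    model <- keras_model(inputs = base_model$input, outputs = outputs)

    # Only the new head's weights are updated when we train on our data
    model %>% compile(optimizer = "adam",
                      loss = "categorical_crossentropy",
                      metrics = "accuracy")

Because the frozen base contributes no trainable parameters, training touches only the small head, which is what makes this so much cheaper than training the full network from scratch.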

Why is Transfer Learning gaining so much popularity?

Through Transfer Learning techniques, people have been able to apply what huge architectures learned to their own datasets and have produced great results. It also cuts down on training time and cost, while still giving you the benefit of these hugely successful architectures.

Where do I start?

NVIDIA offers a “Transfer Learning Toolkit” that is geared toward deep learning application tasks, enabling Data Scientists to achieve a faster, more efficient workflow. Google recently released “TensorFlow Hub”, which is essentially a library for reusing pre-built machine learning models.
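
To give a feel for what that reuse looks like, the tfhub package in R exposes TensorFlow Hub modules as Keras layers. The module handle and the two-class head below are illustrative assumptions, not an official recipe:

    library(keras)
    library(tfhub)

    # Reuse a pre-trained MobileNetV2 feature extractor from TensorFlow Hub
    model <- keras_model_sequential() %>%
      layer_hub(
        handle = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4",
        input_shape = c(224, 224, 3)
      ) %>%
      layer_dense(units = 2, activation = "softmax")  # our own task-specific head

    model %>% compile(optimizer = "adam",
                      loss = "categorical_crossentropy",
                      metrics = "accuracy")

The hub layer downloads the published weights for you, so the whole pre-trained feature extractor is one line of model code.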

One of the architectures that has gained popularity in the field of Natural Language Processing is the Transformer, on which BERT (Bidirectional Encoder Representations from Transformers) is based. This technique generates embeddings for a word based on the context in which the word appears. In one of my recent NLP projects using BERT’s contextual embeddings, we were able to show that a model tags a patient as high risk for heart disease when there is a mention of BP above a certain level, say 190/140, and does not tag when the value is, say, 120/80. The Transformer architecture has been shown to outperform RNNs and LSTMs at modeling long-term dependencies in text. Google offers a Colab environment for you to play with BERT fine-tuning and TPUs. Here is a link to my notebook on Google Colab. Google’s Colab is a great place to get started!
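
To make “embeddings that depend on context” concrete, here is a hedged sketch using the open-source text package in R, which wraps Hugging Face transformer models such as BERT. The package choice and the example sentences are my own assumptions, not what was used in the project mentioned above:

    library(text)  # wraps Hugging Face transformers; an illustrative choice

    # Two mentions of a BP reading in different contexts
    notes <- c("Patient presented with a BP of 190/140.",
               "Patient presented with a BP of 120/80.")

    # textEmbed() returns contextual embeddings: the vectors produced for the
    # two notes differ because BERT encodes the surrounding context, which is
    # what lets a downstream classifier separate a high-risk reading from a
    # normal one
    embeddings <- textEmbed(notes, model = "bert-base-uncased")

With static embeddings such as word2vec, both mentions of “BP” would map to the same vector; the contextual representation is what carries the difference between the two readings.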

Why Learn R?

With Machine Learning and Big Data being hot topics in today’s tech world, here are a few questions that come to a beginner’s mind:

Why Data Science?

What programming languages should I learn?

Where do I start?

If you enjoy solving problems and want to use data to substantiate your findings, Data Science is the field you want to be in. We all deal with data in some form on a day-to-day basis. Let’s say you are reviewing your annual credit card statement, which categorizes your spending across the year. You can quickly see trends in each category across the months and work out where you need to cut back. If we can realize value from the data we come across and make informed decisions, we have given ourselves the jumpstart we need. Of course, you definitely need tools and skills to do this more efficiently.

Let’s understand what tools would be helpful in your journey to become a Data Scientist. One would be R, an open-source programming language used primarily for statistical analysis. R is well known for its visualization capabilities and data manipulation techniques, and it is very popular among academic scientists and researchers. Although R is widely used, there are areas where Python and other packages may be better suited. There may be scenarios where you have to deal with very large datasets, and in such cases Python may be the better tool. Clearly, there is no single programming language that is perfect for every data problem. It is really up to the Data Scientist to identify and use the best tools on hand for the job.

So, now you may be wondering where to start. I am not an R pro, but I am definitely pro-R, so you may get a somewhat biased response from me. Yes, R should be your starting point, for the following reason. Data analysis is the starting point of finding a solution to a Data Science problem, and R, with its vast number of libraries, is certainly a great tool for it. As you may already know or have heard, data wrangling is a very time-consuming process, and R offers several packages to tidy up your data and get it into a format that a model can absorb, as sketched below. So, learn and use R to the point where you can comfortably build models and fine-tune them as well.
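
As a small taste of that wrangling, here is a sketch with the dplyr and tidyr packages. The spending numbers are made up to echo the credit-card example earlier:

    library(dplyr)
    library(tidyr)

    # A messy, wide table: one column per month
    spend <- tibble(
      category = c("Groceries", "Dining"),
      Jan = c(420, 180),
      Feb = c(390, 210),
      Mar = c(405, 150)
    )

    # pivot_longer() reshapes it into the tidy long format most models expect
    tidy_spend <- spend %>%
      pivot_longer(cols = Jan:Mar, names_to = "month", values_to = "amount")

    # Average monthly spend per category
    tidy_spend %>%
      group_by(category) %>%
      summarise(avg_amount = mean(amount))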

And the best way to learn a new tool or language is to experience it! Just download a public dataset, such as the famous Titanic dataset, and start exploring. That was the first dataset I played with on Kaggle, a platform where Data Scientists learn and compete. The platform lets you explore the dataset, then build and score models, giving you a complete overview of the process. Do watch this video on Data Analysis; it is a great way to get started with both R and Data Science.
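
For instance, a first pass at the Titanic data in R might look like this, assuming you have downloaded train.csv from the Kaggle competition page:

    library(readr)
    library(dplyr)

    # train.csv comes from the Kaggle Titanic competition download
    titanic <- read_csv("train.csv")

    # A quick look at the columns and their types
    glimpse(titanic)

    # Survival rate by passenger class: a first exploratory cut
    titanic %>%
      group_by(Pclass) %>%
      summarise(survival_rate = mean(Survived))

A handful of lines like these already surface a real pattern in the data, which is exactly the kind of quick feedback that makes R a friendly starting point.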