Deep Learning has received significant attention over the past few years and has been employed successfully by companies such as Google, Microsoft, IBM, Facebook, and Twitter to solve a wide range of problems in Computer Vision and Natural Language Processing. In this course we will learn about the building blocks used in these Deep Learning based solutions. Specifically, we will learn about feedforward neural networks, convolutional neural networks, recurrent neural networks, and attention mechanisms. We will also look at optimization algorithms such as Gradient Descent, Nesterov Accelerated Gradient Descent, Adam, AdaGrad, and RMSProp, which are used for training such deep neural networks. At the end of this course, students will have knowledge of the deep architectures used for solving various Vision and NLP tasks.
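As a small taste of the optimization topics listed above, the simplest of these algorithms, plain gradient descent, can be sketched in a few lines. The objective function, learning rate, and iteration count below are illustrative choices, not material from the course itself:

```python
def grad(w):
    # Derivative of the toy objective f(w) = (w - 3)^2,
    # chosen only to illustrate the update rule.
    return 2.0 * (w - 3.0)

w = 0.0    # initial parameter
lr = 0.1   # learning rate (an illustrative value)

for _ in range(100):
    w -= lr * grad(w)  # gradient-descent update: w <- w - lr * df/dw

print(round(w, 4))  # approaches the minimum at w = 3
```

The optimizers covered later in the course (Nesterov, Adam, AdaGrad, RMSProp) refine this same update rule with momentum and per-parameter adaptive step sizes.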
INTENDED AUDIENCE: Any Interested Learners
PREREQUISITES: Working knowledge of Linear Algebra and Probability Theory. It would be beneficial if the participants have done a course on Machine Learning.