Creative Applications of Deep Learning with TensorFlow

By: Kadenze

Overview

This first course in the two-part program, Creative Applications of Deep Learning with TensorFlow, introduces you to deep learning: the state-of-the-art approach to building artificial intelligence algorithms. We cover the basic components of deep learning, what it means, and how it works, and develop the code necessary to build various algorithms such as deep convolutional networks, variational autoencoders, generative adversarial networks, and recurrent neural networks.

A major focus of this course is not only understanding how to build the necessary components of these algorithms, but also how to apply them to explore creative applications. We'll see how to train a computer to recognize objects in an image and use this knowledge to drive new and interesting behaviors, from understanding the similarities and differences in large datasets and using them to self-organize, to infinitely generating entirely new content or matching the aesthetics or contents of another image.

Deep learning offers enormous potential for creative applications and in this course we interrogate what's possible. Through practical applications and guided homework assignments, you'll be expected to create datasets, develop and train neural networks, explore your own media collections using existing state-of-the-art deep nets, synthesize new content from generative algorithms, and understand deep learning's potential for creating entirely new aesthetics and new ways of interacting with large amounts of data.



What students are saying:


"It was a fantastic course and I want to say "super big thank you" to the instructor Parag Mital. His lectures were highly valuable and inspiring. It not only gave me inspiration in how to apply ML for art, it gave me deeper insights in Deep Learning in general."


"After taking several courses in Machine Learning, I came across this course and it immediately caught my attention due to the the speed of delivery, content topics and it's pace when talking about concepts such as gradient descent and convolutions. Honestly, the course truly is EXCELLENT. Parag really is great a presenting the materials in an easy-to-understand manner, and perhaps more importantly, he has you focus on the RIGHT concepts and not going down rabbit-holes."

Syllabus

Session 1: Introduction To TensorFlow
We'll cover the importance of data in machine and deep learning algorithms, the basics of creating a dataset, and how to preprocess datasets, then jump into TensorFlow, a library for creating computational graphs built by Google Research. We'll learn the basic components of TensorFlow and see how to use it to filter images.
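
As an illustration of the kind of image filtering covered in this session, here is a minimal sketch in current TensorFlow 2.x style (not the course's own code, which was written against the earlier TensorFlow 1.x API): a Gaussian kernel is built with NumPy and applied to an image with tf.nn.conv2d. The image here is random noise standing in for real data.

```python
# Minimal sketch: blur an image by convolving it with a Gaussian kernel.
import numpy as np
import tensorflow as tf

def gaussian_kernel(ksize=15, sigma=3.0):
    """Build a 2-D Gaussian kernel shaped [ksize, ksize, 1, 1] for conv2d."""
    ax = np.linspace(-(ksize // 2), ksize // 2, ksize)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    k /= k.sum()
    return tf.constant(k[:, :, None, None], dtype=tf.float32)

# A placeholder grayscale image with values in [0, 1]; substitute your own.
img = np.random.rand(128, 128).astype(np.float32)
x = tf.constant(img[None, :, :, None])  # shape [batch, height, width, channels]
blurred = tf.nn.conv2d(x, gaussian_kernel(), strides=1, padding="SAME")
print(blurred.shape)  # (1, 128, 128, 1)
```
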
Session 2: Training A Network With TensorFlow
We'll see how neural networks work, how they are "trained", and the basic components of training a neural network. We'll then build our first neural network and use it for a fun application: teaching a neural network how to paint an image.
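
Below is a minimal sketch of the "paint an image" exercise, again in TensorFlow 2.x / Keras style rather than the course's own code: a small fully connected network is trained to map normalized pixel coordinates (y, x) to colors (r, g, b), so evaluating it at every coordinate reproduces, or "paints", the target image. The target here is a random placeholder and the layer sizes are illustrative.

```python
# Minimal sketch: teach a network to "paint" an image by regressing
# pixel coordinates onto pixel colors.
import numpy as np
import tensorflow as tf

img = np.random.rand(64, 64, 3).astype(np.float32)  # placeholder target image
h, w, _ = img.shape

# Inputs: normalized (y, x) coordinates. Targets: the RGB color at each pixel.
ys, xs = np.mgrid[0:h, 0:w]
coords = np.stack([ys.ravel() / h, xs.ravel() / w], axis=1).astype(np.float32)
colors = img.reshape(-1, 3)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(3, activation="sigmoid"),  # RGB values in [0, 1]
])
model.compile(optimizer="adam", loss="mse")
model.fit(coords, colors, batch_size=256, epochs=5, verbose=0)

# The network's "painting": its predicted color at every pixel coordinate.
painted = model.predict(coords, verbose=0).reshape(h, w, 3)
```
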
Session 3: Unsupervised And Supervised Learning
This session goes deep. We create deep neural networks capable of encoding a large dataset and see how we can use this encoding to explore "latent" dimensions of a dataset or to generate entirely new content. We'll see what this means, how "autoencoders" can be built, and learn about state-of-the-art extensions that make them incredibly powerful. We'll also learn about another type of model that performs discriminative learning and see how it can be used to predict the labels of an image.
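
Here is a minimal autoencoder sketch in TensorFlow 2.x / Keras style (not the course's own architecture), assuming flattened inputs of size 784 such as MNIST digits: the encoder compresses each input into a small latent code, and the decoder reconstructs the input from that code.

```python
# Minimal sketch: an autoencoder that learns a compact latent code.
import tensorflow as tf

latent_dim = 32  # size of the "latent" representation

encoder = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(latent_dim, activation="relu"),
])
decoder = tf.keras.Sequential([
    tf.keras.Input(shape=(latent_dim,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(784, activation="sigmoid"),
])
autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")

# autoencoder.fit(x_train, x_train, epochs=10)  # train to reconstruct the input
# z = encoder(x_batch)                          # explore the latent dimensions
# x_hat = decoder(z)                            # or decode new codes to generate content
```
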
Session 4: Visualizing And Hallucinating Representations
This session works with state-of-the-art networks and shows how to understand what "representations" they learn. We'll see how this process allows us to perform some really fun visualizations, including "Deep Dream", which can produce infinite generative fractals, and "Style Net", which allows us to combine the content of one image with the style of another to automatically produce widely different painterly aesthetics.
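
As a rough illustration of the gradient-ascent idea behind "Deep Dream" (a sketch in TensorFlow 2.x / Keras style, not the course's own code): instead of updating a network's weights, we update the image itself to maximize the activations of a chosen layer in a pretrained network. The choice of InceptionV3 and its "mixed3" layer here is only illustrative.

```python
# Minimal sketch: gradient ascent on an image to maximize a layer's activations.
import tensorflow as tf

# A pretrained network; we only need features from an intermediate layer.
base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet")
features = tf.keras.Model(base.input, base.get_layer("mixed3").output)

# Start from noise in [-1, 1] (InceptionV3's expected input range); a photo works too.
img = tf.Variable(tf.random.uniform((1, 299, 299, 3), minval=-1.0, maxval=1.0))
opt = tf.keras.optimizers.SGD(learning_rate=0.01)

for step in range(100):
    with tf.GradientTape() as tape:
        # Negative mean activation: minimizing it maximizes the activations.
        loss = -tf.reduce_mean(features(img))
    grads = tape.gradient(loss, [img])
    opt.apply_gradients(zip(grads, [img]))
    img.assign(tf.clip_by_value(img, -1.0, 1.0))  # keep pixel values in range
```
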
Session 5: Generative Models
The last session offers a teaser into some of the future directions of generative modeling, including state-of-the-art models such as the "generative adversarial network" and its combination with a "variational autoencoder", which together allow for some of the best encodings and generative models of datasets that currently exist. We'll also see how to begin to model time and give neural networks memory by creating "recurrent neural networks", and see how to use such networks to generate entirely new text.
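
Finally, a minimal sketch of a character-level recurrent model in the spirit of the generative-text discussion, again in TensorFlow 2.x / Keras style rather than the course's own code; the vocabulary size, layer sizes, and training corpus are placeholders.

```python
# Minimal sketch: a character-level recurrent network for generating text.
import tensorflow as tf

vocab_size, embed_dim, rnn_units = 65, 64, 256  # illustrative sizes

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embed_dim),
    tf.keras.layers.LSTM(rnn_units, return_sequences=True),
    tf.keras.layers.Dense(vocab_size),  # logits over the next character
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)

# model.fit(char_ids, next_char_ids, epochs=10)  # targets are inputs shifted by one
# To generate: feed a seed sequence, sample from the softmax over the final step's
# logits, append the sampled character to the sequence, and repeat.
```
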

Taught by

Parag Mital


  • Kadenze
  • Free
  • English
  • Certificate available
  • Available anytime
  • Intermediate
  • N/A