Have you ever wanted to build a new musical instrument that responds to your gestures by making sound? Or create live visuals to accompany a dancer? Or build an interactive art installation that reacts to the movements or actions of an audience? If so, take this course!
In this course, students will learn fundamental machine learning techniques that can be used to make sense of human gesture, musical audio, and other real-time data. The focus will be on learning about algorithms, software tools, and best practices that can be immediately employed in creating new real-time systems in the arts.
Specific topics of discussion include:
• What is machine learning?
• Common types of machine learning for making sense of human actions and sensor data, with a focus on classification, regression, and segmentation
• The “machine learning pipeline”: understanding how signals, features, algorithms, and models fit together, and how to select and configure each part of this pipeline to get good analysis results (a small code sketch of such a pipeline follows this list)
• Off-the-shelf tools for machine learning (e.g., Wekinator, Weka, GestureFollower)
• Feature extraction and analysis techniques that are well-suited for music, dance, gaming, and visual art, especially for human motion analysis and audio analysis (see the audio-feature sketch after this list)
• How to connect your machine learning tools to common digital arts tools such as Max/MSP, PD, ChucK, Processing, Unity 3D, SuperCollider, and OpenFrameworks (see the OSC sketch after this list)
• Introduction to cheap & easy sensing technologies that can be used as inputs to machine learning systems (e.g., Kinect, computer vision, hardware sensors, gaming controllers)
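To make the “machine learning pipeline” topic more concrete, here is a minimal sketch in Python. NumPy, scikit-learn, the k-nearest-neighbors classifier, and the synthetic gesture data are all illustrative choices rather than required course tools; the point is only the flow from raw signal windows, through features, to a trained model and its predictions.

    # Minimal "machine learning pipeline" sketch:
    # raw signal -> features -> algorithm -> trained model -> predictions.
    # Gesture data, feature choices, and class labels are invented for illustration.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def extract_features(window):
        """Turn a raw window of (x, y) sensor samples into a small feature vector."""
        window = np.asarray(window)
        return np.concatenate([window.mean(axis=0),                    # average position
                               window.std(axis=0),                     # how much it moves
                               window.max(axis=0) - window.min(axis=0)])  # overall extent

    # Pretend we recorded a few example windows for two gestures ("circle", "swipe").
    rng = np.random.default_rng(0)
    circle_windows = [rng.normal(0.5, 0.2, size=(20, 2)) for _ in range(10)]
    swipe_windows = [np.column_stack([np.linspace(0, 1, 20),
                                      rng.normal(0.5, 0.05, 20)]) for _ in range(10)]

    X = np.array([extract_features(w) for w in circle_windows + swipe_windows])
    y = ["circle"] * 10 + ["swipe"] * 10

    model = KNeighborsClassifier(n_neighbors=3).fit(X, y)   # the "algorithm" step

    # At run time, each new window goes through the same feature extractor
    # before being handed to the trained model.
    new_window = rng.normal(0.5, 0.2, size=(20, 2))
    print(model.predict([extract_features(new_window)]))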
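The feature extraction topic can be illustrated with two classic audio descriptors: RMS energy (a rough measure of loudness) and spectral centroid (a rough measure of brightness). The sketch below computes both with plain NumPy on a synthetic sine tone; a real system would apply the same functions to frames captured from a microphone or audio file.

    # Two common audio features computed on a synthetic test frame.
    import numpy as np

    SAMPLE_RATE = 44100

    def rms(frame):
        """Root-mean-square energy of one audio frame."""
        return float(np.sqrt(np.mean(frame ** 2)))

    def spectral_centroid(frame, sr=SAMPLE_RATE):
        """Amplitude-weighted average frequency of one audio frame."""
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
        if spectrum.sum() == 0:
            return 0.0
        return float((freqs * spectrum).sum() / spectrum.sum())

    # 50 ms frame of a 440 Hz sine as a stand-in for live audio input.
    t = np.arange(int(0.05 * SAMPLE_RATE)) / SAMPLE_RATE
    frame = 0.5 * np.sin(2 * np.pi * 440 * t)

    print("RMS:", rms(frame))                          # about 0.35 for a 0.5-amplitude sine
    print("Centroid (Hz):", spectral_centroid(frame))  # close to 440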
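Connecting your own code to tools like Wekinator is typically done over Open Sound Control (OSC). The sketch below uses the third-party python-osc package and Wekinator's documented defaults (input port 6448, message address /wek/inputs); both are configurable, so check your own setup. The two streamed values are stand-ins for whatever sensor, mouse, or audio-feature data you actually have.

    # Stream two made-up control values to a locally running Wekinator over OSC.
    import math
    import time
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 6448)  # host and default port Wekinator listens on

    for i in range(100):
        x = math.sin(i * 0.1)
        y = math.cos(i * 0.1)
        client.send_message("/wek/inputs", [float(x), float(y)])
        time.sleep(0.02)  # roughly 50 messages per second

The same pattern works from Processing, Max/MSP, SuperCollider, and the other environments listed above, each of which has its own OSC library or objects.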