Self-Driving Car Engineer

Brought by: Udacity

Overview

In this program, you will learn the techniques that power self-driving cars across the full stack of a vehicle’s autonomous capabilities. Using Deep Learning together with radar and lidar sensor fusion, you will train the vehicle to detect and identify objects in its surroundings to inform navigation.

Syllabus

  • Computer Vision
    • In this course, you will develop critical Machine Learning skills that are commonly leveraged in autonomous vehicle engineering. You will learn about the life cycle of a Machine Learning project, from framing the problem and choosing metrics to training and improving models. This course focuses on the camera sensor, and you will learn how to process raw digital images before feeding them into different algorithms, such as neural networks. You will build convolutional neural networks using TensorFlow and learn how to classify and detect objects in images (a minimal TensorFlow classification sketch appears after this syllabus). By the end of the course, you will have worked through the whole Machine Learning workflow and will understand how the work of a Machine Learning Engineer translates to the autonomous vehicle context.
  • Sensor Fusion
    • In this course, you will learn about a key enabler for self-driving cars: sensor fusion. Besides cameras, self-driving cars rely on other sensors with complementary measurement principles to improve robustness and reliability. Therefore, you will learn about the lidar sensor and its role in the autonomous vehicle sensor suite. You will learn about the lidar working principle, get an overview of currently available lidar types and their differences, and look at relevant criteria for sensor selection. Also, you will learn how to detect objects such as vehicles in a 3D lidar point cloud using a deep-learning approach and then evaluate detection performance using a set of state-of-the-art metrics.

      In the second half of the course, you will learn how to fuse camera and lidar detections and track objects over time with an Extended Kalman Filter (a minimal Kalman filter predict/update sketch appears after this syllabus). You will get hands-on experience with multi-target tracking, where you will learn how to initialize, update, and delete tracks, assign measurements to tracks with data association techniques, and manage several tracks simultaneously. After completing the course, you will have a solid foundation to work as a sensor fusion engineer on self-driving cars.
  • Localization
    • In this course, you will learn all about robotic localization, from one-dimensional motion models up to using three-dimensional point cloud maps obtained from lidar sensors. You’ll begin with the bicycle motion model, a simple approach that uses the vehicle’s motion to predict its location at the next time step, before any sensor data is incorporated. Then, you’ll move on to Markov localization for 1D object tracking, further leveraging motion models (a minimal 1D Markov localization sketch appears after this syllabus). From there, you will learn how to implement two scan matching algorithms, Iterative Closest Point (ICP) and Normal Distributions Transform (NDT), which work with 2D and 3D data. Finally, you will use these scan matching algorithms in the Point Cloud Library (PCL) to localize a simulated car with lidar sensing, using a 3D point cloud map obtained from the CARLA simulator.
  • Planning
    • Path planning routes a vehicle from one point to another, and it handles how to react when emergencies arise. The Mercedes-Benz Vehicle Intelligence team will take you through the three stages of path planning. First, you’ll apply model-driven and data-driven approaches to predict how other vehicles on the road will behave. Then you’ll construct a finite state machine to decide which of several maneuvers your own vehicle should undertake (a minimal behavior state machine sketch appears after this syllabus). Finally, you’ll generate a safe and comfortable trajectory to execute that maneuver.
  • Control
    • This course will teach you how to control a car once you have a desired trajectory: in other words, how to actuate the throttle and the steering so that the car follows a trajectory described by coordinates. The course will cover the most basic but also the most common controller, the Proportional Integral Derivative (PID) controller (a minimal PID sketch appears after this syllabus). You will understand the basic principles of feedback control and how they are used in autonomous driving.
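
To make the Computer Vision course description more concrete, here is a minimal image-classification sketch using TensorFlow’s Keras API. The input shape, layer sizes, and ten-class output are illustrative placeholders, not the course’s actual project code.

    # Minimal image-classification sketch with TensorFlow/Keras.
    # Input shape, layer sizes, and the 10-class output are placeholders.
    import tensorflow as tf

    def build_classifier(input_shape=(64, 64, 3), num_classes=10):
        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=input_shape),
            tf.keras.layers.Conv2D(32, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(64, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(128, activation="relu"),
            tf.keras.layers.Dense(num_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    # Hypothetical usage: model = build_classifier(); model.fit(images, labels, epochs=10)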
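
For the Sensor Fusion course, the sketch below shows the predict/update cycle of a Kalman filter tracking one object with a constant-velocity motion model. The course’s Extended Kalman Filter additionally linearizes a nonlinear camera measurement model; all matrices and noise values here are illustrative assumptions.

    # Kalman filter predict/update sketch for a single track (NumPy).
    # State is [x, y, vx, vy]; the lidar-style measurement observes (x, y).
    import numpy as np

    class KalmanTracker:
        def __init__(self, dt=0.1):
            self.x = np.zeros((4, 1))                 # state estimate
            self.P = np.eye(4) * 10.0                 # state covariance
            self.F = np.eye(4)                        # constant-velocity motion model
            self.F[0, 2] = self.F[1, 3] = dt
            self.Q = np.eye(4) * 0.1                  # process noise (placeholder)
            self.H = np.array([[1., 0., 0., 0.],      # measurement maps state to (x, y)
                               [0., 1., 0., 0.]])
            self.R = np.eye(2) * 0.5                  # measurement noise (placeholder)

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q

        def update(self, z):
            z = np.reshape(z, (2, 1))
            y = z - self.H @ self.x                   # residual
            S = self.H @ self.P @ self.H.T + self.R   # residual covariance
            K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P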
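
For the Localization course, here is a minimal 1D Markov localization sketch: a discrete belief over map cells is shifted by a motion model and reweighted by a sensor model. The toy map and all probabilities are assumptions chosen for illustration.

    # 1D Markov localization sketch over a discrete toy map (NumPy).
    import numpy as np

    def motion_update(belief, shift, move_noise=0.1):
        """Shift the belief by `shift` cells, leaking some probability to neighbours."""
        moved = np.roll(belief, shift)
        blurred = ((1 - 2 * move_noise) * moved
                   + move_noise * np.roll(moved, 1)
                   + move_noise * np.roll(moved, -1))
        return blurred / blurred.sum()

    def sensor_update(belief, world, measurement, p_hit=0.6, p_miss=0.2):
        """Reweight cells whose map value matches the measurement."""
        likelihood = np.where(world == measurement, p_hit, p_miss)
        posterior = belief * likelihood
        return posterior / posterior.sum()

    world = np.array(["wall", "door", "door", "wall", "wall"])   # toy map
    belief = np.full(len(world), 1.0 / len(world))               # start fully uncertain
    belief = sensor_update(belief, world, "door")                # sense a door
    belief = motion_update(belief, shift=1)                      # move one cell right
    belief = sensor_update(belief, world, "wall")                # sense a wall
    print(belief)                                                # belief now peaks at cell 3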
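
For the Planning course, the sketch below shows the shape of a behavior-planning finite state machine that picks the cheapest allowed maneuver according to a cost function evaluated against predictions of other vehicles. The states, transitions, and cost interface are illustrative assumptions.

    # Behavior-planning finite state machine sketch.
    from enum import Enum, auto

    class Maneuver(Enum):
        KEEP_LANE = auto()
        PREPARE_LANE_CHANGE_LEFT = auto()
        LANE_CHANGE_LEFT = auto()

    # Allowed successor states for each maneuver (illustrative subset).
    TRANSITIONS = {
        Maneuver.KEEP_LANE: [Maneuver.KEEP_LANE,
                             Maneuver.PREPARE_LANE_CHANGE_LEFT],
        Maneuver.PREPARE_LANE_CHANGE_LEFT: [Maneuver.KEEP_LANE,
                                            Maneuver.PREPARE_LANE_CHANGE_LEFT,
                                            Maneuver.LANE_CHANGE_LEFT],
        Maneuver.LANE_CHANGE_LEFT: [Maneuver.KEEP_LANE,
                                    Maneuver.LANE_CHANGE_LEFT],
    }

    def next_maneuver(current, cost_fn, predictions):
        """Pick the cheapest allowed successor, given a cost function that
        scores a candidate maneuver against predictions of other vehicles."""
        return min(TRANSITIONS[current], key=lambda m: cost_fn(m, predictions))

    # Hypothetical usage: next_maneuver(Maneuver.KEEP_LANE, my_cost_fn, predictions)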
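
For the Control course, here is a minimal PID controller sketch that turns cross-track error into a steering command. The gains and the control-loop rate are illustrative and would need tuning for a real vehicle.

    # PID controller sketch: steering from cross-track error (cte).
    class PID:
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.prev_error = 0.0
            self.integral = 0.0

        def control(self, cte, dt):
            """Return an actuation command from the current cross-track error."""
            self.integral += cte * dt
            derivative = (cte - self.prev_error) / dt
            self.prev_error = cte
            return -(self.kp * cte + self.ki * self.integral + self.kd * derivative)

    # Illustrative gains; a 10 Hz loop would call control() with dt = 0.1.
    steer_pid = PID(kp=0.2, ki=0.001, kd=3.0)
    steering_angle = steer_pid.control(cte=0.3, dt=0.1)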

Taught by

Sebastian Thrun, David Silver, Ryan Keenan, Cezanne Camacho, Mercedes-Benz, NVIDIA, Uber ATG, Farhan A., Krishna K., Tim H., Anu A., Shreyas R. and Vishal R.

Details

  • Provider: Udacity
  • Pricing: Paid
  • Language: English
  • Certificate available
  • Available at any time