Teaching
CS 6756: Learning for Robot Decision Making (Fall 2023)
https://www.cs.cornell.edu/courses/cs6756/2023fa/
Machine learning has made significant advances in many AI applications, from language (e.g. ChatGPT) to vision (e.g. diffusion models). However, it has fallen short when it comes to making decisions, especially for robots interacting with the physical world. Robot decision making presents a unique set of challenges: the complexities of the real world, limited labelled data, hard physics constraints, safety requirements when interacting with humans, and more. This graduate-level course dives deep into these issues, beginning with the basics and traversing through the frontiers of robot learning. We look at:
- Planning in continuous state-action spaces over long horizons with hard physical constraints.
- Imitation learning from various modes of interaction (demonstrations, interventions) as a unified, game-theoretic framework.
- Practical reinforcement learning that leverages both model predictive control and model-free methods.
- Frontiers such as offline reinforcement learning, LLMs, diffusion policies and causal confounds.
CS 4756: Robot Learning (Spring 2023)
https://www.cs.cornell.edu/courses/cs4756/2023sp/
Advances in machine learning have proved critical for robots that continually interact with humans and their environments. Robots must solve the problem of both perception and decision making, i.e., sense the world using different modalities and act in the world by reasoning over decisions and their consequences. Learning plays a key role in how we model both sensing and acting. This course covers various modern robot learning concepts and how to apply them to solve real-world problems. We look at:
- Learning perception models using probabilistic inference and 2D/3D deep learning.
- Imitation and interactive no-regret learning that handle distribution shifts and the exploration/exploitation trade-off.
- Practical reinforcement learning leveraging both model predictive control and model-free methods.
- Open challenges in visuomotor skill learning, forecasting and offline reinforcement learning.
CS 6756: Learning for Robot Decision Making (Fall 2022)
https://www.cs.cornell.edu/courses/cs6756/2022fa/
Advances in machine learning have fueled progress towards deploying real-world robots from assembly lines to self-driving. Learning to make better decisions for robots presents a unique set of challenges. Robots must be safe, learn online from interactions with the environment, and predict the intent of their human partners. This graduate-level course dives deep into the various paradigms for robot learning and decision making. We look at:
- Interactive no-regret learning as a fundamental framework for handling distribution shifts, hedging, and the exploration/exploitation trade-off.
- Imitation learning from various modes of interaction (demonstrations, interventions) as a unified, game-theoretic framework.
- Practical reinforcement learning that leverages both model predictive control and model-free methods.
- Open challenges in safety, causal confounds and offline learning.
This course focuses on algorithms and lessons from real-world robotics, and features a strong programming component.
Imitation Learning: A Series of Deep Dives
In this 10-part series, we dive deep into imitation learning, and build up a general framework. A journey through feedback, interventions and more!
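A recurring idea in interactive imitation learning is DAgger-style data aggregation: roll out the learner's policy, but relabel the visited states with expert actions and retrain on the growing dataset. The sketch below is a minimal, hypothetical illustration of that loop; the `env_reset`, `env_step`, `expert_policy`, and `train` callables are placeholders, not code from the series.

```python
import numpy as np

def dagger(env_reset, env_step, expert_policy, train, num_iters=5, horizon=50):
    """Minimal DAgger-style loop: roll out the current learner, relabel the
    states it visits with expert actions, and retrain on the aggregate."""
    states, actions = [], []
    policy = expert_policy  # first iteration amounts to behavior cloning
    for _ in range(num_iters):
        s = env_reset()
        for _ in range(horizon):
            a = policy(s)                      # learner chooses the action...
            states.append(s)
            actions.append(expert_policy(s))   # ...but the expert labels the state
            s = env_step(s, a)
        policy = train(np.array(states), np.array(actions))
    return policy
```

Because the learner's own rollouts generate the training states, the dataset covers the distribution the learned policy actually induces, which is exactly the distribution-shift issue the series examines.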
Core Concepts in Robotics
An introductory series that revisits core concepts in robotics in a contemporary light.
Interactive Online Learning: A Unified Algorithmic Framework
In this series, we try to understand the fundamental fabric that ties together all of robot learning: "How can a robot learn from online interactions?" Our goal is to build up a unified mathematical framework to solve recurring problems in reinforcement learning, imitation learning, model predictive control, and planning.
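A canonical no-regret online learner, and a natural starting point for such a framework, is multiplicative weights (Hedge): each round, play a distribution over experts, observe per-expert losses, and exponentially downweight experts by their loss. This is a generic sketch of the textbook algorithm, not code from the series.

```python
import numpy as np

def hedge(loss_rounds, eta=0.5):
    """Multiplicative-weights (Hedge) update over a fixed set of experts.
    loss_rounds: iterable of per-round loss vectors, one entry per expert."""
    n = len(loss_rounds[0])
    w = np.ones(n)
    total_loss = 0.0
    for losses in loss_rounds:
        p = w / w.sum()                        # current play: distribution over experts
        total_loss += p @ np.asarray(losses)   # expected loss suffered this round
        w *= np.exp(-eta * np.asarray(losses)) # exponential downweighting
    return total_loss, w / w.sum()
```

The learner's cumulative loss approaches that of the best single expert in hindsight, which is the no-regret guarantee the series builds on.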
CSE 490R: Mobile Robots (At UW)
https://courses.cs.washington.edu/courses/cse490r/19sp/
Mobile Robots delves into the building blocks of autonomous systems that operate in the wild. We will cover topics related to state estimation (Bayes filtering, probabilistic motion and sensor models), control (feedback, Lyapunov, LQR, MPC), planning (roadmaps, heuristic search, incremental densification) and online learning. Students will form teams and implement algorithms on 1/10th-scale rally cars as part of their assignments. Concepts from all of the assignments will culminate in a final project with a demo on the rally cars. The course will involve programming in a Linux and Python environment, along with ROS for interfacing with the robot.
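The state-estimation unit centers on the Bayes filter: predict the belief forward through a probabilistic motion model, then correct it with the sensor likelihood. Below is a minimal sketch of one filter step over a finite state space, with hypothetical `motion_model` and `sensor_model` callables standing in for the course's probabilistic models.

```python
import numpy as np

def bayes_filter_step(belief, motion_model, sensor_model, action, observation):
    """One discrete Bayes-filter step over a finite state space.
    motion_model(u) returns a transition matrix T with T[x', x] = p(x' | x, u);
    sensor_model(z) returns the likelihood vector p(z | x') over states."""
    # Predict: bel_bar(x') = sum_x p(x' | x, u) * bel(x)
    predicted = motion_model(action) @ belief
    # Correct: bel(x') proportional to p(z | x') * bel_bar(x')
    posterior = sensor_model(observation) * predicted
    return posterior / posterior.sum()
```

The particle and Kalman filters used on the rally cars are instances of this same predict-correct structure, specialized to sample-based and Gaussian belief representations respectively.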