My research spans motion planning and machine learning for robots solving complex tasks.

2017 - Present


Bayesian Reinforcement Learning

Addressing uncertainty is critical for autonomous systems to robustly adapt to the real world. We formulate the problem of model uncertainty as a Bayes-Adaptive Markov Decision Process (BAMDP), where an agent maintains a posterior distribution over latent model parameters given a history of observations and maximizes its expected long-term reward with respect to this belief distribution. We propose algorithms to solve continuous BAMDPs efficiently.
Papers: ICLR'19, arXiv'18
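The core loop of the formulation above, maintaining a posterior over a latent model parameter and acting to maximize expected reward under that belief, can be sketched in a few lines. This is a minimal illustrative example of my own (a conjugate Beta-Bernoulli belief over one latent success probability), not the algorithm from the papers:

```python
# Hypothetical toy setting: one latent parameter, the success probability
# of a "risky" action with reward 10; a "safe" action always pays 4.
# The agent updates a Beta belief from outcomes and acts greedily
# with respect to the expected reward under its current belief.

def update_belief(alpha, beta, success):
    """Conjugate Beta posterior update after one Bernoulli observation."""
    return (alpha + 1, beta) if success else (alpha, beta + 1)

def expected_reward_risky(alpha, beta, reward=10.0):
    """Expected reward of the risky action under a Beta(alpha, beta) belief."""
    return reward * alpha / (alpha + beta)

SAFE_REWARD = 4.0

def choose_action(alpha, beta):
    """Pick the action maximizing expected reward under the belief."""
    return "risky" if expected_reward_risky(alpha, beta) > SAFE_REWARD else "safe"

# Start from a uniform prior Beta(1, 1) and observe two failures:
belief = (1, 1)
for outcome in (False, False):
    belief = update_belief(*belief, outcome)
# Posterior is Beta(1, 3); expected risky reward drops to 10 * 1/4 = 2.5,
# so the agent now prefers the safe action.
```

A full BAMDP solver would also reason about how future observations sharpen the belief (the value of information); this sketch shows only the belief-update and greedy-action step.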

2016 - 2017


Bayesian Traveler's Problem

Consider a traveler on a graph who must reach a goal (or cover a set of goals) but does not know which edges are traversable. An edge's traversability is revealed only when the traveler attempts it (or visits an adjacent vertex). Given a prior over edges, how should the traveler move to minimize expected travel time? Many real robotics applications are instances of this problem, e.g., manipulation under occlusion.
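The trade-off at the heart of this problem can be made concrete on a toy two-route instance. The numbers and function names below are hypothetical, chosen only to illustrate the expected-cost comparison, not taken from any paper:

```python
# Hypothetical toy instance: two routes from start to goal.
# - Short route: 2 units to reach a risky edge of length 1, which is
#   traversable with prior probability p; its status is revealed on arrival.
#   If blocked, the traveler backtracks 2 units and takes the long route.
# - Long route: 10 units, always traversable.

LONG_ROUTE = 10.0

def expected_cost_try_short(p, approach=2.0, risky=1.0, backtrack=2.0):
    """Expected travel time of attempting the short route first."""
    cost_if_open = approach + risky
    cost_if_blocked = approach + backtrack + LONG_ROUTE
    return p * cost_if_open + (1 - p) * cost_if_blocked

def best_first_move(p):
    """Compare trying the short route against committing to the long one."""
    return "short" if expected_cost_try_short(p) < LONG_ROUTE else "long"
```

With these numbers the traveler should gamble on the short route whenever p exceeds roughly 4/11: the prior on edges directly determines which move minimizes expected travel time, which is exactly the decision the problem formalizes.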