James Alan Preiss

robotics, learning, control

CV | g-scholar | github | linkedin | arxiv

I am an assistant professor at UC Santa Barbara Computer Science. My interests are robotics, machine learning, optimization, control, and their intersections — both theoretical and practical.

My long-term goal is to develop robot learning into a general-purpose tool. This means building algorithms that can learn control policies for diverse tasks and hardware without extensive per-task manual tuning. By making robotics more reliable and accessible, we can expand its scope beyond resource-intensive grand challenge problems to include a "long tail" of creative, unexpected applications.

I am recruiting students! Please read this page for more information.

This page last updated October 2024.

bio

pronouns: he/him/his

I am an assistant professor at UC Santa Barbara Computer Science. Previously, I was a postdoc at Caltech Computing + Mathematical Sciences, working with Yisong Yue, Soon-Jo Chung, and Adam Wierman. I completed my Ph.D. in Computer Science at the University of Southern California in 2022, where I was advised by Gaurav S. Sukhatme.

Earlier, I developed software for 3D scanning & processing at Geomagic and interactive statistics at the JMP division of SAS Institute. I completed my B.A.S. with concentrations in Mathematics and Photography at The Evergreen State College.

research topics

Online/Adaptive Control

In the real world, we cannot prepare for all possible situations in advance. Our algorithms adapt a control policy to an adversarial, time-varying environment with regret guarantees. For parameterized policies, our gradient-based method substantially generalizes previous work and includes a local result for nonconvex settings. For finite policy sets, our switching-based method can select the best black-box controller even if some candidates are unstable.
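
A caricature of the gradient-based approach, as illustrative Python only (the names are mine; this is not the published algorithm): online gradient descent updates the policy parameters after each round of feedback from a possibly time-varying cost.

# Illustrative sketch of online gradient-based policy adaptation. The cost
# may change adversarially between rounds; under convexity and smoothness
# assumptions, updates of this form admit regret guarantees.
import numpy as np

def online_policy_adaptation(grad_cost, theta0, T, eta=0.05):
    # grad_cost(t, theta): gradient of the round-t cost at theta,
    # revealed only after acting at round t.
    theta = np.asarray(theta0, dtype=float)
    for t in range(T):
        theta = theta - eta * grad_cost(t, theta)  # step toward lower cost
    return theta

# Example: track a drifting quadratic cost c_t(theta) = ||theta - target(t)||^2.
target = lambda t: np.array([np.sin(0.1 * t), np.cos(0.1 * t)])
theta = online_policy_adaptation(lambda t, th: 2.0 * (th - target(t)),
                                 np.zeros(2), T=200)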

Reinforcement Learning

Directly optimizing a policy by strategic trial and error can improve performance and reduce engineering effort compared to classical methods; our sim-to-real transfer of control policies for quadrotors is one example. However, RL can also be unreliable, and its theory (such as our work on REINFORCE) does not yet extend to the complexity of real robots. Developing reliable RL algorithms for robotics requires further theoretical insight.
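
For readers unfamiliar with REINFORCE, here is a minimal textbook sketch in Python (illustrative only; the names are mine, and this toy bandit is far simpler than the settings analyzed in the paper). It applies the score-function estimator grad J = E[R * grad log pi(a)] to a softmax policy.

# Textbook REINFORCE on a toy bandit with a softmax policy (illustrative).
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reinforce_step(theta, reward_fn, eta=0.1, batch=32):
    grad = np.zeros_like(theta)
    for _ in range(batch):
        p = softmax(theta)
        a = rng.choice(len(theta), p=p)
        score = -p                     # grad of log softmax at action a
        score[a] += 1.0                # equals one-hot(a) minus p
        grad += reward_fn(a) * score
    return theta + eta * grad / batch  # ascend the estimated gradient

# Example: 3-armed bandit with noisy rewards; arm 2 is best.
means = np.array([0.1, 0.5, 0.9])
theta = np.zeros(3)
for _ in range(500):
    theta = reinforce_step(theta, lambda a: means[a] + 0.1 * rng.standard_normal())
print(softmax(theta))  # probability mass concentrates on arm 2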

Planning and Control with Learning Components

Planning and control with learned models can be a final goal or a subroutine of model-based reinforcement learning. For deformable objects (shown in the video), the full physical state is not directly observable, so we learn an abstract state-space model from input-output data using a recurrent neural network (sketched below). We then apply nonlinear estimation and control techniques to the learned model.

video graphics by David Millard
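
A minimal sketch of the modeling step in PyTorch (illustrative; our actual architecture, losses, and training details differ): a recurrent network consumes the input-output history, its hidden state serves as the learned abstract state, and a decoder predicts the next observation.

# Learn a latent state-space model from input-output data (illustrative).
import torch
import torch.nn as nn

class LatentDynamics(nn.Module):
    def __init__(self, u_dim, y_dim, z_dim=32):
        super().__init__()
        self.rnn = nn.GRU(u_dim + y_dim, z_dim, batch_first=True)
        self.decode = nn.Linear(z_dim, y_dim)

    def forward(self, u, y):
        # hidden state z is the abstract state inferred from the history
        z, _ = self.rnn(torch.cat([u, y], dim=-1))
        return self.decode(z)

model = LatentDynamics(u_dim=2, y_dim=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# toy data: 8 trajectories of 50 steps (inputs u, partial observations y)
u, y = torch.randn(8, 50, 2), torch.randn(8, 50, 4)
opt.zero_grad()
pred = model(u[:, :-1], y[:, :-1])            # predict y[t+1] from history
loss = nn.functional.mse_loss(pred, y[:, 1:])
loss.backward()
opt.step()

Once such a model is trained, the estimation and control machinery operates on the latent state z rather than the unobservable physical state.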

research philosophy

Problem Structure and Tractability

Current theory on learning to control provides guarantees under strong assumptions that exclude many robotic applications, yet heuristic algorithms often work well in practice. To close this gap, we need a more sophisticated understanding of why the problem instances arising in robotics are often tractable.

The animation shows a topological phenomenon from our work on suboptimal coverings, a framework for quantifying the "size" of infinite sets of control tasks.
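
Roughly (a simplified statement; see the paper for the precise definitions): for a space of tasks $\Phi$ with optimal costs $J^*_\phi$, a policy set $\Pi$ is an $\alpha$-suboptimal cover if

$$\forall \phi \in \Phi \;\; \exists \pi \in \Pi : \quad J_\phi(\pi) \le \alpha \, J^*_\phi,$$

and the covering number $N_\alpha(\Phi)$ is the smallest cardinality of such a $\Pi$. How $N_\alpha$ grows quantifies the "size" of the task space.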

Robotics Foundations

Robot learning algorithms cannot be reliable unless the underlying hardware and software are reliable too. I co-developed (with Wolfgang Hönig) the Crazyswarm platform to control 49+ quadrotors from a single PC. Crazyswarm is used at universities worldwide for multi-robot research, including our own work on multi-quadrotor planning and its extension.
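
For a flavor of the platform, here is the "hello world" usage pattern, adapted from the pycrazyswarm examples (API details may vary across versions):

# Take off, hover, and land with one Crazyflie via pycrazyswarm.
from pycrazyswarm import Crazyswarm

TAKEOFF_DURATION = 2.5
HOVER_DURATION = 5.0

def main():
    swarm = Crazyswarm()
    timeHelper = swarm.timeHelper
    cf = swarm.allcfs.crazyflies[0]   # first Crazyflie in the swarm

    cf.takeoff(targetHeight=1.0, duration=TAKEOFF_DURATION)
    timeHelper.sleep(TAKEOFF_DURATION + HOVER_DURATION)
    cf.land(targetHeight=0.04, duration=TAKEOFF_DURATION)
    timeHelper.sleep(TAKEOFF_DURATION)

if __name__ == "__main__":
    main()

The same script can drive either real quadrotors or Crazyswarm's simulator.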

Some of my other open-source libraries are listed below.

publications

* = equal contributors.

Ph.D. Dissertation

open source software

contact

(last name) @ ucsb.edu