SE PhD Final Defense: Eric Wendel

  • Starts: 10:00 am on Friday, June 27, 2025
  • Ends: 12:00 pm on Friday, June 27, 2025

TITLE: Stable autonomous visual navigation

ADVISOR: John Baillieul (ME)

COMMITTEE: Roberto Tron (ME); Sean Andersson (ME); Bobak Nazer (ECE); Chair: Emiliano Dall'Anese

ABSTRACT: Where should a robot look as it moves through an unknown scene? The answer to this deceptively simple question depends on what it can infer about the relative geometry of objects in the scene and their continuously changing relevance to its navigation task. The robot responds to these changes at discrete times with discrete actions, due to the necessarily digital nature of sensing and control. Of key concern is how the robot avoids collisions as it interacts with the scene. This work contributes a novel design methodology guaranteeing stable and autonomous robotic navigation of unknown visual scenes. Designs target embedded and distributed computing architectures and support vision-based autonomy solutions that blend data-driven and model-based methods. Contributions are organized in three parts.

The first part positions the problem of stabilizing continuous-time dynamical systems under digital feedback control within the framework of sequential online prediction. This allows us to identify synergistic connections between the design of almost globally stable nonlinear control systems and the design of forecast policies under the expert prediction protocol. The resulting methodology synthesizes a family of model predictive control strategies into an exponentially weighted system control policy. Necessary conditions for almost global stability under this policy are presented in the form of activation conditions for each predictive control strategy. In a visual navigation context, these activation conditions describe exactly when, how, and how often the robot should steer itself in order to avoid collisions over the course of its navigation task.

In the second part of this work, a robot is equipped with a depth camera, lidar, or similar range-imaging device and deployed to different scenes within the Isaac Lab simulation environment. Simulation results confirm theoretical stability guarantees for real-time collision avoidance without training. Videos are available online: https://www.youtube.com/playlist?list=PLMCnizjnH21aDq-bdCsVyXsu9vyJxzS2_

In the third part, the robot is instead equipped with a conventional digital camera and a family of experts that forecast the activation conditions for almost global stability. The sequential interactions between system and scene adhere to the standard bandits-with-expert-advice protocol, and we propose a standard exponential weights policy for keyframe-based navigation. Performance at visual odometry and scene reconstruction tasks is characterized in terms of the minimax redundancy of the photometric losses received from the scene. Our final contribution is a new algorithm for real-time, distributed photometric point feature tracking.
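Both the exponentially weighted control policy of part one and the exponential weights policy of part three build on the classic exponentially weighted average forecaster from online learning. The Python sketch below shows that textbook forecaster as background for the protocol the abstract references; it is a generic illustration, not code from the dissertation, and the function name, learning rate, and loss normalization are our assumptions. In the bandit setting of part three, only the chosen expert's loss is observed each round, so importance-weighted loss estimates would stand in for the full loss vectors used here.

  import numpy as np

  def exponential_weights(loss_rounds, eta):
      # Exponentially weighted average forecaster (Hedge) -- illustrative sketch.
      # loss_rounds: iterable of length-K arrays; entry t gives each of the
      #   K experts' losses in round t, assumed bounded in [0, 1].
      # eta: learning rate; over a known horizon T, eta = sqrt(8 * ln(K) / T)
      #   yields the classic O(sqrt(T ln K)) regret bound.
      # Yields the probability vector placed over the experts each round.
      cum_loss = None
      for losses in loss_rounds:
          losses = np.asarray(losses, dtype=float)
          if cum_loss is None:
              cum_loss = np.zeros_like(losses)
          weights = np.exp(-eta * cum_loss)  # downweight experts by past loss
          yield weights / weights.sum()      # normalized mixture over experts
          cum_loss += losses                 # full-information update

In the dissertation's setting, as described in the abstract, the experts correspond to individual predictive control strategies whose activation conditions govern when each may act; modeling that coupling is beyond this sketch.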

Location: EMB 121

Hosting Professor: John Baillieul (ME, SE)