Unified Vision-Based Motion Estimation and Control for Multiple and Complex Robots

Sponsor: National Science Foundation (NSF)

Award Number: 2212051

PI: Roberto Tron


The project enables teams of robots to collaborate on physical tasks, such as assembling a building from prefabricated components under the direction of a human worker. In such settings, each robot might be equipped with cameras to orient itself and have some limitations on how it can move. To achieve the robotic team’s goals, each robot needs to know its location, where to find the prefabricated components, and where the final building should be placed. In addition, the robots need to coordinate with each other on how they move to transport and assemble the components, and to inspect the results of their work. This project develops novel mathematical and engineering methods that would enable the robotic teams to work collaboratively to achieve their goals. The developed methods will be generalizable, making them applicable to many different situations and yielding better results than what is possible with existing approaches. To facilitate a broader impact of this work, the research team will develop easy-to-use software that facilitates the application of the research to different and new problems. The research team will also collaborate with engineers from Autodesk Inc. to ensure that the developed solution can benefit existing professional design and visualization tools currently used in industry.

The robot tasks laid out above imply non-trivial vision-kinodynamic constraints (e.g., rotations, perspective projections) deriving from the intrinsic geometric properties of how the robots move and sense. This project introduces a novel parametrization of the problem, called Shape-of-Motion, that can flexibly incorporate different combinations of vision-kinodynamic constraints (such as closed-kinematic-chain, feature matching across multiple images, projection, and field-of-view constraints) as linear constraints on a low-rank matrix. An associated optimization solver, based on the Alternating Direction Method of Multipliers (ADMM), will find solutions in an elegant and unified manner by iterating between low-rank projections and least-squares steps; different application-specific requirements can then be incorporated in the solver simply as additional linear constraints. The main intellectual merit of the developed technique is its versatility and ability to holistically tackle complex problems that are traditionally solved using stacks of separate algorithms (e.g., matching features across images, followed by a 3-D reconstruction of the scene and localization of the robots, followed by planning in the reconstructed map under kinematic constraints).
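To make the solver structure concrete, the sketch below illustrates the general pattern of an ADMM iteration that alternates between a least-squares step enforcing linear constraints and a truncated-SVD projection onto low-rank matrices. This is a minimal, generic illustration of that alternation, not the project's actual algorithm; the function names, the problem sizes, and the specific splitting are assumptions made for the example.

```python
import numpy as np

def low_rank_project(M, r):
    """Project M onto the set of matrices of rank at most r via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[r:] = 0.0  # zero out all but the r largest singular values
    return (U * s) @ Vt

def admm_low_rank_linear(A, b, shape, r, rho=1.0, iters=500):
    """Illustrative ADMM: find X with A @ vec(X) ~= b and rank(X) <= r.

    Splitting: X absorbs the linear (least-squares) constraints,
    Z carries the low-rank projection, U is the scaled dual variable.
    """
    m, n = shape
    X = np.zeros((m, n))
    Z = np.zeros((m, n))
    U = np.zeros((m, n))
    # Precompute the normal-equations matrix for the X-update, which solves
    #   min_X ||A vec(X) - b||^2 + (rho/2) ||X - Z + U||_F^2
    AtA = A.T @ A + rho * np.eye(m * n)
    Atb = A.T @ b
    for _ in range(iters):
        rhs = Atb + rho * (Z - U).ravel()
        X = np.linalg.solve(AtA, rhs).reshape(m, n)  # least-squares step
        Z = low_rank_project(X + U, r)               # low-rank projection step
        U = U + X - Z                                # dual (running residual) update
    return Z
```

In the project's formulation, the role of `A` and `b` would be played by the stacked vision-kinodynamic constraints (projection, feature-matching, kinematic-chain), while the low-rank projection enforces the Shape-of-Motion structure; swapping in different constraint rows changes the application without changing the solver.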
