Cooperative Control


Cooperative control deals with the problem of controlling a multi-agent robotic system to fulfill a common goal. The tasks associated with these robotic systems include search, exploration, surveillance, rescue operations, and mapping unknown or partially known environments. In real-world applications, the control of multiple robots is often complicated by the following factors:

  • Resource constraints on sensing, motion and communication capabilities, on-board computation capacities, and power supplies
  • Unknown, uncertain environments require robotic systems to be adaptive to environmental changes, accurate in information acquisition, and prompt and smart in decision making
  • Distributed, asynchronous information and computation structures are inherent due to geographical separation and communication constraints


We are interested in the decision making process in cooperative control applications. We currently concentrate on the cooperative mission control of multiple Uninhabited Autonomous Vehicles (UAVs) in a battlefield environment. The main objectives include developing the methodology and software that enable

  • Dynamic task assignment
  • Vehicle routing and obstacle-free path planning
  • Distributed and real-time decision making
  • Optimal trajectory generation under nonholonomic constraints
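As a minimal illustration of the dynamic task assignment objective above, the sketch below assigns each target to the nearest agent with a simple load-balancing term. This greedy heuristic, and all its positions and weights, are illustrative assumptions, not the group's actual assignment algorithm.

```python
import math

def greedy_assignment(agents, targets):
    """Assign each target to its closest available agent (greedy heuristic).

    agents, targets: lists of (x, y) positions.
    Returns a dict mapping target index -> agent index.
    Illustrative baseline only.
    """
    assignment = {}
    load = [0] * len(agents)  # targets already assigned per agent, to balance work
    for t_idx, (tx, ty) in enumerate(targets):
        # Pick the agent minimizing distance plus a small load penalty.
        best_agent = min(
            range(len(agents)),
            key=lambda a: math.hypot(agents[a][0] - tx, agents[a][1] - ty) + load[a],
        )
        assignment[t_idx] = best_agent
        load[best_agent] += 1
    return assignment

agents = [(0.0, 0.0), (10.0, 0.0)]
targets = [(1.0, 1.0), (9.0, 1.0), (5.0, 5.0)]
print(greedy_assignment(agents, targets))  # each target mapped to a nearby agent
```

In a dynamic setting, this assignment would be recomputed whenever a target appears, vanishes, or an agent completes its task.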

Centralized Implementations

The following videos were made by Wei Li and Xu Ning, who implemented the control strategy on Khepera II robots. The strategy runs on a central computer, and commands are sent to the robots via RF wireless communication.

Distributed Implementations

The following videos were made by Yanfeng Geng, who implemented the approach on Khepera III robots. Each robot carries a camera to obtain target information and makes its own decisions (movement heading and speed). No central computer or overhead camera is needed.
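The per-robot decision rule described above can be sketched as a simple function mapping onboard camera detections to a heading and speed. This is a hypothetical nearest-target controller for illustration, not Geng's actual control law; the speed limit and stopping behavior are assumptions.

```python
import math

def decide_motion(robot_pos, detected_targets, max_speed=0.2):
    """Choose a heading (radians) and speed toward the nearest detected target.

    robot_pos: (x, y) of this robot.
    detected_targets: list of (x, y) target estimates from the onboard camera.
    Returns (heading, speed); the robot stops if no target is visible.
    """
    if not detected_targets:
        return 0.0, 0.0  # nothing detected: hold position
    rx, ry = robot_pos
    # Head toward the closest target seen by the camera.
    tx, ty = min(detected_targets, key=lambda t: math.hypot(t[0] - rx, t[1] - ry))
    heading = math.atan2(ty - ry, tx - rx)
    dist = math.hypot(tx - rx, ty - ry)
    speed = min(max_speed, dist)  # slow down when close to the target
    return heading, speed
```

Because each robot evaluates this rule using only its own sensing, no central computer or overhead camera is required, matching the distributed setup described above.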

Case I: Environment fixed

Case II: One target is added into the mission space

Case III: There is an obstacle in the mission space, and another target is added during the mission. Before returning to the home base, each robot searches the area to check whether any target has been left unvisited.

Maximum Reward Collection Problem

The following video shows simulation results for the Maximum Reward Collection Problem (MRCP). The mission space has 25 uniformly distributed targets, with 2 agents initially located at the base. Each target's reward value decreases over time, and as can be seen, some targets vanish before the agents reach them in this scenario. The results are for a 3-L controller:
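The time-decaying rewards in the MRCP can be sketched as follows. The exponential decay form, the initial reward, the decay rate, and the vanishing threshold are all illustrative assumptions; the simulation above may use a different reward model.

```python
import math

def collected_reward(travel_time, r0=10.0, decay=0.1, vanish_threshold=0.5):
    """Reward remaining when an agent reaches a target after travel_time.

    Assumes exponential decay r(t) = r0 * exp(-decay * t); the target
    'vanishes' (yields nothing) once its reward falls below the threshold.
    All parameters here are hypothetical.
    """
    r = r0 * math.exp(-decay * travel_time)
    return r if r > vanish_threshold else 0.0  # vanished targets yield nothing

print(collected_reward(5.0))   # early arrival: most of the reward remains
print(collected_reward(60.0))  # late arrival: the target has vanished
```

Under such a model, the controller's look-ahead horizon (e.g., 3-L versus 5-L) determines how far into the future the reward decay is accounted for when routing the agents.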

The same mission is also used with a 5-L controller in the following simulation: