SE PhD Prospectus Defense of Yasaman Khazaeni
- 2:00 pm to 4:00 pm on Monday, March 10, 2014
- 15 Saint Mary's Street, Rm 105
ABSTRACT: The focus of this work is the application of receding horizon schemes to the cooperative control of multi-agent systems. The multi-agent systems framework allows us to explore fundamental cooperative control problems such as reward maximization, data harvesting, resource allocation, disaster relief, and cooperative path planning. In all of these systems, a group of "agents" is assembled to perform a "mission". This requires solutions that can be obtained online under possibly limited computational capacity, environmental uncertainty, and real-time constraints. As a general setting, we consider the Maximum Reward Collection Problem (MRCP), which serves as a framework for many cooperative control problems. In the MRCP, a set of agents seeks to collect time-dependent rewards associated with stationary targets in an uncertain environment. Each target's reward is a time-varying function that decays to zero by a finite deadline. The environment is uncertain in that new targets may become available at any time. Moreover, the agents may have a limited sensing range, and they can only detect targets within a certain capture radius. Obstacles and agent failures are additional possible sources of uncertainty.
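To make the reward structure concrete, the following is a minimal sketch of one possible target reward model. The linear decay profile and the parameter names are illustrative assumptions for exposition, not the specific reward functions used in the thesis.

```python
def target_reward(initial_reward: float, deadline: float, t: float) -> float:
    """Reward remaining at time t for a single stationary target.

    Illustrative assumption: the reward decays linearly from
    initial_reward at t = 0 to zero at the deadline, and stays
    zero afterwards (the reward is lost once the deadline passes).
    """
    if t >= deadline:
        return 0.0
    return initial_reward * (1.0 - t / deadline)
```

Under this model, an agent arriving halfway to the deadline collects half of the initial reward, which captures the trade-off the MRCP optimizes: visiting a target earlier yields more reward, but may delay reaching other targets.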
We present an event-driven Cooperative Receding Horizon (CRH) controller for multi-agent systems. The idea is to solve a finite-horizon optimization problem that maximizes the total expected reward collected by all agents by the end of the mission. The horizon is then moved forward and a new optimization problem is defined. The controller generates agents' trajectories by determining a heading for each agent at certain event times. Specifically, the controller computes a control over a planning horizon and executes it only for a shorter action horizon. The action horizon changes dynamically whenever a new event occurs during the mission. The event-driven nature of the action horizon allows a new optimal control problem to be solved whenever a new piece of information becomes available, enabling us to handle problems in uncertain environments where new information may arrive at any time during the mission.
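The plan-over-a-long-horizon, act-over-a-short-horizon structure described above can be sketched as a toy simulation. Everything here is an illustrative stand-in: the greedy heading choice replaces the thesis's finite-horizon optimization, targets are tuples of (x, y, initial_reward, deadline) with linearly decaying rewards, the agent moves at unit speed, and "capture" (reaching a target) is the only event that triggers early re-planning.

```python
import math

def solve_finite_horizon(pos, targets, t, horizon):
    """Toy stand-in for the finite-horizon optimization: head toward the
    target offering the largest reward still collectable within the
    planning horizon; return None if no target is reachable in time."""
    best_heading, best_val = None, -1.0
    for (x, y, r0, deadline) in targets:
        arrival = t + math.hypot(x - pos[0], y - pos[1])  # unit speed
        if arrival < deadline and arrival <= t + horizon:
            val = r0 * (1.0 - arrival / deadline)  # linearly decaying reward
            if val > best_val:
                best_val = val
                best_heading = math.atan2(y - pos[1], x - pos[0])
    return best_heading

def crh_mission(pos, targets, planning_h=10.0, action_h=1.0, t_end=20.0):
    """Event-driven receding horizon loop: plan over planning_h, execute
    for at most action_h, and re-solve immediately when an event (here,
    a target capture) occurs before the action horizon elapses."""
    t, collected = 0.0, 0.0
    capture_radius = 0.15
    targets = list(targets)
    while t < t_end and targets:
        heading = solve_finite_horizon(pos, targets, t, planning_h)
        if heading is None:          # nothing collectable remains
            break
        dt = action_h / 10           # execute in small steps, watching for events
        for _ in range(10):
            pos = (pos[0] + math.cos(heading) * dt,
                   pos[1] + math.sin(heading) * dt)
            t += dt
            captured = [tg for tg in targets
                        if math.hypot(tg[0] - pos[0], tg[1] - pos[1]) <= capture_radius]
            if captured:
                for (x, y, r0, ddl) in captured:
                    collected += max(0.0, r0 * (1.0 - t / ddl))
                    targets.remove((x, y, r0, ddl))
                break                # event occurred: re-solve at once
    return collected
```

For example, with one target at distance 1 holding reward 10 and deadline 10, the agent reaches it at roughly t = 0.9 (inside the capture radius) and collects about 9.1; with the deadline moved before the earliest possible arrival, the planner returns no heading and nothing is collected. A full CRH controller would replace the greedy step with the expected-reward optimization over all agents and handle richer events (new targets appearing, agent failures).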
COMMITTEE: Advisor: Christos Cassandras, SE/ECE; Ioannis Paschalidis, SE/ECE; Pirooz Vakili, SE/ME; Mac Schwager, SE/ME