SE PhD Prospectus Defense of Shirantha Welikala


ABSTRACT: A distributed cooperative multi-agent system is a collection of interacting agents deployed in a mission space, where each agent controls its local state using only locally available information so that the fleet of agents collectively optimizes a common global objective. While static optimization problems associated with multi-agent systems aim to determine a fixed set of globally optimal agent states, dynamic problems aim to obtain a set of globally optimal agent state trajectories. These problems often have non-convex objective functions, which results in multiple local optima. This prospectus explores systematic techniques that can be deployed to either escape or avoid poor local optima while searching for better (still local) optima.

First, for static multi-agent optimization problems, a distributed approach to escaping local optima using the concept of boosting functions is proposed. At a local optimum, these functions temporarily transform the (zero) gradient components into a set of boosted non-zero gradient components in a systematic manner, making the approach more effective than methods that randomly perturb the gradient components. A novel optimal variable step size scheme is also proposed to establish convergence. The developed boosting concepts are successfully applied to the class of multi-agent coverage control problems.

Second, as a means of avoiding convergence to poor local optima, the use of greedy algorithms to generate effective initial conditions is explored. Such greedy methods are computationally cheap and can often exploit submodularity properties of the problem to provide performance-bound guarantees for the obtained solutions. For the class of submodular maximization problems, two new performance bounds are proposed, and their effectiveness is illustrated on the class of multi-agent coverage control problems.

Third, the non-convex (and static) problem of determining the optimal agent team composition (OATC) for a coverage task is considered. Each agent has a cost value, and the available agent pool is finite and heterogeneous. The aim is to maximize coverage while minimizing the overall cost. It is shown that this OATC problem can be formulated and solved using a distributed gradient ascent algorithm, where the inherent non-convexities are addressed by initializing the gradient algorithm with a greedy solution.
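How a greedy solution can seed a gradient algorithm may be illustrated with a small 1-D coverage sketch (the joint-detection objective, kernel width, and candidate grid are invented for illustration and are not the OATC formulation; costs are omitted):

```python
import math

POINTS = range(10)                       # sampled 1-D mission space

def f(xs):
    """Joint detection coverage: smooth and monotone, but non-convex in
    the agent positions xs (an illustrative stand-in objective)."""
    return sum(1 - math.prod(1 - math.exp(-(p - x) ** 2 / 8) for x in xs)
               for p in POINTS)

def ascend(xs, step=0.5, iters=300, h=1e-4):
    """Gradient ascent with a central-difference gradient estimate."""
    xs = list(xs)
    for _ in range(iters):
        grad = []
        for i in range(len(xs)):
            up, dn = xs[:], xs[:]
            up[i] += h
            dn[i] -= h
            grad.append((f(up) - f(dn)) / (2 * h))
        xs = [x + step * g for x, g in zip(xs, grad)]
    return xs

def greedy_init(k, candidates=range(10)):
    """Greedily pick k candidate positions by marginal coverage gain."""
    xs = []
    for _ in range(k):
        xs.append(max(candidates, key=lambda c: f(xs + [c])))
    return [float(x) for x in xs]

naive = ascend([0.0, 0.0])        # agents start together: by symmetry they
                                  # stay glued and cluster mid-field
seeded = ascend(greedy_init(2))   # the greedy seed spreads them out
```

In this sketch the naively initialized run gets trapped in a clustered local optimum, while the greedy-seeded run converges to a noticeably better one, mirroring the role of the greedy initialization in the OATC approach.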

Finally, a class of dynamic multi-agent optimization problems, optimal persistent monitoring on graphs, is considered, where a team of agents traverses a set of nodes (targets) interconnected according to a fixed graph topology, aiming to minimize a measure of overall mean node state uncertainty. In prior work, a threshold-based controller was proposed for this problem, along with an on-line gradient-based technique to obtain the optimal thresholds. However, due to the associated non-convexities, this approach often leads to poor local optima that depend heavily on the initial thresholds used. To overcome this initialization challenge, the asymptotic steady-state behavior of the agent-target system is analyzed, and, based on the obtained results, a computationally efficient off-line greedy algorithm is developed to systematically generate initial thresholds.
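The threshold idea, and why its initialization matters, can be sketched with a much-simplified single-agent simulation (linear uncertainty dynamics, a fixed visiting cycle, and zero travel times are all simplifying assumptions, not the model in the prospectus):

```python
def simulate_threshold_policy(A, B, theta, order, horizon, dt=0.01):
    """Single agent on a cycle of targets: target j's uncertainty R[j]
    grows at rate A[j] while unattended and is driven down at net rate
    B - A[j] while the agent dwells on it; the agent leaves target i as
    soon as R[i] reaches the threshold theta[i].  Returns the
    time-averaged total uncertainty (the cost to be minimized)."""
    R = [1.0] * len(A)                    # initial target uncertainties
    pos, accum, t = 0, 0.0, 0.0
    while t < horizon:
        i = order[pos]
        for j in range(len(R)):
            if j == i:
                R[j] = max(theta[i], R[j] - (B - A[j]) * dt)
            else:
                R[j] += A[j] * dt
        if R[i] <= theta[i]:              # dwell done: move to next target
            pos = (pos + 1) % len(order)
        accum += sum(R) * dt
        t += dt
    return accum / horizon
```

Sweeping the thresholds in this toy model shows how strongly the cost depends on them (e.g. thresholds of 0 beat thresholds of 0.8 here by a wide margin), which is precisely the sensitivity that motivates generating good initial thresholds off-line.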

COMMITTEE: ADVISOR/CHAIR Christos Cassandras, SE/ECE; Sean B. Andersson, SE/ME; David Castañón, SE/ECE; Yannis Paschalidis, SE/ECE/BME

When: Monday, Dec 2, 2019, 10:00am to 12:00pm
Where: 15 Saint Mary's Street, Rm 121 (latecomers can enter from kitchen)
Boston University