The multi-agent system framework consists of a team of autonomous agents cooperating to carry out complex tasks within a given environment that is potentially highly dynamic, hazardous, and even adversarial. In general, these tasks entail an exploration of the environment to discover or detect various “points of interest.”
The areas in which our lab is doing influential research on this topic are:
Distributed Control and Optimization
We are facing many new types of problems involving multiple computing entities, often called agents or nodes, that are geographically distributed and combine their efforts to achieve a common goal with little or no centralized coordination. Compared with more traditional monolithic computing systems, such distributed computing systems provide greater flexibility and responsiveness in dynamic and uncertain environments and avoid a single point of failure. More importantly, distributed computing systems are also the most natural solution to many problems, either because system control is asynchronous and spatially distributed, or because the problem data are intrinsically decentralized, or both. For example, in a vehicle tracking application of sensor networks, the tracking task cannot be accomplished from a central location due to limited sensor range and the vastness of the area to be monitored.
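A canonical instance of such coordination without a central node is distributed averaging consensus: each agent repeatedly updates its value using only its neighbors' values, and all agents converge to the global mean. The sketch below is illustrative only; the ring communication topology and the step size `alpha` are assumptions, not part of any specific system described here.

```python
import numpy as np

def consensus_step(x, neighbors, alpha=0.3):
    """One synchronous consensus update over a fixed communication graph."""
    x_new = x.copy()
    for i, nbrs in neighbors.items():
        # Each node moves toward its neighbors' values using only local data.
        x_new[i] = x[i] + alpha * sum(x[j] - x[i] for j in nbrs)
    return x_new

# Five nodes on a ring (illustrative topology), each holding a local measurement.
neighbors = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
x = np.array([2.0, 4.0, 6.0, 8.0, 10.0])

for _ in range(200):
    x = consensus_step(x, neighbors)

print(x)  # all entries approach the global mean of the initial values
```

No node ever sees all the data, yet the network as a whole computes a global quantity, which is precisely the appeal of distributed computation in settings like the sensor-network tracking example above.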
Read more …
The coverage control problem seeks the optimal arrangement of a given set of agents inside a mission space so as to maximize the probability of detecting randomly occurring events. An agent can be thought of as a sensor node, a robot, a lighting device, an antenna, or any other similar resource-providing device. The importance of the coverage control problem, as well as the applicability of its solution techniques, is evident. In our research, we develop techniques for solving the coverage control problem under a diverse set of conditions (see Platforms). These techniques can be readily adapted to numerous applications. Moreover, the generality of the coverage control problem enables us to develop optimization techniques that apply to a much broader class of problems, such as general cooperative multi-agent optimization.
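The objective just described can be made concrete with a small sketch, under illustrative assumptions: a 1-D mission space [0, 1] with uniform event density, and a sensing model in which an agent at position s detects an event at x with probability exp(-lam·|x - s|). The joint detection probability with independent sensors is 1 - ∏(1 - p_i), and a simple numerical gradient ascent improves the agents' placement. The sensing model, decay rate, and step sizes are assumptions for illustration, not the lab's specific formulation.

```python
import numpy as np

def detection_objective(s, grid, lam=5.0):
    """Expected probability of detecting a uniformly random event."""
    p_miss = np.ones_like(grid)
    for si in s:
        # Probability that agent at si misses an event at each grid point.
        p_miss *= 1.0 - np.exp(-lam * np.abs(grid - si))
    return np.mean(1.0 - p_miss)

def improve_placement(s, grid, steps=500, lr=0.01, eps=1e-5):
    """Numerical gradient ascent on agent positions (central differences)."""
    s = s.copy()
    for _ in range(steps):
        grad = np.zeros_like(s)
        for i in range(len(s)):
            sp, sm = s.copy(), s.copy()
            sp[i] += eps
            sm[i] -= eps
            grad[i] = (detection_objective(sp, grid) -
                       detection_objective(sm, grid)) / (2 * eps)
        s = np.clip(s + lr * grad, 0.0, 1.0)
    return s

grid = np.linspace(0.0, 1.0, 201)
s0 = np.array([0.4, 0.5])          # poor initial arrangement: agents bunched up
s1 = improve_placement(s0, grid)
print(detection_objective(s0, grid), detection_objective(s1, grid))
```

Starting from a bunched-up configuration, the ascent spreads the agents apart, raising the expected detection probability; this gradient-driven viewpoint is also what carries over to the broader cooperative multi-agent optimization problems mentioned above.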
Systems consisting of cooperating mobile agents are often used to perform tasks such as coverage control, surveillance, and environmental sampling. The persistent monitoring problem arises when agents must monitor a dynamically changing environment that cannot be fully covered by a stationary team of agents (as in coverage control). The exploration process eventually uncovers various “points of interest” which, once detected, become “targets” or “data sources” that need to be monitored. This setting arises in multiple application domains ranging from surveillance, environmental monitoring, and energy management down to nano-scale systems tasked with tracking fluorescent or magnetic particles for the study of dynamic processes in bio-molecular systems and in nano-medical research.
In contrast to a patrolling problem, where “every” point in a mission space must be monitored, the problem we address here involves a “finite number” of targets (typically larger than the number of agents) which the agents must cooperatively monitor through periodic visits. We model each target as a queue whose value increases while uncovered and decreases while covered by an agent. Our objective is to minimize the accumulated target values over all targets within a given time horizon. Read more …
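The target-queue model above can be sketched in a few lines, under illustrative assumptions: each target's value grows at a constant rate while unattended, shrinks at a faster service rate while an agent dwells on it, and is bounded below by zero; a single agent follows a fixed cyclic visiting schedule, and the cost is the time-integrated sum of target values. The rates, schedule, and dwell times are hypothetical, chosen only to exercise the model.

```python
def simulate(dwell, horizon=30.0, dt=0.01,
             growth=(0.5, 0.5, 0.5), service=2.0):
    """Simulate one agent cycling over targets, dwelling `dwell` time on each."""
    n = len(growth)
    values = [0.0] * n
    cost = 0.0
    t = 0.0
    while t < horizon:
        # Target the agent currently covers in the cyclic schedule.
        covered = int(t / dwell) % n
        for j in range(n):
            # Value grows while uncovered, drains while covered, floored at 0.
            rate = growth[j] - (service if j == covered else 0.0)
            values[j] = max(0.0, values[j] + rate * dt)
        cost += sum(values) * dt   # accumulate integral of total target value
        t += dt
    return cost

# Compare two schedules: frequent short visits vs. long dwells between visits.
print(simulate(dwell=0.5), simulate(dwell=2.0))
```

With these rates, shorter dwells keep the queues from building up between visits and yield a lower accumulated cost, which illustrates why the choice of visiting schedule is itself an optimization problem over the horizon.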
Once detected, these points of interest become “targets” or “data sources” which need to be monitored. If the targets have dynamics and are mobile, then they also need to be tracked by the agents. Thus, the overall objective of the system may be time-varying and combines exploration, data collection, and tracking to define a “mission”, all in the presence of uncertainties in the processes involved and usually with far more targets than agents. This setting typically arises in mobile robotic applications and sensor networks, but it is surprisingly rich and encompasses a number of other, much less obvious, application domains.
The control and coordination of agents, whether autonomous robots or sensor platforms, in dynamic, hazardous, and possibly adversarial environments is highly challenging: it involves multiple objectives and a considerable amount of information exchange, often under severe communication limitations (e.g., in a wireless network, the agents must operate with limited energy resources). Experience has shown that, even in relatively simple problems, the use of ad hoc control policies frequently leads to poorly performing systems. This motivates the use of optimization methods to ensure that well-designed, rational policies are developed that can guarantee satisfactory, if not optimal, behavior. Naturally, such optimization problems rapidly become computationally intractable, and their solutions are rarely amenable to on-line, scalable, distributed implementations.
The types of tasks performed by multi-agent systems include consensus, coverage control, and persistent monitoring. The persistent monitoring problem arises when agents must monitor a dynamically changing environment which cannot be fully covered by a stationary team of agents. Thus, persistent monitoring differs from traditional coverage tasks due to the perpetual need to cover a changing environment.