Distributed Control and Optimization

Introduction to Distributed Computing

Conventionally, when people use a computer to solve a computational problem (e.g., solving systems of equations, optimization, planning, or control), it is solved using a single computing entity or a number of parallel machines located close to each other. Recently, with the emergence of new application areas and advances in computer and network technologies, we are facing many new types of problems involving multiple computing entities, often called agents or nodes, that are distributed geographically and try to combine their efforts to achieve a common goal with little or no centralized coordination. Compared with more traditional monolithic computing systems, such distributed computing systems provide greater flexibility and responsiveness to dynamic and uncertain environments and avoid a single point of failure. More importantly, distributed computing systems are also the most natural solution to many problems, either because control of the system is asynchronous and spatially distributed, or because the problem data are intrinsically decentralized, or both. For example, in a vehicle tracking application of sensor networks, the tracking task cannot be accomplished from a central location due to limited sensor range and the vastness of the area to be monitored. In addition, it is often too expensive to send all the raw data collected at these distant locations to a central location for decision making, so the sensor nodes must perform some local data processing before reporting back.

Our research focuses on agents that are designed to interact with their environments (e.g., through sensors and actuators), such as a team of autonomous vehicles or a mobile sensor network. Additionally, in a computing world that is increasingly mobile, these agents often have to communicate with other agents in the system through costly and unreliable wireless links and are constrained both in their computation capacity and in their energy reserve. Such system characteristics raise many issues, such as state synchronization and energy conservation.

Coverage Control Simulator Portal

The following are some simulation results of coverage control.

The most up-to-date coverage control simulator with the option to test synchronous or asynchronous iterations.

(Screenshot: original_coverage_control_simulator)



(Screenshot: density)
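As a rough illustration of the synchronous versus asynchronous iteration option mentioned above, the sketch below runs a toy distributed gradient scheme in both modes. The quadratic disagreement cost, step size, and update order are illustrative assumptions only, not the simulator's actual code.

```java
import java.util.Arrays;
import java.util.Random;

// Toy example only: each node holds a scalar state and descends a quadratic
// "disagreement" cost with its neighbors on a line graph. It is meant to show
// the difference between synchronous and asynchronous updates, nothing more.
public class SyncAsyncDemo {
    static final int N = 5;
    static final double STEP = 0.2;   // hypothetical step size

    // Gradient of node i's local cost, the sum of (x[i] - x[j])^2 over its neighbors.
    static double localGradient(double[] x, int i) {
        double g = 0.0;
        if (i > 0)     g += 2.0 * (x[i] - x[i - 1]);
        if (i < N - 1) g += 2.0 * (x[i] - x[i + 1]);
        return g;
    }

    // Synchronous sweep: every node reads the same snapshot, then all update at once.
    static void synchronousSweep(double[] x) {
        double[] snapshot = Arrays.copyOf(x, N);
        for (int i = 0; i < N; i++) x[i] -= STEP * localGradient(snapshot, i);
    }

    // Asynchronous sweep: nodes wake up in random order and use the latest states.
    static void asynchronousSweep(double[] x, Random rng) {
        for (int k = 0; k < N; k++) {
            int i = rng.nextInt(N);
            x[i] -= STEP * localGradient(x, i);
        }
    }

    public static void main(String[] args) {
        double[] sync  = {0, 3, -2, 5, 1};
        double[] async = Arrays.copyOf(sync, N);
        Random rng = new Random(42);
        for (int t = 0; t < 50; t++) { synchronousSweep(sync); asynchronousSweep(async, rng); }
        System.out.println("sync : " + Arrays.toString(sync));
        System.out.println("async: " + Arrays.toString(async));
    }
}
```

In the synchronous sweep every node acts on the same snapshot of its neighbors' states, while in the asynchronous sweep each node uses whatever states are current when it wakes up; the simulator's option toggles the analogous choice for its node position updates.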

Coverage control with connectivity preservation, based on the work submitted to CDC 2009.

(Screenshot: connectivity_preservation_screen_shot)

Coverage control with occupancy-grid-based target detection and tracking. The sensor nodes' objective is to monitor the targets (red stars) closely while still maintaining high coverage of the entire mission space.

(Screenshot: target_detection big)

Coverage control with target tracking (moving event location). A purple circle (the intruder) moves around the mission space. The sensor nodes' objective is to monitor the intruder closely while still maintaining high coverage of the entire mission space.

(Screenshot: target_tracking_simulator)
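One way to picture the tracking-versus-coverage trade-off described in the last two items is a single objective that rewards both area coverage and detection of known target points. The sketch below is only illustrative: the exponential sensing model, the grid sampling, and the TARGET_WEIGHT constant are assumptions, not the objective actually used in the simulator.

```java
// Illustrative only: a joint objective rewarding coverage of the mission space
// plus extra reward for detecting known target points. The sensing model and
// TARGET_WEIGHT are assumptions, not the simulator's actual parameters.
public class JointObjectiveSketch {
    static final double P0 = 1.0, LAMBDA = 1.0;   // assumed sensing-model parameters
    static final double TARGET_WEIGHT = 10.0;     // assumed importance of targets

    // Probability that a sensor at (sx, sy) detects an event at (x, y):
    // exponential decay with distance (a common coverage-control assumption).
    static double detect(double sx, double sy, double x, double y) {
        return P0 * Math.exp(-LAMBDA * Math.hypot(x - sx, y - sy));
    }

    // Joint detection probability of all sensors at point (x, y).
    static double jointDetect(double[][] sensors, double x, double y) {
        double miss = 1.0;
        for (double[] s : sensors) miss *= 1.0 - detect(s[0], s[1], x, y);
        return 1.0 - miss;
    }

    // Coverage term (uniform density over the unit square, sampled on a grid)
    // plus a weighted monitoring term for the given targets.
    static double objective(double[][] sensors, double[][] targets) {
        int grid = 20;
        double coverage = 0.0;
        for (int i = 0; i < grid; i++)
            for (int j = 0; j < grid; j++)
                coverage += jointDetect(sensors, (i + 0.5) / grid, (j + 0.5) / grid);
        coverage /= grid * grid;

        double tracking = 0.0;
        for (double[] t : targets) tracking += jointDetect(sensors, t[0], t[1]);
        return coverage + TARGET_WEIGHT * tracking;
    }

    public static void main(String[] args) {
        double[][] sensors = {{0.2, 0.2}, {0.8, 0.3}, {0.5, 0.8}};
        double[][] targets = {{0.7, 0.7}};   // e.g. the intruder's current position
        System.out.println("objective = " + objective(sensors, targets));
    }
}
```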

Coverage control with map building. Obstacles are unknown to the nodes initially; they are discovered when nodes are close enough to them. Bold lines indicate the discovered portions of the obstacles.

(Screenshot: map_building_applet_snapshot)
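A minimal sketch of the discovery mechanic described above, assuming a simple distance test against a hypothetical sensing radius (the applet's actual map-building logic may differ):

```java
// Illustrative only: obstacle cells exist in the true map but start out unknown;
// a node "discovers" every obstacle cell within an assumed sensing radius, and
// discovered cells are what the applet would draw in bold.
public class MapBuildingSketch {
    static final int W = 10, H = 10;
    static final double SENSE_RADIUS = 2.0;   // assumed discovery range, in cells

    public static void main(String[] args) {
        boolean[][] trueObstacle = new boolean[W][H];
        boolean[][] discovered   = new boolean[W][H];
        for (int y = 3; y <= 6; y++) trueObstacle[5][y] = true;   // a small wall

        double nodeX = 3.0, nodeY = 4.0;                          // one node's position
        for (int x = 0; x < W; x++)
            for (int y = 0; y < H; y++)
                if (trueObstacle[x][y] && Math.hypot(x - nodeX, y - nodeY) <= SENSE_RADIUS)
                    discovered[x][y] = true;

        for (int x = 0; x < W; x++)
            for (int y = 0; y < H; y++)
                if (discovered[x][y])
                    System.out.println("discovered obstacle cell (" + x + ", " + y + ")");
    }
}
```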

There are no obstacles here, so this is a smooth optimization problem, ideal for testing convergence.

(Screenshot: no_obstacle_comm_delay)

Here we show the routing tree, the cost of sending a bit of data to the base station, and the total communication cost. The user can change the importance of saving communication cost by adjusting the “Comm. weight” parameter.

(Screenshot: communication_link_shown)
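The role of the “Comm. weight” parameter can be thought of as trading a coverage reward against a communication cost in a single score. The sketch below assumes an exponential sensing model and a simple chain route back to the base station at (0,0); the simulator's routing-tree cost computation may differ.

```java
// Illustrative only: a single score that trades coverage against communication
// cost. The exponential sensing model and the chain route back to the base
// station at (0,0) are assumptions, not the simulator's routing-tree cost.
public class CommWeightSketch {
    // Coverage reward: average joint detection probability over a sampled unit square.
    static double coverage(double[][] nodes) {
        int grid = 20;
        double total = 0.0;
        for (int i = 0; i < grid; i++)
            for (int j = 0; j < grid; j++) {
                double x = (i + 0.5) / grid, y = (j + 0.5) / grid, miss = 1.0;
                for (double[] n : nodes)
                    miss *= 1.0 - Math.exp(-2.0 * Math.hypot(x - n[0], y - n[1]));
                total += 1.0 - miss;
            }
        return total / (grid * grid);
    }

    // Communication cost: squared length of each hop, nodes relaying in a chain to (0,0).
    static double commCost(double[][] nodes) {
        double cost = 0.0, px = 0.0, py = 0.0;   // previous hop starts at the base station
        for (double[] n : nodes) {
            double hop = Math.hypot(n[0] - px, n[1] - py);
            cost += hop * hop;
            px = n[0]; py = n[1];
        }
        return cost;
    }

    public static void main(String[] args) {
        double commWeight = 0.5;   // the user-adjustable "Comm. weight"
        double[][] nodes = {{0.2, 0.2}, {0.5, 0.5}, {0.8, 0.8}};
        System.out.println("score = " + (coverage(nodes) - commWeight * commCost(nodes)));
    }
}
```

Setting commWeight to 0 would spread the nodes purely for coverage, while a large commWeight pulls them toward the base station to shorten links, which matches the qualitative effect of the weight described above.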

Coverage control Java simulation screen recordings

Joint coverage, data source detection, and data collection. First stage: optimal coverage control with a detection-probability color map overlay. Second stage: three moving data sources appear. Joint coverage, data source detection, and data collection, with an occupancy grid overlay.

We assume that all nodes need to communicate with a base station located at (0,0) and that nodes can only communicate over a limited range. Lines between nodes indicate communication links, and the dot on a node indicates the direction in which it wants to move.

Java applet simulation in a maze-like mission space with uniform event density. The blue polygons are obstacles. The mission space is color-coded to illustrate the detection probability: events in the magenta area are detected with probability greater than 0.97; the area shading from cyan to green corresponds to a detection probability in the range [0.5, 0.97]; and the area shading from white to yellow corresponds to a detection probability in the range [0, 0.5]. The white area can essentially be considered the sensors' blind spot.
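The color coding above amounts to a simple probability-to-color mapping. The 0.5 and 0.97 thresholds come from the description; the linear interpolation within each band is an assumption about how the applet renders it.

```java
import java.awt.Color;

// Illustrative only: the 0.5 and 0.97 thresholds come from the description above;
// the linear interpolation within each band is an assumption about the rendering.
public class DetectionColorMap {
    static Color colorFor(double p) {
        if (p > 0.97) return Color.MAGENTA;                        // well-covered area
        if (p >= 0.5) {                                            // cyan -> green band
            double t = (p - 0.5) / (0.97 - 0.5);
            return new Color(0, 255, (int) Math.round(255 * (1 - t)));
        }
        double t = p / 0.5;                                        // white -> yellow band
        return new Color(255, 255, (int) Math.round(255 * (1 - t)));
    }

    public static void main(String[] args) {
        for (double p : new double[]{0.0, 0.3, 0.5, 0.8, 0.98})
            System.out.println("p = " + p + " -> " + colorFor(p));
    }
}
```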

Target tracking experiment in a cluttered space. The blue blocks are obstacles, and the purple circle that appears in the upper-left corner is the target that the mobile sensor nodes are trying to monitor. At the same time, the sensors still need to maintain coverage of the mission space.

Coverage control in a cluttered mission space. Compare the final node locations in this video with the third video in the Khepera III experiment footage section.

Coverage control in a cluttered mission space. Compare the final node locations in this video with the fourth video in the Khepera III experiment footage section.

Coverage control Khepera III experiment footage

The videos in this section show implementations of the distributed coverage control algorithms on a team of Khepera III robots. Click on the images to view the videos.

An experiment in CODES. Four mobile omnidirectional sensor nodes are deployed to guard the uniform-density mission space (the wooden platform) with no obstacles (please ignore the colorful lines on the surface in this video).

An experiment in CODES. This time there are obstacles (the yellow rectangles). Sensor nodes cannot sense through the obstacles, so they have to find strategic spots in order to maximize coverage.

Experiments on a smaller platform. The colored blocks are obstacles. Compare the final node locations in this video with the third video in the Java simulation screen recordings section.

Experiments on a smaller platform. The obstacles are arranged differently from the previous video. Compare the final node locations in this video with the fourth video in the Java simulation screen recordings section.

Target tracking experiment. First, a static target is dropped in, and the three mobile sensor nodes move closer to the target and monitor it. At the same time, they still need to maintain coverage of the mission space in case a new target appears. Later, a mobile target appears and the sensor nodes try to follow and monitor it.