How Smart Does an Agent Need to Be?

Scott Kirkpatrick (1) and Johannes J. Schneider (1,2)
(1) Hebrew University of Jerusalem, Israel
(2) Johannes Gutenberg University, Mainz, Germany

Abstract

The classic distributed computation is done by atoms, molecules, or spins in vast numbers, each equipped with nothing more than knowledge of their immediate neighborhood and the rules of statistical mechanics. Such agents, 10^23 or more of them, are able to form liquids and solids from gases, realize extremely complex ordered states, such as liquid crystals, and even decode encrypted messages. I'll describe a study done for a sensor-array "challenge problem" in which we have based our approach on old-fashioned simulated annealing to accomplish target acquisition and tracking under the rules of statistical mechanics. We believe the many additional constraints that occur in the real problem can be folded, step by step, into this stochastic approach. The results have applicability to other network management problems on scales where a distributed solution will be mandatory.

Introduction

A very old idea in distributed computing and communication is the network that self-organizes to perform some local function of moderate complexity and then communicates its discoveries to some wider world that is interested. In military communications, this might take the form of sensor arrays of simple, inexpensive units "sprinkled" behind a rapidly advancing force in battle to provide communications, or around a sensitive installation to create a defensive perimeter. We recently participated in a study of an application of this sort. From our experience, we concluded that understanding the complexity of these self-organizing systems is critical to achieving their objectives, and is sorely neglected in the most popular approaches to programming them. The study which we joined (in midstream) had been envisioned as an exercise in multi-agent distributed programming with negotiation protocols. These were to be employed to resolve a difficult optimization problem, which would frequently require settling for locally sub-optimal results in order to meet challenging real-time constraints with acceptable solution quality.

The sensors in this array are Doppler radars, rather useless individually but capable of tracking targets if three or more of the radars can lock onto a single foreign object for a sufficient time. In possible applications, such as protecting an aircraft carrier during refueling, one could imagine having a few thousand such antennas floating in the sea. But for research and demo purposes, controlling four to twenty such antennas was considered quite challenging. Thus the temptation was strong to write very special-case agents, and to evolve very specific negotiation protocols as overloads and other problems were discovered. One of the other groups in this study was planning to develop a target selection protocol, with negotiation, that would support 64 antennas observing a dozen or so targets, and to simulate it on a 32-processor server cluster, but not in real time. Working with Bart Selman and the Intelligent Information Systems Institute of Cornell University, we were asked to determine if there might be phase transitions, or other threshold phenomena, blocking the way to success in this endeavor, and if so, what to do about it.
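As a concrete illustration of the tracking requirement just described (a target counts as tracked only while three or more radars hold it at the same time), the short Python sketch below checks that coverage condition for a candidate sensor-to-target assignment. It is only a minimal sketch; the names (assignment, tracked_targets, MIN_LOCKS) are ours for illustration and are not taken from the study.

```python
from collections import Counter

# assignment[s] is the target index that sensor s is currently pointed at,
# or None if the sensor is idle. A target is considered "tracked" only
# while at least MIN_LOCKS sensors hold it simultaneously.
MIN_LOCKS = 3

def tracked_targets(assignment):
    """Return the set of targets currently covered by >= MIN_LOCKS sensors."""
    counts = Counter(t for t in assignment.values() if t is not None)
    return {target for target, n in counts.items() if n >= MIN_LOCKS}

# Example: four sensors, two visible targets.
assignment = {"s1": 0, "s2": 0, "s3": 0, "s4": 1}
print(tracked_targets(assignment))   # {0}; target 1 has only one lock
```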
We wrote a simulation of the target assignment and tracking problem. Because of our concern with complexity and its scaling, we treated the antennas as spins, not agents, and looked first for a Hamiltonian that would give them the right behavior, counting on statistical mechanics and temperature to handle the negotiation part. It worked rather well, although a few inventions were required to make everything work. What follows is lightly edited from our progress reports. After describing our effort, we conclude with comments on the role of phase transitions in large-scale engineering problems of this sort as they continue to grow in scale over future decades.

Physics-inspired negotiation for Sensor Array self-organization

We implemented a family of models which borrow insights and techniques from statistical physics and use them to solve the sensor challenge problem under a series of increasingly restrictive real-time constraints. We describe results with fairly difficult communications constraints and moving targets, covering the phases from target acquisition to tracking. The method can deal with momentary overloads of targets by gracefully degrading coverage in the most congested areas, but picking up adequate coverage again as soon as the target density decreases in those areas, and holding it long enough for the target's characteristics to be determined and a response initiated. Our utility function-based approach could be extended to include more system-specific constraints yet remain efficient enough to scale to larger numbers of sensors.

Our model consists of hundreds of sensors attempting to track a number of targets. The sensors are limited by their ability to communicate quickly and directly with neighboring sensors, and can see targets only over a finite range (which probably exceeds the range over which they can communicate with one another). We take the communication limits to be approximately the values that arise in the Seabot deployment by MITRE. This is modeled probabilistically. Initially we assumed that there is a 90% probability of a communication link working between two neighboring sensors, a 50% probability at twice that distance, and a 10% probability at three times that distance, using a Fermi function for the calculation at an arbitrary distance. Subsequently, we explored other characteristic cutoff ranges for communications, using the same functional form for the probability of communication. At the start of each simulation we use this function to decide which communication links are working. The range at which a target can be tracked is a parameter which we vary from a minimum of 1.5 times a typical neighbor distance to 4 times this distance. Sensors are placed on a regular lattice, with positions varied from the lattice sites by a random displacement of up to 0.1 times the lattice spacing. We considered three lattice
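The communication and placement model in the last paragraph is concrete enough to sketch in code. Taking the Fermi (logistic) form p(d) = 1 / (1 + exp((d - d0)/w)), with the link length d measured in lattice spacings, the quoted 90%/50%/10% probabilities at one, two, and three neighbor distances are reproduced exactly by d0 = 2 and w = 1/ln 9, which is roughly 0.455. That parameterization is our own back-calculation from the quoted numbers, and the Python sketch below (jittered lattice placement, link sampling at the start of a run) is an illustration under those assumptions, not the authors' implementation.

```python
import math
import random

def fermi_link_probability(d, d0=2.0, w=1.0 / math.log(9.0)):
    """Probability that a link of length d (in lattice spacings) works.

    With d0 = 2 and w = 1/ln 9 this gives p(1) = 0.9, p(2) = 0.5, p(3) = 0.1,
    matching the values quoted in the text. Other characteristic cutoff
    ranges can be explored by changing d0 (and w) while keeping the same
    functional form.
    """
    return 1.0 / (1.0 + math.exp((d - d0) / w))

def place_sensors(nx, ny, jitter=0.1, rng=random):
    """Sensors on a regular square lattice, displaced by up to `jitter`
    lattice spacings. The displacement is applied per coordinate here;
    the text only says 'up to 0.1 times the lattice spacing'."""
    return [(i + rng.uniform(-jitter, jitter), j + rng.uniform(-jitter, jitter))
            for i in range(nx) for j in range(ny)]

def sample_links(sensors, rng=random):
    """Decide once, at the start of a run, which communication links work."""
    links = set()
    for i, a in enumerate(sensors):
        for j in range(i + 1, len(sensors)):
            b = sensors[j]
            d = math.hypot(a[0] - b[0], a[1] - b[1])
            if rng.random() < fermi_link_probability(d):
                links.add((i, j))
    return links

if __name__ == "__main__":
    random.seed(0)
    sensors = place_sensors(10, 10)   # 100 sensors; the text describes runs with hundreds
    links = sample_links(sensors)
    print(len(sensors), "sensors,", len(links), "working links")
```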

