A Formalization of Equilibria for Multiagent Planning

Michael Bowling, Rune Jensen, and Manuela Veloso
Computer Science Department
Carnegie Mellon University
Pittsburgh, PA 15213-3891

Abstract

Planning has traditionally focused on single-agent systems. Although planning domain languages have been extended to multiagent domains, solution concepts have not. Previous solution concepts either focus on planning for teams of agents with a single goal, or on a single agent in an environment containing other agents with unknown and unpredictable behavior. In reality, other agents are usually acting to achieve their own explicit goals, which may not be the same or even related. In game theory, the notion of an equilibrium provides a framework for thinking about self-interested, utility-maximizing agents. We define a formalization of the multiagent nondeterministic planning problem and introduce a notion of equilibria inspired by the game-theoretic concept. As far as we know, this is the first solution framework that explicitly accounts for the various goals of all agents. In addition to the formalization, we also demonstrate how this framework applies in a number of different domains.

Introduction

Traditionally, planning assumes a single agent for which a planner needs to find a sequence of actions that can transform some initial state into some state where a given goal statement is satisfied. More generally, however, "planning" can be viewed as being concerned with the general action selection problem. The planning framework has extended the classical deterministic state-action plan generation focus along many other dimensions, in particular nondeterministic actions. With the introduction of nondeterministic actions, the presence of an environment and other agents becomes a consideration.
In fact, actions may have nondeterministic effects not only because of the uncertainty of their own execution, but also due to uncertainty in the model of the environment where the actions are executed, or due to uncertainty in the actions of other agents. The possible presence of other agents as executors is viewed as the substrate of "multiagent planning." Interest in this area has been steadily increasing, and many issues remain open.

Copyright © 2002, American Association for Artificial Intelligence (www.aaai.org). All rights reserved.

We note that appropriate languages have been contributed for the representation of domains with nondeterministic actions, including an explicit statement of the existence of other agents (Jensen & Veloso 2000). Other efforts offer modular architectures to represent multiagent planning domains (Wilkins & Myers 1998). However, we believe that there has not been a formal discussion of the space of multiagent plans or solutions.

In this work, we do not focus on the problem of plan generation for multiagent planning. Instead, we focus on the interesting question of analyzing and comparing solutions for multiagent planning. Our motivation comes from an analogy with game theory and the notion of equilibria (Owen 1995). In simple terms, a game-theoretic equilibrium is a joint solution for all the agents such that no agent has a reason to change its own choice of actions, given its desire to maximize some real-valued utility function.

Inspired by game theory, and extending previous formal definitions of single-agent planning (Cimatti et al. 2000), in this paper we introduce a formal definition of equilibria for multiagent planning. We begin by motivating, through an example, that multiagent planning must address domains where agents have different goals. This is a rich space of scenarios, including planning for teammates, agents with similar goals, adversaries, and the large spectrum in between.
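The game-theoretic notion of equilibrium described above can be made concrete with a small sketch. This is a standard pure-strategy Nash equilibrium check for a two-player normal-form game, not the paper's planning equilibrium; the function name, payoff matrices, and the prisoner's-dilemma payoffs are illustrative choices.

```python
# Illustrative sketch: a joint action (row, col) is a pure-strategy Nash
# equilibrium if neither agent can raise its own payoff by deviating alone.

def is_equilibrium(payoff_a, payoff_b, row, col):
    """payoff_a/payoff_b are matrices indexed by (row action, column action);
    agent A controls the row, agent B the column."""
    best_row = max(payoff_a[r][col] for r in range(len(payoff_a)))
    best_col = max(payoff_b[row][c] for c in range(len(payoff_b[0])))
    return payoff_a[row][col] == best_row and payoff_b[row][col] == best_col

# Prisoner's dilemma payoffs: action 0 = cooperate, action 1 = defect.
A = [[3, 0], [5, 1]]
B = [[3, 5], [0, 1]]
print(is_equilibrium(A, B, 1, 1))  # mutual defection: no unilateral gain
print(is_equilibrium(A, B, 0, 0))  # mutual cooperation: each could defect
```

Mutual defection is the unique equilibrium here even though both agents would prefer mutual cooperation, which is exactly the sense in which an equilibrium captures self-interested stability rather than joint optimality.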
We then present a formalization of the multiagent planning problem that includes all multiagent goal situations. We also define the concept of multiagent planning equilibria as a unifying solution concept for these varied scenarios. Throughout the paper, the theoretical definitions are illustrated in a number of carefully designed examples. Finally, we discuss some questions raised by this work along with future directions, and conclude.

Multiagent Plans Depend on Goals

From the beginning, plans have been contingent upon the agent's goals. Plans are usually evaluated as to whether they achieve the goals, sometimes considering how quickly, with what probability, or from what initial states. In addition, goals are often the driving mechanism for finding good plans through Means-Ends Analysis (Newell & Simon 1963). In multiagent domains, plans are of course still contingent on goals. There is an additional dependence, though: good plans also depend on the plans of the other agents, which, as we have stated, depend heavily on their goals. We will illustrate the importance of taking into account the other agents' goals with an example.

[Figure 1: A soccer-like domain.]

Soccer Domain

Consider the simple soccer-like domain simplified from Littman's two-player grid game (1994), diagrammed in Figure 1. There are two agents, A and B, each of which solely occupies one of the four unshaded squares on the field. Agent A begins in possession of the ball. The shaded squares are out-of-bounds in our simplification of the domain. The agents have operators, or actions, associated with each of the compass directions (N, S, E, and W), or can wait, holding their position (H). The two agents in this domain select their actions simultaneously, but in execution there is an undetermined delay before these actions are carried out. So the execution is serial, but it is nondeterministic as to which agent's action is carried out first.
The effect of an action is simply to move the agent in the specified direction, or to remain in its position, as long as the target square is unoccupied. If the target is occupied, then the agent does not move; moreover, if the agent was carrying the ball, it loses the ball. For agent A, losing the ball terminates execution as a failure. The goal for agent A is to move into either of the two labeled goal squares, which also terminates execution.

Figure 2 shows two examples of how the agents' operators affect the state. From the initial state, if both agents choose their south operator (S, S), they will both simply move south. But if agent A selects south and agent B selects east (S, E), then there are two possible outcomes depending on their order of execution: (i) agent A moves south first, and then agent B moves
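The execution model described above can be sketched in a few lines. This is a minimal illustration, not the paper's code: the coordinate layout, the `FIELD`/`GOALS` sets, and the function names `step` and `outcomes` are assumptions, and the starting positions in the usage example are hypothetical rather than the initial state of Figure 1.

```python
# Sketch of the soccer-like domain's dynamics under an assumed layout:
# four unshaded field squares plus two goal squares for agent A.

MOVES = {'N': (0, 1), 'S': (0, -1), 'E': (1, 0), 'W': (-1, 0), 'H': (0, 0)}
FIELD = {(0, 0), (0, 1), (1, 0), (1, 1)}   # unshaded squares (assumed coords)
GOALS = {(2, 0), (2, 1)}                   # agent A's goal squares (assumed)

def step(pos, move, occupied):
    """Move one agent. A move into an occupied square fails and returns
    collided=True (the mover loses the ball if it was carrying it);
    a move off the field simply leaves the agent in place."""
    dx, dy = MOVES[move]
    target = (pos[0] + dx, pos[1] + dy)
    if target in occupied:
        return pos, True
    if target not in FIELD | GOALS:
        return pos, False
    return target, False

def outcomes(pos_a, pos_b, move_a, move_b):
    """All next states of one simultaneous move: the actions are chosen
    jointly, but the execution order is nondeterministic, so there is
    one candidate outcome per order."""
    results = set()
    for first in ('A', 'B'):
        a, b = pos_a, pos_b
        if first == 'A':
            a, a_hit = step(a, move_a, {b})
            b, _ = step(b, move_b, {a})
        else:
            b, _ = step(b, move_b, {a})
            a, a_hit = step(a, move_a, {b})
        results.add((a, b, a_hit))  # a_hit: A bumped into B, losing the ball
    return results

# Hypothetical positions: A at (0, 1) with the ball, B directly south at
# (0, 0). The joint action (S, E) has two outcomes: if A executes first it
# bumps into B and loses the ball; if B executes first, both moves succeed.
print(outcomes((0, 1), (0, 0), 'S', 'E'))
```

The point of the sketch is that a single joint action can yield a set of successor states, which is precisely why a multiagent plan must be evaluated against all execution orders rather than a single predicted trajectory.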

