CMU CS 15892 - Better with Byzantine: Manipulation-Optimal Mechanisms

Better with Byzantine: Manipulation-Optimal Mechanisms

Abraham Othman and Tuomas Sandholm
Computer Science Department, Carnegie Mellon University
{aothman,sandholm}@cs.cmu.edu

Abstract. A mechanism is manipulable if it is in some agents' best interest to misrepresent their private information. The revelation principle establishes that, roughly, anything that can be accomplished by a manipulable mechanism can also be accomplished with a truthful mechanism. Yet agents often fail to play their optimal manipulations due to computational limitations or various flavors of incompetence and cognitive biases. Thus, manipulable mechanisms in particular should anticipate byzantine play. We study manipulation-optimal mechanisms: mechanisms that are undominated by truthful mechanisms when agents act fully rationally, and that do better than any truthful mechanism if any agent fails to act rationally in any way. This enables the mechanism designer to do better than the revelation principle would suggest, and obviates the need to predict byzantine agents' irrational behavior. We prove a host of possibility and impossibility results for the concept, which together give the impression of broadly limiting what is possible. These results are largely in line with the revelation principle, although the considerations are more subtle and the impossibility is not universal.

1 Introduction

Mechanism design is the science of generating rules of interaction, such as auctions and voting protocols, so that desirable outcomes result despite participating agents (humans, companies, software agents, etc.) acting in their own interests. A mechanism receives a set of preferences (i.e., type reports) from the agents and, based on that information, imposes an outcome (such as a choice of president, an allocation of items, and potentially also payments).

A central concept in mechanism design is truthfulness, which means that an agent's best strategy is to report its type (private information) truthfully to the mechanism. The revelation principle, a foundational result in mechanism design, proves that any social choice function that can be implemented in some equilibrium form can also be implemented using a mechanism where all the agents are motivated to tell the truth. The proof is based on simply supplementing the manipulable mechanism with a strategy formulator for each agent that acts strategically on the agent's behalf (see, e.g., [1]). Since truthfulness is certainly worth something (simplicity, fairness, and the removal of incentives to invest in information gathering about others), the revelation principle produces something for nothing, a free lunch. As a result, mechanism design research has largely focused on truthful mechanisms.
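The strategy-formulator construction is easy to see in code. The following is a minimal Python sketch, not from the paper: the scaled second-price auction, the function names, and the numeric example are illustrative assumptions. The underlying mechanism misinterprets reports, so optimal play is a simple transformation of the true type; composing the mechanism with that transformation yields a mechanism in which reporting one's true type is optimal.

```python
# Illustrative sketch of the revelation-principle construction (assumed
# example, not the paper's). The underlying mechanism halves every report
# before running a second-price auction, so the optimal manipulation is to
# report double one's true value.

def scaled_second_price(reports):
    """Manipulable mechanism: interprets each report r as a bid r/2, then
    sells to the highest implied bid at the second-highest implied bid."""
    bids = [r / 2 for r in reports]
    ranked = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return winner, bids[runner_up]  # outcome: (winning agent, price)

def strategy_formulator(true_type):
    """Each agent's optimal manipulation: report double the true value, so
    the implied bid equals the true value (dominant, by the standard
    second-price argument)."""
    return 2 * true_type

def truthful_wrapper(types):
    """Revelation-principle mechanism: apply every agent's strategy
    formulator to its reported type, then run the manipulable mechanism.
    Reporting one's true type is now a dominant strategy."""
    return scaled_second_price([strategy_formulator(t) for t in types])

# Truthful reports to the wrapper reproduce optimal play in the original:
assert truthful_wrapper([10, 7, 3]) == scaled_second_price([20, 14, 6])
```

The same composition drives the general proof: whatever strategic computation a rational agent would perform is folded into the mechanism itself, so truth-telling becomes a best response.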
In this work, we explore what can happen in manipulable mechanisms when agents do not play optimally. Is it possible to design mechanisms with desirable off-equilibrium properties? There are several reasons why agents may fail to play their optimal manipulations. Humans may play sub-optimally due to cognitive limitations and other forms of incompetence; the field of behavioral game theory studies the gap between game-theoretic rationality and human behavior (an overview is given in [2]). Agents may also be unable to find their optimal manipulations due to computational limits: finding an optimal report is NP-hard in many settings (e.g., [3-6]), and can be #P-hard [4], PSPACE-hard [4], or even uncomputable [7]. One notable caveat is that an agent's inability to find its optimal manipulation does not imply that the agent will act truthfully. Unable to solve the hard problem of finding its optimal manipulation, an agent may submit its true private type, but it could also submit its best guess of what its optimal manipulation might be or, by similar logic, give an arbitrary report. A challenge in manipulable mechanisms is that it is difficult to predict in which specific ways agents will behave if they do not play according to game-theoretic rationality. Byzantine players, who behave arbitrarily, capture this idea.

In this paper, we explore mechanism design beyond the realm of truthful mechanisms using a concept we call manipulation optimality, under which a mechanism benefits, and does better than any truthful mechanism, if any agent fails in any way to play its optimal manipulation. This enables the mechanism designer to do better than the revelation principle would suggest, and obviates the need to predict agents' irrational behavior. Conitzer and Sandholm [5] proved the existence of such a mechanism in one constructed game instance, but this work is the first to explore the concept formally and broadly.

2 The general setting

Each agent $i$ has a type $\theta_i \in \Theta_i$ and a utility function $u_i^{\theta_i} : O \to \mathbb{R}$, which depends on the outcome $o \in O$ that the mechanism selects. An agent's type captures all of the agent's private information. For brevity, we sometimes write $u_i(o)$. A mechanism $M : \Theta_1 \times \Theta_2 \times \cdots \times \Theta_n \to O$ selects an outcome based on the agents' type reports.

The mechanism designer has an objective (which can be thought of as mechanism utility) that maps outcomes to real values:

$$\mathcal{M}(o) = \sum_{i=1}^{n} \gamma_i u_i(o) + m(o),$$

where $m(\cdot)$ captures the designer's desires unrelated to the agents' utilities, and $\gamma_i \geq 0$. This formalism has three widely explored objectives as special cases:

- Social welfare: $\gamma_i = 1$ and $m(\cdot) = 0$.
- Affine welfare: $\gamma_i > 0$ and $m(\cdot) \geq 0$.
- Revenue: Let outcome $o$ correspond to the agents' payments $\pi_1(o), \ldots, \pi_n(o)$ to the mechanism. Fix $\gamma_i = 0$ and $m(o) = \sum_{i=1}^{n} \pi_i(o)$.

Definition 1. Agent $i$ has a manipulable type $\theta_i$ if, for some report of the other agents' types $\theta_{-i}$, there exists $\theta_i' \neq \theta_i$ such that

$$u_i(M(\theta_i', \theta_{-i})) > u_i(M(\theta_i, \theta_{-i})).$$

Note that a type that is manipulable for some reports of the other agents, but not for other reports of the other agents, is still manipulable.

Definition 2. Types $\theta_i$ and $\theta_i'$ are distinct if there exists some report of the other agents $\theta_{-i}$ such that the best response for type $\theta_i$ is to submit $t$ and the best response for type $\theta_i'$ is $t'$, where

$$M(t, \theta_{-i}) \equiv o \neq o' \equiv M(t', \theta_{-i}),$$

and

$$u_i^{\theta_i}(o) > u_i^{\theta_i}(o') \quad \text{and} \quad u_i^{\theta_i'}(o') > u_i^{\theta_i'}(o).$$

Put another way, types are distinct only if there exists a circumstance under which agents with those types will be motivated to behave differently, causing distinct outcomes that provide distinct payoffs.

Definition 3. A mechanism is (dominant-strategy) truthful if no agent has a manipulable type.
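When the type and outcome spaces are finite, Definitions 1 and 3 admit a direct brute-force test. The following is a minimal Python sketch under that assumption; the function names, the first-price example, and the quasilinear utility are illustrative, not from the paper.

```python
# Brute-force check of Definitions 1 and 3 over finite type spaces
# (illustrative sketch; names and the example mechanism are assumptions).
from itertools import product

def is_manipulable_type(mechanism, utility, type_spaces, i, theta_i):
    """Definition 1: theta_i is manipulable if, for SOME profile of the
    other agents' reports, some false report beats reporting truthfully."""
    others = [space for j, space in enumerate(type_spaces) if j != i]
    for theta_minus_i in product(*others):
        def outcome(report):
            profile = list(theta_minus_i)
            profile.insert(i, report)      # rebuild the full report vector
            return mechanism(tuple(profile))
        truthful_utility = utility(i, theta_i, outcome(theta_i))
        for theta_prime in type_spaces[i]:
            if theta_prime != theta_i and \
               utility(i, theta_i, outcome(theta_prime)) > truthful_utility:
                return True
    return False

def is_truthful(mechanism, utility, type_spaces):
    """Definition 3: (dominant-strategy) truthful iff no agent has a
    manipulable type."""
    return not any(
        is_manipulable_type(mechanism, utility, type_spaces, i, theta_i)
        for i, space in enumerate(type_spaces)
        for theta_i in space
    )

# Example: a first-price auction is manipulable, since a winning agent
# gains by shading its bid down toward the highest competing report.
def first_price(reports):
    winner = max(range(len(reports)), key=lambda i: reports[i])
    return winner, reports[winner]         # outcome: (winner, price)

def quasilinear(i, theta, outcome):
    winner, price = outcome
    return theta - price if winner == i else 0

assert not is_truthful(first_price, quasilinear, [(1, 2, 3), (1, 2, 3)])
```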
Definition 4. Let $f$ and $g$ be functions mapping [...]