USF CS 682 - Distributed Software Development Self-interested Agents

Contents
• Engineering systems vs Engineering agents
• Preferences and Utility
• Rationality and protocol design
• Example: Clarke tax
• Clarke tax
• Example
• Solution concepts
• Stability
• Nash equilibrium
• Selecting between equilibria
• Bilateral and multilateral negotiation
• Auctions
• English auctions
• First-price sealed-bid auction
• Dutch auction
• Vickrey auction
• Example
• Proof
• Vickrey auctions in real life
• Advantages and disadvantages
• Common and correlated-value auctions
• Winner's curse
• Combinatorial auctions
• Winner-determination problem
• Current research issues
• Summary

Distributed Software Development: Self-interested Agents
Chris Brooks
Department of Computer Science, University of San Francisco

23-2: Engineering systems vs Engineering agents
• Recall that at the end of Thursday's class, we were talking about ant algorithms.
  ◦ By specifying a simple set of rules, we can achieve interesting large-scale behavior.
• Ant-type approaches lead us to think about how we can build systems that produce the effects we want.
• "Given that agents will act in a particular way, how can we constrain the environment to achieve a desirable outcome?"
• This method of problem solving is best applied to problems involving self-interested agents.
23-3: Preferences and Utility
• Agents will typically have preferences over outcomes.
  ◦ This is declarative knowledge about the relative value of different states of the world.
  ◦ "I prefer ice cream to spinach."
• Often, the value of an outcome can be quantified (perhaps in monetary terms).
• This allows the agent to compare the utility (or expected utility) of different actions.
• A rational agent is one that maximizes expected utility.
• Self-interested agents each have their own utility function.

23-4: Rationality and protocol design
• By treating participants as rational agents, we can exploit techniques from game theory and economics.
• Assume everyone will act to maximize their own payoff.
• How do we structure the rules of the game so that this behavior leads to a desired outcome?
• This approach is called mechanism design.

23-5: Example: Clarke tax
• Assume that we want to find the shortest path through a graph.
• Each edge is associated with an agent.
• Each edge has a privately known transmission cost.
  ◦ Agents might choose to lie about their transmission cost.
• How can we find the shortest path?

23-6: Clarke tax
• Rule:
  ◦ Accept each agent's bid.
  ◦ If they are not on the shortest path, they get 0.
  ◦ If they are on the shortest path, they get:
    • cost of the next shortest path - (cost of the shortest path without their contribution).

23-7: Example
[Figure: a graph from start to finish; edges are labeled with cost and agent: 2 (a), 2 (b), 2 (c), 4 (d), 4 (e), 5 (f), 3 (g), 7 (h). The shortest path uses edges a, b, and c (cost 6); the next shortest path costs 8.]
• Assume each agent bids truthfully.
• Agents A, B, and C are each paid 8 - (6 - 2) = 4.
  ◦ This is their contribution to the "best solution".
• Other agents are paid nothing.

23-8: Example
• Why is truth-telling a dominant strategy? (A short Python sketch of this payment calculation appears after the solution-concepts slides below.)
  ◦ What if A underbids?
    • A bids 1: paid 8 - (5 - 1) = 4. No benefit.
  ◦ What if A overbids?
    • A bids 3: paid 8 - (7 - 3) = 4. No benefit.
    • A bids 5: no longer on the shortest path, so A gets 0.
  ◦ What if D underbids?
    • D bids 3: no change.
    • D bids 1: paid 6 - (5 - 1) = 2. But his cost is 4.
  ◦ What if D overbids? No change.

23-9: Solution concepts
• So how do we evaluate an algorithm or protocol involving self-interested agents?
• Some solutions may be better for some agents and worse for others.
  ◦ Example: the cake-cutting problem.
• We know that each agent will try to maximize its own welfare.
• What about the system as a whole?

23-10: Solution concepts
• There are a number of potential solution concepts we can use:
  ◦ Social welfare: the sum of all agents' utilities.
  ◦ Pareto efficiency: is there a solution that makes one agent better off without making anyone worse off?
  ◦ Individual rationality: an agent who participates in the solution should be better off than if it hadn't participated.
  ◦ Stability: the mechanism should not be able to be manipulated by one or more agents.
• It's not usually possible to optimize all of these at the same time.
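To make the payment rule concrete, here is a minimal Python sketch of the Clarke-tax calculation for the shortest-path example above. The graph is simplified to the two routes the worked example actually uses (a-b-c at cost 6 and d-e at cost 8); the node names, function names, and encoding are illustrative assumptions, not part of the lecture. Following the example's arithmetic, the slide's "next shortest path" is taken to mean the cheapest path that avoids the agent's own edge.

# Minimal sketch of the Clarke-tax payment rule for the shortest-path example.
# The graph keeps only the a-b-c path (2+2+2 = 6) and the d-e path (4+4 = 8);
# node names and function names are illustrative assumptions.

import heapq

def shortest_path_cost(edges, bids, start, finish, excluded=None):
    """Dijkstra over the directed edge list; returns (cost, edge names used)."""
    graph = {}
    for name, (u, v) in edges.items():
        if name == excluded:
            continue
        graph.setdefault(u, []).append((v, name))
    heap = [(0, start, [])]
    seen = set()
    while heap:
        cost, node, used = heapq.heappop(heap)
        if node == finish:
            return cost, used
        if node in seen:
            continue
        seen.add(node)
        for nxt, name in graph.get(node, []):
            heapq.heappush(heap, (cost + bids[name], nxt, used + [name]))
    return float("inf"), []

def clarke_payments(edges, bids, start, finish):
    """Pay each on-path agent: (best path avoiding its edge) - (chosen path - its bid)."""
    best_cost, best_edges = shortest_path_cost(edges, bids, start, finish)
    payments = {}
    for name in edges:
        if name not in best_edges:
            payments[name] = 0            # off the shortest path: paid nothing
        else:
            alt_cost, _ = shortest_path_cost(edges, bids, start, finish, excluded=name)
            payments[name] = alt_cost - (best_cost - bids[name])
    return payments

# Two parallel start->finish routes: a-b-c (via n1, n2) and d-e (via n3).
edges = {"a": ("start", "n1"), "b": ("n1", "n2"), "c": ("n2", "finish"),
         "d": ("start", "n3"), "e": ("n3", "finish")}
truthful_bids = {"a": 2, "b": 2, "c": 2, "d": 4, "e": 4}

print(clarke_payments(edges, truthful_bids, "start", "finish"))
# -> {'a': 4, 'b': 4, 'c': 4, 'd': 0, 'e': 0}, matching 8 - (6 - 2) = 4 on slide 23-7.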
23-11: Stability
• Ideally, we can design mechanisms with dominant strategies.
  ◦ A dominant strategy is the best thing to do no matter what any other agent does.
  ◦ In the previous example, truth-telling was a dominant strategy.
  ◦ We would say that the mechanism is non-manipulable (lying can't break it).
• Unfortunately, many problems don't have a dominant strategy.
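To connect the idea of a dominant strategy back to the Clarke-tax example, here is a small brute-force check, reusing clarke_payments, shortest_path_cost, and the edge list from the sketch above: for every bid profile the other agents could draw from a coarse grid, no bid agent a could make earns it more than bidding its true cost of 2. The grid, the TRUE_COST table, and the utility model (payment minus true cost when the agent's edge is used) are illustrative assumptions, not part of the lecture.

# Brute-force check that truth-telling is dominant for agent "a" on a small grid.
# Reuses edges, shortest_path_cost, and clarke_payments from the sketch above.

from itertools import product

TRUE_COST = {"a": 2, "b": 2, "c": 2, "d": 4, "e": 4}

def utility(agent, bids):
    # Payment minus true cost if the agent's edge is on the chosen path, else 0.
    payments = clarke_payments(edges, bids, "start", "finish")
    _, chosen = shortest_path_cost(edges, bids, "start", "finish")
    return payments[agent] - TRUE_COST[agent] if agent in chosen else 0

others = ["b", "c", "d", "e"]
grid = [1, 2, 3, 4, 5]                    # candidate bids for every agent

dominant = True
for profile in product(grid, repeat=len(others)):
    bids = dict(zip(others, profile))
    bids["a"] = TRUE_COST["a"]
    truthful = utility("a", bids)
    for my_bid in grid:                   # could any deviation beat truth-telling?
        bids["a"] = my_bid
        if utility("a", bids) > truthful:
            dominant = False

print("Truth-telling dominant for agent a over this grid:", dominant)  # expect True

Deviations can tie the truthful payoff (for example, when a's bid leaves the path choice unchanged) but never beat it, which is exactly the non-manipulability property the slide describes.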

