
6.207/14.15: Networks
Lectures 22-23: Social Learning in Networks
Daron Acemoglu and Asu Ozdaglar
MIT, December 2 and 7, 2009

Outline

- Recap of Bayesian social learning
- Non-Bayesian (myopic) social learning in networks
- Bayesian observational social learning in networks
- Bayesian communication social learning in networks

Reading: Jackson, Chapter 8; EK, Chapter 16.

Introduction

How do network structure and the "influence" of specific individuals affect opinion formation and learning? To answer this question, we need to extend the simple herding example of the previous lecture to a network setting.

Question: is Bayesian social learning the right benchmark?
- Pro: a natural benchmark, and simple heuristics can often replicate it.
- Con: often complex.

Non-Bayesian myopic (rule-of-thumb) learning:
- Pro: simple and often realistic.
- Con: rules of thumb are arbitrary, different rules perform differently, and it is unclear how to choose the right one.

What Kind of Learning?

What do agents observe?
- Observational learning: agents observe past actions (as in the herding example). Most relevant for markets.
- Communication learning: agents communicate beliefs or estimates. Most relevant for friendship networks (such as Facebook).

The model of social learning in the previous lecture was a model of Bayesian observational learning. It illustrated the possibility of herding, where everybody copies previous choices, and thus the possibility that dispersely held information may fail to aggregate.

Recap of Herding

Agents arrive in town sequentially and choose to dine in an Indian or a Chinese restaurant.
One restaurant is strictly better; the underlying state is θ ∈ {Chinese, Indian}. Agents have independent binary private signals; each signal indicates the better option with probability p > 1/2. Agents observe prior decisions, but not the signals of others.

Realization: assume θ = Indian.
- Agent 1 arrives. Her signal indicates 'Chinese'. She chooses Chinese.
- Agent 2 arrives. His signal indicates 'Chinese'. He chooses Chinese.
- Agent 3 arrives. Her signal indicates 'Indian'. She disregards her own signal and copies the decisions of agents 1 and 2, and so on.

[Figure: agents 1, 2, 3 in sequence, each with Decision = 'Chinese']

Potential Challenges

Perhaps this is too "sophisticated". What about communication? Most agents learn not only from observations, but also by communicating with friends and coworkers. Let us turn to a simple model of myopic (rule-of-thumb) learning and also incorporate network structure.

Myopic Learning

First introduced by DeGroot (1974) and more recently analyzed by Golub and Jackson (2007). Beliefs are updated by taking weighted averages of neighbors' beliefs:
- A finite set {1, . . . , n} of agents.
- Interactions are captured by an n × n nonnegative interaction matrix T; Tij > 0 indicates the trust or weight that i puts on j. T is a stochastic matrix (each row sums to 1; see below).
- There is an underlying state of the world θ ∈ R.
- Each agent has an initial belief xi(0); we assume θ = (1/n) Σ_{i=1}^n xi(0).
- Each agent updates his belief at time k according to

    xi(k + 1) = Σ_{j=1}^n Tij xj(k).

What Does This Mean?

Each agent updates his or her belief as an average of the neighbors' beliefs. This is reasonable in the context of a one-shot interaction. Is it reasonable when agents do this repeatedly?
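A minimal numerical sketch of this update rule (not from the slides; it assumes NumPy and uses the 3-agent interaction matrix from the lecture's worked example):

```python
import numpy as np

# Interaction (trust) matrix: row i holds the weights agent i puts on each agent j.
# Each row sums to 1, so T is (row-)stochastic.
T = np.array([[1/3, 1/3, 1/3],
              [1/2, 1/2, 0.0],
              [0.0, 1/4, 3/4]])
assert np.allclose(T.sum(axis=1), 1.0)  # stochasticity check

x = np.array([1.0, 0.0, 0.0])  # initial beliefs x(0)

# Myopic updating: x(k+1) = T x(k), i.e. each agent replaces its belief
# with the T-weighted average of its neighbors' beliefs.
for k in range(2):
    x = T @ x
    print(f"x({k + 1}) = {x}")
# x(1) = (1/3, 1/2, 0), x(2) = (5/18, 5/12, 1/8)
```

The two printed iterates match the x(1) and x(2) computed in the lecture's example.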
Stochastic Matrices

Definition: T is a stochastic matrix if the sum of the elements in each row is equal to 1, i.e.,

    Σ_j Tij = 1 for all i.

Definition: T is a doubly stochastic matrix if the sum of the elements in each row and in each column is equal to 1, i.e.,

    Σ_j Tij = 1 for all i, and Σ_i Tij = 1 for all j.

Throughout, we assume that T is a stochastic matrix. Why is this reasonable?

Example

Consider the following interaction matrix (updating as shown in the figure):

    T = ( 1/3  1/3  1/3 )
        ( 1/2  1/2   0  )
        (  0   1/4  3/4 )

Example (continued)

Suppose that the initial vector of beliefs is x(0) = (1, 0, 0)'. Then updating gives

    x(1) = T x(0) = (1/3, 1/2, 0)'.

In the next round, we have

    x(2) = T x(1) = T² x(0) = (5/18, 5/12, 1/8)'.

In the limit, we have

    x(n) = T^n x(0) → T* x(0) = (3/11, 3/11, 3/11)',

where the limit matrix T* = lim_{n→∞} T^n has identical rows, each equal to (3/11, 4/11, 4/11). Is this kind of convergence general? Yes, but with some caveats.

Example of Non-convergence

Consider instead

    T = (  0  1/2  1/2 )
        (  1   0    0  )
        (  1   0    0  )

In this case we have, for n even,

    T^n = ( 1   0    0  )
          ( 0  1/2  1/2 )
          ( 0  1/2  1/2 ),

and for n odd, T^n = T. Thus T^n does not converge.

Convergence

The problem in the above example is periodic behavior. It is sufficient to assume that Tii > 0 for all i to ensure aperiodicity. Then we have:

Theorem: Suppose that T defines a strongly connected network and Tii > 0 for each i. Then lim_{n→∞} T^n = T* exists and is unique.
Moreover, T* = e π, where e is the column vector of all ones and π is the unique stationary distribution of T, i.e., the row vector satisfying π T = π and Σ_i πi = 1. In other words, T* has identical rows, each equal to π.

An immediate corollary of this is:

Proposition: In the myopic learning model above, if the interaction matrix T defines a strongly connected network and Tii > 0 for each i, then the agents reach a consensus: lim_{n→∞} xi(n) = x* for all i, where x* = Σ_j πj xj(0).

Learning

But consensus is not
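The theorem and the consensus proposition can be checked numerically; a small sketch (assuming NumPy, and reusing the lecture's 3-agent example, which is strongly connected with Tii > 0):

```python
import numpy as np

# Interaction matrix from the lecture's example.
T = np.array([[1/3, 1/3, 1/3],
              [1/2, 1/2, 0.0],
              [0.0, 1/4, 3/4]])

# T^n converges to a matrix T* with identical rows.
T_star = np.linalg.matrix_power(T, 50)
assert np.allclose(T_star[0], T_star[1]) and np.allclose(T_star[1], T_star[2])

# Each row of T* is the stationary distribution pi: the left eigenvector pi T = pi.
pi = T_star[0]
assert np.allclose(pi @ T, pi)
print(pi)        # ≈ (3/11, 4/11, 4/11)

# Consensus: every agent's belief converges to x* = pi . x(0).
x0 = np.array([1.0, 0.0, 0.0])
print(pi @ x0)   # ≈ 3/11 ≈ 0.2727
```

Raising T to a high power is a quick way to read off π here; for larger networks one would instead solve π T = π directly (e.g. via the eigenvector of T' for eigenvalue 1).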

