
[Table 2: Correlations of Variously Scored Placement Measures with Criterion Variables]
* p < .05, + p < .10
Note: Entries are bivariate r's for "Correct" and multiple R's for "Incorrect and DK." J-tests (Davidson and MacKinnon 1981) were used to determine whether incorrect and DK counts offered a significantly better fit. Superscripted symbols (* and +) denote the cases in which they did.

[Table 4]

[Table 5]
a: Neither M* nor guessing corrections can be computed for open-ended items.
b: Symmetric distributions on binary items must also be uniform.

[Table 6A]

Re(:)Measuring Political Sophistication*

Robert C. Luskin
University of Texas at Austin

John Bullock
Stanford University

September 22, 2004

It hardly needs saying that "political sophistication," defined roughly as the quantity and organization of a person's political cognitions (Luskin 1987), is central to our understanding of mass politics. The variable claims main or conditioning effects on opinions, votes, and other political behaviors (as in, e.g., Bartels 1996; Delli Carpini and Keeter 1996; Zaller 1992; Althaus 1998, 2003; cf. Popkin 1991 and Lupia and McCubbins 1998). The highly sophisticated and highly unsophisticated are different—in how they process new information, in what policy and electoral preferences they reach, in their level of political involvement (Zaller 1992 and Delli Carpini and Keeter 1996, among many other relevant studies).

We speak of "sophistication" but should note that "expertise," "cognitive complexity," "information," "knowledge," "awareness," and other terms referring to "cognitive participation in politics" (Luskin 2003a) are closely related. Expertise and, under some definitions, cognitive complexity are equivalent. So, consistent with much of his usage, is Zaller's (1992) awareness.¹ All refer to organized cognition. Information, which is cognition regardless of organization, and knowledge, which is correct information, are not quite equivalent but, especially in practice, very close. The quantity of political information a person holds is highly correlated with both how well he or she has organized it and how accurate it tends to be. "Large but disorganized belief systems, since long-term memory works by organization, are almost unimaginable. Large but delusional ones, like those of the remaining followers of Lyndon LaRouche, who believe that the Queen of England heads a vast international drug conspiracy, are rare" (Luskin 2003b). The operational differences, these days, are smaller still.

Most early "sophistication" measures zeroed in on the organization rather than the quantity of stored cognition, focusing either on the individual-level use and understanding of political abstractions, notably including "ideological" terms like "liberal" and "conservative," or on the aggregate statistical patterning of policy attitudes across individuals, done up into correlations, factor analyses, multidimensional scalings, and the like. Campbell et al. (1960) and Converse (1964) set both examples. But measures of these sorts are highly inferential. Referring to someone or something as "liberal" or "conservative" is a relatively distant echo of actual cognitive organization; a correlation between, say, welfare and abortion attitudes is a still more distant (and merely aggregate) one (Luskin 1987, 2002a, 2002b). The problem is less with these particular genres than with the task. Measuring cognitive organization is inherently difficult, especially with survey data.
Thus the trend of the past decade and a half has been toward focusing instead on the quantity of stored cognition—of "information"—that is there to be organized (Delli Carpini and Keeter 1996; Price 1999; Luskin 2002a). "Information," in turn, has been measured by knowledge, it being far easier to tally a proportion of facts known than the number of (correct or incorrect) cognitions stored.² Empirically, knowledge measures do appear to outperform abstraction-based measures of cognitive organization (Luskin 1987). Speak though we may, in short, of "sophistication," "information," "expertise," or "awareness," we are just about always, these days, measuring knowledge.

But how best to measure it? Knowledge may be more straightforwardly measured than information or cognitive organization, but knowledge measures still do not construct themselves. Every concrete measure embodies nuts-and-bolts choices about what items to select (or construct) and how to convert the raw responses to those items into knowledge scores. These choices are made, willy-nilly, but seldom discussed, much less systematically examined. Delli Carpini and Keeter (1996) have considered the selection of topics for factual items; Nadeau and Niemi (1995), Mondak (1999, 2000), Mondak and Davis (2001), and Bennett (2001) the treatment of don't-know (DK) responses; and Luskin, Cautrès, and Lowrance (2004) some of the issues in constructing knowledge items from party and candidate placements à la Luskin (1987) and Zaller (1989). But these are the only notable exceptions, and they have merely broken the ice.

Here we attempt a fuller and closer examination of the choices to be made in scoring, leaving the issues in selecting or constructing items to a companion piece. In particular, we consider the possibility of quantifying degrees of error, the treatment of DK responses, and the wisdom of corrections for guessing. For placement items, we also consider the special problems of whether to focus on the absolute placements of individual objects or the relative placements of pairs of objects, and of how to score midpoint placements in the first case and equal placements in the second. We use the 1988 NES data, which afford a good selection of knowledge items.

We focus mostly on consequences for individual-level correlation (and thus all manner of causal analysis), where the question is what best captures the relationships between knowledge and other variables. But we also consider the consequences for aggregate description, where the question is what best characterizes the public's level of knowledge. Counterintuitively, the answers are not necessarily the same. What improves the measurement for correlation may either improve or worsen it for description, and vice versa. As we shall see.

Issues

For the measurement of knowledge, the scoring issues concern the mapping of responses onto
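The first of these choices, the treatment of DK responses, is easy to make concrete. Below is a minimal Python sketch (the response codings are hypothetical, not drawn from the 1988 NES) of three common mappings of the same answer string onto a knowledge score: DKs counted as incorrect, DKs excluded, and DKs given partial credit.

    # Three treatments of "don't know" (DK) in tallying proportion correct.
    # Codes: 1 = correct, 0 = incorrect, None = DK. Data are hypothetical.
    responses = [1, 0, None, 1, None]

    def score_dk_as_wrong(resp):
        # Treat DK as incorrect: correct answers over all items asked.
        return sum(r == 1 for r in resp) / len(resp)

    def score_dk_excluded(resp):
        # Drop DKs: correct answers over items actually attempted.
        attempted = [r for r in resp if r is not None]
        return sum(attempted) / len(attempted) if attempted else float("nan")

    def score_dk_partial(resp, credit=0.5):
        # Give each DK partial credit (e.g., the chance rate on a binary item).
        return sum(credit if r is None else r for r in resp) / len(resp)

    print(score_dk_as_wrong(responses))   # 0.4
    print(score_dk_excluded(responses))   # 0.667
    print(score_dk_partial(responses))    # 0.6

The three rules can rank the same respondents differently (a respondent with two correct answers and three DKs outscores one with three correct and two incorrect under the second rule but not the first), which is why the choice matters for the correlational questions raised above.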

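The corrections for guessing weighed above also have a standard textbook form. The sketch below uses the generic psychometric correction, corrected = correct - incorrect / (k - 1) for k-option items with DKs left unpenalized; whether this is the exact correction the authors evaluate cannot be read off the preview, so it stands only as an illustration.

    # Generic correction for guessing on k-option items: each wrong answer
    # is penalized by the expected gain from blind guessing, while DKs are
    # neither rewarded nor penalized. The numbers below are hypothetical.
    def guessing_corrected(n_correct, n_incorrect, n_options):
        return n_correct - n_incorrect / (n_options - 1)

    # Ten four-option items: 6 correct, 2 incorrect, 2 DK.
    print(guessing_corrected(6, 2, 4))  # 5.33..., down from the raw 6

Because the correction leaves DKs alone while docking wrong answers, it rewards saying "don't know" relative to guessing wrong, which is one reason the guessing and DK choices interact.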

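Finally, the choice between absolute and relative scoring of placement items, with its attendant midpoint and equal-placement problems, can be sketched the same way. The 7-point scale, the "true" sides, and the rule of scoring midpoints and ties as incorrect are illustrative assumptions here, not the authors' prescriptions.

    # Absolute vs. relative scoring of placements on a 7-point
    # liberal-conservative scale. All positions are invented.
    MIDPOINT = 4

    def score_absolute(placement, true_side):
        # Absolute: is the object placed on its correct side of the midpoint?
        # Midpoint placements are the contested case; scored 0 here.
        if placement == MIDPOINT:
            return 0
        return int(("left" if placement < MIDPOINT else "right") == true_side)

    def score_relative(left_object, right_object):
        # Relative: is the pair ordered correctly (e.g., the Democrats to
        # the left of the Republicans)? Equal placements are the analogue
        # of midpoint placements; also scored 0 here.
        return int(left_object < right_object)

    # A respondent places the Democrats at 3 and the Republicans at 5:
    print(score_absolute(3, "left"), score_absolute(5, "right"))  # 1 1
    print(score_relative(3, 5))  # 1
    print(score_relative(4, 4))  # 0: an equal placement earns no credit

A graded variant would score each placement by its distance from the correct position rather than 0/1, which is one natural way to quantify degrees of error.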