UMass Amherst COMM 122 - Audience Measurement

COMM 122 1st Edition, Lecture 11

Outline of Last Lecture
I. Net Neutrality
II. Audience Measurement
III. Ad Timeliness
IV. Ratings
V. Coalition for Innovative Media Measurement
VI. Weighting
VII. Sweeps
VIII. Shares

Outline of Current Lecture
I. Audience Measurement
II. Arithmetic of Ratings
III. Data Collection Methods
IV. New Methods Gradually Developed
V. Meters
VI. People Meters
VII. PPM
VIII. Media Measurement
IX. Statistical Estimates Not Accurate
X. Sample Error

Current Lecture

These notes represent a detailed interpretation of the professor's lecture. GradeBuddy is best used as a supplement to your own notes, not as a substitute.

Audience Measurement continued:

Issues with ratings and controversies:
- How many? Who?
- Cultural democracy? Not an accurate cultural democracy, because not everyone's vote counts equally (age), and ratings do a lousy job of representing minority groups
- Weighting

Ratings are reported for both households and people.

People:
- RATING: based on the total number of people living in TV households
- SHARE: based on PUT (Persons Using Television)
- Radio: CUMES (cumulative audience per week)

Arithmetic of Ratings:

Suppose there are 4 channels and nothing else:

  Show                       Rating
  a) Ted Cruz Dance Party    12
  b) Aren't People Stupid    6
  c) Dancing With the TAs    30
  d) Amherst 01003           5

What is the HUT percent?
- Sum the ratings: 12 + 6 + 30 + 5 = 53, so 53% of households are watching TV at that time.

What is the share?
- Divide each show's rating by the HUT percent; the shares always sum to 100%.

Example 2:

  a) Original sample drawn     2500 HH
  b) Usable data obtained      1232 HH
  c) Watching TV Weds 10pm      646 HH
  d) Watching "UMASS Idol"      287 HH

Rating of UMASS Idol:
- D/B = 287/1232 = 23.3

Share of UMASS Idol:
- D/C = 287/646 = 44.4

Rating/Share = 23.3/44.4
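The arithmetic above is easy to check in a few lines of code. The following is a minimal Python sketch (my own illustration, not anything from the lecture; the variable names are invented for the example):

    # Ratings arithmetic for the two examples above.

    # Example 1: four channels; ratings are percentages of TV households.
    ratings = {
        "Ted Cruz Dance Party": 12,
        "Aren't People Stupid": 6,
        "Dancing With the TAs": 30,
        "Amherst 01003": 5,
    }

    # HUT (Households Using Television) percent = sum of the ratings.
    hut = sum(ratings.values())
    print(f"HUT = {hut}%")  # 53%

    # Share = rating / HUT; the shares always sum to 100%.
    for show, rating in ratings.items():
        print(f"{show}: share = {rating / hut:.1%}")

    # Example 2: "UMASS Idol".
    usable_hh = 1232      # (b) usable data obtained
    watching_tv = 646     # (c) watching TV Weds 10pm
    watching_show = 287   # (d) watching "UMASS Idol"

    rating = watching_show / usable_hh   # D/B, ~23.3% (base: all usable households)
    share = watching_show / watching_tv  # D/C, ~44.4% (base: sets actually in use)
    print(f"UMASS Idol: rating = {rating:.1%}, share = {share:.1%}")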
Data Collection Methods

Early efforts:
- Ask listeners to send in postcards
- Telephone coincidental (call people while they are listening; can't follow people over time)
- Telephone recall (call people and ask what they watched in the past couple of days)

New methods gradually developed:
- Paper diaries: they first send you a letter saying congratulations, you have been selected to be part of the radio audience survey; a diary is coming so you can fill it out and record every TV and radio station you watched
  o Problems: people claim they watched things that didn't exist, and people don't want to do it (many diaries come back unusable, refused, unreadable, or abandoned)
  o Effects of diaries: "well, I'm not going to report the shows I normally watch; I'll report shows that are more popular" (people watch less and watch different things than they report)
  o The act of filling out the diary affects the behavior it is trying to study (the ideal is a neutral, unintrusive, pure measure)
  o Random error: one person reports a little more, another reports a little less
  o Systematic error: certain groups are consistently not represented
  o Multiple devices create low response rates; the diary isn't per person, it is per household
- Record who is in the audience
- Write down viewing behavior

Meter: keeps track of all the set tuning done on that television
- Records when the TV is on/off
- Records how long the TV is on each channel
- Records accurately with no human error (an exact measure of what the TV set is doing)
  o Disadvantage: it doesn't tell you if anyone is actually there
    - No demographics (age or sex)
- Traditional: Storage Instantaneous Audimeters
  o Don't tell you who is watching, nor who is in the room

People meters: intended to be a meter that can tell you who is there and what they are watching
- Trying to make this activity passive: push buttons, cards, electronic IDs

Timeline of data collection devices:
- First Audimeter (1936)
- The Recordimeter (1954)
- Storage Instantaneous Audimeter (1973)
- People Meter (1987)
  o National sample
  o Top 25 DMAs
- Set meters:
  o DMAs 26-56
- Diaries:
  o All other DMAs

Arbitron came out with the PPM, the Portable People Meter (replacing diaries):
- Wherever you are within earshot of a radio, it knows whether or not you are listening and paying attention
- Sends its signal automatically
- Introduced in 2009
- Now in the top 50 markets
  o Problem: massive controversy over massive undersampling of Hispanic and Latin American audiences (those stations' ratings fell especially low)
- 2012: accredited by the Media Rating Council in only nine markets; hearings in Congress and at the FCC, lawsuits (suing Arbitron and saying that its sample is bad)
- 2014: still not accredited in New York, Washington DC, Boston, Seattle, Salt Lake City, Sacramento, Las Vegas, Austin, Orlando, San Jose, Columbus, Indianapolis, Raleigh-Durham, Providence, Jacksonville, Memphis, Hartford
  o Disclaimer: "PPM ratings are based on audience estimates and are the opinion of Arbitron and should not be relied on for precise accuracy or precise representativeness of a demographic or radio market." They are asking stations to pay hundreds of millions for this data, and asking advertisers to spend all this money, on nothing more than their opinion

Nielsen media measurement: in home, out of home, personal devices, broadband, streaming, DVR, VOD, etc.
- Cross-Platform Report
- Total Audience Report (an evolution, trying to keep up with the explosion of technologies)

Data Collection Methods (recap):
- Diaries
  o Advertisers are now getting upset with the reliance on the diary
- People Meter
- Portable People Meter

For all of these types of data, all ratings are statistical estimates, not "accurate":
- Data are always taken from a sample of the population
  o Every time you have a sample, you have sample error (the difference between the data from the sample and the true population value)
    - Example: the real population mean GPA = 3.7. If you take a sample of about 20 random people, the sample mean will probably be very close to the truth
    - You can only say this at a certain level of confidence
    - If the sample mean = 3.6 and the 95%-confidence sample error (margin of error) = 0.2, you are 95% sure the 'true' value is within +/- 0.2 of the 'sample' value (3.6); that is, between 3.4 and 3.8

Sample error:
- Error is increased by low response rates
- Statistically, error is greater for lower ratings
- All of this assumes a perfectly representative probability sample with no missing data (which never happens)
- It takes large increases in sample size to reduce error: error shrinks only with the square root of the sample size, so halving the error requires quadrupling the sample; the required increases are far more than linear

Sample Error, Margin of Error:
- About 65% sure that TRUE = SAMPLE +/- 1 SE
- About 95% sure that TRUE = SAMPLE +/- 2 SE

If a rating (SAMPLE) = 10.0 and SE = 1.5:
- 65% sure the true rating is between 8.5 and 11.5
- 95% sure the true rating is between 7 and 13

Example: Blacklist rating = 2.9
- If SE = 0.5:
  o Then there is a 65% chance that the true rating is between 2.4 and 3.4
  o There is a 95% chance that the true rating is between 1.9 and 3.9

(The interval arithmetic and the standard-error behavior behind these bullets are sketched in code below.)

Nielsen once found errors in the national network ratings and had to redo them.
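The "+/- SE" rules above are plain interval arithmetic. Here is a minimal Python sketch (my own, simply reproducing the lecture's numbers):

    # Confidence intervals for ratings estimates.
    # Lecture rule of thumb: ~65% chance TRUE is within 1 SE of SAMPLE,
    # ~95% chance TRUE is within 2 SE.

    def interval(sample, se, n_se):
        """Return the (low, high) range SAMPLE +/- n_se * SE."""
        return sample - n_se * se, sample + n_se * se

    # Hypothetical rating of 10.0 with SE = 1.5:
    print(interval(10.0, 1.5, 1))  # ~(8.5, 11.5) -> ~65% confidence
    print(interval(10.0, 1.5, 2))  # ~(7.0, 13.0) -> ~95% confidence

    # Blacklist example: rating = 2.9, SE = 0.5.
    print(interval(2.9, 0.5, 1))   # ~(2.4, 3.4)
    print(interval(2.9, 0.5, 2))   # ~(1.9, 3.9)

    # GPA example: sample mean 3.6, 95% margin of error 0.2 (i.e. 2 SE = 0.2).
    print(interval(3.6, 0.1, 2))   # ~(3.4, 3.8)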

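Two of the sample-error bullets (error is greater for lower ratings; it takes large increases in sample size to reduce error) follow from the textbook standard-error formula for a sample proportion, SE = sqrt(p(1-p)/n). The lecture does not give this formula, so the sketch below is my own illustration under that standard assumption; the sample size is borrowed from Example 2.

    import math

    def se_of_rating(rating_pct, n):
        """Standard error (in rating points) of a rating treated as a
        sample proportion: SE = sqrt(p * (1 - p) / n). Assumes a perfect
        probability sample, which (as the notes say) never happens."""
        p = rating_pct / 100.0
        return 100.0 * math.sqrt(p * (1 - p) / n)

    n = 1232  # usable households, borrowed from Example 2 above
    for rating in (30.0, 10.0, 2.0):
        se = se_of_rating(rating, n)
        print(f"rating {rating:4.1f}: SE ~= {se:.2f}, relative error ~= {se / rating:.1%}")
    # The relative error grows as the rating shrinks, which is why small
    # ratings are the least trustworthy numbers in the book.

    # Error shrinks only with the square root of n, so quadrupling the
    # sample size merely halves the standard error:
    print(se_of_rating(10.0, n) / se_of_rating(10.0, 4 * n))  # ~2.0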
