With thanks to Richard Harker, we strongly suggest you read the following in detail. It's a point of view worth considering.
With the facade of Nielsen’s PPM reliability crumbling, it’s time to question the accuracy of other Nielsen claims.
A big selling point of PPM was granularity, the ability to see a station’s ratings performance down to the minute.
But if the encoding/decoding process is faulty, and drop-outs or decoding failures cause lost listening, how can PPM be accurate down to the minute?
Take a look at the image below provided by a station. This is a screen capture of one hour of the morning show’s encoding as displayed by Voltair.
This is not a dummied-up brochure shot. This is a photo of an actual broadcast and how PPM encoded the hour’s broadcast.
It shows the effective level of encoding in one-minute intervals through the 7:00 a.m. hour.
Green means there is a high probability that the decoder will succeed in identifying the radio station, assuming a favorable listening environment.
Yellow indicates a lower probability of success, and red means a low probability of success.
Voltair can simulate different listening environments, and in this case the assumption is a fairly quiet one.
In other words, this is a best-case scenario, not a typical real-life 7:00 a.m. environment.
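
To make the color bands concrete, here is a minimal sketch of how such a minute-by-minute classification might work. It is illustrative only: Voltair's actual scoring is proprietary, so the 0-to-1 confidence scale, the environment penalties, and the green/yellow/red cutoffs below are all assumptions.

```python
# Illustrative sketch only: Voltair's real scoring and thresholds are
# proprietary. The scale, penalties, and cutoffs below are assumptions.

# Assumed penalty to decode confidence for each simulated environment.
ENVIRONMENT_PENALTY = {
    "quiet": 0.0,      # best case, as in the screenshot discussed above
    "office": 0.15,
    "car": 0.25,
    "noisy": 0.40,
}

def classify_minute(confidence: float, environment: str = "quiet") -> str:
    """Map a 0-to-1 decode-confidence estimate to a green/yellow/red band."""
    adjusted = confidence - ENVIRONMENT_PENALTY[environment]
    if adjusted >= 0.7:      # assumed cutoff: high probability of a decode
        return "green"
    if adjusted >= 0.4:      # assumed cutoff: lower probability
        return "yellow"
    return "red"             # low probability of a decode

print(classify_minute(0.8, "quiet"))   # green
print(classify_minute(0.8, "noisy"))   # yellow
```

Note how the same encoded minute can drop a band once the simulated environment gets noisier, which is why a quiet-environment reading is the best case.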
In the first quarter-hour, five minutes are green, four are yellow, and six are red. And that is the best quarter-hour in the seven o'clock hour.
The next half hour is almost all red. In other words, it is likely that the station got little or no credit for a half hour right in the middle of morning drive.
At 7:45 the station recovers somewhat, but it still spends seven minutes in the red. Note that five minutes are in the green.
Were I working for Nielsen, I might try to put a happy face on the matter, noting that PPM editing rules grant a station the quarter-hour if there is listening for at least five minutes, not necessarily continuous, within the quarter-hour.
In this case the station would earn two of the four quarter-hours in the 7:00 hour: the first and the last.
Unfortunately, the station also lost two quarter-hours.
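
To see how the editing rule plays out on this hour, here is a minimal sketch in Python. It assumes that only green minutes count as detected listening, and it uses a rough reconstruction of the screenshot's minute colors (the ordering within each quarter-hour, the all-red middle half hour, and the three yellow minutes at 7:45 are assumptions), so it illustrates the arithmetic rather than reproduces Voltair's display.

```python
# Rough reconstruction of the hour described above; ordering within each
# quarter-hour is assumed, and the middle half hour is treated as all red.
G, Y, R = "green", "yellow", "red"
hour = (
    [G]*5 + [Y]*4 + [R]*6 +   # 7:00 quarter-hour: 5 green, 4 yellow, 6 red
    [R]*15 +                  # 7:15: "almost all red"
    [R]*15 +                  # 7:30: "almost all red"
    [G]*5 + [Y]*3 + [R]*7     # 7:45: 5 green, 7 red (3 yellow assumed)
)

def quarter_hour_credit(minutes, required=5):
    """PPM editing rule as described above: credit the quarter-hour if at
    least five minutes (not necessarily continuous) are detected.
    Assumption: only green minutes count as detected listening."""
    detected = sum(1 for m in minutes if m == "green")
    return detected >= required

credited = [quarter_hour_credit(hour[i:i+15]) for i in range(0, 60, 15)]
print(credited)                 # [True, False, False, True]
print(sum(credited), "of 4")    # 2 of 4 quarter-hours credited
```

Under those assumptions the rule credits exactly the first and last quarter-hours, matching the two-of-four outcome described above.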
Even if we accept a 50% capture rate for the hour as tolerable, there's still the issue of PPM's granularity claims.
What do we make of the minute-by-minute numbers in this hour?
Were we not aware of all the red bars in the middle of the hour, we might question the content during this time.
What were we doing wrong at 7:15 for all our listeners to leave?
What did we do at 7:46 to bring them back?
Not knowing that there were encoding problems, we might draw conclusions about our product that have nothing to do with what we were doing at those times.
In written testimony (PDF) before Congress, Michael Skarzynski, then president and CEO of Arbitron, asserted this:
The PPM service produces very specific data on panelist exposure to radio stations, such as precise tune-in and tune-out times, which can be correlated to the station’s programming on a minute-by-minute basis, and therefore can be used as a proxy for determining the attractiveness of particular program content.
Given what we now know about encoding issues and the apparent inability of the decoder to capture all listening, it seems as though Mr. Skarzynski was rather optimistic about PPM’s capabilities when he appeared before Congress.
PPM is neither as granular nor precise as he testified.
Perhaps Nielsen should come forward and set the record straight: PPM needs a little help from Voltair to achieve the kind of granularity that Mr. Skarzynski claimed to Congress.
Audience Development Group is republishing this column with thanks to Harker Research.