Media Matters goes beyond simply reporting on current trends and hot topics, getting to the heart of media, advertising, and marketing issues with insightful analyses and critiques that provide perspective on industry buzz throughout the year. It's a must-read supplement to our research annuals.
As the discussion about new currencies for TV ad buys drags on, we continue to be surprised at the resistance to measuring attentiveness to whatever is on a screen, including ads. What’s really going on? Let’s take a closer look.
Currently, Nielsen tabulates national TV show ratings based only on those minutes when commercials are the main on-screen content. However, Nielsen doesn't really know whether anyone is watching during a given minute of content; unless its system is notified otherwise, it assumes that the person who claimed to be "viewing" when the channel was first selected is still "watching" when the commercials come on screen. People mistakenly accept these average commercial minute ratings as representative of actual commercial "exposure" or "viewing," but observational research makes clear that substantial proportions of program viewers absent themselves during commercials, while many of those who remain pay no attention. As for local TV, the ratings are even more inflated, as they reflect quarter-hour "audiences," not average minutes, so commercial zappers are counted as if they "watched."
The problem is how to define "attention." If you take the IAB's definition and count any viewer as "attentive" if their eyes were on the screen for at least two seconds, this favors short commercials over longer ones and does not account for probable levels of message communication. How long does it take for a typical :15 viewer to get the message? Is the learning threshold 5 seconds? What about :30s? Is the average learning-time threshold 8 seconds, 10 or 15? Some suggest using time spent instead, which sidesteps this issue: under the time-spent concept, every second of attentiveness has equal value. It's simple and workable, but in our view far too simplistic. Seconds of attentiveness at the low end of the scale have less value than those at the higher end, which flesh out the storytelling function of most commercials for those who have chosen to watch the entire message. Unfortunately, we don't have answers to any of the basic questions about what the data means and how it might be used. And advertiser CMOs are notably absent from such discussions and, to be truthful, from the entire debate about what should be measured by a new national TV rating service. So we will wind up with nothing more than a seller-orchestrated "big data" service: a huge sample that provides "granular" data based on device usage, with no idea what the information means or how to interpret it.
We recognize the terror that any shrinking of the reported "audience" holds for sales folks. We've seen it in print media, first when the push for "total audience," including pass-along "readers," generated bigger numbers, and later when the recent-reading methodology produced much higher "readership" numbers than the through-the-book, visually aided recall method. Publishers thought that bigger numbers meant greater ad revenues, yet exactly the opposite happened. When advertisers wanted, say, 50 GRPs per month via magazines, and bigger audience figures showed them that this could be attained with fewer insertions, they simply directed the money "saved" elsewhere, mainly to TV. The same point applies to TV with attentiveness metrics. Suddenly, the number of commercial viewers would drop significantly from what's being used as the current commercial-viewer "currency." And this would apply to all forms of "TV," not just linear. Would this cause advertisers to desert the medium and switch their spending to radio or magazines? Of course not. Far more likely, once the shock of the truth hit them, many advertisers would consider increasing their "TV" ad spend. Why? Because most branding advertisers are wedded to TV-style communications, and they will not desert it, even if their CPMs are calculated on a more meaningful basis.
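The arithmetic behind that "saved money" effect can be sketched in a few lines. The 50-GRP target comes from the example above; the per-insertion ratings are purely hypothetical illustrations of a stricter versus an inflated audience methodology.

```python
import math

def insertions_needed(target_grps, rating_per_insertion):
    """Insertions required to hit a GRP goal, since GRPs = rating x insertions."""
    return math.ceil(target_grps / rating_per_insertion)

target = 50.0      # the advertiser's monthly GRP goal (from the example above)
old_rating = 5.0   # hypothetical per-insertion rating under the stricter method
new_rating = 8.0   # hypothetical, inflated rating under "total audience"

print(insertions_needed(target, old_rating))  # 10 insertions
print(insertions_needed(target, new_rating))  # 7 insertions; the budget "saved" goes elsewhere
```

Bigger reported audiences let the same GRP goal be met with fewer insertions, which is exactly how inflated numbers can shrink, rather than grow, a medium's ad revenue.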
All we’re proposing is that we get accurate program and commercial viewing information via attentiveness measures, instead of relying on inaccurate estimates. Otherwise, the new information is nothing more than a correction of the ratings the industry has been using, not a change in direction. As to how the information is tabulated, that can be left to an industry debate about reporting every commercial separately, showing break-by-break data, or averaging commercials by length. Some will favor counting all attentive seconds as worthwhile and using time spent as the metric, while others, like us at Media Dynamics, Inc., would suggest creating attentiveness time thresholds for each commercial length, such as counting only those who saw at least 8 seconds of a :15 and 15 seconds of a :30. But all of this would be determined, hopefully in a logical manner, once the hurdle of creating attentiveness measures was actually overcome.
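The threshold idea can be sketched concretely. The thresholds (8 seconds for a :15, 15 seconds for a :30) come from the text above; everything else here, including the function name and the data shape, is a hypothetical illustration.

```python
# Commercial length (seconds) -> attentive seconds required to count the viewer.
# These two thresholds are the ones suggested above; others could be debated.
ATTENTION_THRESHOLDS = {15: 8, 30: 15}

def qualifies(commercial_length, attentive_seconds):
    """Count a viewer only if attentive time meets the length's threshold."""
    threshold = ATTENTION_THRESHOLDS.get(commercial_length)
    if threshold is None:
        raise ValueError(f"No threshold defined for a :{commercial_length}")
    return attentive_seconds >= threshold

# Hypothetical viewer records: (commercial length, seconds of eyes-on-screen time)
viewers = [(15, 2), (15, 9), (30, 14), (30, 22)]
attentive_audience = sum(qualifies(length, secs) for length, secs in viewers)
print(attentive_audience)  # 2 of the 4 viewers qualify under these thresholds
```

Note how this differs from both the two-second standard (which would count all four viewers) and a pure time-spent metric (which would weight every second equally): only viewers who plausibly absorbed the message are counted at all.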
While we’re among the biggest proponents of attentiveness measures that you will find, we see little chance of their being included in the next round of national TV rating services, including Nielsen's new "big data" service. The sellers won't allow it. But even if they did, and we had access to attentive-audience projections for every commercial, all that would happen is the creation of a new GRP currency that would produce, in aggregate, much lower GRPs than the old one.
But back to the basic question: we stand strongly behind the need for attentiveness measures in any new national TV rating service, and this applies to CTV as well as digital video. However, the industry isn't ready for such a vital improvement because the sellers, who will foot much of the bill for any new rating service, will no doubt veto such a refinement. They simply want big numbers. As advertiser CMOs don't seem to care and are unwilling to spend any money on better ratings, we will continue to be stuck with set usage as the basic "audience" measurement metric, melded with "big data" panels to generate huge samples for "granular" analyses. But we will still not know who watched the commercials.
As a companion piece to our commentary above, we would like to share our latest white paper with our Media Matters subscribers. One of the hottest topics in media concerns how national TV audiences should be measured. Are you up to speed on this important subject? Want to know more, including the pros and cons of the proposed approaches? This complimentary white paper cuts to the heart of the matter with interesting insights and information you probably haven't encountered before. Download the white paper here.