Media Matters goes beyond simply reporting on current trends and hot topics to get to the heart of media, advertising and marketing issues with insightful analyses and critiques that provide perspective on industry buzz throughout the year. It's a must-read supplement to our research annuals.
Whenever the subject of intermedia comparisons comes up, people throw up their hands and cry, “That’s like comparing apples and oranges!” In other words, it just can’t be done. This is, of course, merely an excuse for avoiding the responsibility of evaluating varying media mixes on a quantitative or qualitative basis.
Sadly, most media mix decisions are made arbitrarily. If an advertiser’s established media preferences call for 70-90% of its ad dollars to go to television, along with a smattering of magazine ads and some dabbling with digital platforms, this is what next year’s media plan will probably look like, as will the year after that, and so on, albeit with some minor tinkering (including perhaps a modest increase in digital spending to exploit video commercials and targeting opportunities). In such situations, radio, newspapers and out-of-home media will not even be considered.
Despite the natural inclination of client and agency executives to avoid making intermedia comparisons, there is no reason why an objective evaluation of various alternative media mixes can’t be made.
When an advertiser hires a new marketing director, this often inspires something of a shakeup and, in some cases, a brief return to “zero-based” thinking. Suddenly new questions are asked, like, “Why are we spending so much money in primetime TV?” or “What about digital?” Invariably, the brand managers and agency account execs can’t answer effectively. So it falls to the agency media planners to come up with satisfying explanations, which usually have to be manufactured retroactively, since these issues were never raised by prior client marketing administrations or dealt with in their media plans.
At first glance, the media planner who is asked to make a zero-based review of all of an advertiser's media options faces an imposing and intimidating task. While lots of data—particularly of the audience and cost efficiency variety—are available, the basic question has always been whether evaluating magazine "reading" vs. TV "viewing" or radio "listening" information is comparing apples to oranges. Compounding this are concerns about the audience definitions themselves, their relevance as indicators of ad exposure or impact and, more fundamentally, their inherent accuracy. Can equal confidence be placed in the findings of metered network TV "viewing" studies, radio PPM projections or diary listening recollections, and magazine "total audience" research conducted via personal interviews based on respondent recall?
The full report, part of our Media Insights & Data Service, describes the comparability issues in how media audiences are measured, and especially in definitions of "ad exposure" such as Nielsen's average commercial minute viewer ratings for national TV. A large body of evidence on ad recall norms is also evaluated, culminating with our independent estimates of how the reported audiences for each medium probably translate into comparable ad exposure and recall levels.
Over the past decade or so, a few research firms have offered media mix guidance via so-called return-on-investment (ROI) analyses based on computerized models. These take a variety of inputs—including ad spending, GRPs by medium, media weight by month or quarter, ad awareness findings and sales data—to determine how the variables interact and, ultimately, what the advertiser's ROI was relative to norms established from prior studies.
Needless to say, this sort of thing appeals to people who seek formula-style answers and, especially, those who are awed by the use of computers and mind-boggling terms like "multivariate regression analysis."
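Stripped of the jargon, the core of most such models is an ordinary regression of sales against media inputs. The sketch below illustrates the idea with entirely hypothetical quarterly data; the figures, variable names, and cost units are illustrative assumptions, not drawn from any actual ROI study:

```python
import numpy as np

# Hypothetical quarterly inputs (illustrative only):
# columns = TV GRPs, digital spend ($000), print spend ($000)
X = np.array([
    [800, 120, 60],
    [950, 150, 55],
    [700, 200, 70],
    [1100, 180, 40],
    [600, 250, 80],
    [900, 220, 50],
], dtype=float)
sales = np.array([5200, 5600, 5100, 5900, 5000, 5500], dtype=float)  # $000

# Add an intercept column and fit by ordinary least squares --
# the "multivariate regression" at the heart of most ROI models.
A = np.column_stack([np.ones(len(X)), X])
coefs, *_ = np.linalg.lstsq(A, sales, rcond=None)
intercept, b_tv, b_digital, b_print = coefs

# Each coefficient estimates incremental sales per unit of media input;
# dividing by the cost of that unit yields a crude ROI-style figure.
print(f"sales lift per TV GRP:      {b_tv:.2f}")
print(f"sales lift per $1K digital: {b_digital:.2f}")
print(f"sales lift per $1K print:   {b_print:.2f}")
```

Real models layer on lags, seasonality, and saturation curves, but the mechanism—and its sensitivity to which inputs and time periods are chosen—is the same.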
Not surprisingly, media sellers and promotional organizations have jumped on the ROI bandwagon, and a growing number of media-sponsored ROI studies have been making the rounds, each purporting to demonstrate how the sponsoring medium and its competitors compare in terms of impact and/or sales relative to ad spending. These evaluations are invariably based on past experiences that the research company claims to have had with unnamed clients, or on new research merged with data on sales and other metrics supplied by an advertiser or sponsoring medium. In one series of ROI studies conducted for the Magazine Publishers of America (MPA), magazines topped TV by an astounding 10:1 margin in many product categories. In a radio ROI study, radio held a modest edge. For the Mobile Marketing Association, mobile ads bested the broadcast TV networks as well as the Internet, magazines and newspapers.
The major issue with most ROI studies is the simple fact that few people can fathom how they work or what their findings mean. Most of the presentations we have seen describe the internal machinations of their models in highly generalized terms. The idea seems to be to impress the audience by citing computerized “findings,” which, by amazing coincidence, always jibe exactly with the sponsoring medium’s sales pitch. If one starts to ask hard questions, the presenters are often not very forthcoming.
To be frank, ROI studies sponsored by a specific medium often stack the deck in that medium's favor when choosing the kinds of advertisers whose performance is evaluated. For example, many of the anti-TV studies select "typical" TV advertiser situations—by which they mean cases where the advertiser allocates the bulk of its dollars to TV, while the sponsoring medium represents a much smaller share of spending. In effect, such advertisers tend to be those who overspend in TV and underspend in other media. Consequently, the advertiser's total TV ad dollars are used less efficiently, due to the inherent redundancy of such campaigns and the diminishing-returns effects thereby engendered.
So can ROI studies direct our media mixes? The answer, so far, seems to be probably not, though these studies contain some interesting directional findings worth noting. The full version of this article—part of our ongoing Media Insights & Data Service—examines these aspects in detail and provides an overview of key studies produced in the past decade, including the basic methodological pitfalls of such studies.