Transparency and 3rd Party Reporting: Why Stop at Digital Media?
Last week, I had the good fortune to hear Procter & Gamble’s Chief Brand Officer, Marc Pritchard, speak on building a better, more transparent digital media supply chain at the Association of National Advertisers Media Conference. Mr. Pritchard recently made headlines for a late-January speech at the Interactive Advertising Bureau’s Annual Leadership Meeting, where he excoriated the industry for all too often creating “crappy advertising accompanied by even crappier viewing experiences.” He went on to outline specific steps P&G plans to take to clean up its digital media supply chain, and in last week’s speech to the ANA he addressed how marketers should respond to specific objections or “head fakes” which vendors and agencies might raise.
It’s very encouraging to see P&G take a leadership role in driving the industry forward on persistent problems in the digital paid media ecosystem – issues like viewability, transparency, self-reporting by so-called “walled gardens” and fraud. To be fair, many organizations (like the IAB, the Media Rating Council and the Trustworthy Accountability Group) have been working diligently on these issues. But at the end of the day, money talks, which is why P&G’s “put up or shut up” approach is so important and newsworthy.
But why stop with digital? What many advertisers may not know or understand is that there are a growing number of TV solutions (you may be using some of them right now) which sell based on self-reported or custom-calculated performance data which they cannot (or will not) substantiate via full occurrence-level transparency.
Transparency Lapses in TV
Although we don’t work with P&G, many of our clients by their very nature also take a leadership role in the paid media supply chain and have been on the leading edge of driving accountability. For example, we have been asked in some cases to dig into various non-traditional TV offerings, to determine whether their methodology is transparent, and if so, whether it is sound.
We won’t name names. There are multiple providers in this space, and they each have slightly different offerings (delivery systems, measurement, reporting). In general, they are aggregating local cable and/or satellite inventory (addressable or non) and selling it in a targeted or a remnant fashion on a national equivalent basis.
There is certainly value in these various solutions. Depending upon objectives, advertisers can purchase inventory for ostensibly efficient rates, while in some cases also employing advanced targeting techniques. The potential merit of these media channels and the rationale for their recommendation by agencies is not in question.
However, in many cases the degree of their merit for a specific advertiser’s media efforts (i.e. their efficiency, their ability to hyper-target, etc.) is called into question by their inability or unwillingness to provide transparent, occurrence level data and/or by the absence of externally reported 3rd party research. As media auditors, our position is fairly straightforward. Show us the schedule (network, day, date, time). Document the geography. Explain the math. Point to the 3rd party research. If we as an expert 3rd party can’t replicate the actual delivery being reported, then we have a transparency issue.
- Some vendors sell large footprints of households by rolling up individual cable and satellite zones, which they aggregate from various multiple-system operators (MSOs); however, they do not provide the geographic specificity (i.e. so-called cable “sys codes”) necessary for anyone other than the vendor to calculate schedule delivery.
- In others, buys are reported against various targeted genre bundles, but not against individual occurrences (i.e. networks), making it impossible to independently assess delivery.
- In still others, requisite occurrence level detail simply is not provided. Some of these vendors work with an independent measurement service to create proprietary audience data, but it is impossible for a 3rd party to independently validate performance.
- In both local and “national” TV, there are various sports properties which air on a variety of outlets (unwired-style networks or various local and regional configurations), and where documentation does not provide full transparency into delivery. Affidavits do not provide the requisite station or other geographic information to allow these vehicles to be posted independently. Agencies are left accepting vendor posts or “overriding” their local posts without appropriate documentation.
Further, in many cases agencies don’t appear to be asking a lot of questions about the performance being reported back to them from these media vehicles (because if they were, they would be better equipped to answer ours). As the industry moves toward truly programmatic TV solutions, this occurrence level transparency will be ever more crucial if advertisers are to understand what they received. Without occurrence level details, it is impossible to independently validate delivery or compliance (such as programming restrictions). As Mr. Pritchard pointed out regarding the digital ecosystem (and it applies here as well), advertisers need full transparency and consistency of measurement so that they can move to the next step and understand the value being received for their media investments.
How Can I Tell If We Are Doing Business With These Vendors?
It isn’t always easy to determine whether you are using one or more of these TV vehicles if your agency or audit firm has not called it to your attention. If your buys feature properties where a single buy with one vendor results in audiences being delivered across multiple networks, stations, cable zones, or other vehicles, that might be your first clue. Assuming that you see vendor invoices, and your invoices don’t provide full detail as to where inventory is clearing (i.e. network, station, sys code, etc.), then again you have to wonder how the “actual” audiences are being calculated.
If you are unsure, it is worth asking your agency or your media audit firm. There are really a couple of key questions with respect to proof of performance for any media vehicle, and they certainly apply to these vendors:
- Is occurrence level detail available? When and where was it delivered (day, date, time, station, network, geography, etc.)?
- Is there accepted 3rd party measurement available to provide reliable audience delivery information for the corresponding occurrence level detail? If not, how is delivery being calculated?
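To make the two questions above concrete, here is a minimal, purely illustrative sketch (all field names, dates, and station call signs are hypothetical, not drawn from any actual vendor report): once occurrence level detail is provided, an auditor can mechanically match each vendor-reported spot against an independent 3rd party log, and any unmatched spot represents delivery that cannot be independently validated.

```python
# Hypothetical sketch of occurrence-level reconciliation.
# Field names and call signs are illustrative only.

def reconcile(vendor_occurrences, third_party_log):
    """Return vendor-reported spots that cannot be matched to an
    independently logged airing on (date, time, station)."""
    logged = {(o["date"], o["time"], o["station"]) for o in third_party_log}
    return [o for o in vendor_occurrences
            if (o["date"], o["time"], o["station"]) not in logged]

# Vendor-reported occurrences (hypothetical)
vendor = [
    {"date": "2017-03-06", "time": "20:15", "station": "WXXX"},
    {"date": "2017-03-06", "time": "21:40", "station": "WYYY"},
]
# Independent 3rd party log (hypothetical)
independent = [
    {"date": "2017-03-06", "time": "20:15", "station": "WXXX"},
]

unmatched = reconcile(vendor, independent)
print(unmatched)  # the WYYY spot cannot be independently verified
```

The point of the sketch is simply that this check is trivial when occurrence level detail exists, and impossible when only bundled or self-reported delivery is provided.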
Media Audit & Transparency Implications
First off, not every media auditor will even call this activity out to clients and agencies in their reporting. If your media auditor is simply regurgitating agency- or vendor-supplied delivery information anyway, then they may not really get into methodological questions such as these. If that is the case, though, you might as well just go ahead and take the vendor’s word for it, right?
Frankly, even for a media audit firm that does work at this level of detail, it is going to take client support and pressure (such as P&G is exerting in digital) to increase transparency in these areas. For example, some of the vendors cite restrictions in their deals with the actual media owners (MSOs) from which they purchase inventory that preclude them from reporting on specific geographic or other details. These may be in place to protect the MSOs' own direct sales efforts. In any event, they create a gap in transparency.
Without appropriate detail surrounding occurrences and measurement, it is impossible to provide 3rd party validation for these entities. Further, if specific network or program level detail is not provided, it is also often not possible to provide validation that compliance expectations (such as restricted networks or programs) have been adhered to.
For many advertisers, these types of TV vendors may only represent a small portion of overall activity. Many tend to be tactical, niche suppliers (unlike in digital, where it has been some of the largest vendors who have insisted upon self-reporting). In many cases, these vehicles are being used to add efficiency, to provide targeting opportunities, or to extend reach rather than as the primary component of the TV buys.
Although these vehicles might not represent huge components of many advertisers’ TV buys, they should still be subject to transparency expectations. It isn’t enough to be told how a media channel delivers your audience “in concept”; vendors should be able to provide full, transparent proof of performance, and – as in digital – that is likely going to require some pressure from advertisers.
This will happen if advertisers demand it, as P&G has done with its digital media expenditure. Like P&G, many of our clients are leading the charge on media transparency and accountability, and in many cases they have helped to move the industry forward. Some have been among the first to post and secure recovery for audience under-delivery in local radio (yes, it can be done, regardless of what your agency and its trade association may be telling you). Most have pushed for and received quarterly, station-level posting guarantees, with varying thresholds reflective of the measurement technology employed in each market – despite being told by some agencies and sellers that this was “not the industry standard.” Well, it is now. There are multiple other examples of measurement and accountability requirements shifting over time due to buy-side pressure. Because ultimately, the “industry standard” becomes whatever buyers (i.e. advertisers) are willing to pay for.
Photo credit: © Björn Hovdal | Dreamstime
The opportunity to further contribute to dialogue on this and other pertinent industry issues on behalf of our clients is precisely why MMi has joined the Association of National Advertisers.