Is optimizing for 0% bots a good idea?

The short answer is no. But why?

Even if you limit your ad campaigns to an inclusion list of good publishers, those publishers still have some bots visiting their websites. For a good publisher, this will usually be a low percentage of total traffic, but sometimes the red (bad bots) is higher, as with "good publisher 4" below. These publishers are not deliberately buying traffic the way fraudulent sites do, but any public website can be hit by any bot at any time.

Search crawlers (yellow), like the Googlebot, visit the site to index the publisher's pages for search engine results. These are good bots that the publisher wants visiting their pages.

Declared bots (orange) visit for purposes that are sometimes useful. Some scrape the site for legitimate reasons, while others scrape it for malicious ones, like stealing content. Either way, these bots declare themselves, typically in their User-Agent string, so that site owners can decide whether to block them.
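To make that concrete, here is a minimal TypeScript sketch of how a site owner might recognize declared bots from the User-Agent strings they send. The marker list is illustrative only; real publishers typically rely on maintained lists such as the IAB/ABC International Spiders and Bots List or a bot-management product.

```typescript
// Illustrative only: a few well-known declared-bot User-Agent markers.
const DECLARED_BOT_MARKERS = ["googlebot", "bingbot", "ahrefsbot", "semrushbot"];

function isDeclaredBot(userAgent: string): boolean {
  const ua = userAgent.toLowerCase();
  return DECLARED_BOT_MARKERS.some((marker) => ua.includes(marker));
}

// A declared bot identifies itself, so the site owner can choose to allow
// it (search crawlers) or block it (unwanted scrapers).
console.log(isDeclaredBot(
  "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
)); // true
console.log(isDeclaredBot("Mozilla/5.0 (Windows NT 10.0; Win64; x64)")); // false
```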

Bad bots (red) attempt to disguise the fact that they are bots. They visit good publishers' sites to collect cookies and make themselves appear to be part of specific audiences so they can make more money. For example, they visit medical journals so they can appear to be part of high-value audiences like doctors. This is useful to the bot operator because media buyers will pay more for impressions that reach those audiences.

Bad bots can also attack sites by deliberately flooding their pages with traffic. We've seen this happen to good publishers' sites. When bad bots attack, Google or another ad platform will detect the surge in bots and ding the site for bot traffic. Google might also withhold ad revenue and, in extreme cases, kick the site out of its network. Legitimate publishers have been harmed this way by bad actors directing bot traffic at them.

To protect advertisers, good publishers implement measures like bot filtering and avoiding ad calls for bots, even well-intentioned ones like search crawlers and declared bots. Advertisers can use FouAnalytics to verify whether their campaigns are affected.
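As a rough illustration of what "avoiding ad calls for bots" can look like on the publisher side, here is a minimal TypeScript sketch. requestAd() is a hypothetical stand-in for whatever ad server or header-bidding call a publisher actually uses, and the bot check shown (a User-Agent pattern plus the navigator.webdriver flag) is deliberately simplistic.

```typescript
// Hypothetical stand-in for the real ad request (e.g. GPT, Prebid, a direct tag).
function requestAd(slotId: string): void {
  console.log(`ad requested for slot ${slotId}`);
}

// Skip the ad call entirely when the visitor looks like a bot, so no paid
// impression is generated against an advertiser's campaign.
function maybeRequestAd(slotId: string, userAgent: string): void {
  const declaredBot = /bot|crawler|spider|slurp/i.test(userAgent);
  // navigator.webdriver is set to true by many browser-automation frameworks.
  const nav = (globalThis as any).navigator;
  const automated = nav?.webdriver === true;

  if (declaredBot || automated) {
    // No ad call: even well-intentioned bots like search crawlers
    // should not trigger paid impressions.
    return;
  }
  requestAd(slotId);
}

// A Googlebot visit never reaches the ad server; a normal browser does.
maybeRequestAd("leaderboard-top", "Mozilla/5.0 (compatible; Googlebot/2.1)");
maybeRequestAd("leaderboard-top", "Mozilla/5.0 (Windows NT 10.0; Win64; x64)");
```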

So the idea of optimizing toward 0% ad fraud is unrealistic, even when you're buying from real publishers. It's even more absurd to insist on 0% fraud when buying ads through programmatic channels and relying on legacy fraud detection vendors. Why? Those vendors catch barely 1% of the bots and fraud; bots easily bypass their detections. So if you insist on 0% fraud, you are effectively increasing your own exposure to fraudulent sites whose bot traffic gets marked as 0% IVT by those legacy vendors.

So What?

Instead of insisting on buying 0% IVT ads, make sure you have a post-bid JavaScript tag, like the in-ad FouAnalytics tag, to measure where your ads actually went and whether bot activity or other forms of fraud caused the ad to load. That way you can quickly identify the sites and apps that are cheating and turn them off, e.g. by adding them to a block list or removing them from your inclusion list (see the sketch below). This lets you monitor your campaigns and keep IVT and bots as low as possible; they don't have to be zero for your campaigns to work.
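For example, a block list can be generated mechanically from measured IVT rates. The TypeScript sketch below assumes hypothetical per-domain measurements of the kind a post-bid tag would report, and an arbitrary 20% threshold; the domains and numbers are made up for illustration.

```typescript
interface SiteMeasurement {
  domain: string;
  impressions: number;
  ivtImpressions: number; // impressions flagged as invalid traffic
}

const IVT_BLOCK_THRESHOLD = 0.2; // block anything above 20% IVT (arbitrary)

function buildBlockList(measurements: SiteMeasurement[]): string[] {
  return measurements
    .filter((m) => m.impressions > 0 &&
      m.ivtImpressions / m.impressions > IVT_BLOCK_THRESHOLD)
    .map((m) => m.domain);
}

// Hypothetical measured data; real numbers come from your post-bid tag.
const measured: SiteMeasurement[] = [
  { domain: "goodpublisher1.com", impressions: 50_000, ivtImpressions: 1_200 },  // ~2% IVT: keep
  { domain: "goodpublisher4.com", impressions: 40_000, ivtImpressions: 4_400 },  // ~11% IVT: keep, monitor
  { domain: "sketchysite.example", impressions: 30_000, ivtImpressions: 21_000 } // 70% IVT: block
];

console.log(buildBlockList(measured)); // [ "sketchysite.example" ]
```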
