Humanness Score -- An Open, Peer-Reviewed Standard for Digital Advertising
Background
Having spent the last 20 years in New York's Silicon Alley and seen the entire story arc of how the Internet has changed advertising forever, I have the privilege and the duty to publish, and submit for peer review, a Humanness Score(tm) for Digital Advertising. It was born out of the need to unify the industry in the war against all forms of digital ad fraud, waged against an enemy that operates remotely, from the shadows.
It is my conviction that digital advertising is the most effective form of all marketing activity and is poised to become the central and dominant means by which advertisers reach and interact with human customers, something not possible with historic, one-way media. To expeditiously reach this ambitious future state, every actor in the digital ecosystem must act with firm resolve to eliminate fraud and malpractices and to earn and retain the trust of peers.
An ecosystem that persists must balance the needs and motivations of all of its members -- in our case, the users, the publishers, and the advertisers -- and the value-added practitioners that serve each of these. Any imbalance will cause the ecosystem to fail; any selfish gains will be short-lived. In digital advertising, there are countless inputs, metrics, and things to optimize. But none is more important than ensuring the ads are viewed by humans, and not by bots -- the fraudulent agents used by criminals to create ad impressions and clicks.
Why We Need the Humanness Score
The reason for the Humanness Score (HT @BradBerens) is that there is no central, standard database of verified humans for use in digital advertising. We simply have users who visit websites, in most cases anonymously because they are not required to log in -- think weather, news, sports, and magazine sites. We do already derive data points that yield better targeting -- e.g. the list of sites users visit, social media chatter, search keywords, and even items added to shopping carts and then abandoned. But given the prolific activity of bots over the entire history of digital advertising, none of these parameters alone can guarantee "humanness"; every single one has been documented to be faked by bots. With the Humanness Score, advertisers can choose to optimize their ad spend by favoring the entities (publishers, exchanges, etc.) that show higher humanness.
The Humanness Score(tm) for Digital Advertising
The Humanness Score is an indexed number from 0 (non-human) to 100 (human) which shows the relative "humanness" of the users that visit websites and cause ad impressions to load. It can be measured on-site (on a website) and in-ad (in an ad impression). The score is indexed relative to peers, using then-current data for the calculations. For example, publishers are indexed against other publishers, while ad impressions (e.g. from ad networks) are indexed against each other. This ensures that as the entire "sea level" rises, each certified member must continue to innovate to maintain higher scores.
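The peer-relative indexing described above can be sketched as a percentile rank: an entity's raw rate of confirmed humans is ranked against its peer group, so the same raw rate earns a lower indexed score as the whole group improves. The function name and the raw "confirmed-human rate" input below are illustrative assumptions, not part of the published methodology.

```python
def indexed_score(raw_rate, peer_rates):
    """Index a raw confirmed-human rate (0.0-1.0) against a peer group.

    Returns a 0-100 score: the percentage of peers whose raw rate is at
    or below this entity's rate. As peers improve (the "sea level"
    rises), the same raw rate yields a lower indexed score.
    """
    if not peer_rates:
        raise ValueError("need at least one peer to index against")
    below_or_equal = sum(1 for r in peer_rates if r <= raw_rate)
    return round(100 * below_or_equal / len(peer_rates))

# A publisher with 80% confirmed humans, indexed against six peers plus itself:
peers = [0.55, 0.62, 0.70, 0.74, 0.80, 0.85, 0.91]
print(indexed_score(0.80, peers))  # 5 of 7 peers at or below -> 71
```

A rank-based index like this is one simple choice; any monotonic peer-relative normalization would preserve the "rising sea level" property.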
A higher Humanness Score means a higher proportion of confirmed humans and good policies and disclosures that go along with ensuring a human audience. A higher Humanness Score should also lead to higher premium CPM or CPC -- this rewards premium publishers for their good work and "playing by the rules" and rewards advertisers with better performance for their ad spending.
Score Syntax: < J9NGT6H3 | 65 (i^9) >
Definitions:
- J9NGT6H3 = client/entity identifier
- 65 = Humanness Score, from 0 (bots) to 100 (human)
- i = in-ad, o = on-site
- ^9 = order of magnitude of the data set (here 10^9, i.e. billions of data points)
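A minimal parser for the score syntax above, as a sketch; the regular expression and field names are my own reading of the notation, not a normative grammar.

```python
import re

# Pattern for "< J9NGT6H3 | 65 (i^9) >": entity id, score, context flag, magnitude.
SCORE_RE = re.compile(
    r"<\s*(?P<entity>\w+)\s*\|\s*(?P<score>\d{1,3})\s*"
    r"\((?P<context>[io])\^(?P<magnitude>\d+)\)\s*>"
)

def parse_score(text):
    """Parse a Humanness Score string into its four fields."""
    m = SCORE_RE.match(text)
    if not m:
        raise ValueError(f"not a valid score string: {text!r}")
    return {
        "entity": m.group("entity"),
        "score": int(m.group("score")),                 # 0 (bots) to 100 (human)
        "context": "in-ad" if m.group("context") == "i" else "on-site",
        "magnitude": 10 ** int(m.group("magnitude")),   # size of the data set
    }

print(parse_score("< J9NGT6H3 | 65 (i^9) >"))
```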
The Inputs for Calculating the Humanness Score
The score is based on three categories of inputs, specified below, and takes into account policies, continuously audited data, and the willingness to disclose data for verification. The Humanness Scores of entities of the same type (e.g. ad exchanges) can be directly compared.
Policies - 10% of score
- does the entity purchase impressions or source traffic of any kind?
- does the entity have published policies protecting users' privacy, and does it consistently act according to those policies (see the EFF's Privacy Badger initiative)?
- does the entity sell data -- e.g. cookie matching, cookie profiles, collected or derived data?
Disclosures - 20% of score
- whether the entity provides full transparency to peers by providing access to visit-level data, so that peers can verify parameters like placement, viewability, and other metrics
- whether the entity provides auditors with access to the sites on which the ads were run, the sources of traffic, and the recipients of media payments
- whether the entity generates and shares threat data with peers
Data - 70% of score
- continuously measured data points on ad impressions and visits to websites -- a minimum of 1 billion in-ad data points is required for certification, and X million on-site data points for websites (depending on natural traffic volumes)
- how often data like website and cookie blacklists are updated
- whether the appropriate anti-fraud vendors are used and how the technology is deployed
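The three category weights above (Policies 10%, Disclosures 20%, Data 70%) combine into the overall score as a weighted sum. How each category's bullet-point criteria roll up into a 0-100 sub-score is left to the auditor; the sub-score values below are hypothetical, and only the weighting is taken from the specification.

```python
# Category weights from the specification: Policies 10%, Disclosures 20%, Data 70%.
WEIGHTS = {"policies": 0.10, "disclosures": 0.20, "data": 0.70}

def humanness_score(category_scores):
    """Combine per-category sub-scores (each 0-100) into the overall score."""
    missing = WEIGHTS.keys() - category_scores.keys()
    if missing:
        raise ValueError(f"missing categories: {sorted(missing)}")
    total = sum(WEIGHTS[c] * category_scores[c] for c in WEIGHTS)
    return round(total)

# Hypothetical sub-scores for one entity:
print(humanness_score({"policies": 90, "disclosures": 75, "data": 60}))  # 9 + 15 + 42 = 66
```

The 70% weight on measured data means an entity cannot score well on paperwork alone; strong policies and disclosures can only top up a score that is grounded in audited impression data.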
Example of Initial Scores
Different ad networks show vastly different percentages of confirmed humans and confirmed bots. These percentages feed into the score: ad networks with far more confirmed humans will score closer to 100, while those with more confirmed bots will score closer to 0.
Why It is Practical
The Humanness Score is intended to help advertisers and their agencies compare ad networks and compare publishers and then systematically optimize away from the ones that serve the lowest scores -- literally, lopping off the lowest decile (10%) over and over again to progressively improve the overall effectiveness of the media. It does not put the good guys into a technology arms race against the bad guys.
Sequentially reducing budget allocation to the worst-performing ad networks and exchanges, in terms of "humanness," allowed us to increase the percentage of confirmed humans -- more ad impressions shown to humans, as measured by our methodology.
Increasing the humanness -- more ad impressions shown to confirmed humans (dark blue, in the chart below) -- means more goal conversion events achieved on the website.
The following chart shows a publisher site with one of the highest ratios of confirmed humans relative to other types of users -- 80% confirmed humans and only 3% confirmed bots on the site. A good rule of thumb for an industry-wide estimate of "humanness" would be 70-80% confirmed humans. The rest would be made up of known search crawlers, known bots (those that honestly declare themselves), and "other" (which may be visitors that do not run JavaScript, etc.).
Ongoing Areas of Research
- research on the correlation between higher scores and CPMs
- research on the correlation between higher scores and conversion metrics (for categories that measure online conversions)
Peer Review Process
Everyone in the digital advertising community is invited to provide public input and constructive feedback in the comments below so that the Humanness Score can be improved continuously. Since you must be logged into LinkedIn to make a comment, the feedback will also be attributed and credited to the person who contributed it. This will also serve as a public transcript of the evolution of the Humanness Score, in our collective effort to accelerate its usefulness and adoption.