Why Google’s Fix for Customer Angst Isn’t Fixing It
Keith Ferrazzi
#1 NYT Bestselling Author | Keynote Speaker | Executive Team Coach | Founder, Chairman, & CEO, Ferrazzi Greenlight
I talk a lot about relationships being the currency of productivity, achievement, and business results, especially in a world of work driven more and more by interdependency. YouTube’s recent troubles, which have extended to Google’s larger advertising platform, present a good example. Why did it take a boycott by major U.K. advertisers, with U.S. ones quickly following, to make Google address their very real concerns?
As Jan Dawson at Recode points out, the fundamental flaw creating the problem lies with Google’s programmatic advertising model, “where … computers make the decisions subject to policies set by site owners and advertisers.” Google positioned this level of advertiser control as a net positive, especially when combined with artificial intelligence (AI) that could better detect potentially offensive content, and on balance it is. Users upload roughly 400 hours of new content per minute, and such a high volume would be impossible for humans to review: our brains are too slow and limited, while computers never get bored or tired. The AI wasn’t the problem. The algorithm that separates good content from bad is a breakthrough, but it still relies on our brains to tell it what to look for. Without being fully conversant in the language used, in all its global interpretations and meanings, AI couldn’t prevent ads from running alongside content those brands’ audiences would find offensive. (Plus, for the record, people are still smarter, or more cunning, than AI, as we witnessed when the Twitter community transformed Microsoft’s Tay, an AI chatbot, from a supposedly curious teen into a hate-spouting monster.)
Google offered apologies and vowed to do better, and it is making its Google Preferred list available to advertisers upon request so they can monitor the channels where their ads may run. Where I thought Google could do better was in addressing the human side of the boycott and the reasons for it, especially as major advertisers like Chase have drastically cut the number of sites where they advertise with little loss of impact. (Even if we see clients like Audi or Marks & Spencer as large corporations, they’re still made up of people who will always have human responses that AI will never be able to gauge.) Yet when the company promised to hire a “significant number of people” to address the problem, it did not assign them to work through the issues directly with customers. From what I can tell, those people are working to develop new procedures and processes that will curb offensive content and give advertisers greater control over where their ads appear.
To be fair, Google is still in the early days of responding to its advertisers, and I applaud the company’s willingness to implement ideas quickly, so that they can iterate along the way. My issue is that applying more technology to fix technology isn’t the whole answer, particularly when offensive content—offensive to human sensibilities—is at stake.
Technology was always meant to improve the ways in which we work, not replace us. Since I graduated from college, I’ve seen mail replaced by faxes, which gave way to email, followed by file-sharing, direct-messaging, and text. And now there are times when even the latest forms seem slow. That’s how quickly we adapt to, then rely upon, a new technology to help us communicate and solve problems better, faster, and more effectively. But AI can’t do that. It doesn’t seek out new information, it isn’t curious, and it can’t empathize with, or express regret to, someone who’s been harmed by its mistakes. It can only move on and not make that mistake again. There are times when technology problems need human solutions to protect the value of mutual understanding, empathy, and intelligence. So in that spirit, I offer companies, including Google and its YouTube, a couple of guiding thoughts:
Respect human judgment before intelligence.
I appreciate the value of AI, but it should enrich human interaction, not take its place. I grappled with this in building my own SaaS platform, Yoi, which gives organizations data that helps them support and coach their new hires so they can quickly become thriving, engaged teammates. Embrace the many ways AI makes your coworkers more productive, but don’t put it above them or think it can do every function the human brain can, because it can’t. Even if our brains are grossly inferior to processors, they’re still the gold standard at listening to what our clients want and having a conversation about how to serve them better than anyone else.
This listening is even more critical when, as with Google and YouTube, customers aren’t feeling heard. Recently, I wrote about the importance of generosity to healing damaged relationships, which is just as applicable at an organizational level as it is at the personal level. If major advertisers feel they need to pull their ad buys to have their issues addressed, it may already be too late. When customers feel wronged, whether it’s personally or professionally, they want to feel empathy from their vendor. They need to hear the promise to do better and be asked what they need to be made whole. That can only be done through personal contact.
Use your mistake to strengthen your customer relationship.
Bringing your clients into the resolution process will not only help them feel heard, it will also likely speed you to an effective outcome. Clients know best how they’ve been affected, and their point of view will undoubtedly add a needed perspective on the best fix for the error.
When you create a process for this collaboration that doesn’t burden the customer—using teleconferencing, file-sharing, and instant messaging—you communicate to your customers how much you value their time and their contribution. When a company acknowledges its mistake and engages its customers to be sure they get the fix right, it goes a long way to creating the kind of transparency that ultimately deepens customer relationships.
Image courtesy of Ben Nuttall.
For more from Keith, you can follow along on Facebook, Twitter, and YouTube, or at Keith's homepage.
MCIPR - Public Affairs | Investor Relations | Crisis Comms
7 years ago
Any digital marketer with BASIC training knows how to tweak the campaign settings so that the ad is targeted at only the publishers and YouTube channels that are a brand fit. If the so-called digital marketer is ignorant, they will spray the ad at everyone, relevant or not. The four steps needed to do this take 2 minutes. The boycott was pointless: https://e27.co/protect-brand-extremist-content-4-steps-20170405/
Marketing Manager at Negnoir
7 years ago
Thank you for clarifying this issue.
SVP Global Business Development - specialized in multilingual consolidations of services across the globe
8 years ago
The question is not about whether programmatic advertising is bad in this environment. Right now, Google is certainly doing its best to rectify the situation for its clients and users. The true question is how to prevent this content from getting onto the platform in the first place, so that Google and its clients do not have to deal with these problems in the future. Ultimately the discussion will be around what content can be uploaded and what cannot. It is clear that some content is globally unacceptable by all cultural standards. But the biggest challenge lies in the grey areas of evaluation, not only for machines but also for humans. Each culture and country has different standards for what we experience as sensitive or offensive, what is free speech and what is offensive to others. Google/YouTube and other content platforms will have to control this information before it gets onto their platforms to provide a safe space for users and advertisers. That also means these companies become the gatekeepers of public information. An even more challenging situation to have in the future...
Design Thinking - development of sustainable, responsible business models & projects - communication, networking & events
8 years ago
I always cheer a little inside when someone uses the word "empathy" in a professional context. It's grossly underestimated just how much professional empathy means. Google, as a company, has always had this problem: it under-appreciates the human factor. If I may, I argue along these lines in this article, which I wrote back when they started letting Google+ go: https://www.dhirubhai.net/pulse/what-google-thinking-jesper-wille With regards to programmatic advertising, I simply think the model is fundamentally flawed. The Chase example clearly indicates that it mostly doesn't even deliver, and all of this is before we've started talking about the reams of ad fraud that go on in the pipeline.