Ethics for Insights - How Facebook and Congress point toward an emerging responsibility for insights leadership in innovation.
Is the next big challenge for insights leaders to help engineers and humanists build a common language around ethics and data?

A few months ago, I traveled from my home base in San Francisco to Washington DC to meet with clients. It’s a trip that we make often, in support of a movement emerging around innovation and customer experience in the public sector.

That day, however, we weren’t the only visitors from Silicon Valley. Mark Zuckerberg was testifying in front of Congress in the wake of the Cambridge Analytica data scandal.

This was like our Super Bowl! Big Tech vs. Big Government! Social Media vs. the American People! Big Tech was the away team, traveling into a hostile environment. Would it stick to the game plan it runs on its home turf? Would Congress have trick plays up its sleeve?

When our meetings finished, we ran over to the local bar, ready to cheer and jeer with our Capitol Hill hosts. And when we got there... the crowd was watching baseball. I guess DC has a few more things on its mind. We found a booth and got the bartender to turn on another TV. In our little corner, Mark Zuckerberg and Congress were putting on quite the show.

There were pointed questions - like Senator Dick Durbin asking “Mr. Zuckerberg - would you be comfortable sharing with us the name of the hotel you stayed in last night?” 

There were informed challenges - like Senator Brian Schatz asking about Facebook’s Terms of Service and its Data Policy that we all know no customer really reads. 

There were issues raised that didn’t apply to Facebook, but would have other tech firms squirming - like Senator Dean Heller’s questions about selling data. Facebook sells targeting as a service - not the data itself. But other tech firms absolutely use that business model.

And sure, there were some lines of inquiry that were a bit inarticulate - like Senator Deb Fischer asking about the data points and data categories that Facebook stores.

In all of this, what I found most informative were not the questions themselves. It was the way members of Congress asked them. In most cases, these were real people trying their darnedest to understand technology so they could explain it to the everyday people they serve and protect.

Senator Gary Peters hit on the zeitgeist when he asked Mark whether Facebook “listens in” to conversations when users aren’t on the service. “I hear it all the time - including from my own staff.” Mark responded that “You're talking about this conspiracy theory that gets passed around that we listen to what's going on on your microphone and use that for ads.”

Facebook doesn’t work that way - but people are worried that it does, or might, or could. Where should Facebook draw the line on what it should and shouldn’t do? How do the coders, the marketers and the data scientists in Silicon Valley make those decisions?

There isn’t a ready answer - but there's a growing movement of customers, employees and leaders who urgently feel this is a problem our organizations and institutions need to work on.

Mark said the same thing: it’s time for a new level of consideration and discourse around how companies like his make decisions about data, technology and the design of their products and marketing. What he talked about was responsibility:

“Overall I would say that we’re going through a philosophical shift in how we approach our responsibility as a company. For the first 10 or 12 years of the company, I viewed our responsibility as primarily building tools. That, if we could put those tools in people’s hands, that would empower people to do good things. What I think we’ve learned now, across a number of issues, not just data privacy, but also fake news and foreign interference in elections, is that we need to take a more proactive role and a broader view of our responsibility. It’s not enough just to build tools. We need to make sure that they’re used for good.”

Mark is pointing at exactly the issue. People leading innovation and technology are beginning to realize they aren't clear about what their responsibility is, or how to talk about it.

In Weapons of Math Destruction, data scientist Cathy O’Neil agrees. She talks about a growing “separation between technical models and real people - and about the moral repercussions of that separation. In fact, I saw the same pattern emerging that I’d witnessed in finance [before the 2008 crash] - a false sense of security is leading to widespread use of imperfect models, self-serving definitions of success, and growing feedback loops.”

In other words, it’s time for us to consider an ethics for insights.


Why insights? Why ethics?

Insights are the building blocks of everyday and strategic decisions made throughout organizations. The term covers the people, functions and work involved in gathering information (aka data) and turning it into new types of knowledge (aka insights). It includes the algorithms, products and businesses built on those insights.

While organizations don't always manage "insights" as a broad umbrella concept, this framing is important because it allows us all to wrestle with the intent behind the choices we make about data and technology. Our goal is always to learn something new, to share findings and to act on what we learn.
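To make that definition concrete, here's a minimal sketch of what "turning data into insights" can look like in practice. The data, column names and conclusion are all hypothetical, invented for illustration:

```python
import pandas as pd

# Hypothetical raw data: one row per purchase (all values invented).
purchases = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 3, 3, 3],
    "channel":     ["web", "web", "store", "web", "store", "store", "store"],
    "amount":      [20.0, 35.0, 12.0, 18.0, 40.0, 22.0, 31.0],
})

# Turn raw records (data) into a pattern a team can act on (an insight):
# order counts and average order value, by channel.
summary = purchases.groupby("channel")["amount"].agg(["count", "mean"])
print(summary)
```

The "insight" is the interpretation layered on top of the numbers - say, "store shoppers spend more per order" - plus the decisions that follow from it. That last step is exactly where questions of responsibility live.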

Leaders and functions who have formal responsibility for insights – like marketing, product, customer/user research and corporate strategy – have a critical stake in convening their peers and stakeholders around these issues. These insights functions can enable cross-disciplinary dialog around infrastructure and data for less technical and more applied audiences.

Ethics can be thought of as practical guidelines about responsibility that professionals refer to when they’re making decisions about what to do. It does not need to be a high-minded exploration of morals or the classical ethics of Aristotle. Kate Crawford of NYU’s AI Now research institute described this as filling the “real air gap between high-level principles - that are clearly very important - and what is happening on the ground in the day-to-day.”

I don’t claim this to be a complete and rigorous evaluation of how to apply ethics in business and technology. But I spend my day job advising insights professionals and customer experience leaders on how to do their best and most impactful work. I’m looking for a starting point for all of us insights leaders about the responsibility we hold.


Why now?

Given that insights functions like market research and loyalty programs have been well established for decades - collecting data on customers and adjusting advertising to match - one might ask, why now? Why do we need to renew this dialog around ethics for insights?

There are three major forces and shifts converging that make the dialog essential.

  1. WHO: Expansion in who’s involved in insights work.
  2. WHAT: Exponential growth in the data we put into our insights systems.
  3. WHY: A series of breakthroughs in what insights can teach us.

Let’s take them on, one at a time.


WHO: The number of people involved in insights work is expanding.

There’s an evolution going on inside the insights functions of major organizations. There are changes to both who DOES the work of insights and who USES the work of insights.

When they were first established, insights functions were mostly market research divisions inside of marketing departments. Their goal was to improve advertising effectiveness.

Insights staff worked closely together under the same bosses. The members of these teams shared common academic and professional backgrounds. They used similar methodologies. They shared a common language.

Today, insights influence everything from tactical decisions about pricing to strategic decisions about product design and new market entry. That makes the insights function responsible for the complete customer experience.

There are more professionals working on insights than ever before. They're everywhere in an organization where data is collected and models or algorithms are created: financial analysts, marketing analysts, operations analysts, data scientists, enterprise sales, product marketers, UX designers, user researchers, R&D engineers, statisticians, customer success managers, customer service specialists...

Note the diversity of the educational and professional backgrounds. These folks come from the social sciences, engineering, computer science, mathematics, management and the arts.

There’s much to be excited about with increased diversity in insights. But critical to the idea of ethics, it’s also true that diversity increases the potential for conflict about how companies should consider and reckon with questions of “right” and “wrong.” What should we be able to do, collect, create and act on? How do we debate our responsibility?

A group of professional managers who studied business ethics in their MBA programs might share a single common language and rule set. But a mathematician, a social scientist and a product designer are less likely to share a common problem-solving approach.

Beyond managing these different perspectives, it is rare to find a common venue within a company for conversations about responsibility in insights. Data scientists and engineers often report to the Chief Technology Officer. Social scientists might work for the Chief Marketing Officer. And cybersecurity pros might report to a Chief Security Officer.

Unfortunately, the result is that only two main questions tend to bridge across these different teams: Is it possible? Is it profitable?

For a long time, startups and established organizations have found these two questions to be adequate for most benign decisions that businesses make around insights. But now, there are two other forces that bring this foundational business framework to its limits.


WHAT: The data we put into our insights systems is growing exponentially.

From the beginning, insights functions tracked research data like censuses, surveys, focus groups, SEC filings and press releases. Later, when loyalty programs and CRMs added more personalized information, insights groups began seeking third-party data sources to add to their data pools. It was a lot to manage. But it was nothing compared to today.

Most people carry connected devices with them at all times. More communication, shopping and living takes place through digital channels. More sensors and cameras increase the types of data that can be collected. As data storage gets ever cheaper, more of this data gets stored for later use. And more companies are relying on this data - and data sharing agreements - to support their business models.

The charts starting on page 177 of Mary Meeker’s annual Internet Trends report show the trend very simply - the scale of data just keeps going up. And it keeps growing at faster rates.

With this new data come new complexities for responsibility: the real-time location of individuals; the movement of groups of people; a person’s physical and mental health; communications between friends and lovers; unspoken interests explored through searches.

This data isn’t just collected periodically, like each time a retail customer walks into a store. It’s collected persistently, in real time, in microscopic detail.
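To see what "microscopic detail" means in practice, consider what a single, hypothetical telemetry event from a mobile app might carry. Every field name here is invented for illustration; real schemas vary by company:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# A hypothetical telemetry event - the field names are illustrative, not any
# particular company's schema. Note how much one event reveals on its own,
# and how much more a persistent stream of them reveals together.
@dataclass
class AppEvent:
    device_id: str        # stable identifier, links events across sessions
    timestamp: datetime   # precise moment of the action
    latitude: float       # real-time location...
    longitude: float      # ...down to a few meters
    screen: str           # what the user was looking at
    action: str           # what the user did

event = AppEvent(
    device_id="a1b2-c3d4",
    timestamp=datetime.now(timezone.utc),
    latitude=37.7749, longitude=-122.4194,
    screen="product_detail", action="add_to_cart",
)
print(asdict(event))
```

One such event is mundane. Millions of them, joined across days and services, are a portrait of a life - which is where the responsibility questions begin.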


WHY: What we can learn and predict is evolving beyond human capacity.

Meanwhile, a customer-centricity movement is reaching maturity. After generations in which other approaches dominated corporate decision making, it is now standard practice in many functions to organize around the customer’s needs.

Design teams have become user-centered design groups. Inside sales has evolved into a customer success function. Chief Customer Officers and customer experience leaders are popping up across Fortune 500 companies, federal agencies, local governments and startups.

This is something to celebrate! It's evolved thinking for organizations that were traditionally more product-oriented or sales-oriented. It adds “what does the customer want?” to “is it possible?” and “is it profitable?” to create a richer framework to guide decisions.

Gather customer data and feedback from many sources. Generate insights. Create great experiences for specific customers (like Amazon recommendations). Spread a real understanding of customer needs from the front line to the boardroom. Every function - advertising, technology, design, customer service - can take advantage of these insights.
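As a concrete (and deliberately toy) example of that loop, here is the co-occurrence idea behind "customers who bought X also bought Y" recommendations. This sketches the general technique, not Amazon's actual system, and the data is invented:

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase baskets - invented data, illustrative only.
baskets = [
    {"coffee", "filter", "mug"},
    {"coffee", "filter"},
    {"coffee", "mug"},
    {"tea", "mug"},
]

# Count how often each pair of items is bought together.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

def recommend(item, top_n=3):
    """Recommend the items most often co-purchased with `item`."""
    scores = Counter()
    for (a, b), count in pair_counts.items():
        if item == a:
            scores[b] += count
        elif item == b:
            scores[a] += count
    return [other for other, _ in scores.most_common(top_n)]

print(recommend("coffee"))  # e.g. ['filter', 'mug']
```

Even this toy version makes the ethical stakes visible: the same counting machinery that suggests coffee filters can just as easily infer and act on far more sensitive patterns.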

Yet a “the customer is always right” mindset will likely prove inadequate for the next level of insights we generate. Yuval Noah Harari explores these limits in 21 Lessons for the 21st Century. We like to think of customers (and ourselves) as rational, but they're often not. Emotion and biochemistry play critical roles.

When customers keep using a service like Facebook day in and day out, is it because they like it and value the connections they’re making? Is it because it’s a simple joy and some mindless entertainment? Or are they becoming addicted and we just don’t know it?

The next generation of insights platforms will reach beyond rational customer decision making to tap into emotion and biochemistry in ways that far exceed what’s ever been possible. That presents new types of ethical questions we have to consider. At what point does using these insights start to look like "hacking" humanity? How should we use artificial intelligence that can generate useful insights we can't even understand?

Take Google’s AlphaZero. As Harari describes, this AI program recently taught itself how to play chess. It beat established programs built on centuries of accumulated human chess knowledge. It started from zero knowledge of the game and taught itself to beat the best - in four hours.

In scientific and academic circles, there’s enough concern about these developments in insights that efforts are organizing to build XAI - “explainable AI.” If we need an entire field called “explainable AI,” you know we’re entering sketchy territory for insights.
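For a flavor of what explainable-AI work looks like on the ground, here is a minimal sketch using scikit-learn's permutation importance: train an opaque model, then measure how much each input actually drives its predictions. The data is synthetic and the setup is illustrative, not a full XAI program:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for real customer features (illustrative only).
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An opaque model: accurate, but its internal logic is hard to inspect.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's accuracy drops - a simple, model-agnostic way to see which
# inputs the predictions actually depend on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} +/- {result.importances_std[i]:.3f}")
```

Techniques like this tell us which inputs matter, but not whether it was right to use them - which is precisely the gap an ethics for insights needs to fill.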

Customers appreciate the ways our insights improve their experiences. They accept there are conditions involved - like sharing information. But is there a limit? Will they be able to articulate that limit to us? How? Just because they continue to use services as the insights expand in reach doesn’t mean they are doing so with clear awareness or consideration.


Where do we go from here?

Is it possible? Is it profitable? What does the customer want? Most importantly, what is our responsibility?

These are the tough decisions that insights professionals, leaders, employers and regulators all need to wrestle with. But fear not! These challenges are not infinite or inaccessible.

Too often, we don’t get started wrestling with the biggest issues because we have a hard time scoping or defining the challenge. If we think too narrowly, we miss interconnections with other issues. If we think of it too broadly, we can’t organize many people together to get working on it. And if we think of it too abstractly, we can’t actually go build solutions.

When you really dig into it, there are five basic needs for us to solve together: privacy, protection, ownership, prediction and preference. There's a fantastic body of effort around many of these issues individually. Some have more traction than others. What's most needed is a collective effort among insights leaders and our stakeholders to build on that momentum across this entire set of needs.

This has been an introduction to what's going on in insights and what responsibilities are around the corner. In the next post, we'll dig into each of these questions. We'll also see what lessons insights leaders can learn from other fields that have wrestled with professional responsibility. And we'll look to another grassroots effort - Creative Commons - to see how it became a movement with traction at the level of scale we need to be shooting for.


The Fourth and Inches comic above was originally published on 2/26/02 by Tom Keeley in Notre Dame's daily newspaper, The Observer. Copyright Tom Keeley. Re-posted with permission from the artist.

Kathy Baxter

VP / Principal Architect, Responsible AI & Tech at Salesforce

When you refer to "insights" do you mean "inferences" that are made about an individual? These words are often used interchangeably but we should be specific. For example, a bank can see if a customer is using their credit card to pay for gas at the pump or inside the store. There is a high correlation between paying for gas inside the store & buying cigarettes or lottery tickets. Some research shows a high correlation between smoking & high loan default, as well as gambling & high loan default. So the bank *infers* the customer is a smoker and/or gambler and denies their home loan request. Lots of inferences are made about people by companies like Google using their DoubleClick cookie. Is it *fair* to deny loans to smokers or people who buy lottery tickets? What if the inference is WRONG? How many times have each of us been forced to pay for gas inside the store because the credit card reader on the pump was broken, or because we were concerned the scanner wasn't legit? There is no law (including GDPR) that requires companies to reveal the inferences they have made about individuals or to correct them when wrong. When we ask "What does the customer want?" the answer in this case would be "to understand what you have collected about me (insights), what you have concluded (inferences), how that information is used, and to have a means for redress & remediation when it is wrong & harms me."
