The Human Side of Data
Notre Dame's Technology Ethics Center hosts an event in San Mateo, CA. Thursday February 27, 5:30-7:30pm. Interested? Come!

There are increasingly tough decisions that our insights professionals, leaders, employers and regulators are wrestling with.

I wrote a while back about The Ethics of Insights: How Facebook and Congress point toward an emerging responsibility for insights leadership in innovation. And I revisited it at the end of 2019 with pride that we’re seeing the change get traction: A Groundswell Moment for the Ethics of Data. 

Next week, Notre Dame’s Technology Ethics Center is hosting an event at Jump Associates in San Mateo to keep this dialog moving forward: The Human Side of Data.

We’ll be discussing how future-focused leaders and organizations are navigating the tension between "Doing Good" and "Do No Harm" when considering the great potential that new approaches to data and insights offer. The opportunities we'll consider range from new product innovation and customer experience to very personal opportunities in health, learning, relationship building and human flourishing.

As I look at it, there are two lenses on a dialog around the human side of data. One is about what data can do for us - and the risks of what it can do to us. The second is just as fascinating and human: how do we as individual people and leaders, in large organizations with new types of tools at our fingertips, coordinate and make decisions about what to do? Decision making is a very human responsibility.

Too often, we don’t get started wrestling with the biggest issues of our time because we have a hard time scoping or defining the challenge. If we think of the problem too narrowly, we miss the interconnections with other issues. If we think of it too broadly, we can’t organize many people together to get working on it. And if we think of it too abstractly, we can’t actually go build solutions.

From the perspective of insights leaders, we’re homing in on five basic questions we can gather around: privacy, protection, ownership, prediction and preference. And they’re not just questions - with a more human lens, we might call them need spaces.

There's a fantastic body of effort and thinking around many of these issues and needs individually. Some have more traction than others in offering insights leaders and our stakeholders some momentum forward.

My previous posts were an introduction to what's going on in insights and what's beginning to gain traction, now. In this post - and likely at our event next week - we'll begin digging into what people need from our insights and how future-focused leaders across large companies, startups and policy-makers have begun making these decisions for their organizations.


Five insights need spaces - and the decisions we make.

  1. Privacy: How do we make decisions about the degree to which the real people behind the data and insights expect, deserve or have an inalienable right to remain anonymous and unknown to others - including researchers, designers, programmers, executives, malevolent actors and the general public?
  2. Protection: How do we make decisions about securing the data and insights we work with to ensure that the rights and expectations of the real people who are our customers and stakeholders are fully respected by us, our partners and even entities who don’t hold the same ethics we choose?
  3. Property (aka “ownership”): How do we make decisions about the legal and exclusivity rights to possession and reproduction of data, insights and the related or derivative works that rely on those elements?
  4. Prediction: How do we make decisions about the ways our inferences and insights can or should be used to look into the future and present a judgment about what might be possible or likely to occur, what’s the role of transparency, and under what norms and rules do we choose to act on predictions?
  5. Preference: How do we make decisions about how we deploy limited resources like our focus, attention, energy, time, investment, marketing and advocacy in generating insights and acting on them?


Of these five need spaces - and the questions that we're faced with - three of them (Privacy, Protection and Property) get the most airplay right now. The dialog is good - even if the coordinated approaches aren’t there yet.

The challenge of Preference is beginning to get some real attention. Many of last year’s hearings between Facebook, Google, Twitter and Congress focused on potential civil rights and bias issues for our most massively adopted technologies.

The challenge of Prediction is probably the least often considered as an ethical issue. At best, it’s typically considered a question of quality control – how well are our algorithms designed? Are they doing what we want them to do? 

It’s just as critical for our teams to be considering boundaries around prediction - even when managing quality. As the underlying technologies and data sets gain more scale, it’s Prediction that could begin to affect people’s choices. It’s not that much of a stretch to imagine freedom and self-control at stake. In fact, this is the central issue that thought leaders like Yuval Harari and Tristan Harris have been raising within the movement.


Setting the dialog.

I'm looking forward to next week's discussion because these are decisions that our entrepreneurs and our insights leaders are guiding their organizations through every day. The people that our work serves are paying more and more attention, and with increased specificity. Just as one example, here's an embed from Google Trends that shows how search terms related to these questions have been growing across the globe.

<script type="text/javascript" src="https://ssl.gstatic.com/trends_nrtr/2051_RC11/embed_loader.js"></script> <script type="text/javascript"> trends.embed.renderExploreWidget("TIMESERIES", {"comparisonItem":[{"keyword":"data ethics","geo":"","time":"today 5-y"},{"keyword":"ai ethics","geo":"","time":"today 5-y"}],"category":0,"property":""}, {"exploreQuery":"date=today%205-y&q=data%20ethics,ai%20ethics","guestPath":"https://trends.google.com:443/trends/embed/"}); </script>

Queries about data ethics have been growing. And queries about AI ethics have been growing even faster.

What I love - and it's highly relevant to the "doing good" side of our discussion - is that queries about human flourishing have been growing over the exact same period.

<script type="text/javascript" src="https://ssl.gstatic.com/trends_nrtr/2051_RC11/embed_loader.js"></script> <script type="text/javascript"> trends.embed.renderExploreWidget("TIMESERIES", {"comparisonItem":[{"keyword":"data ethics","geo":"","time":"today 5-y"},{"keyword":"ai ethics","geo":"","time":"today 5-y"},{"keyword":"human flourishing","geo":"","time":"today 5-y"}],"category":0,"property":""}, {"exploreQuery":"date=today%205-y&q=data%20ethics,ai%20ethics,human%20flourishing","guestPath":"https://trends.google.com:443/trends/embed/"}); </script>

And it's that intersection between "Doing Good" and "Do No Harm" that Mark McKenna from Notre Dame's Technology Ethics Center will be guiding our panelists through next week: how we pair optimism and potential for impact with decision making about risk when we're considering these needs that people have, how we're making these decisions for ourselves, and how we're building future-focused organizations that approach these questions in more collective ways.

If you're in the Bay Area and free on Thursday, February 27th, please consider joining the discussion.

Details of the event and the RSVP form are here: https://techethics.nd.edu/human
