Own the Unknown with Olivia Gambelin

Welcome to the next issue of Further's Own the Unknown LinkedIn newsletter. It's time to introduce you to a new thought leader. Twice monthly, we'll share the knowledge we've gained from following, reading, and interviewing some of the most insightful and influential thought leaders on LinkedIn.

This month, we’ll be discussing the thought leadership of Olivia Gambelin, author and AI ethicist. One of the first movers in Responsible AI, Olivia is a world-renowned expert in AI ethics and product innovation whose experience with ethics-by-design has empowered hundreds of business leaders to achieve their desired impact on the cutting edge of AI development. She works directly with product teams to drive AI innovation through human value alignment, and with executive teams on the operational and strategic development of Responsible AI.

Responsible AI

Olivia is the author of Responsible AI: Implement an Ethical Approach in Your Organization, which guides readers step by step through establishing robust yet manageable ethical AI initiatives. As she describes it, this is a very practical process grounded in applied ethics. From the book:

The route to successful AI seems deceptively clear. All that is needed is quality data and a strong algorithm developed for either productization in a defined market or deployment in a compelling internal use case. As long as your technology is sound and your business case is thoroughly developed, the major blockers seem to have been addressed and you would expect to see a high rate of success.

As consultants at Further, we know it is never as easy as that. The process of helping a client define a project and identify its goals has an undeniably subjective quality. There are many threats to successful AI deployment, a topic we’ve discussed with virtually every thought leader in this series, but Olivia counts failing to address Responsible AI among the biggest.

As any true leader in AI will know, the success of an AI project depends at its core on the foundational values and responsible practices it has been built on. In other words, you can’t have sustainable AI success if you do not have Responsible AI and Ethics.

The Subjective Quality of Ethics

Something we’ll be certain to ask her about is the perceived subjectivity of ethics, which can be a barrier to embracing Responsible AI: the objection that it is “only” subjective. She makes a powerful counterpoint: data science and AI projects are, in general, subjective too. Any skilled consultant knows this, and it becomes a challenge when a client wants to postpone discussing how to measure value until after a prototype is built. That discussion can’t be postponed, because even model effectiveness has to be measured against the model’s purpose. Values in ethics are much the same.

“Ethics is subjective” is a common objection you will encounter to the viability of ethics as a reliable tool in AI practices. By highlighting the contrasting nature of subjective versus objective factors, what this objection is attempting to do is undermine the validity of decisions based on ethical reasoning. A common belief that has only been reinforced with the rise of modern technology is that something is true if and only if it is based on objective facts.
Simply put, ethics is just as subjective as data science. And if we accept the validity of data science despite its elements of subjectivity, then why should we not do the same for ethics? Although it may seem counterintuitive at first, data science does in fact have elements of subjective reasoning that are comparable to those in ethics. Data science is based on objective data points, however it is in the evaluation metrics that we find the elements of subjectivity. When it comes to evaluating for success in data science, there are three primary metrics to choose from: accuracy, precision, and recall.
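To see why the choice of metric is itself a value judgment, consider a quick sketch in Python (our own illustration with invented labels, not an example from the book). On an imbalanced problem, the same set of predictions earns three very different scores:

```python
# Illustrative only: invented labels for a rare-positive problem
# (e.g., 1 = fraud, 0 = legitimate).
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 1, 1, 0, 0]  # one false alarm, two missed positives

print(f"accuracy:  {accuracy_score(y_true, y_pred):.2f}")   # 0.70
print(f"precision: {precision_score(y_true, y_pred):.2f}")  # 0.50
print(f"recall:    {recall_score(y_true, y_pred):.2f}")     # 0.33
```

Whether this model counts as a success depends on which number you privilege, and choosing recall over precision amounts to deciding that missed positives cost more than false alarms. That choice is exactly the kind of value judgment Olivia is describing.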

The Techno-Value Blindspot

Olivia has systematized her approach using a tool called the Values Canvas. By working through it with clients, she aims to avoid the Techno-Value Blindspot. Responsible AI is central to our practice at Further, too; in fact, we have two full-time AI Strategists. So we’ll be sure to ask Olivia to walk us through her approach and the components of the Values Canvas.

Here’s how she describes the Techno-Value Blindspot:

The techno-value blindspot occurs when you treat an ethics problem like a technical problem and only focus on developing a technical solution to fix things. When you are experiencing an ethics problem with your AI, logically you will be tempted to focus your attention on the AI system and develop a technical solution in the hopes of fixing the ethics problem. However, these technical problems are only the symptoms of the deeper ethics problems you face.

In order to avoid this blindspot, Olivia recommends examining the Three Pillars of AI and using The Values Canvas.

The Pillars are:

  • People: “Who is building your AI?”
  • Process: “How is the AI being built?”
  • Technology: “What AI are you building?”

The end goal of Responsible AI is to create AI systems that reflect our values, and as we have been discussing in this chapter, that must be built on three different pillars in order to be successful. The Values Canvas enables you to visualize all three pillars to your Responsible AI strategy simultaneously, providing a high-level comprehensive view over the details necessary to execute ethical decision making at scale and align your technology with your values.

Olivia has provided a link to a description, including a blank Values Canvas that you can explore. We’ll be sure to ask her what working through this process with clients has taught her about successful AI projects.
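To make the three-pillar framing concrete, here is a minimal sketch of how a team might capture canvas-style notes in code. Beyond the three guiding questions quoted above, every field name here is our own invention; it does not reproduce the layout of the actual Values Canvas.

```python
# A hypothetical, simplified structure for organizing Responsible AI notes
# along the three pillars. Field names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class PillarNotes:
    guiding_question: str
    observations: list[str] = field(default_factory=list)

@dataclass
class ValuesReview:
    people: PillarNotes      # "Who is building your AI?"
    process: PillarNotes     # "How is the AI being built?"
    technology: PillarNotes  # "What AI are you building?"

review = ValuesReview(
    people=PillarNotes("Who is building your AI?"),
    process=PillarNotes("How is the AI being built?"),
    technology=PillarNotes("What AI are you building?"),
)
review.people.observations.append("Ethics training planned for the product team")
```

The point of any such structure is the one Olivia makes above: all three pillars stay visible at once, so a gap in people or process is as conspicuous as a gap in the technology itself.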

Looking Ahead

Olivia splits her year between San Francisco, where she is an active member of Silicon Valley’s ecosystem of startups and investors, and Brussels, where she advises on AI policy and regulation. Don’t miss our conversation with Olivia on February 12th.

There are more opportunities to learn from the Further team coming up soon. Nicolas Decavel-Bueff will be teaching his masterclass at TDWI Vegas. Cal Al-Dhubaib will be speaking at the upcoming ODSC event in Boston. And Jason Tabeling will be presenting Understanding Your Brand Presence in the Age of AI on February 18th. Finally, Brent Schneeman from PMG and Further’s Lauren Burke-McCarthy will be speaking at the upcoming Data Science Salon.

Keith McCormick

Teaching over a million learners about machine learning, statistics, and Artificial Intelligence (AI) | Data Science Principal at Further

Great conversation today with Olivia. Here is the recording: https://www.dhirubhai.net/events/7293685311186845696/comments/

Keith McCormick

Olivia's book is filled with insights that resonate with me as a consultant. Responsible AI is central to what we do at Further, so I'm particularly looking forward to this conversation. I hope you'll join us.
