In conversation with Olivia Gambelin
In this issue of Further's Own the Unknown LinkedIn newsletter, we highlight the conversation between Olivia Gambelin and Further’s Keith McCormick, recorded on February 12th. The conversation used Olivia’s book, Responsible AI, as a starting point but covered diverse topics, from Olivia’s training as an ethicist to her work with clients.
Feeling pressure to keep up with the perceived progress of competitors
The conversation began with a discussion about what her clients have been struggling with and how they find her. Keith asked specifically if they were feeling pressure to “catch up.”
We still have a lot of companies even just trying to figure out use cases. So you may feel like you’re behind, because a lot of the marketing is pointing towards, ‘We’re doing all these things with AI, AI, AI.’ But in reality, the actual application and option-building of these systems is far further behind than you would feel just reading LinkedIn feeds.
Organizations are dealing with that pressure to catch up, yet at the same time they feel stuck. They don’t know where to begin. Olivia commented on what she observes when she sits down with clients and what they need help with.
"Well, lately, it feels like I am helping people that are both stuck on AI ethics, responsible AI, but also AI in general. And honestly, I would say it’s a couple of different factors that have led to that feeling of almost frustration or inertia—of ‘Where do I even move forward from here?’
On the AI side, we’re at a point in time where AI can impact every single aspect of a business. And so when you can do everything, you get stuck. You sit there and go, ‘Well, what do I need to do?’ And so that’s a lot of the inertia that I find.
But then, on the responsible AI and ethics side, it comes down to this fear of, ‘I need to be perfect from the start. If we don’t have this perfect right away, if we don’t know exactly what we’re doing right away, then we are opening ourselves up to liabilities.’ It’s like zero or 100—there’s no in-between at times. And so, again, that creates this inertia to do anything out of fear. And so I’m coming in and saying, ‘Okay, you actually start at zero and work your way up. It doesn’t have to be at 100 right away.’"
Olivia's comments regarding this were also featured in a recent video clip.
Is AI Ethics subjective?
A fascinating observation in Responsible AI is that the perceived “subjectivity” of AI Ethics can itself become a roadblock to taking action. Olivia brilliantly points out that all data science and AI projects have subjective elements.
When we’re looking at the field of data science, let’s use something as simple as data labeling. There’s a lot of subjectivity in there, even though it may seem like, ‘Oh, we have our source of truth, we’re labeling the data, done, there’s no subjectivity there. That is the objective truth.’
Another layer of subjectivity within data science is that you can pick what kind of metrics of success you’re going for. Let’s say you choose accuracy—well, that is a subjective choice. Yes, you may have reasoning for why you chose that, but it is still a subjective choice that may not have been the same choice another person would make.
Carrying this into the field of ethics, it works in the same way. We have a strong understanding, say, of privacy, but that concept of privacy may change depending on the context that it’s being operated in. It doesn’t mean that the principle itself is subjective—it means that its application requires context. And that’s where ethics and data science are actually very similar.
Facilitating the critical discovery process with clients
Olivia’s initial engagement activity with clients closely mirrors Further’s in many ways. At Further, Data Science and AI projects begin with a Discovery and Design engagement. Olivia uses her Values Canvas at this stage in her client interactions. Keith asked Olivia who should be in the room while working through the Values Canvas.
I’m a classic consultant—it depends. It depends specifically on where it’s being applied.
If you’re using the Values Canvas to create an overarching strategy—essentially the responsible AI foundations to an AI strategy—then you need leadership and, quite literally, every head of a department in that room. Because it touches on all different departments and all different aspects within the business. You need leadership in the room there.
Now, let’s say, though, you are working on a specific project, and you're trying to embed a specific value into a product. Then, you're working with the product team. And there, you need a designer, you need an engineer, and you need your product manager, at least, in the room, handling it from there.
But you know, a lot of times, something starts in one line of business—let’s say marketing, for instance. If they’re developing something like a chatbot, leadership outside of marketing might see it as just ‘their project.’ But in reality, you also need representation from customer support because they’re the ones who will hear firsthand when the chatbot isn’t meeting users’ needs. And you need IT because they’ll be responsible for maintaining it.
So, the key is thinking beyond just the immediate team. You need the decision-makers who shape the outcome and the people who will be directly affected by it in the room from the start.
Further’s Cal Al-Dhubaib was recently on the very influential Super Data Science Podcast discussing his career, where he describes the origin of the Discovery and Design engagement and why it is so important.
AI Ethics was also a topic that we discussed with Matthew Lungren, MD, MPH, back in January.
Coming Soon
Further’s Head of Solutions, Jason Tabeling, will join PMG colleagues Matt Allfrey and Sam Callendar to discuss Further’s Presence Score on March 4th.
On March 15th, Krista Bowman, who heads up Further’s healthcare practice, will be speaking at Google International Women’s Day (IWD) 2025. Check out the full line-up and more details here.
And on May 13-15, Cal will be returning to ODSC East in Boston.