LinkedIn Live Recap: Conversations about AI with Jordan Levine and Cortnie Abercrombie

I'm thrilled that thousands of folks worldwide watched Season 1 of my LinkedIn streamcast series “Expect the Unexpected: AI and Bias, the Boardroom, Blockchain and Business.” Season 2, “Under the Hot Rod Hood: The Data Science of AI,” starts on April 20! To give you a taste of what's on tap, here's a recap of my Season 1 conversations with Jordan Levine, Partner at Dynamic Ideas LLC and MIT lecturer, and Cortnie Abercrombie, CEO and Founder of AI Truth. The recap was put together by a member of my team and is written in the third person.

“Is AI Biased?” A Conversation with Jordan Levine

As an AI practitioner and educator teaching graduate students at the Massachusetts Institute of Technology, Jordan brought an incisive pragmatism to our LinkedIn Live conversation. His “brass tacks” approach to thwarting AI bias has three steps:

Step 1: The business must accept accountability: “Accountability for AI lies with the business’ decision maker, such as the leader with profit and loss (P&L) responsibility,” Jordan said. He noted the dissonance of this statement with the current state of Responsible AI, in which “43% of [survey] respondents say they have no responsibilities beyond meeting regulatory compliance to ethically manage AI systems whose decisions may indirectly affect people's livelihoods – i.e. audience segmentation models, facial recognition models, recommendation systems.” “That’s not the way the media sees it,” he said.

“The P&L owner is accountable for the AI decisions his or her business makes.”

Scott agreed. “Driving that conversation is important because, regardless of regulation, there’s accountability. Boards of Directors have a responsibility to make sure their companies have, and adhere to, a code of standards around model development that dictates how it’s done, uniformly. That code of governance spells out the roles and responsibilities of business owners and data scientists.” Scott added, “Blockchain is one of my favorite technologies for AI governance because it drives accountability, which is a big part of AI ethics.”
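To make that idea concrete, here is a minimal sketch, in Python, of the core mechanism behind blockchain-based AI governance: a hash-chained, tamper-evident audit trail of model-development decisions. The class, event fields, and actors are illustrative assumptions, not FICO's actual implementation.

```python
# Illustrative sketch only: a hash chain over governance events, the core
# idea behind blockchain-based model-development audit trails.
import hashlib
import json
import time

class ModelAuditLog:
    def __init__(self):
        self.entries = []

    def record(self, event: dict) -> None:
        """Append a governance event, chaining it to the previous entry's hash."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = {"event": event, "prev_hash": prev_hash, "ts": time.time()}
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**payload, "hash": digest})

    def verify(self) -> bool:
        """Recompute every hash; any edited or deleted entry breaks the chain."""
        prev_hash = "0" * 64
        for entry in self.entries:
            payload = {"event": entry["event"],
                       "prev_hash": entry["prev_hash"], "ts": entry["ts"]}
            recomputed = hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != prev_hash or recomputed != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

# Hypothetical governance events, for illustration.
log = ModelAuditLog()
log.record({"actor": "data_scientist", "action": "trained model v1.2", "auc": 0.81})
log.record({"actor": "p_and_l_owner", "action": "approved model v1.2 for production"})
assert log.verify()
```

Because each entry's hash covers the previous entry's hash, silently editing any recorded decision invalidates everything after it, which is what makes the accountability trail trustworthy.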

Step 2: Determine which known biases exist: Jordan listed his “big four” biases as correlation bias, representation bias, measurement bias and disenfranchisement bias. “We need to get really crisp on specific bias issues, and then think about tools that exist to address them,” Jordan said. “To me, that’s the way forward, to take a complex and high-level discussion of bias and make it actionable.”
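As one illustration of making bias actionable, here is a minimal sketch, assuming Python and pandas, of a check for one of Jordan's big four, representation bias: comparing each group's share of the training data against its share of the population the model will score. The column name, shares, and tolerance are hypothetical.

```python
# Illustrative check for representation bias: does each group's share of
# the training sample roughly match its share of the scored population?
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str,
                          population_shares: dict,
                          tolerance: float = 0.05) -> pd.DataFrame:
    """Flag groups whose training-data share drifts below the population
    share by more than `tolerance`."""
    train_shares = df[group_col].value_counts(normalize=True)
    rows = []
    for group, pop_share in population_shares.items():
        train_share = float(train_shares.get(group, 0.0))
        rows.append({
            "group": group,
            "train_share": round(train_share, 3),
            "population_share": pop_share,
            "under_represented": train_share < pop_share - tolerance,
        })
    return pd.DataFrame(rows)

# Toy data: group B is half the population but a sliver of the sample.
df = pd.DataFrame({"group": ["A"] * 90 + ["B"] * 10})
print(representation_report(df, "group", {"A": 0.5, "B": 0.5}))
```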

Step 3: Apply data science tools to address bias: Jordan cited Scott’s data science blogs as providing excellent discussion of key data science tools, essential weaponry in fighting bias. These include monotonicity, palatability and observability.
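As a hedged example of one of those tools, the sketch below enforces monotonicity using scikit-learn's monotonic constraints (the `monotonic_cst` parameter of its histogram gradient boosting models); the feature names and the direction of the constraint are illustrative assumptions, not a prescription from Scott's blogs.

```python
# Illustrative monotonicity constraint: force predicted risk to never
# decrease as utilization rises, no matter what noise is in the data.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(1000, 2))   # col 0: utilization, col 1: tenure
y = (X[:, 0] + 0.2 * rng.normal(size=1000) > 0.5).astype(int)

# monotonic_cst: +1 forces a non-decreasing effect, -1 non-increasing, 0 free.
model = HistGradientBoostingClassifier(monotonic_cst=[1, 0]).fit(X, y)

# Sweep utilization with tenure held fixed; the score only moves one way.
grid = np.column_stack([np.linspace(0, 1, 11), np.full(11, 0.5)])
probs = model.predict_proba(grid)[:, 1]
assert np.all(np.diff(probs) >= -1e-9)
```

A constraint like this trades a little raw accuracy for a relationship the business can defend, which is exactly the palatability point.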

Otherwise, AI risks embodying one of tech’s enduring tropes: “garbage in, garbage out.” “AI is a representation of the data that’s fed to it,” Scott said, and asked Jordan, “How prevalent do you think that [GIGO] is today? Who within organizations is actively thinking about the care and feeding of AI models? Is it in everyone’s purview, given how important AI is?”

Jordan replied, “Garbage in, garbage out persists today. But I would assert that it’s quite addressable; we all have the technology tools to do so. The gap is in understanding between business teams and analytics teams; a tech team can rapidly generate univariate plots, for example, but before going to the model phase a member of the business team needs to review those plots with a red pen to determine what makes sense and what doesn’t.” His answer added further dimension to his assertion that business leaders need to own ultimate responsibility for AI.
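A minimal sketch of that “red pen” workflow, assuming Python with pandas and matplotlib: generate one univariate plot per feature and hand the image to a business reviewer before any modeling begins. The feature names and the deliberately bad values are illustrative.

```python
# Illustrative "red pen" step: one histogram per feature for business review.
import matplotlib.pyplot as plt
import pandas as pd

def univariate_review(df: pd.DataFrame,
                      out_path: str = "univariate_review.png") -> None:
    """Save one histogram per column; the business team reviews the image."""
    cols = df.columns
    fig, axes = plt.subplots(1, len(cols), figsize=(4 * len(cols), 3))
    for ax, col in zip(axes, cols):
        df[col].hist(ax=ax, bins=30)
        ax.set_title(col)
    fig.tight_layout()
    fig.savefig(out_path)

df = pd.DataFrame({
    "credit_utilization": [0.2, 0.4, 0.9, 1.7],  # >1.0 merits a red pen
    "months_on_book": [12, 48, -3, 60],          # negative tenure is garbage in
})
univariate_review(df)
```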

You can watch Scott’s entire interview with Jordan here.

“What Is Responsible AI? Robust, Explainable, Ethical, Efficient” A Conversation with Cortnie Abercrombie

Cortnie’s 11-year stint in AI at IBM, as well as her experience as a founding editorial board member of the AI and Ethics journal, gives her a uniquely broad perspective on the evolution of Responsible AI. In her conversation with Scott, Cortnie reflected on the past, present and future of Responsible AI.

Cortnie said that because there are “AI ‘pods’ within companies, analytic city-states,” it’s hard to institute a standard around Responsible AI, to provide “strong corporate governance around how we do checks and balances around AI.”

Perhaps surprisingly, she thinks that the US government could help with governance frameworks and auditing procedures. “The Joint Artificial Intelligence Center at the Department of Defense is trying to lead by example,” Cortnie said. “But I think a bigger challenge is how much do people actually know? Do our legislators know enough to understand what kinds of laws and regulations should be passed?” She noted that while Europe’s General Data Protection Regulation (GDPR) has been in force for several years, only two states, New York and California, have passed similar laws.

“The environment of AI is like much of the tech industry: ‘move fast and break things,’ and Agile [development] is an unspoken norm; companies expect to see something from their AI teams in six to eight weeks. That’s where 90% of what goes wrong is around data,” Cortnie said, to which Scott quipped, “Data is a liability that sometimes provides some value.” Encouragingly, though, Cortnie noted that “there are conversations occurring in California about risk frameworks.”

What about self-regulation on an industry basis? “I’m very pleased to see the IEEE 7000 standard talking about ethics for developing AI systems,” Scott said, asking Cortnie, “Do you see hope in terms of standards for industries?”

Cortnie had two answers: “It’s complex because, first, how new is the industry we’re talking about, such as self-driving cars? In comparison, financial services is very well understood. Second, how high are the stakes? The stakes of self-driving car safety are obviously very high. So I hope a new industry like self-driving cars will adopt self-regulation but I don’t have a lot of faith because it is changing so rapidly.”

“I am proud of the level of maturity in self-regulation in financial services, and hope to see more industries engage in sharing at this level,” Scott agreed, asking Cortnie, “Where do you see maturity?” She believes “anything that gets automation applied to it – such as robotic process automation – and anytime you want to take humans out of the equation” is ripe for scrutiny and thus maturity. “People want to know, ‘What will this thing do when I set it loose and it’s learning?’

“We are talking about ML capabilities, a close cousin to predictive analytics, on steroids,” she continued. “Most companies are immature with their AI and ML capabilities – but we’re all trying!”

You can watch Scott’s entire interview with Cortnie here.

Season 2: Under the Hot Rod Hood: The Data Science of AI

Scott’s Season 2 of LinkedIn Live kicks off on April 20 with Agus Sudjianto, EVP and Head of Corporate Risk Modeling at Wells Fargo. Register today to be part of their conversation, “Breaking Down the 'Black Box' of AI with Interpretable Models.”

Follow Scott on Twitter @ScottZoldi and LinkedIn to keep up with his latest thoughts on Responsible AI.
