Does Explainable AI Matter?

Cognilytica Insights

Explainable AI: An Unattainable Goal?

Trustworthy AI is like an onion (or a parfait, if you like). There are many layers that go into making AI trustworthy. It's not one ingredient that addresses a single aspect of Trustworthy AI, but rather many different aspects that cover the five main areas we are concerned about:

  • Ethical AI: Making sure AI systems aren't causing harm or broader societal issues.
  • Responsible AI: Just because you can do something doesn’t mean you should.
  • Transparent AI: Giving people visibility into how AI systems are built and operated, including aspects of disclosure and consent, increases trust.
  • Governed AI: Keeping the human in the loop and providing processes and procedures for AI system management and maintenance.
  • Explainable AI: Making the “black boxes” of AI less opaque.

Note: If you're not familiar with these layers of Trustworthy AI, you should take our free Intro to Trustworthy AI course, where we cover these aspects.

See that last layer down there: Explainable AI? That one is very important. After all, how can you debug and fix an AI system that isn't working if you don't know how the system actually works?

But we have a problem. The most powerful AI algorithms and systems rely on many-layered deep learning neural networks with so many configurations, convolutions, and other internal operations that it is very difficult, if not impossible, to know how the system arrives at its outputs.

This is what we call the “black box” problem of AI.

We've known about this problem for a long time, well before the current surge of interest in Generative AI, going back to the earliest parts of this latest AI wave.

Indeed, government research funding agency DARPA has pursued Explainable AI (XAI) research for almost a decade now, funding major efforts around XAI across multiple universities and government research agencies.

Yet here we are in 2024, and the problem of XAI has not been solved. Not even close. In a recent research paper, researchers cited many potential approaches to XAI that address some explainability needs but still face many limitations.
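
To make that concrete, below is a minimal sketch (our illustration, not taken from the DARPA program or the paper cited above) of one widely used post-hoc XAI technique: permutation feature importance. It probes a trained black-box model from the outside by shuffling one input feature at a time and measuring how much the model's accuracy drops. That reveals which features the model leans on overall, but says nothing about how the model combines them internally, which is exactly the kind of limitation these approaches run into. The dataset, model, and parameters are illustrative assumptions.

```python
# Minimal sketch of post-hoc explainability via permutation feature importance.
# The dataset, model, and hyperparameters below are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque "black box" model on a standard tabular dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Post-hoc "explanation": shuffle each feature on held-out data and record
# how much the model's accuracy degrades when that feature is scrambled.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five features the model relies on most. Note what this does NOT
# tell us: how the forest actually combines these features to reach a decision.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f} "
          f"+/- {result.importances_std[idx]:.3f}")
```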

That raises the question: if we can't actually get fully explainable AI, can we still have Trustworthy AI?

Explainable AI: An Ingredient, but Trustworthy AI is Possible Without It

The answer to that question comes back to the layers. If you base your entire Trustworthy AI strategy on the requirement that AI systems be explainable, you'll never build a Trustworthy AI system.

Instead, if you focus on the layers and address all the aspects that are immediately possible and implementable today, then Trustworthy AI is definitely a reality, even if XAI isn’t.

More telling is that while AI implementers and regulators are interested in explainable AI, they aren't requiring XAI for AI systems to operate in a trustworthy manner. Notably, the recently passed EU AI Act does not mention or require XAI in any substantial way.

So, while Explainable AI makes Trustworthy AI a stronger reality, and still forms a whole layer of the Trustworthy AI framework, it is not an absolute requirement to make AI systems ethical, responsible, transparent, and governed.

To re-emphasize the main point: you can definitely have Trustworthy AI without XAI, but having XAI makes your Trustworthy AI stronger, if and when it ever becomes possible.

Now, if you are wondering how Transparent AI differs from Explainable AI (and they are very different concepts), check out our free Intro to Trustworthy AI course and learn more!

AI Best Practices: Latest Insights

  • {AI Thought Leadership} Cognilytica Managing Partner Kathleen Walch named LinkedIn Top AI Voice - Check out some of the insights Cognilytica partner Kathleen Walch has been sharing on LinkedIn, and stay ahead of the industry. [Connect here]

  • {AI Open Source} The Increasingly Anti-Competitive World of AI and Open Source AI - What can possibly go wrong when you embed someone else's AI models in your systems? This episode of the AI Today podcast aims to answer this question and provide alternative Open Source options. [Listen here]
  • {AI Investments} Sam Altman wants to raise up to $7 trillion. That's, uh, a lot of dough. - In the latest news from OpenAI, Sam Altman is looking to raise $5-7 trillion for the future of chip building. This is part of his quest to finally achieve AGI. Think it's realistic? [Read More]
  • {Hyperpersonalization} IKEA unveils new AI assistant on the OpenAI GPT Store - Retail giant IKEA released a new tool, available exclusively on the OpenAI GPT Store. It leverages AI to offer personalized furniture and décor recommendations, finally bringing hyperpersonalization to the furniture-buying experience. [Read More]
  • {Recognition} AI unlocks ancient text owned by Caesar's family - Ancient Herculaneum scrolls, too charred from the Mount Vesuvius eruption in 79 AD to be read by the human eye, can finally be read with the help of AI. Three students won a $700,000 prize after using AI to read a 2,000-year-old scroll. Will ancient secrets and knowledge be unlocked? [Read More]
  • {Hyperpersonalization} Amazon launches AI-powered shopping assistant 'Rufus' - Rufus is an AI-powered shopping assistant trained on Amazon's product catalog. Customers will be able to chat with Rufus inside Amazon's mobile app to get help finding products, comparing products, and getting recommendations on what to buy. [Read More]
  • {Self Driving Vehicles} Driverless taxi vandalized and set on fire in San Francisco's Chinatown - The tide is turning on AI. California authorities are investigating this recent attack on a Waymo car as the latest in a series of protests targeting autonomous vehicles. [Read More]

Events and Opportunities to Hear from Cognilytica

  • February 29, 2024: Reuters Webinar: “Upstream Joins the AI Revolution”, Virtual / Online webinar, 10-11am CST - [Register Here]
  • March 7, 2024: PMI Houston, TX Chapter: “Running AI Projects Successfully”, Virtual / Online session, 5:30-6:30pm Central Time - [Registration opening soon]
  • March 20, 2024: PMI Metropolitan St. Louis Chapter: “Running Successful AI Projects and Avoiding Failure”, Virtual / Online session, 6:30pm-8pm CT. Participants will receive 1 PDU for this session. - [Register here]
  • March 21, 2024: PMI Pikes Peak, CO Chapter: “Winning Tactics for AI Implementation Success”, Virtual / Online session, 12pm-1pm MT - [Register here]
  • March 26, 2024: PMI (North) Carolina Chapter: “What you need to know, today, to successfully run and manage AI projects”, Virtual / Online session, 6-7pm Eastern Time - [Registration opening soon]
  • April 8, 2024: PMI Southern New England: “How to Run Successful AI Projects and Avoid Failure” - Session during SNEC-PMI's 16th Annual Conference, in person in Hartford, CT. [Register here]
  • April 9, 2024: PMI Washington DC Chapter: “How to Run Successful AI Projects and Avoid Failure”, 7:30-8:30pm EST in Reston, VA - [Registration opening soon]
  • April 18, 2024: FORUM: Women Making an IMPACT, Empowering Women in AI - Virtual / Online webinar, 1pm EST - [Registration opening soon]
  • April 29, 2024: PMI Saudi Arabia Chapter: “How to Run Successful AI Projects and Avoid Failure”, Virtual / Online session, 7PM-8PM Saudi Time (10AM-11AM Eastern Time) - [Registration opening soon]
  • May 7, 2024: PMI Delaware Valley Chapter: “How to Run Successful AI Projects and Avoid Failure”, Virtual / Online session, 6:30PM-8PM Eastern Time - [Registration opening soon]
  • May 14, 2024: PMI New York City Chapter: “Running Successful AI Projects and Avoiding Failure”, 6:30pm-8pm EST - [Registration opening soon]
  • May 21, 2024: PMI New Jersey Chapter: “What you need to know, today, to successfully run and manage AI projects”, Virtual / Online session, 6:30PM-8PM Eastern Time - [Registration opening soon]
  • July 10, 2024: PMI Madrid, Spain Chapter: “Best Practice Methods for Successful AI Projects”, Virtual / Online session, 7:00-8:00 PM Spanish time - [Registration opening soon]
  • July 10, 2024: PMI San Francisco Chapter: “Successful approaches to running AI Projects - Avoiding the Top Reasons why AI Projects Fail”, Virtual / Online session, 6-7:30pm PST - [Registration opening soon]

Move Forward with AI Best Practices - Training & Certification

Cognilytica's AI best practices & Trustworthy AI training and certification continues to be in high demand. Haven't yet enrolled in a certification or training? What's holding you back?

  • CPMAI v7 - Get Certified with comprehensive AI & ML Project Management Training. Includes: AI Fundamentals, AI Applications, Managing Data for AI, Data Preparation for AI, ML Algorithms, Generative AI, CPMAI Methodology, and Trustworthy AI. - [Enroll now]
  • CPMAI-C v7 - Optimized for Service Providers & Consultants! Includes the core of CPMAI v7 training, expedited and focused on AI solution providers. - [Enroll now]
  • CPMAI+ Plus v7 - Greater Depth: Enhances CPMAI with RPA, Big Data, and Data Science. Includes: All CPMAI Training Content, including AI & ML Fundamentals and CPMAI Methodology, plus Fundamentals of Big Data, Big Data Platforms, Foundations of Data Science, Foundations & Applications of Robotic Process Automation (RPA), Big Data Engineering, Security & Governance, and more! - [Enroll now]
  • CPMAI+E v7 - Our most comprehensive training & certification! Adds Ethical & Trustworthy AI to CPMAI+. Includes: All CPMAI+ Plus Training Content, Ethical AI Concepts, Responsible AI Concepts, Transparent AI Concepts, Governed AI Concepts, Explainable AI Concepts, The Trustworthy AI Framework, and Putting Trustworthy AI into Practice. - [Enroll now]
  • Trustworthy AI Framework v3 - The Most Comprehensive, Vendor-Neutral Trustworthy AI Training & Certification. Learn how to Build and Run Trustworthy AI Systems. Boost your credentials. Keep Your AI Solutions, Organization, Customers, and Stakeholders Trustworthy. Advance your career. - [Enroll now]

Trustworthy AI: Latest Insights

  • {Ethical AI} - Trustworthy AI Series: Ethical AI Concepts - When discussing AI ethics, it's important to have conversations around right and wrong and what it means in an AI context to “Do No Harm”. The AI Today podcast digs into Ethical AI concepts, part of our Trustworthy AI Framework. [Listen here]
  • {Trustworthy AI} - FCC bans robocalls made by AI - Well, that didn't take long. With the US elections coming up, the FCC is getting on top of AI-generated robocalls. The new regulation allows the FCC to impose fines of up to $23,000 per call and empowers state attorneys general to pursue legal action against violators. [Read More]
  • {Trustworthy AI} - Will AI-Powered Deepfakes Sow Chaos During Election Year? - Election watchers and technology experts say the rise of publicly available AI will present a great threat to voters' ability to separate truth from fiction as a vital election draws closer. As we keep saying, you can no longer believe what you read, hear, or see. [Read More]
  • {Pseudo AI} - Elon Musk Posts Video of Optimus Robot, Gets Busted for Fakery - Tesla CEO Elon Musk posted a video of the company's latest-generation Optimus humanoid robot folding a shirt on a table. But there's one big problem: it needed a substantial amount of help. [Read More]
  • {Ethical AI} - Georgia Joins List of States Looking to Limit AI in Health Decisions - Lawmakers are working to safeguard their constituents against potential biases and set ethical standards around AI. Georgia lawmakers are working to establish clear guidelines for AI use in the medical space, specifically where the technology intersects with health insurance coverage decisions. [Read More]
  • {Ethical AI} - The Philly sheriff's good news headlines? AI generated them. - We continue to say you can no longer believe anything you read. Philly's Sheriff recently had to come clean and admit that dozens of phony news headlines posted on her campaign website to highlight her first-term accomplishments were not real and were written by AI. [Read More]
  • {Trustworthy AI} - Indonesia Elections 2024: How AI has become a double-edged sword for candidates and election officials - It's not just happening in the US. From chatbots to childlike images, AI is being used and abused by some for political purposes in Indonesia's February 14 presidential and parliamentary elections. Analysts say a lack of regulation and gimmickry could turn some voters off. [Read More]

What our Community is Saying

“CPMAI has provided me with an industry-specific view of how to adapt PMP knowledge for AI and advanced data projects, giving specific detail around DataOps and providing a foundation of AI and ML knowledge. It gives Project Managers more strategic value to organizations as they add AI & advanced data projects to organization portfolios. I'd recommend the CPMAI for any PMP or Project Manager who wants to learn how to identify, manage, and successfully deliver projects in this rapidly growing sector.” - Krystene “KJ” Jennings, PMP / CPMAI / ASQ CSS-GB, Project Lead Business Systems Analyst, Predictive Analytics and Cloud Engineering at Centene

AI Resources

Check out our AI & Data Resource List: Dive Deeper!

