AI Governance: "The bad implementation of a great idea will always become a great example of a really bad idea!"

A few years ago, during the Q&A session of a presentation I gave in São Paulo, Brazil, the moderator asked me what I thought were the biggest issues organizations would face going forward. I said it was probably the lack of respect, diligence, and care around issues of computer and network security, and that organizations would end up paying a big price for their negligence in what we might call "technology governance."

What's that? Governance? It's the framework that an organization puts in place to guard against future issues, risks, and challenges. I remember saying on stage that what we were witnessing with the arrival of so many 'smart devices' was the bad implementation of a great idea - much of the technology out there was suffering from security flaws and other problems. The organizations bringing these devices to market didn't pay enough attention to the issue of 'governance' - they didn't have an architecture in place to guard against such flaws - and as a result, a lot of problems emerged.

I was thinking about this yesterday while I was knee-deep in research for a talk I'm preparing for a group of 60 CEOs at an upcoming event - the topic, of course, being AI. My research was confirming my core belief that going forward, in their rush to implement new AI opportunities, many companies are going to fail on security, privacy, and other issues - and so we will once again see a new flood of bad implementations of a great idea.

Consider our past - and the Internet of Things, or IoT as we call it. It's a great idea - as the Internet made its way into homes, there emerged big opportunities for smart devices. Webcam alarm systems, smart doorbells, smart thermostats, and more. The concept is fantastic - hyperconnectivity allows the reinvention of previously unconnected technologies into something new. And so we saw the emergence of the Nest thermostat, the Ring doorbell, and other technologies.

Yet, we saw a really bad implementation of this great idea. We saw a flood of products that were poorly architected and lacked robust security, and this led to all kinds of problems - essentially, smart home devices that were not secure and could be easily hacked. Think about it - the last few years have seen smart locks that can be easily bypassed; smart thermostats that can be remotely controlled; and even smart refrigerators that can be used to send spam emails.

I wrote about this a few years back in a long blog post, commenting:

Here's the thing: most #IoT (Internet of Things) projects today are a complete failure - they are insecure, built on old, outdated Linux stacks with mega-security holes, and ill-thought-out architectures. My slide on that fact? Simple: a reality which is already happening today. This type of negligence will doom the future of the products of many of the early pioneers.

The post, "Trend: IOT? It's Early Days Yet, and Most Products So Far Are a Big Failure" went to the root of the problem in this slide:

[Slide image]

How bad was the implementation of this great idea? Things were so bad that a popular social media account, InternetofS***, spends all of its time documenting the security and privacy failures inherent in many smart devices.

In my post, I went further, suggesting that what was needed was an 'architecture' - or a governance framework for device design and implementation - that would guard against the emergence of such risks. The architecture would ensure that any product design paid due respect to a set of core principles - what I call the "11 Rules of IoT Architecture."

So here's the thing: if organizations are going to build a proper path into the hyperconnected future, they need to understand and follow my "11 Rules of #IoT Architecture." Read it, print it, learn from it: this is what you need to do as you transition to becoming a tech company. Some of the biggest organizations in the world have had me in for this detailed insight - maybe you should too.

My inspiration for how to build the future right comes from Apple's robust Device Enrollment Program architecture, which lets an organization deploy, manage, upgrade, and oversee thousands of corporate iPhones or iPads; and Tesla, which is not really a car company, but a high-tech company.

And so in both of these talks, I put into perspective how Tesla has (without knowing it, LOL) been following my rules. First, think about what a Tesla really is - here's my slide....
[Slide image]
Going back to my list of the 11 Rules of IoT Architecture, you can see that Tesla has met a number of the conditions ...
[Slide image]


This architecture provides a framework - a governance framework - that guards against and mitigates inherent risk. Over time, many organizations began to think this way, realizing they could not rush great ideas to market with bad design.

Which brings me back to AI. Many organizations are rushing to get involved with this shiny new toy - either integrating it into their software platforms or implementing fancy new AI tools. But the risks in doing so are real: private, internal corporate documents and information might be pasted into a ChatGPT prompt and, over time, become exposed; private large language model systems might be built on those same documents, and a future security or privacy breach might expose them to the public. Other risks abound. At least some people are worried about this:

Ahead of a Salesforce event focused on AI, the company is unveiling security standards for its technology, including preventing large language models from being trained on customer data. "Every client we talk to, this has been their biggest concern," said Adam Caplan, senior vice president of AI, of the potential for confidential information to leak through the use of these models.

"Salesforce touts AI strategy, doubles investment in startups," Bloomberg, 19 June 2023

Everything we have done wrong in the past might be repeated in the future if we don't pay attention to the issue - if we don't put in place a structure for AI governance.

Because the bad implementation of a great idea is just a great example of a really bad idea!

Futurist Jim Carroll has never quite understood why so many organizations can make the same mistakes over and over again.

