An Important AI Regulation Update

Welcome to Leading Disruption, a weekly letter about disruptive leadership in a transforming world. Every week, we’ll discover how the best leaders set strategy, build culture, and manage uncertainty to drive disruptive, transformative growth. For more insights like these, join my private email list.

As we speak, generative AI is walking through our back doors.

It is a highly democratized technology, and while that brings many benefits and opportunities, it also demands greater responsibility from leaders to ensure it is used ethically inside their organizations.

After all, the technology itself is a neutral player; it’s the humans developing and using it who are operating without guardrails. To find lasting success, we must balance the ability of organizations—businesses, technology companies, even governments—to leverage these artificial intelligence technologies against the right of citizens and consumers to be protected from their misuse.

Balancing these competing interests requires regulation—a set of guidelines for good behavior in this space.

As I see it, there are three fundamental tenets for defining this good behavior and establishing trust:

  1. Accountability
  2. Transparency
  3. Fairness

In considering regulatory measures, leaders must ask themselves: What will we be accountable for? What will we be transparent about? What is fair in this world of generative AI?

The Three Global Players in AI Regulation

It’s no longer enough to understand the rules and regulations surrounding AI in your region alone. Because so many businesses operate globally, it’s vital to understand what AI regulation looks like around the world.

There are three major players—China, the European Union, and the U.S.—in the artificial intelligence space. These are global centers of commerce and power, and their usage of this technology carries the most consequence. In many ways, these regions are where regulation will matter most.

The problem? Each one approaches regulation differently, and each operates at a different level of regulatory maturity. Over time, however, this stands to change. If one region takes a robust, clear stance on what is required to operate there, the other regions will have to norm themselves quickly to those requirements—to conform to those levels of “good behavior.”

Let’s look at the regulatory landscape among these key players.

China

China has formed a regulatory body, the Cyberspace Administration of China (CAC), focused on consumer notifications and rights. The CAC targets those three tenets of good behavior in its systems and wants to make sure that any online recommendations consumers receive are accountable, transparent, fair, and—I love this phrase—“disseminate positive energy.”

Here’s an example: the CAC has set explicit regulations against using AI to offer different prices to different customers. Using what you know about a customer to quote them a different price than everyone else is not considered fair, so it’s not allowed.
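
To make that concrete, here is a minimal sketch—every name in it is invented for illustration—of how an engineering team might verify that a pricing service ignores customer profiles entirely:

    # Hypothetical compliance check: the same product must be quoted at the
    # same price no matter who is asking. All names are illustrative.
    def quote_price(product_id: str, customer_profile: dict) -> float:
        base_prices = {"sku-123": 19.99}
        return base_prices[product_id]  # deliberately ignores the profile

    def test_no_differential_pricing():
        frequent_buyer = {"purchase_history": 120, "region": "urban"}
        new_customer = {"purchase_history": 0, "region": "rural"}
        assert quote_price("sku-123", frequent_buyer) == quote_price("sku-123", new_customer)

    test_no_differential_pricing()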

On its face, China prioritizes the consumer and citizen experience. But it also has an underlying motive: ensuring the Chinese state itself is protected.

The European Union

With its AI Act, the European Union is showing the rest of the world how to tackle this technology correctly.


In essence, the EU AI Act provides a tiered approach to acceptable risk (a rough code sketch follows the list):

  • Minimal risk AI covers fully permitted scenarios, such as email spam filters or AI used in video games to create a better experience.
  • Limited risk AI—chatbots and potential deep-fake videos—is also allowed but should carry transparency obligations. For a chatbot: what sources is it drawing on? For deep-fake videos: is the content legitimate, and should it carry a watermark?
  • High risk AI is subject to mandatory requirements because it can significantly affect outcomes, for better or worse. Autonomous vehicles and medical devices fall into this tier, as do tools used to score exams or filter resumes, which must meet requirements that prevent discriminatory practices.
  • Unacceptable risk AI is simply not allowed. Prime examples are government social scoring, in which algorithms determine what medical care or social benefits people can receive, and real-time biometric identification. The concern: if they can identify you, they can also misidentify you.
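
To make the tiers easier to reason about, here is an illustrative sketch—the mapping below is my own shorthand for the examples above, not an official classification from the Act:

    from enum import Enum

    class RiskTier(Enum):
        MINIMAL = "allowed"
        LIMITED = "allowed with transparency obligations"
        HIGH = "allowed under mandatory requirements"
        UNACCEPTABLE = "prohibited"

    # Illustrative mapping of the examples above to their tiers.
    EXAMPLE_TIERS = {
        "email spam filter": RiskTier.MINIMAL,
        "in-game AI": RiskTier.MINIMAL,
        "customer service chatbot": RiskTier.LIMITED,
        "deep-fake video generator": RiskTier.LIMITED,
        "exam scoring system": RiskTier.HIGH,
        "resume screening tool": RiskTier.HIGH,
        "government social scoring": RiskTier.UNACCEPTABLE,
        "real-time biometric identification": RiskTier.UNACCEPTABLE,
    }

    for system, tier in EXAMPLE_TIERS.items():
        print(f"{system}: {tier.name} -> {tier.value}")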

The European Parliament has just approved its position on the EU AI Act, and final negotiations are underway. Starting as early as mid-2024—a year from now—penalties could amount to between two and six percent of an organization’s annual revenues. At first, these rules will apply to companies of a certain size operating within the EU. Still, if you are doing any business in the European Union, it’s essential to pay attention and understand the implications.

The U.S.

During my livestream on this topic, a commenter described the current state of AI governance as too many unlicensed drivers on unfinished roads with no signage. That’s a profoundly apt description of the U.S. right now. To some, the U.S. may seem to be lagging, but it’s important to remember that the U.S. has traditionally taken a pro-business approach to regulation, particularly around technology. It does not want to dampen organizations’ ability to move as fast—and be as competitive—as possible in developing these new technologies.

Of course, Congress has made some efforts, as have some state legislatures—California, for its part, has been pushing ahead on privacy laws—to apply best practices to AI.

In late 2022, the White House released its Blueprint for an AI Bill of Rights, guidance for consumers concerning artificial intelligence. Although it’s not enforceable like the EU AI Act, it truly encapsulates those tenets of good behavior—accountability, transparency, and fairness—and I love what it offers. It states that AI should be safe and effective, that there should be protections against algorithmic discrimination and bias, and that solid data privacy practices should be in place—so, as a consumer, you have agency over your data and how it’s used.


Most interesting is that it states consumers should be able to opt out of an automated system in favor of a “human alternative.” We’ve all been there—stuck with a chatbot, dialing “0” to reach an operator who can better assist us.
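
In practice, that “human alternative” can be as simple as an explicit escape hatch in the conversation flow. Here is a hypothetical sketch, with every name invented for illustration:

    OPT_OUT_PHRASES = {"0", "human", "agent", "representative"}

    def route_message(user_input: str) -> str:
        # Hand off to a person whenever the user opts out of the bot.
        if user_input.strip().lower() in OPT_OUT_PHRASES:
            return "Transferring you to a human agent..."
        return f"Bot: I can help with '{user_input}'. Type 'human' for a person."

    print(route_message("Where is my order?"))
    print(route_message("0"))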

Similarly, the National Institute of Standards and Technology (NIST) has not issued binding regulation, but its AI Risk Management Framework gives organizations a high-level approach (sketched in code after the list):

  1. Map. Recognize the context and understand the risk involved.
  2. Measure. Assess, analyze, and track those risks.
  3. Manage. Prioritize how those risks are addressed.
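
As a thought experiment, those three functions could anchor a simple risk register. This sketch is my own illustration of the pattern, not code from the framework itself:

    from dataclasses import dataclass, field

    @dataclass
    class AIRisk:
        context: str          # Map: where and how the system is used
        severity: int = 0     # Measure: assessed impact, say 1 (low) to 5 (high)
        mitigation: str = ""  # Manage: how the risk will be addressed

    @dataclass
    class RiskRegister:
        risks: list = field(default_factory=list)

        def map_risk(self, context: str) -> AIRisk:
            risk = AIRisk(context=context)
            self.risks.append(risk)
            return risk

        def measure(self, risk: AIRisk, severity: int) -> None:
            risk.severity = severity

        def manage(self) -> list:
            # Address the highest-severity risks first.
            return sorted(self.risks, key=lambda r: r.severity, reverse=True)

    register = RiskRegister()
    hiring = register.map_risk("resume screening for hiring")
    register.measure(hiring, severity=4)
    for risk in register.manage():
        print(risk.context, "->", f"severity {risk.severity}")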

By taking this approach, leaders can create a risk management culture and govern those risks from a unified center inside their organization.

Of course, for both the Bill of Rights and this three-step approach, there is currently no enforcement—no consequences if you don’t follow these guidelines.

There are few restrictions or controls on how AI is used, but that is changing.

Throughout these three global centers and beyond, there is constant chatter about whether AI regulation is good or bad, but the simple truth remains: We need to be ready for it. I, for one, believe that responsible policy and practical training are our first lines of defense.

Your Turn

It’s clear that when it comes to AI, it’s the humans we need to regulate, not the technology. To that end, how are you establishing “good behavior” practices and standards across your organization?

