Ep. 62 This Week in the News: Government Regulation of AI

In Episode 62, we want to comment on three thought-provoking articles related to AI and government regulation. This news will have long-term implications for many digital acceleration initiatives.

Let's start!

The US Congress is debating the need to build rules for AI. Sam Altman, CEO of OpenAI, testifies before the Senate.

The New York Times and Euronews published good reports on OpenAI CEO Sam Altman's testimony before the US Senate, in which he stated the need for regulations to prevent the misuse of AI.

Why is this news relevant?

As we predicted in Episode 56, government regulations will be required if AI solution providers fail to provide adequate controls against the misuse of AI.

Now we know that AI solution providers can't be solely responsible for monitoring the wrongful use of AI, because open-source AI solutions are nearly impossible for an ethics team to monitor.

I think the European Parliament is better prepared to create and enforce AI regulations, given its previous experience dealing with GDPR. The US Congress will probably follow.

The European Union Is Set to Trailblaze AI Regulation

Time Magazine published this news a week before the OpenAI CEO's testimony before the US Congress.

Why is this news relevant?

Few people envisioned the consequences of OpenAI making GPT-3.5 and GPT-4 publicly available.

Immediately afterward, many thought leaders saw a potential risk in sharing confidential data or private customer information with ChatGPT.

Other risks still need to be analyzed. Depending on their choices, some organizations will look for a dedicated instance of these AI engines, connected via APIs, instead of using the public version of these solutions (see the sketch below).
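
Here is a minimal sketch of what "a dedicated instance connected via APIs" can look like in practice, assuming an OpenAI-compatible endpoint hosted inside a private deployment. The base URL, credential variable, and model name are hypothetical placeholders for illustration, not details taken from the articles above.

    # Minimal sketch (illustrative only): sending a prompt to a privately hosted,
    # OpenAI-compatible endpoint instead of the public chat interface.
    # The base_url, API-key variable, and model name are hypothetical placeholders.
    import os

    from openai import OpenAI

    client = OpenAI(
        base_url="https://ai.example-corp.internal/v1",  # hypothetical dedicated instance
        api_key=os.environ["INTERNAL_AI_API_KEY"],       # credential kept out of the code
    )

    response = client.chat.completions.create(
        model="gpt-4",  # whichever model the dedicated deployment exposes
        messages=[
            {"role": "user", "content": "Summarize this quarter's internal sales notes."},
        ],
    )

    print(response.choices[0].message.content)

The architectural point is simple: prompts and data flow to an endpoint the organization controls and can audit, rather than through the public consumer product.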

But exposing private data is a significant alarm signal for the European Parliament. For them, enforcing regulations similar to GDPR may be more straightforward.

More to come.

AI regulations are also being studied at the US state level

Did you know that over 80 AI regulation bills are being considered at the US state level?

The National Conference of State Legislatures (NCSL) published an article commenting on current AI legislation activity in US state legislatures.

Why is this news relevant?

As we learn more about AI and its capability to analyze data and make decisions without human intervention, there is a natural desire to prevent (1) the misuse of AI and (2) the potential harm to humans from a wrong decision made by an AI entity.

In reality, there are other considerations related to use cases affecting people's privacy, such as:

  1. Using AI entities connected to cameras to identify people or situations.
  2. Using AI entities for crime detection.
  3. Using AI entities as the engine behind automated decision tools.
  4. Impersonation: requiring AI entities to inform humans that they are interacting with a non-human entity before any interaction starts.
  5. Human rights: some legislators propose that if an AI entity is used to evaluate candidates for job positions, it may not consider the applicant's race or zip code as a reason for rejection.

I hear you, Jose; any final comments?

Two comments

Technology adoption is a consequence of quality and trust.

Whether AI solution providers monitor and detect the wrongful use of their solutions, or governments step up and regulate the use of AI, the effect will be positive in the long run.

Lack of quality (which depends on the AI solution providers) and lack of trust (the consequence of the wrongful use of technology) are more dangerous than adequate controls (whether self-regulation or public-sector regulation).

This is not the end but the beginning of the chapter.

The dark side of regulation is protecting the incumbent

Just a thought! Make a choice: who are AI thought leaders protecting?

  1. Humankind
  2. Their leadership position
  3. Both

In general, competition is good for consumers; it forces competitors to keep innovating. On the other hand, at times rules are required to maintain social stability.

It is hard to say at this time what motivates each stakeholder group.

More to come.

Good enough?

Thank you for being a follower and for your support.

Please click 'like' to give me feedback and support.

Please share this episode if you think this information would be valuable to others.

As always, feel free to DM or comment as appropriate.

Happy to assist!

Juan Carlos Urdaneta

Digital Strategist ★ I help business owners and marketing leaders develop powerful AI-enabled digital strategies that accelerate revenue growth | CDMP, PCM, OMCA

1 year ago

Government regulation is always a sensitive discussion, but it is healthy to address the potential issues that AI might create. Italy blocked ChatGPT and regulated Bard based on privacy concerns. Is that the way to go, Jose Noguera?

CHESTER SWANSON SR.

Next Trend Realty LLC./wwwHar.com/Chester-Swanson/agent_cbswan

1 year ago

Thanks for the updates on The Digital Acceleration.

Huba Rostonics

I help teams PERFORM for their here and now, while they TRANSFORM for a bright future. System & Soul Business Coach. Best-Selling Author. GTM Strategist, Head of Operations and Channel.

1 year ago

Looking forward to what is surely going to be a good read! This is such an important topic. We need to set the dial correctly: tight enough to defend us from any nefarious use, but loose enough not to stifle innovation. One thing is undeniable: the space is ON FIRE!
