Let's Talk AI Ethics and Whether We Should Temporarily "Halt" Generative AI

Several high-profile names (including Elon Musk) have penned an open letter calling for a pause in the creation of models more powerful than GPT-4.

In March, OpenAI unveiled GPT-4, and people were rightfully impressed. Now, fears are even greater about the potential consequences of more powerful AI.

The letter raises several questions.

Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? - Pause Giant AI Experiments: An Open Letter

The crux of their message is that we shouldn't be blindly creating smarter and more robust AI until we are confident that such systems can be managed and controlled to maintain a positive impact.


During the proposed pause, the letter suggests that AI labs and experts jointly develop and implement safety protocols that would be audited by an independent agency. At the same time, the letter calls for developers to work with policymakers to strengthen governance and regulatory authorities.

It's no secret that I've been a proponent of the proliferation and adoption of AI. I've been the CEO of AI companies since the early '90s, but it was in the early 2000s that I realized what the future had in store. A few years ago, in anticipation of where we are today, I started participating in discussions about the need for governance, ethical guidelines, and response frameworks for AI (and other exponential technologies).

My personal thoughts? Trying to stop (or even pause) the development of something as important as AI is naive and impractical. From the Industrial Revolution to the Information Age, humanity has always embraced new technologies, despite initial resistance and concerns. The AI Age is no different, and attempting to stop its progress would be akin to trying to stop the tide.

On top of that, AI development is a global phenomenon, with researchers, institutions, and companies from around the world making significant contributions. Attempting to halt or slow down AI development in one country would merely cede the technological advantage to other nations. In a world of intense competition and rapid innovation, falling behind in AI capabilities could have severe economic and strategic consequences.

AI is bigger than a piece of software or a set of technological capabilities. It represents a fundamental shift in what's possible.

The playing field changed. We are not going back.

The game changed. That means what it takes to win or lose changed as well.

But that doesn't mean we can ignore the writing on the wall. AI ethics is a growing issue.

[Charts via the 2023 AI Index Report: the number of AI-related legal cases has skyrocketed since 2016, from below 20 to above 100, and the number of AI-related bills passed is highest in America.]

The number of AI misuse incidents is skyrocketing. Since 2012, the number has increased 26-fold. And it's more than just deepfakes: AI can be used for many nefarious purposes that are far less visible.

There are countless ethical concerns we should be talking about:

  1. Bias and Discrimination - AI systems are only as objective as the data they are trained on. If the data is biased, the AI system will be biased too. That not only creates discrimination; it also makes those systems more susceptible to error and manipulation.
  2. Privacy and Data Protection - AI systems are capable of collecting vast amounts of personal data, and if that data is misused or mishandled, it could have serious consequences for individuals' privacy and security. We need to manage not only the security of these systems but also where and how they get their data.
  3. Accountability, Explainability, and Transparency - As AI systems become increasingly complex and autonomous, it can be difficult to determine who is responsible when something goes wrong, not to mention the difficulty of understanding how public-facing systems arrive at their decisions. Explainability becomes even more important for generative AI models because they're used to interface with anyone and everyone.
  4. Human Agency and Control - As AI systems become more sophisticated and autonomous, there is fear about their autonomy ... what amount of human control is necessary, and how do we prevent "malevolent" AI? Within human agency and control, there are two sub-topics. First is job displacement ... do we prevent AI from taking certain jobs as one way to preserve jobs and the economy, or do we look at other options like universal basic income? We also have to ask where international governance comes in, and how we ensure that ethical standards are upheld to prevent misuse or abuse of the technology by bad actors.
  5. Safety and Reliability - Ensuring the safety and reliability of AI systems is important, particularly in areas such as transportation and healthcare, where the consequences of errors can be severe. Setting standards of performance is important, especially considering the outsized response when an AI system makes an "error". Think about how many car crashes are caused by human error and negligence ... and then think about the media coverage when a self-driving car causes one. If we want AI to be adopted and trusted, it will need to be held to much higher standards.
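The first point, that a model is only as objective as its training data, can be made concrete with a toy sketch. Everything here is invented for illustration: the "model" is just a frequency table learned from fabricated hiring records, not any real system, but it shows how past discrimination gets reproduced as future policy.

```python
# A minimal, hypothetical sketch of how bias in training data propagates
# into a model's decisions. The records and groups are fabricated.
from collections import defaultdict

# Historical records: (group, hired). Group B was hired far less often,
# reflecting past discrimination rather than qualifications.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 20 + [("B", False)] * 80

def train(records):
    """Learn P(hired | group) by simple counting."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += hired
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

def predict(model, group, threshold=0.5):
    """Recommend a candidate iff the learned hire rate clears the threshold."""
    return model[group] >= threshold

model = train(history)
print(model)                # {'A': 0.8, 'B': 0.2}
print(predict(model, "A"))  # True  -- group A recommended
print(predict(model, "B"))  # False -- past bias reproduced as policy
```

Nothing in the code is "prejudiced"; the counting is perfectly neutral. The discrimination lives entirely in the data, which is exactly why auditing training data matters as much as auditing algorithms.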

Unfortunately, when you invent the car, you also invent the potential for car crashes ... when you 'invent' nuclear energy, you create the potential for nuclear bombs.

There are other potential negatives as well. For example, many AI systems (like cryptocurrencies) use vast amounts of energy and produce carbon emissions. So, the ecological impact has to be taken into account too.

These are all real and present concerns that we should be aware of. However, it's not as simple as creating increasingly tight chains to limit AI. We have to be judicious in our application of regulation and oversight. We intrinsically know the dangers of overregulation - of limiting freedoms. Not only will it stifle creativity and output, but it will also push bad actors further beyond what law-abiding creators can do.

If you want to see one potential AI risk management framework, here's a proposal by the National Institute of Standards and Technology called AI RMF 1.0. It's a nice jumping-off point for thinking about internal controls and preparing for impending regulation. To be one step more explicit ... if you are a business owner or a tech creator, you should be getting a better handle on your own internal controls, as well as anticipating external influence.

Conclusion

Artificial Intelligence is inevitable. It's here, it's growing, and it's amazing.

Yes, AI ethics is an important endeavor and should be worked on as diligently as the creation of new AI. There are many issues we need to address as AI becomes more ubiquitous and powerful. But there is no pause button for exponential technologies like this.

Change is coming. Growth is coming. Acceleration is coming. Trying to reject it is an exercise in futility.

Despite America leading the charge in AI, we also rank among the lowest in positivity about the benefits and drawbacks of these products and services. China, Saudi Arabia, and India rank the highest.

If we don't continue to lead the charge, other countries will ... which means we need to address the fears and culture around AI in America. The benefits outweigh the costs, but we have to account for those costs and attempt to minimize potential risks as well.

Pioneers often get arrows in their backs and blood on their shoes. But they are also the first to reach the new world.

Luckily, I think momentum is moving in the right direction. Watching my friends start to use AI-powered apps has been rewarding as someone who has been in the space since the early '90s.

We are on the right path.

Actions have consequences, but so does inaction. In part, we can't stop because bad actors certainly won't stop to give us time to combat them or catch up.

When there is some incredible new "thing," there will always be some people who try to avoid it ... and some who try to leverage it (for good and bad purposes).

There will always be promise and peril. If you're only scared of AI, you're not paying enough attention. You should be excited.

What you focus on and what you do remains a choice.


Whether AI creates abundance or doomsday for you will be defined largely by how you perceive and act on its promise and peril. Artificial intelligence holds the potential to address some of the world's most pressing challenges, such as climate change, disease, and poverty. By leveraging AI's capabilities, we can develop innovative solutions and accelerate progress in these areas.

It's two sides of the same coin. A 6-month hiatus won't stop what's coming. In this case, we need to pave the road as we traverse it.

We live in interesting times!

What do you think?

___________

To read more, you can find my blog here or follow me on Twitter here.

Sign up for my Weekly Commentary here.
