Unleashing the power of AI means regulating it — here’s where we can start
Photo Credit: Ryan T-Swag Turpentine

AI's reach is expanding, revealing deep possibilities and enigmas. TomTalks, a weekly newsletter and podcast by innovation leader Tom Popomaronis, features either Tom's personal insights or engaging dialogues with AI gurus and business magnates about AI's potential, challenges, and integration costs. Keen for a breezy 10-15 minute chat with Tom? The form is here!


This week’s news cycle is thoroughly covering the fact that the White House is strategizing for safe AI use and building on the "Blueprint for an AI Bill of Rights," against the backdrop of increasing calls for federal AI regulation. The government wants commitments from tech companies to address AI-related challenges, tech companies want similar commitments from the government, and so on and so forth.

This reminded me of an article I wrote a couple of months ago that was never published. As I’m running short of time this week for the newsletter, I’m going to share that article with you along with a few residual thoughts I have about it now that some time has passed — and I’ll start with the latter.

  1. This take is already a bit dated — in the time since I wrote the draft below, we’ve seen a big push for regulation both in the US and EU (in China also, but “regulation” might be an understatement there). The takeaway, though, is that this is how fast things move now. Speed is the most under-the-radar aspect of AI in general; we talk about all the things it can do, but I hear relatively little about the most incredible feat of all — how fast it can do those things. The fact that we’re “only” about 3-6 months behind in trying to develop regulation strategies is almost a win. Almost.
  2. I now think it may have been slightly naive to imagine that there could be a truly “global” approach to AI. Have we ever managed such an approach to anything? But I’m still optimistic — less naively, I hope — about the prospect of regions developing their own effective strategies.
  3. I stand by “The how” portion of this draft, and in reading about the US and EU’s attempts to get things rolling, transparency is still something I see missing from the equation.
  4. That’s all I have for now, so I’ll leave you with the article below.

Please let me know in the comments what you think about regulation, where you think it could be headed, and what your concerns are for the impacts on you and your business. As always, thanks for reading!


We need a global conference aimed at managing AI risks, and we need it now

Prior to 2022, real-life AI was best known for its ability to power recommendation algorithms, which pushed content personalization and discovery to new heights while serving as highly useful business tools. The same algorithms, unfortunately, had problematic side effects that ranged from bias amplification and technology addiction to major election scandals and full-on radicalization. Because we were unsuccessful at recognizing how AI could lead to such unintended consequences, we missed our chance to develop, deploy, and regulate it thoughtfully.

Since then, a different set of generator functions has taken over AI research and development, and has spawned a new class of technology that is being widely referred to as “generative AI.” In 2021 we began to see public adoption of generative AI tools, and in 2022 adoption skyrocketed with the release of OpenAI’s ChatGPT. Currently, generative AI is improving at exponential rates and creating an “AI arms race” that Microsoft CEO Satya Nadella has described as “frantic.”

Like recommendation algorithms, the new wave of generative AI simultaneously represents unprecedented advancements and potential societal disruption. However, the benefit of hindsight grants us a new opportunity — a second chance to mark out a path for AI, rather than let it take its own. To take advantage of this opportunity we must organize a multi-stakeholder, cross-disciplinary, open discussion on AI, marked by clear goals and continuous, iterative engagement.

Here is an example outline of such a discussion:

The who?

In order for a conference of this nature to be effective, it would need to be attended, at a minimum, by leading AI researchers, high-level executives from major technology incumbents like Google and Microsoft, legislative and policy experts, systems thinkers, ethicists, advocates for humane technology use, economists, sociologists, and professionals in the fields of facilitation and mediation. Additionally, it would be constructive to involve representatives from civil society organizations and delegates from underrepresented communities to address their unique challenges relating to AI.

Selecting representatives could be done in a number of ways. For example, it could involve collating nominees from these groups (e.g., 400-500 nominees) and having them each nominate their own top choices from that list (say, 40-50) to participate. Ensuring each participant has ample opportunity to voice their opinions and share their expertise would be crucial for a successful dialogue, which is why the number would need to be limited. Of course, the selection of representatives would require a lot more thought and care — what’s described here is just a basic starting point.

The what

Facilitation could take a number of forms, including roundtable-style discussion, individual presentations, Q&As, and more — but regardless of the format, concrete goals are needed. They could include the drafting of unambiguous, practical recommendations for AI research, deployment, and regulation; the development of a global AI ethics framework; or a reference document for best practices and specific policy proposals.

Ultimately, society needs a set of methods, strategies, and resources for comprehensively understanding and managing AI-related risks. Maximizing technology’s potential is also important, but for the most part, market forces will take care of that on their own. Where market incentives don’t exist or are unaligned, on the other hand, we need to proactively create or align them.

The how

The public today places a high value on trust and transparency — two aspects of current systems that generative AI puts under serious strain. If a conference along the lines proposed in this article takes place, all of its discussions and decisions should be documented without redaction and made easily available to all members of the public in order to foster trust and enable external scrutiny.

If it takes on the appearance of an elitist cabal making decisions on behalf of humanity — rather than a transparent group of people developing a framework for humanity to make its own decisions — it risks being unproductive, or even counterproductive.

Another consideration would be the establishment of a coordinating body to administer and manage the discussions, track progress across meetings, and identify other ways of fostering collaboration among stakeholders, helping to provide continuity and accountability.

The when

This needs to happen now. The dual potential of generative AI to solve problems and create mass disruption makes it arguably a more important topic than climate change. That may seem like a bold claim, but consider: if AI continues to see exponential improvement in its problem-solving abilities, it may well provide the modeling and innovative capacity to address our most dire climate-related challenges. If, on the other hand, we fail to acknowledge and manage the risks associated with it, we could face the total breakdown of numerous social, political, and economic systems well before 2050 or even 2035 — two of the most cited dates in relation to climate change preparedness.

On the note of climate change, the UN’s COP conferences provide a more-or-less useful template for how we might carry out a similar set of discussions around AI. And like COP, an AI roundtable would be maximally effective through iteration. Rather than a one-off event, a regular, continuous schedule would give representatives (and humanity) a chance to stay abreast of technological changes, monitor progress on goals, evaluate outcomes, and update recommendations.

The why

In addition to the rationale laid out in this article’s introduction, it should also be mentioned that there is precedent for discussions that helped pull humanity back from the brink of catastrophe. In their presentation ‘The AI Dilemma,’ Tristan Harris and Aza Raskin highlight how the Bretton Woods Conference of 1944 laid the groundwork for a period of relative global peace, how international collaboration successfully led to nuclear deterrence, and how an ABC ‘Viewpoint’ broadcast in 1983 brought various expert opinions about the Cold War into alignment and into the public eye.

Harris and Raskin also note that AI is a far more intractable problem than those earlier crises, but they share their optimism that its risks can be managed successfully and its potential unlocked ethically — if we take the cue and act now. The question is, will we?


Tom is Co-Founder & President of Massive Alliance – a thought leadership and ghostwriting agency that sets industry standards by serving a diverse client base, from high-ranking executives in Fortune 100 companies and global 2000 organizations to dynamic leaders in startups and small businesses. As we evolve, we're moving towards being a technology-centric business, developing an innovative AI-driven Language Model that serves as a digital memory bank for our executives, which propels us into new territories of innovation and efficiency (we're also #hiring for a Chief Editorial Advisor).

Maria de Los Angeles Lemus Campillo

SE HABLA ESPAÑOL Award-winning writer, journalist, copywriter, editor, media creative & entrepreneur | digital publisher at Heart-Centered Living News | Open to co-creation

1 year ago

Interestingly, just registered for the online edition of #AIforGood (UN and ITU). Hoping I'll hear presenters address ethical dilemmas. One of my somatic writing mentors talks about "writing at the speed of your nervous system." When it comes to the speed of top-down AI development, humanity is the nervous system.

As usual, a very detailed article covering topical items concerning the "what if" of AI. Speaking for myself, while the Generative Pre-trained Transformer (GPT) is indeed a game changer, the thing that really interests me is the amazing, life-changing AI possibilities that will be available to us when AI models are narrowed for specific applications, such as models for heart surgery, civil engineering, legal analysis, etc. When the bias is reduced and models become more specific rather than based on generalities — that is when we should really see the power of AI transform our lives! At least that's my hope!
