The Ghosts of Generative AI 👻

If it’s Wednesday, it’s Wilkinson Wire! As we step into spooky season, it’s a fitting time to consider the “ghosts” haunting the rapid advancement of generative AI. From calls for global oversight to concerns raised by top industry leaders, there’s no shortage of perspectives on how to navigate this powerful technology responsibly.

Today, we’re exploring some of the sharpest insights into the risks of AI as it reshapes our innovation economy. Below, you’ll find perspectives from Mustafa Suleyman, CEO of Microsoft AI and co-founder of Google DeepMind, insights on how the U.S. and China differ in their approach to AI, and the latest safety concerns arising at OpenAI.

While the journey ahead may have a few spooky twists and turns, I'm optimistic about AI's transformative potential—it may prove even more significant than the steam engine or electricity. Certainly, as we move forward, we have a responsibility to navigate AI's advancements wisely.


Containment for AI

Mustafa Suleyman, CEO of Microsoft AI and co-founder of Inflection AI and DeepMind, is a strong advocate for proactive AI regulation to curb the risks tied to its rapid development. Comparing today's AI landscape to the Cold War, Suleyman calls for a global containment strategy to keep high-risk AI applications in check, fostering international cooperation to mitigate risks while unlocking AI's benefits responsibly. His approach is both a caution and a call to action: while AI's potential is transformative, its alignment with our shared human values is non-negotiable.

In his book, The Coming Wave, Suleyman warns of the potential consequences of unregulated AI, from misinformation to deep societal disruptions. Without safeguards, autonomous AI capabilities might evolve unpredictably, potentially misaligning with human values. He calls for “safely interruptible” AI—essentially, systems that allow human oversight with mechanisms akin to a “big red button” for intervention. For those keen to dive deeper into these insights, The Coming Wave is an essential read, offering a clear-eyed view of the challenges and responsibilities of our AI-driven future.

Diverging AI Philosophies

The U.S. is working to steer AI development in a way that aligns with human-centered values, setting it apart from other global powers. Both the U.S. and EU focus on safeguarding individual rights, maintaining transparency, and striking a balance between innovation and accountability. Western democracies emphasize privacy, freedom of expression, and ethics, with frameworks like the EU’s AI Act aiming to protect against misuse, bias, and misinformation.

China’s approach, led by the Cyberspace Administration of China (CAC), takes a different route—centralizing control to ensure AI development aligns with state stability and government-approved narratives. Here, AI must reflect “core socialist values” and is closely monitored to prevent any threats to social order. This strategy prioritizes state stability over individual freedoms, enabling rapid AI growth within strict regulatory bounds.

The contrasting paths highlight a significant global divide: Western democracies anchor AI in democratic values, while China focuses on centralized control. To stay competitive and ensure AI reflects Western principles, the U.S. must advance its AI initiatives, embedding human-centered values into the global race of AI’s evolution.


Balancing Profit and Protection

The recent departure of Miles Brundage, a leading AI safety expert, underscores mounting concerns over OpenAI’s shift in priorities. Originally founded as a nonprofit to ensure safe and ethical AI development, OpenAI’s transition to a for-profit model has raised questions about whether commercial interests are now taking precedence over safety. Brundage, formerly head of the AGI readiness team, expressed frustration over this shift, joining others like former CTO Mira Murati in leaving due to similar concerns. His exit, along with OpenAI’s increasing focus on profitability, shines a light on the tensions between advancing AI responsibly and the pressures of a for-profit approach.


Tip of the Week

AI can be a trick or a treat, depending on how you use it! From hallucinations to misinterpretations, there are a few risks to watch out for. Here are some practical tips to avoid getting tricked by AI:

  1. Ask for sources. Prompt AI to reference specific studies, articles, or publications. Try: “What sources support this information?” This can help verify the basis of its response.
  2. Use expert perspectives. Guide the AI with “according to [specific expert or study]” prompts. This can lead to more accurate, grounded answers.
  3. Verify complex calculations. For math or multi-step explanations, double-check by running the calculations yourself or consulting a reliable source.
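Tip 3 can be put into practice in a few lines of code. Below is a minimal sketch—using hypothetical example figures of $100k starting revenue and 10% annual growth—that recomputes a simple compound-growth forecast so you can cross-check the numbers an AI tool gives you:

```python
def forecast_revenue(start: float, growth_rate: float, years: int) -> list[float]:
    """Project revenue forward with simple compound growth, one value per year."""
    revenues = []
    current = start
    for _ in range(years):
        current *= 1 + growth_rate  # apply one year of growth
        revenues.append(round(current, 2))
    return revenues

# Suppose an AI tool claims $100k growing 10% per year reaches ~$133.1k by year 3.
# Recomputing it yourself takes seconds and catches arithmetic slips:
projection = forecast_revenue(100_000, 0.10, 3)
print(projection)  # [110000.0, 121000.0, 133100.0]
```

A quick independent check like this is often faster than auditing the AI's step-by-step explanation, and it gives you a concrete number to compare against.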


Prompts (for copy and paste):

  1. “What sources support this information about AI safety guidelines? Please give me five reports or verified articles.”
  2. “According to the World Health Organization, what are the most common mental health impacts of high-stress work environments?”
  3. “Can you walk me through the steps of how you forecasted the revenue?”


Until next week, stay curious—and have a frightfully fun Halloween! 🎃
