Giantleap Capital's Davos Takeaways: The 3 Biggest Questions in AI
Amandeep Gill (UN), Anna Makanju (OpenAI), Arisa Ema (University of Tokyo), Christoph Winterhalter (ISO) and Yoichi Iida (Japanese Ministry)


By Hollie Slade-Ash

Giantleap was thrilled to join so many thought leaders at the AI House Davos yesterday. Thanks to Yann LeCun, Neil Lawrence, and Seraphina Goldfarb-Tarrant for engaging and thought-provoking discussions.

Here's what we jotted down in our notes app during lively debates between panelists on risks, opportunities and the not-inconsiderable challenge of regulating a fast-moving, revolutionary technology:

How will governments regulate effectively without stifling innovation?

With A.I. being one of the few bright spots in an uncertain and fractious global economy, the G7 are carefully calibrating how to make A.I. safe, open and fair without inadvertently creating a “compliance monster” that could drive innovators to more lightly regulated (and potentially less safe) environments.

  • For now, that means coalescing around soft frameworks like NIST's AI Risk Management Framework and NAIAC recommendations while harmonizing regulation globally… and fast, as many technical and practical application issues remain fuzzy.

  • Longer term, National AI Advisory Committee member Ramayya Krishnan floated the idea of a GAAP-like paradigm for A.I. Christoph Winterhalter spoke about the importance of creating international standards, which could build on product-assurance infrastructure from the analogue world.

What to watch: The EU Artificial Intelligence Act is in its final stage of negotiations. The ink is expected to dry on a consolidated legal text in February/March 2024. Japan took the lead on the G7 Hiroshima Artificial Intelligence Process (2023), which emphasizes the need for international ethical and regulatory collaboration.

Is superintelligence a myth? A.I. as an agent of mankind vs. a dangerous, fast-evolving superintelligence?

  • One of the biggest hot-button issues was how to characterize A.I., with sharp disagreement over its trajectory. Expert panelists vacillated between meeting the immediacy of the moment and checking threats like those recently raised by Anthropic (whose researchers found LLMs could be trained to be deceptive), and searching for a more realistic, less threatening lexicon.

  • After an exciting (and amusing) debate, Meta Chief AI Scientist Yann LeCun, who criticized growing alarmism, and MIT professor Max Tegmark agreed that superintelligence is a long way off.

What to watch: LeCun's Twitter feed https://twitter.com/ylecun/status/1718755068348887110

What do we feed the beast?

  • The thorny issue of what goes into the immense, resource-hungry proprietary datasets that underpin the large language models powering A.I. chatbots continued to bubble up in conversations and panels at the AI House.

  • The need for more white-box testing by academics, researchers and scientists (and the philosophical imperatives of transparency and open-source accessibility) often clashed with the proprietary nature of foundation-model L.L.M.s, the scale of computational resources they require and the nature of enterprise deployments across public/private infrastructure. What's clear is that continued collaboration will be incredibly important.

What to watch: The New York Times' copyright-infringement lawsuit against OpenAI

https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html

