It’s impossible to escape. These days there is intense interest across the business world in the promise and potential peril of generative AI. Author Verity Harding brought her tech and policy experience to bear to add a dose of realism, demystify some of the hype, clarify the stakes for business and society, and make the case that decisions around this powerful new tool can’t be left to technologists alone. Here are four key takeaways from the discussion:
- Companies need to look past the current hype and plan with a long-term frame of mind as they experiment with generative AI. While ChatGPT and its competitors have aroused great excitement and concern in the past 18 months, AI has been around for decades – “The Terminator” was released 40 years ago! It isn’t sentient, even if chatbots provide a teasing simulacrum of humanity with their conversational dexterity and hallucinatory flights of fancy. AI is a tool, and its development and deployment will be the result of human decisions. Companies should think carefully about how to embed AI within their values and principles, not the other way around.
- Corporate “guardrails” around AI use and appropriate regulation can foster innovation, not hinder it. This is a major theme of Verity’s book, “AI Needs You: How We Can Change AI’s Future and Save Our Own”. She cites the example of what she calls the UK’s strict but permissive approach to human embryology research, which permits experimentation in the first 14 days of an embryo’s existence but bans it afterward. Those rules helped make Britain a leader in in vitro fertilisation and helped foster the growth of the country’s life sciences sector. Companies shouldn’t focus just on the risks of AI, which is a recipe for paralysis and inaction, but instead think carefully about what they hope to achieve with the technology and design internal rules and processes that enable firms to minimise risk while innovating and capturing productivity gains.
- Geopolitical rivalry is likely to impede global regulatory cooperation and leave many decisions to market forces. Xi Jinping’s goal of making China a global superpower in AI helped spark a race with the United States and led Washington to restrict Chinese access to advanced US semiconductor technology. The two countries are due to hold their first talks to discuss AI risks and safety this week but have played down any prospect of an agreement. The US and Europe have adopted different approaches, with President Biden last year signing an executive order setting out high-level standards for safety and privacy but avoiding legislation, while the European Union has adopted an AI Act banning activities posing unacceptable risks and requiring risk assessments for systems used in critical infrastructure and essential public services. This patchwork won’t affect most productivity-focused use cases, the applications companies are most eager to deploy. And most AI companies will seek to limit testing of their models by various national regulators and hew to the more lenient American regulatory approach, especially as most big providers are US companies.
- The big risk may be that we overestimate generative AI’s impact in the near term and underestimate it in the long term. There is an arms race taking place among AI companies to amass more chips and computing power and build more data centers, driven by the idea that scale will produce breakthroughs that change everything. Surging stock prices of leading AI players suggest that many investors are buying into the idea. But this might be another bubble akin to the dot-com craze of a generation ago. The internet really did change everything, revolutionising media models and unleashing major productivity gains, but it took more than a decade for those benefits to materialise.
There is still much we don’t know about generative AI. Will open-source models gain the upper hand and foster greater transparency, or will economics dictate the supremacy of closed, proprietary models?
Google DeepMind’s decision not to release the full source code of AlphaFold3, the latest version of its drug discovery tool, may be a significant data point. Will AI enhance the power of Big Tech and reduce the role of governments and regulators? We all have a stake in how these huge questions are decided, and we look forward to debating them with the Leadership Reimagined community in the months and years ahead.