The US-China AI Race: Time for a New Paradigm?!
Image-Credit: telecomreviewafrica.com

I don't think anyone has missed the discussions of the past few months: the narrative surrounding the development of artificial intelligence has come to be dominated by talk of an "AI arms race" between the United States and China (sorry EU, we're not talking to you at the moment). While early discussions focused primarily on economic competition, the conversation has shifted dramatically towards national security concerns, raising important questions about the future of global AI development and cooperation.

The Shifting Landscape of AI Competition

The initial US strategy focused heavily on maintaining its perceived lead through control of advanced GPU computing resources. However, this advantage has proved less decisive than anticipated. Chinese AI models are now achieving comparable results using significantly fewer computing resources, effectively neutralising what was once seen as a key US strategic advantage. DeepSeek and Qwen are prime examples of this.

This development comes at a time when the competition is increasingly viewed through the lens of national security and potential conflict, particularly over Taiwan. The US response has largely focused on implementing "choke point" tactics to limit China's access to advanced semiconductors. However, these measures appear to be backfiring, spurring China to accelerate its drive towards technological self-sufficiency.

Rethinking the Approach?

In a notable shift, US Commerce Secretary Gina Raimondo recently admitted that trying to slow China's AI progress through export controls is a "fool's errand". This admission highlights a growing recognition that the current competition framework may be counterproductive.

The real threat, often overlooked in discussions of national competition, comes not from state actors but from malicious non-state entities. In other words, it is not "bad China" or "bad USA" but "bad actors everywhere". Unencumbered by the constraints of nation states, these actors pose a more immediate threat through the potential weaponisation of AI technologies. The asymmetric nature of AI capabilities makes this threat particularly difficult to address through traditional security measures, and the nature of the technology itself compounds the problem: unlike a heavy and obvious 70-tonne tank, code is difficult to track and control.

Signs of Hope for the World?

The biggest problem with such a rivalry is that both countries could end up in a genuine arms race, which is bad for everyone else. Fortunately, some recent developments suggest the possibility of a more cooperative approach. Then President-elect Trump's announcement on 17 January of a renewed dialogue with President Xi Jinping points towards possible cooperation (let's hope it's not just another provocation).

It is also worth noting that Trump has since signed an executive order easing regulations on AI, with the aim of making it easier for companies to innovate.

I also think it is good news that people are finally realising there is no such thing as an "almighty AI that destroys nations and brings prosperity to all". It is slowly becoming clear that AI is a good tool, often a nice gimmick, but not a revolutionary force that changes everything on its own. Everyone is using largely the same data, the same hardware and the same methods; most of the code and models have been around for a long time, and LLMs are not the answer to general intelligence. There is mounting evidence that much of the hype has been smoke, mirrors and misinformation, even though AI remains a very useful tool.

As more models become open source and training costs fall, it also becomes increasingly feasible for different countries to develop region-specific LLMs that are better suited to local customs, reflect national culture and are better optimised for localisation. The momentum is therefore shifting towards "open democratisation" rather than "proprietary centralisation" - which may be bad news for all those Big Tech investors hoping for the "ultimate lock-in effect".

The Global Perspective

To put this discussion in context, it is important to note the stark contrast in perceptions of AI between developed and developing countries. Surveys suggest that 60-70% of people in Western countries view AI negatively, while 60-80% of respondents in developing countries view it positively. This gap reflects different experiences of technological progress and different expectations of AI's potential benefits. Not surprisingly, many developing countries are actively promoting their own AI ecosystems, with the Middle East in particular investing heavily in energy, infrastructure and, most importantly, attracting talent at a rapid pace. This will add to tensions, as brain drain and talent movement will have implications for what some countries might call "national security".

The Path Forward

For many governments, especially China and the US, the stakes could not be higher. Continued competitive policies and an "AI trade war" create risks in several areas, including:

  • Undermining global stability
  • Actively stalling scientific progress
  • Escalating dangerous technological brinkmanship
  • Threatening the potential benefits AI could bring to humanity

Some Recommendations for Change

There is the obvious path of "prosperity together", but such a path currently seems out of reach. Still, there are some concrete steps that could be taken:

  1. Decrease the dominance of national security concerns in AI policy discussions
  2. Establish robust bilateral and multilateral AI governance frameworks
  3. Invest heavily in detecting and preventing AI misuse
  4. Create incentives for cross-border research collaboration
  5. Implement trust-building measures between nations
  6. Form a global AI safety coalition inclusive of all major AI powers
  7. Redirect focus toward using AI to address global challenges
  8. Open source more models for easier global access

Looking Ahead: What to Expect?

The choice before us is clear: continue down a path of confrontation or pivot toward collaboration. It seems simple - it might even be simple - but it is unrealistic at the moment. Collaboration offers not just the possibility of avoiding conflict but the opportunity to accelerate solutions to humanity's greatest challenges. By choosing cooperation over competition, we can ensure that AI's transformative potential benefits all of humanity rather than becoming another source of global division.

The future of AI development need not be a zero-sum game. Through thoughtful collaboration and shared governance, we can create a framework that promotes innovation while ensuring safety and ethical development. The time for this crucial shift in approach is now.

Lucia de Luca

Managing Director | Harnessing Outreach, Partnerships and Stakeholder Relations to create impact for organisations internationally | Equality Champion | Reputation Advisor | G100 Global Chair | 100 Women of Davos

1 month ago

Thanks Benjamin Talin, you are putting some important topics for discussion on the table. I would have expected some of these ideas to be discussed at Davos 2025, yet everything was framed as a US vs China race. Calling for robust multilateral AI governance is daring, but I hope many will respond to your call. Thanks again for your holistic analysis!
