Game on! The global race for AI domination, WFH hurdles, and David takes on Goliath

Written by Fola Yahaya


Thought of the week: The global race for AI domination

Sadly, the old Chinese curse “May you live in interesting times” has now come to pass with the whirlwind of misanthropic Executive Orders that Donald Trump has unleashed on the world. From the uncharitable suspension of all development aid to pulling out of the Paris Agreement on climate change, Trump 2.0 is attempting to reshape the world order as a zero-sum game for the US.

Along with encouraging oil companies to “drill, baby, drill”, Trump and his tech bro allies are hell-bent on pushing AI into every corner of the economy and government, and on ensuring the US wins the AI race by any means necessary. A key step towards this was the revocation of Biden’s Executive Order on addressing AI risks. That raft of regulations, which included very sensible measures to reduce the risks AI poses to consumers, workers and national security, has now been replaced with a massive vacuum that Big Tech is planning to fill with as much AI as possible.

Wasting no time, Trump, flanked by the CEOs of OpenAI, Oracle and SoftBank, announced the Stargate Project, which commits up to $500 billion over four years to pepper the US with AI data centres. The goal? To decisively shift the balance of technological power away from China. With this new and unfettered government–industrial complex, it’s clear that the US intends to pursue AI dominance with minimal regulatory friction. Stargate is a signal that America is not just participating in the AI arms race but is committed to winning it, treating AI as both a critical national security asset and a cornerstone of future economic strength.

The US isn’t alone in recognising the stakes. Last week, the UK announced an ambitious AI strategy, laying out plans to establish itself as a global AI leader through investment in research, infrastructure, and policy reforms. Across the Atlantic and beyond, governments are racing to carve out their positions in this new world order, understanding that AI is now as consequential as nuclear weapons once were – a force multiplier for military, economic and geopolitical influence.

We can expect more governments to unveil and update their AI initiatives in the coming months as the global AI race heats up. Countries that fail to act decisively risk being left behind, unable to compete in an era where technological prowess defines influence.


Why worker intransigence over WFH may only accelerate AI-driven job losses

The days of most office workers working from home seem to be numbered. Almost every major corporation has dialled back on people 'shirking from home', with some, such as JP Morgan and Amazon (not exactly known for being nurturing environments), now requiring employees to spend five days a week in the office. In the UK, a backlash is brewing in government, with civil servants drawing up plans to strike if forced to come into the office more than a few times a week.

While I understand the arguments for hybrid working, I worry about that old adage, "out of sight, out of mind". At precisely the time when governments and corporations the world over are looking to use AI to increase the efficiency of their respective workforces (i.e. cut jobs), now is not the time to refuse to show up and demonstrate value. This is especially true for the generation of workers who joined the workforce during Covid and for whom an office is some mythical place from a bygone era. Without the immersive experience of office culture, they may be missing crucial opportunities to develop the soft skills and professional networks that have historically been vital for career advancement. Their resistance to office work, while understandable, could inadvertently make them more vulnerable to AI-driven job displacement.

Ironically, the tasks that remote workers claim are more efficiently done from home, such as writing reports, are manna to AI models. ChatGPT and similar AI models can already draft emails, prepare reports and analyse data, but what they're not so good at are the tasks that thrive in a physical office. From spontaneous collaboration, relationship-building and team-building to leadership, being physically present is one of the few domains where human workers still maintain a clear advantage over their AI counterparts.

As organisations struggle with the challenges of managing remote teams, they're increasingly turning to AI-powered monitoring and management tools to ensure they get the most bang for their remote buck. Clearly the next step in this process is to use this data to train the army of AI agents – from call handlers to data analysts – to replace remote workers entirely.

So, the current stand-off between employers and employees over return-to-office policies may ultimately be a Pyrrhic victory for remote-work advocates. While they fight for the right to work from home, they are unwittingly making the case for their roles to be automated or outsourced. In an era where AI is rapidly evolving, physical presence and human connection might be the strongest path to job security.


Of Davids and Goliaths – the implications of tiny models and Chinese ingenuity

French Prime Minister Georges Clemenceau, reflecting on World War I, said that war was too important to be left to the generals. In a similar vein, AI is far too important to be left to Big Tech. So we should all cheer the army of plucky outsiders and researchers who are challenging the incumbent AI mafia. This week it was the turn of UC Berkeley, which released its Sky-T1 AI model. Trained at an astonishingly low cost of just $450 – compared with the $80–100 million that OpenAI’s GPT-4 is estimated to have cost – Sky-T1 achieves performance comparable to early versions of o1, OpenAI’s top available reasoning model.

Let’s unpack this. The good news is that it lowers the barrier to entry, and therefore dependency on US Big Tech, meaning that countries with fewer resources, such as those in the developing world, can build their own capabilities. It also democratises AI research, opening the door for smaller players and academic institutions to compete, which in turn accelerates AI development cycles, enabling faster iterations and improvements. Smaller models trained with fewer resources should, in theory, also reduce AI’s environmental impact (though this will be negated if we’re all deploying and running mini-AI models 24/7).

The not so good news is that it means that powerful AI models are now available to anyone without adequate safety measures or oversight. Anyone can now run their own uncensored ChatGPT on their desktop with only a few clicks of a mouse. Lowering the barrier to entry makes it harder to enforce safety, ethical standards and responsible AI development. As smaller players rise, businesses and governments must prepare for a world where advanced AI capabilities are no longer concentrated in the hands of a few dominant companies. This could spur more diverse and specialised AI applications, but it also demands new governance frameworks to ensure responsible innovation.

A good example of this is the US attempt to cripple China’s AI capability by restricting its access to the most powerful (Nvidia) chips. Despite this restriction, DeepSeek, a little-known Chinese hedge-fund-financed start-up, has just released R1, an open-source reasoning model that rivals OpenAI’s o1 at a fraction of the cost. Built with typically ruthless Chinese efficiency, R1 costs just 5–10% of o1’s API inference price, and its release echoes Sam Altman’s July 2024 warning that US AI leadership isn’t guaranteed.
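To make that pricing gap concrete, here is a minimal sketch of the arithmetic. The per-token price and monthly usage figures are illustrative assumptions for the sake of the example, not official rates; only the 5–10% ratio comes from the reporting above.

```python
# Illustrative cost comparison between an o1-class API and an R1-class API.
# All prices and usage numbers are hypothetical placeholders.
O1_PRICE_PER_M_TOKENS = 60.00  # assumed $ per 1M output tokens (not an official rate)
R1_FRACTION = 0.07             # midpoint of the quoted 5-10% price range

r1_price = O1_PRICE_PER_M_TOKENS * R1_FRACTION

def monthly_cost(tokens_millions: float, price_per_m: float) -> float:
    """Cost of generating the given number of millions of output tokens."""
    return tokens_millions * price_per_m

usage = 50  # assumed workload: millions of output tokens per month
print(f"o1-class: ${monthly_cost(usage, O1_PRICE_PER_M_TOKENS):,.2f}")
print(f"R1-class: ${monthly_cost(usage, r1_price):,.2f}")
```

At these assumed figures, the same workload drops from thousands of dollars a month to a few hundred, which is why a 90–95% price cut changes who can afford to build on reasoning models at all.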

DeepSeek’s advancements go beyond R1. Among its releases is R1-Zero, a model trained without human-labelled data, akin to DeepMind’s AlphaGo Zero. While slightly weaker than R1, its alien-like reasoning patterns raise profound questions about the future of AI: are we seeing the early signs of superintelligent systems that are smarter than us but fundamentally unintelligible to humans? Another key fact about R1 is that it was trained using synthetic data (ironically, generated by ChatGPT), rewriting the rules of AI scaling by proving that efficiency can outmatch sheer size.

DeepSeek’s open-source approach also contrasts sharply with OpenAI’s rapid march towards milking its AI models for as much money as possible. The Chinese seem to be offering a viable vision of AI as a global resource rather than the preserve of the West. This should be welcomed, though try asking DeepSeek for its views on Tiananmen Square for a reminder that there is rarely such a thing as a free lunch.


AI video of the week – Star Wars reimagined


What we’re reading this week


Tools we’re playing with this week

  • DeepSeek (deepseek.com) - a really good ‘thinking’ AI model that rivals ChatGPT at a fraction of the cost


That's all for this week. Subscribe for the latest AI innovations and developments.

So you don't miss a thing, follow our Instagram and X pages for more creative content and insights into our work and what we do.
