Trend Report Pt. I: The Age of Paradoxes
Laurence Van Elegem
Freelance Content Strategist | Trend Analyst | Thought Leadership Support | Content Curator | Corporate Journalist | Copywriter | Podcaster | Author | Communications Expert
I’m a bit late to the party for the obligatory end of year/beginning of year trend overviews, but the life of a freelancer is intense, as I’ve experienced these past few months. A little break was in order and I took it. But my newsletter “Here and Now” is back in business with a Three Part Trend Report.
This first issue is all about paradoxes, the favorite hobby of Socrates, who once claimed “All I know is that I know nothing” (I can relate, mate.)
The beginning of 2025 turned out to be pretty symbolic for the US. Its Northeastern and Southern regions were plagued by historic snowstorms, while California was ablaze with wildfires. It had, literally, transformed into a land of Fire and Ice, while it was simultaneously unclear who was sitting on its Iron Throne: Trump or the Tech industry Zucking (see what I did there?) up to him.
To me, this stark contrast symbolizes one of the most important trends of the moment: the abundance of paradoxes.
Never were we so connected and lonely and at the same time
Amidst an abundance of social media and other technologies designed to connect us, we see tragic phenomena on the rise.
A National Bureau of Economic Research study - comparing footage of pedestrians in New York, Boston, and Philadelphia - found that people walked an average of 15% faster in 2010 than 30 years before. At the same time, lingering in public spaces was down by about half, and fewer people congregated in groups. Simply put, we are moving faster and socializing less. To be fair, the study compared footage from 1980 to 2010, but the findings are in line with other, more recent phenomena.
At the same time, instead of finding ways to bring people together, companies are trying to “solve” a problem that is (partially) caused by technology ... with technology. The most ironic example is perhaps Ev Williams, founder of Twitter, who admitted that he was lonely because he worked too much and created an app, Mozi, to help people foster in-person connections. Replika CEO Eugenia Kuyda, meanwhile, is convinced that her company's AI chatbots could be a powerful tool to build new friendships or even provide emotional support. The wearable AI “Friend” is specifically designed for emotional support and companionship: worn around the neck, it listens and sends messages to help users feel less lonely. And let’s not forget the realistic robot dog from Tombot, designed to help calm people struggling with dementia.
Yet even OpenAI admits that users forming connections with GPT-4o - and its human-sounding voice mode - could benefit lonely individuals, but could also negatively impact users’ “healthy relationships” as well as social norms. What lonely and unhappy people need is more human connection, not tech solutionism and artificial love and friendship - provided by companies that profit commercially from these attachments - which risks isolating them even further from their networks.
Never was it easier for machines to seem human and more difficult for humans to be recognized as such
2024 was the year of announcements in (Large) World Models: systems that do not just learn from text but are trained on photographs, (generated) video and live feeds to understand the physics of our 3D world (read my previous newsletter about that here). Once these systems are fully functional, they will have huge implications for autonomous systems like robots and self-driving cars, as well as for all things XR, gaming and the like. That will be the point where machines really grasp our environment.
If you couple that with the fact that AI is also evolving fast when it comes to “soft” human skills like empathy and creativity, it’s clear that we’re getting closer to replicating humans in a very convincing way. And let’s not forget how AI will soon really “act” on our behalf, with the much overhyped AI agents moving into the market, which are expected to function autonomously and perform complex, independent actions.
The latest example of that is OpenAI launching “Operator” to automate tasks such as vacation planning and restaurant reservations. But there have been quite a few more launches in the past months - both in the consumer and enterprise space - with Salesforce's Agentforce, Microsoft's agent studio, Cursor's AI Agent, Runner H from H (the Paris startup founded by Google alums), Google's Agentspace, Project Astra and Project Mariner, and similar announcements by Amazon and Anthropic. Agents are supposed to be the next new thing in AI in 2025, but let's first see what happens, shall we?
As machines edge ever closer to seeming human, real humans will need a way to identify themselves as such, in ways that go beyond a simple captcha. Just think about how the U.S. military now wants to create undetectable fake online personas that are so convincing other AIs can’t spot them. What could go wrong, right?
Sam Altman’s web3 solution for this "proof of human" identification problem is called World (previously Worldcoin) which verifies humans by scanning their eyeballs with a silver metal orb and offering them a unique identifier on the blockchain. In a recent blog post, World explains that its proof of human (PoH) tools will not only distinguish humans from bots, as they do today, but could help people control a network of AI agents online:
By giving individuals a way to digitally authenticate their humanness, PoH will not only make it easier to distinguish between humans and sophisticated AI agents online but it will also provide a mechanism to limit accounts created—and potential misinformation spread—by AI.
As they become more numerous, AI agents are predicted to work together in what are being called agent swarms or networks. With PoH, such networks will be able to be overseen by a verified human, ensuring that a person can retain control over their agents.
Just to take a little side-track: it’s an absolutely fascinating game to combine all the latest investments of Sam Altman and his company to try to figure out his vision of the future. Here are some interesting ones:
Basically Altman envisions a future where all money and energy goes to a powerful artificial intelligence that will think and act on behalf of a really old – but fit – population that receives a basic income to do nothing, if it is able to prove its humanity. Is this exaggerated? Sure. But his investments do point towards a certain direction.
Never was our technology as cost-efficient as it is over-expensive
Though companies like Klarna love to boast about how much AI has helped them save on personnel costs, it's also clear that the providers of these ‘magical’ algorithms find it very difficult to become profitable. Just to give an example: according to The Information, OpenAI is not expected to earn annual profits until 2029, while annual losses could climb as high as $14B in 2026. Its CEO Sam Altman recently revealed his company is losing money on its $200-per-month ChatGPT Pro plan due to "higher-than-expected user demand".
Interestingly, an investment banker who once dug into OpenAI’s finances stated that it bears "intriguing" parallels with Enron, the American energy company that went belly up after it was exposed for perpetrating one of the biggest accounting frauds in history. "All the signs are there," Riley wrote on Bluesky. "High flying VC darling. CEO with known ethical issues. Predictions about the future that continue to spiral in their grandiosity."
The reason that OpenAI is able to persist, though, is a powerful combination of hype and funding from Microsoft and Nvidia who have become very profitable because they sell AI infrastructure. So the infrastructural layer providers beneath AI make a lot of money and the layer on top of the AI (the companies using it) will - supposedly - make more money by becoming more efficient and productive using it. Interestingly, VC Sam Lessin recently said that most investors believe that OpenAI is either going to zero or infinity.
The phrase “The center cannot hold” pops up in my mind when I think about that: will the center be able to support the top without the necessary revenue and, if not, will it take the bottom with it? "The center cannot hold" seems an extremely fitting description because it is both a line from William Butler Yeats’ poem The Second Coming – which describes the atmosphere of post-war Europe – and the title of a memoir from Elyn R. Saks about schizophrenia.
“Things fall apart; the centre cannot hold” - William Butler Yeats
On a macro level, the expensiveness of AI is even more apparent, its incredible ‘hunger’ driving countries and companies to invest in environmentally taxing energy forms like nuclear (Google, Microsoft, Amazon, Texas etc) and coal (Doug Burgum - Donald Trump’s energy and environment nominee - criticized wind and solar energy and said the country needs more “baseload” electricity from coal to drive the AI race). We also see the human cost, with an increased number of people being laid off because of it (yes, AI is not the only reason) and a growing pressure on employees to perform more and faster now that they are “augmented”.
Of course, everyone is now raving about Chinese startup DeepSeek's R1, which beats the industry’s leading models like OpenAI's o1 on capability, openness and cost. The company said it had spent just $5.6 million training its newest AI model (compared with the hundreds of millions or billions of dollars US companies spend on their AI technologies), and it is 13 times cheaper to run. Cost-effectiveness has become the holy grail of AI, with DeepMind, for instance, working on “light chips,” which would make it more cost-effective to run the Google models. But the bets they are all taking are very risky indeed.
Never did we take so much friction away from individuals, while adding it on a systemic level at the same time
This is perhaps one of my favorite (well, you know...) paradoxes of today. There has been so much talk over the years about digitization taking away friction and helping save customers time. Everything was supposed to happen faster. And smoother. Now, AI is pushing this to a completely new level.
We no longer need to scan dozens of articles, kindly provided by search engines, to find the information we need. Now, we type a question and immediately get the answer. Of course, the convenient answers are also hallucinated as much as 27% of the time and riddled with factual errors in 46% of the cases. But hey, at least it’s fast and easy.
Gone is the friction in information consumption.
AI chatbots and virtual friends and lovers are just as simple and easy-going. Real friends can have a bad day, be mean, be fed up with your bullshit, move to another country... But virtual ones are high-performing sycophants, telling you everything you wanted to hear and more. Like a warm little blanket offering comfort.
Gone is the friction in relationships.
But friction is how humans learn, how they ingest information and how they learn to navigate a real world filled with challenges and obstructions. Imagine being young, having virtual friends that agree with everything you say and think, and then going to school where people are, well... only human. Seen from that perspective, human relationships seem disappointing and frightening. Perhaps you pull away, and decide to spend more time online. Never has the friction involved in making real-life friends been bigger. To me, this is one of the most underestimated evolutions of our time.
Never was our technology as powerful as our systems are fragile
Amidst exponentially evolving technologies, the past year has painfully illustrated how fragile our systems have become. The most striking (pun accidentally intended) example was probably that of July 2024, when a flawed update from cybersecurity firm CrowdStrike led to widespread crashes across millions of Windows computers. Just one small incident disrupted critical services worldwide, including airlines, hospitals, and financial institutions.
I could go on, with more paradoxes like “Rarely has there been more abundance and inequality at the same time”, “Never did we know so much about our (mental) health, yet were we mentally and physically struggling”, “Never have we talked more about the long term future, while investing so little in it” (climate wise, environment wise, society wise, etc.), “Never did we try to regulate technology so much while making it all the more powerful at the same time”, but I’m sure that you catch my meaning.
You may think that this abundance of paradoxes is just a triviality, a little hook to give a trend report an interesting angle, and nothing more. But it also is a true symptom of our time. If two seemingly opposite truths coexist, this both illustrates and creates a lot of tension, complexity and unpredictability. As technology accelerates, creating connections while fostering isolation, simplifying individual lives while complicating systems, and promising solutions that often generate new challenges, an intricate web of contradictions arises.
To me, these paradoxes are not mere curiosities; they are symptomatic of a world straining under the weight of rapid change and unmet expectations. They force us to confront the fragility of our systems, the human cost of technological advancement, and the urgent need to keep our humanity in a system that seems obsessed with efficiency. Understanding these contradictions is the first step in addressing them.
Keep an eye on this space for the next two installments of my trend report!