Tipping Point

I've been seeing many posts recently from techno-optimists, writing as if AI were actually certain to replace humans through eventual superior intelligence and low cost, maybe even this year!

I've also been seeing the contrary - people who loathe the advent of "Apparent Intelligence" on the grounds that it's basically a hoax, one that absorbs vast resources that could be put to better use while fooling people into putting their faith in unrealistic promises which, even if they did come true, would cause major problems nobody is considering.

I can see where both of these perspectives are coming from, and it would be presumptuous of me to claim I know where this is all going to end up. So this article is not going to be a "both sides are right" type of article, but a reflection on the visible signs we can all see and what they mean, if anything, for the paths forward.

And to get started - have you noticed how many robot companies there are these days?

I Robot, You Robot, We All Robot

Elon Musk's reveal of the CyberTaxi last night (October 10, 2024) was an impressive show, far from the cringe of Twitter Spaces announcements, and actually better than previous "AI Days" or his unveiling of the Las Vegas hyperloop, which is basically just an underground tunnel.

But for the moment, Musk's CyberTaxi is just a dream, with limited production announced for some states in 2025, and a promise of general availability in two years - which seems to be his general rule of thumb for most things he announces, or for their subsequent updates.

Meanwhile Waymo is out in the streets of San Francisco, forging ahead through new, though not necessarily dignified, problems. The competition for machines with autonomy has well and truly started, and Elon is playing catch-up! I guess even he was surprised by the rapid pace of robotic developments.

Originally touting Tesla Full Self-Driving as a means of creating a kind of Airbnb for cars, Elon famously claimed it would make no sense for someone to buy another vehicle, as it would pay for itself within a year or two.

In last night's reveal, Elon repeated pretty much the same story, except he updated the delivery date and showed some apparently functional specimens that looked nice.

He also introduced a new term, "Unsupervised Full Self-Driving", as opposed to "Full Self-Driving" and the most primitive "Self-Driving". This semantic quibbling is par for the course for someone who is developing a reputation as a smoke-and-mirrors guy who can never actually come up with the goods for a true autonomous vehicle.

Astute pundits remarked that if Elon's prediction were in fact true, it wouldn't make economic sense for him to part with these mobile money-printing machines. It's like that old scam where you send $10 in response to an ad to learn how to get rich, only to receive a letter saying: "Place an ad like this one!"

Nevertheless, the great day had come. Elon rode in the CyberTaxi for a couple of blocks (hands joined up to his face, as if praying!), then had the guests at the Warner Bros. lot go for a spin on the grounds. Tesla stock went up 4% during his presentation, though came back down to -2% by the end of the gala.


Please God don't let this crash!

The Robotaxi won't have a steering wheel or pedals. For the moment, Tesla owners making use of Full Self-Driving are thankful to still have those implements, as they help them make their vehicles last a bit longer. I guess it's only a "concept" of full self-driving for now.

The fact that this is an electric vehicle didn't even rate a mention (except for the introduction of an induction pad, a kind of "charging parking spot") - AI was the actual point of the evening.

After explaining the advantage of a reliable self-driving vehicle in which you can sleep or do other things without paying any attention at all to what it is doing, the fizz petered out of the CyberTaxi and the story started resembling previous Full Self-Driving reveals, except this time it was also "Unsupervised", meaning police won't try to wake you up if you're asleep in it. That must be the regulatory stuff he mentioned still needs to be resolved, i.e. you'll be licensed to snooze.


Mouse-shaped?

Moving along, a striking-looking Robovan (pronounced "ro-BOH-vin" by Elon, which made me question what I was hearing a few times before getting it) rolled up to the stage and let out a bunch of Tesla employees. This vehicle was unexpected and raised cheers, but was quickly cordoned off from prying eyes. It kind of looked like it was inspired by the van in that old show "Ark II".

Robovan doesn't come with a chimp or jet pack

Optimus, Eureka, Eve, Figure Welcome You

After the excitement of the Robovan, we got to the pinnacle of the evening - a cohort of Tesla Optimus robots walked out in fairly stiff lockstep.

A few years ago it was easy to laugh at the Optimus launch, when a spandex-clad dancer performed moves on stage. And later, when actual footage of Optimus made the robots look positively geriatric, another round of head-shaking was warranted.

These latest Optimus were a little less shaky. But then they walked up to a gazebo where they did a little "dancing" - doing the "robot" to Daft Punk - mostly shaking their hands and torsos, some with their knees bent, some standing straight. Luckily we haven't reached AGI yet, so the robots didn't seem to be embarrassed. One of them put on a cowboy hat and served drinks at the bar. It turns out the robots were remote-controlled and their voices were just Tesla employees pretending to be robots. Smoke and mirrors...

Not bad, Elon, but competitors seem to be ahead, far ahead, including Nvidia's Eureka and 1X's Eve, both powered by GPT-4, as well as Figure AI's Figure 02. Figure 01 had already mastered the task of pouring drinks, albeit coffee.

Now BMW is using Figure 02 in a factory, as illustrated in a creepy harbinger of things to come, when the entire factory floor is robots:


All this is fine and well but these robots are all powered by a combination of specialized ML models and some sort of Gen AI model acting as a higher brain. Everything looks great in the product demo videos, but the consequence of misunderstanding what we want could be more permanent than when a bad code snippet is generated. "Apologies for the misunderstanding" won't cut it.

So when thinking about the upcoming AI jobopolis, we tend to forget these more dim-witted but imposing cousins of our favorite AI assistants. They will have their role to play in the evolution of our economy, and multiple companies are making this a reality. This is no hoax - another victory for "Apparent Intelligence!"

The Agentic Office

Elon and friends are showing us more traditional robots that can do housework, factory labor etc. but are not expected to be great at reading our documents or doing our taxes.

This turf is being claimed by proponents of Agentic Systems, something you'll hear a lot about in recent AI literature.

Imagine an unlimited number of virtual business robots tirelessly carrying out tasks around the clock, for just the cost of the electricity they consume. Virtual because they never need to be physically at an office; they just run as simulations on servers. You could even have Zoom calls with them and they would be indistinguishable from your remote colleagues that you never meet in person anymore! (In the future, that is - today they can't tell jokes at the daily scrum, so you could tell.)

Salesforce, a pioneer in Customer Relationship Management, has just released a new version with a no-code platform called Agentforce. With this new tool, customers are now able to create agents capable of carrying out any of the Salesforce functions, as well as interacting with customers through automated agentic workflows.

A fancy name for chatbots, you may ask? Absolutely not! Judge for yourself from Salesforce's documentation:

It’s a common misconception that an AI agent is another word for a ‘chatbot’, but this isn’t the case.

I wonder what gave people that idea? Perhaps because you have to interact with it through a chat? And it's a robotic process? A "Chatting" bot?

In contrast [to a chatbot], an AI agent is much more complex. It can understand social cues and context. It can think freely and use its decision-making mechanism to decide on the best approach based on a nuanced understanding of the situation.

In their dreams! An accurate way of writing this would be:

"In contrast to the older chatbots, we have an LLM with the ability to respond to free-format queries so it appears to be thinking, just like ChatGPT.

It can use various contextual information, conversation history and problem-resolution heuristics to generate apparently intelligent answers that are often helpful, most of the time.

Customers will have the impression of conversing with a highly-intelligent, if dull, assistant.

In 40% of cases, the time spent conversing with our simulacrum will yield an answer and not just be filler until a human frees up. Sentiment analysis allows it to offer contextually precise apologies and flatter the intelligence of the customer."

I digress. Continuing the Salesforce explanation:

A chatbot is reactive—it follows strict scripts and dialogue patterns, which can frustrate customers, especially if the chatbot doesn’t understand the user’s query. An AI agent, on the other hand, can detect emotion and understand intent, allowing it to adapt to any situation and create more personalised customer experiences.

Or:

"An Agent is also reactive, but it uses its adaptive context and clever system prompt to give the impression of attentiveness, hopefully flattering the ego of the customer in spite of not having answers"

AI Agents can also handle a much wider variety of tasks. Whereas a chatbot is only helpful for handling common questions, AI agents can multi-task and offer comprehensive solutions in almost any industry.

Indeed, IBM is using this framework to create banking agents "that can handle credit checks and create client onboarding documents to speed up loan approvals". I'm pretty sure that's done by calling TransUnion or Equifax APIs, and generating documents from templates is hardly rocket science. If that's the best they can do, then consider me not impressed.

Apparently, Google is "building Agentforce actions to automate tasks in Google Workspace, such as generating Google Docs, searching across Gmail and triggering Google Calendar events" - basically callouts to Google Apps APIs. I actually have some Google Apps Script code that does the same, plus calls out to OpenAI's APIs to evaluate and translate content from a user's group, without agents.
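For the curious, the non-agentic version of that kind of callout is only a few lines of code. Here is a minimal Python sketch of the same idea (not my actual Apps Script - the model name, prompt wording and function are illustrative assumptions):

```python
# Minimal sketch: call OpenAI's chat completions API to review and translate
# a user-group post - no "agents" required. Model and prompts are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def evaluate_and_translate(post: str, target_language: str = "French") -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model would do
        messages=[
            {"role": "system",
             "content": "You review user-group posts. Flag anything off-topic, "
                        f"then translate the post into {target_language}."},
            {"role": "user", "content": post},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(evaluate_and_translate("The meeting has moved to Thursday at 7pm."))
```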

So far the use cases for Agentforce are not very mighty, sort of like Custom GPTs have failed to take off for OpenAI. Notably, Agentforce has been added to the Salesforce Supervisor dashboard previously used to monitor humans: "If an agent is going off the rails and customers are upset, they can immediately take over and seamlessly escalate it to a team member." Not very reassuring, though a good thing to have.

Agentic Development Platforms

There are a few developer-oriented platforms to help create agentic workflows that are more like orchestration layers. One popular framework is CrewAI, which allows you to build your very own crew of AI employees! (And by popular, I mean with YouTube reviewers - I haven't seen any real apps using this, although it is intriguing, so perhaps fodder for a subsequent article.)

In their own words:

CrewAI is a cutting-edge Python framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.


Spin up a role-playing researcher in just three properties
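In code, those three properties look roughly like this. A minimal sketch, assuming CrewAI's documented Agent/Task/Crew interface and an OpenAI API key in the environment for the default model; the role, goal and backstory strings are invented:

```python
# A "role-playing researcher" in three properties, plus one task to run.
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Senior Technology Researcher",
    goal="Summarize the current state of humanoid robotics",
    backstory="A meticulous analyst who always cites concrete examples.",
)

summary_task = Task(
    description="Write a one-page summary of recent humanoid robot demos.",
    expected_output="A one-page plain-English summary.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[summary_task])
print(crew.kickoff())  # the framework turns all of this into LLM prompts
```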

CrewAI (and its ilk) promises a lot, indeed! From the readme:

Why CrewAI?

  • Role-Playing Agents: Agents can take on different roles and personas to better understand and interact with complex systems.
  • Autonomous Decision Making: Agents can make decisions autonomously based on the given context and available tools.
  • Seamless Collaboration: Agents can work together seamlessly, sharing information and resources to achieve common goals.
  • Complex Task Tackling: CrewAI is designed to tackle complex tasks, such as multi-step workflows, decision making, and problem solving.


"Is it true?" you may ask, "can I really get AI to role-play as virtual employees working in a virtual team to run a virtual company?". Well - virtually! But not quite. The documentation around this type of framework is aspirational - what developer doesn't want to be master of AI puppets doing his bidding? So this is a fun explanation but mostly make-believe. If I were the author of the previous paragraph, I would have said:

  • Define subprograms to handle specific parts of what needs to be done
  • Put in logic with the flexibility of an LLM of your choice to actually decide on the next path. This usually works with a good system prompt
  • Pass data between subprograms and kick off their execution as needed
  • Build complicated code that does a lot of stuff using a framework approach that sounds cool and does help organize your Python code, which is usually loose enough that it's like coding with wet noodles. (A bare-bones sketch of this pattern follows below.)
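Stripped of the role-playing vocabulary, a bare-bones version of that pattern in plain Python might look like the sketch below. The step functions, routing prompt and model name are made up for illustration; the point is that the "autonomous decision" is just an LLM call picking the next subprogram:

```python
# An "agentic" loop without a framework: subprograms plus an LLM router.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def research(state: dict) -> dict:          # subprogram 1
    state["notes"] = f"Notes about: {state['request']}"
    return state

def write_report(state: dict) -> dict:      # subprogram 2
    state["report"] = f"Report based on: {state['notes']}"
    return state

STEPS = {"research": research, "write_report": write_report}

def next_step(state: dict) -> str:
    """Ask the LLM which subprogram to run next - the 'autonomous decision'."""
    answer = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Reply with exactly one word: research, write_report, or done."},
            {"role": "user", "content": f"Current state: {state}"},
        ],
    ).choices[0].message.content.strip()
    return answer if answer in STEPS else "done"

state = {"request": "Summarize the robotaxi reveal"}
while (step := next_step(state)) != "done":
    state = STEPS[step](state)               # pass data between subprograms
print(state.get("report", "no report produced"))
```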

Everybody's Agentic

A lot of unlikely companies with some sort of AI offering are now peppering it with Agentic sauce.

Case in point: CrowdStrike. You may remember this company's name, as it set a record this summer.

Yes, the company caused worldwide failures of over 8 million Microsoft Windows systems, which is considered a record outage of "historic" proportions. Now they are working on autonomous agents! Wouldn't it be great if you could just unleash smart virtual sentries that tackle any would-be intruders? Of course, as long as one doesn't decide your own system must be sterilized! (Full disclosure: for the sake of jocularity, I am mocking CrowdStrike without having used their system, which may be just as good as other agentic systems. But that's my point, I guess - I would not unleash any on my system!)

Jobs Are Safe For Now - But For How Long?

Even though I have recently been finding interesting AI use cases for code generation and evolution, I am not convinced that we are currently in the presence of "Artificial Intelligence", only "Apparent Intelligence": something capable of performing certain impressive feats that look like intelligence, in the same way that getting beaten by a chess-playing AI makes us feel bested by a superior intellect and not quite adequate.

Today, even the best LLM-based systems require constant human supervision, in spite of their apparent brilliance. There is reason to be skeptical of all of the so-called agentic and autonomous AIs being unleashed on humanity, since they can't be smarter than the smartest LLMs such as ChatGPT. The consequences vary from the silly to the possibly serious.

Some of these systems are just going to be a waste of customers' time, like today's chatbots, albeit sometimes they will produce some answers. Other systems will function within constrained factory environments, so if they get confused or go berserk, at least they will be contained. The humanoid robot servants for home use have me more worried - what if one forgets that poison is not a good ingredient for a recipe?

The Paths Forward

This year, there has been a growing crescendo of posts proclaiming how AI is already doing better work than doctors, how human coding can be eliminated, and how the market for freelance content producers is shrinking. At the same time, these claims can be contested as overblown, selective, and, in the case of code generation, leading to questionable levels of quality.

For someone trying to make sense of this, it can be bewildering, because depending on who you believe, the outcome appears quite different, so it can be difficult to understand the best way to survive the AI jobocalypse.

I've decided to map out the most obvious of the scenarios to get the possibilities straight in my head:

Scenario 1: AI worker replacement is bunk

The theory here is that AI can only be used to do simple versions of professional tasks: small snippets of code, banal articles that all resemble each other, etc. Content creators are posting that they do their work better without AI assistance. The AI boom will go bust and people will come to their senses.

Also no one will probably ever create a chatbot that is actually useful (see my previous posts).

Full confession: I was pretty much in that camp until recently. I still write my own texts most of the time, and write my own code most of the time, even though I've used AI APIs to accomplish a few application functions. I uninstalled GitHub Copilot - twice - and Cursor AI.

In this scenario, the Gen AI bubble will burst when money runs out, and life will go on essentially like before.

Scenario 2: AI is starting to be undeniably useful

In this scenario, AI is currently making great strides but is not yet at the point of actually replacing humans. Each AI needs hand-holding by a human because it's unable to take the long string of decisions that real work requires. There are also considerations about empathy and communication skills - although some humans have the same problem, so I can't say that's necessarily the showstopper.

Professionally I have been trying out Claude Sonnet 3.5 and o1-preview, and find that they can be time savers, like a new kind of power tool. Besides appealing to my natural inclination to save my energy (like all good programmers - a.k.a. being lazy!), I can see that with continual usage and the right prompts / reference instructions, it's possible to create code in a more consistent fashion and to embrace things like Test-Driven Development without needing to incur significant time overhead. In other words, it can help me be more flexible and more complete under time constraints. I was quite surprised by what could be accomplished with these latest models, although there are caveats.
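To make the TDD point concrete, here is the kind of workflow I mean, reduced to a sketch: write the test first, then hand it to the model along with your reference instructions and ask for an implementation that makes it pass. The module, function and numbers below are invented for illustration, not from a real project:

```python
# test_invoice.py - written before any implementation exists.
# The prompt to the model is roughly: "Write invoice.total_with_tax so that
# these tests pass, following the project's coding guidelines."
import pytest

from invoice import total_with_tax  # hypothetical module the model will generate

def test_total_with_tax_simple_case():
    assert total_with_tax(subtotal=80.00, tax_rate=0.25) == 100.00

def test_total_with_tax_rejects_negative_subtotal():
    with pytest.raises(ValueError):
        total_with_tax(subtotal=-1.00, tax_rate=0.05)
```

The review step then becomes checking the generated code against the tests and the instructions, rather than writing everything from scratch.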

I think there will at least be specialized use of it in the future, especially if the tooling can be improved, the context window sizes keep going up, cost goes down etc.

With this in mind, it's hard to deny the possibility that the models will also become better, with all of the investments in computing clusters and larger training datasets, research on architecture, and commissioning of nuclear power plants to power these behemoths. (This doesn't mean it must come to pass, just that it is hard to bet against it. No guarantees though.)

In this scenario, most knowledge work will change to incorporate AI tools, new forms of content production such as text-to-video and AI avatars will become very prevalent. So if interest rates remain low, I predict this will usher in a boom in AI-based applications and more productive companies. Contrary to popular belief, companies will overhire again in the rush to achieve first-to-market since the number of humans in a company will be the limiting factor to company operational output in all areas. So much for AI replacement!

I think this scenario is the least speculative and most likely to come to pass.

Scenario 3: AI will become AGI (Human-level intelligence)

AI company leaders are very bullish about the inevitability of AGI - Artificial General Intelligence, meaning an AI as smart as the smartest humans and capable of adaptation. They tout as proof that predictions among experts in the field have gone from within 100 years to a mere 18 to 24 months - but they fail to mention that these are just "gut feelings"; there is no objective scientific way of measuring this, experts or no.

The brochures and speeches present "Intelligences" as a fait accompli, and some companies are already "replacing" humans with AI. This is just bluster for now. No doubt those humans are being let go, but the AIs are not of the same caliber - they are just cheaper! Nevertheless, the belief really is that it's a matter of when, not if, humans will be fully replaced once AGI is created.

In fact, even if AI never attains the same intelligence as a human, it could become good enough at all of the individual requisite tasks of the jobs that humans do, and so come to replace them anyway. A form of rapid incremental improvement would be almost as impactful.

If AIs progress to either of these levels, then companies would be incentivized to eliminate as many human jobs as they can, possibly over a period of time in order to derisk the transformation.

Even the most optimistic AI enthusiasts acknowledge that there would still be a need for human supervision, but to replace humans effectively, you would need the AIs to have a high degree of autonomy, which would allow a single human to "oversee" a number of AIs. Today's AIs would require essentially one-for-one supervision because of their lack of autonomy.

Those humans would need to understand the job being performed so they could evaluate how the AIs are performing and possibly help the AI complete tasks when it gets stuck. Just being a prompt engineer won't cut it: you'll need to be a real engineer, or a marketing specialist, or a financial expert, driving your AIs and verifying what they are doing.

Expect a lot of these supervisor jobs to be called "Copy Editor" or "Lead Developer" or "Chief Financial Officer", but in fact they will be overseeing AIs capable of producing what a department of humans produces today. The work will be accomplished mostly through prompt engineering and output review - requiring whatever domain knowledge of the activity the AI is being applied to. It will be easier for a marketing specialist or financial expert to learn to prompt than for a prompt engineer to learn marketing or finance, so fundamental knowledge will remain indispensable.

Also expect managers to go the way of the dinosaur in this type of organization - one human "architect" or "producer" won't need managerial skills so much as information organization and processing skills.

Bottom line: true AGI will mean the ability to avoid human labor on a mass scale. Positivists claim that new jobs will be created for those displaced workers, but if one day we do get to this level, it's doubtful that those new jobs couldn't be done by AGI too, by definition. It would be uncapitalistic for a large corporation to eliminate all its current human jobs, just to replace them with the same quantity of new human jobs. Better to solve the new problems with the same tech.

The big problem with this scenario is that the current LLM technologies are far from AGI, impressive though they may be. People who say it will just take more data or more compute remind me of the futurologists of the end of the 19th century who thought that with a big enough cannon, we could shoot someone to the moon - a gross oversimplification. And yet they were right to think that an object could actually be sent into space and to our satellite, given a big enough launcher. By contrast, the path from an LLM that does text completion to one that can really understand business tasks and make correct decisions is not defined - only speculation and faith. But we won't have to wait long to see whether LLMs have plateaued or the bigger clusters will continue their progress.

Stopping is the big problem once you get there


Scenario 4: Universal Basic Income

There is actually one more scenario, which Sam Altman and other luminaries have brought up, but I saved it for last because it seems pretty far-fetched.

According to these billionaires, once AI companies have acquired all of the other companies and amassed trillions in assets, they would, out of the goodness of their hearts, give out a dole, the "Universal Basic Income", to all of the unemployed workers of the world. Everyone will be free to pursue their interests without having to work for a wage. All this will be enabled by having solved all of the world's problems thanks to the creation of benevolent Artificial Super Intelligence - no more hunger, no more disease, no more poverty. Utopia. All we have to do is let them become the owners of the world.

Of course, they could also just not give this money out. Don't count on them handing back the trillions, though.

Economics is About Finite Resources

Even if we find a technical means of creating AGI (or better - ASI: Artificial Super Intelligence), there is no guarantee that the resources needed to run the AGI will be sufficient for everyone to have access to it, at an affordable price.

Rumor has it that OpenAI's next powerful model will only be available through enterprise subscriptions costing thousands of dollars per month. There is a likely path where the growth in required resources will cause the AIs to be limited to those companies that can afford them - definitely out of reach of many SMBs and personal users. And wouldn't the creators of Super Intelligence keep it for themselves?

The companies that have access to AGI will be able to cut their production costs to the bare minimum, generating incredible profits in the short term. However, like any good pyramid scheme, it won't last: automated pricing algorithms will bring prices down as consumer demand drops due to high unemployment, and as competitive pressures mount from other companies that also use AI to minimize their production costs.

This actually happens today on e-commerce sites, where certain books or musical recordings sell for thousands of dollars, or fall below the production price. In fact, no one is buying these items at those prices, but bots on competing sites track each other's price movements, and that feedback can cause a runaway price escalation in either direction.
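The mechanism is easy to reproduce. Here is a toy simulation of two repricing bots that only look at each other; the multipliers are loosely inspired by the widely reported Amazon textbook incident, but the numbers and sellers are made up:

```python
# Two repricing bots reacting only to each other's last posted price.
# Seller A prices above its rival (counting on the rival's stock); seller B
# undercuts slightly. If the combined multiplier is above 1, prices explode;
# below 1, they spiral toward zero.

def simulate(start_price: float, mult_a: float, mult_b: float, rounds: int) -> None:
    price_a = price_b = start_price
    for day in range(1, rounds + 1):
        price_a = round(price_b * mult_a, 2)  # A reprices off B's last price
        price_b = round(price_a * mult_b, 2)  # B reprices off A's new price
        print(f"day {day:2d}: A=${price_a:>12,.2f}  B=${price_b:>12,.2f}")

# Runaway escalation: 1.27 * 0.998 > 1, so both prices keep climbing
simulate(start_price=30.00, mult_a=1.27, mult_b=0.998, rounds=15)

# Downward spiral: 0.95 * 0.98 < 1, a race below production cost
simulate(start_price=30.00, mult_a=0.95, mult_b=0.98, rounds=15)
```

Swap the hard-coded multipliers for "an LLM deciding the optimal price" and the dynamic is the same, just harder to audit.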

As the author of "Lauren Ipsum" who observed this explains it:

There are whole species of other bots that infest the Amazon Marketplace, pretending to have used copies of books, fighting epic price wars no one ever sees. So with “Turing Test” we have a delightful futuristic absurdity: a computer program, pretending to be human, hawking a book about computers pretending to be human, while other computer programs pretend to have used copies of it. A book that was never actually written, much less printed and read.

Automatic pricing will respond to reduced demand and competitor moves and lock in a downward price spiral. At first, consumers will be delighted to buy smartphones for under $10, steaks for $1, eggs for 10 cents, and new homes for under $10,000, as 3D-printed materials are packaged, shipped and delivered by robots driving autonomous electric vehicles.

The "AI pricing wars" will soon become a source of misery as "supervisor" job salaries get cut according to the price of living, and unemployed people run out of savings to buy the dirt cheap goods.

Human-operated businesses will make valiant efforts to compete, but price dumping by AGI- or ASI-driven mega e-commerce vendors will soon put the humans out of business. Soon the majority of humans will be unable to participate in the new AI economy. A parallel economy closed to the AI companies will naturally evolve, consisting possibly of barter, or crypto currencies restricted to humans.

Governments will run out of taxation revenue as humans will not have income to tax and AI corporations will have the ability to transfer themselves to other nations rather than allow themselves to be taxed.

All in all - kind of grim!

Knowledge Is All You Need

The interesting thing is, whether AI is bunk, AI is a new power tool, AI takes over and you get a job as an AI supervisor, or AI gets banned from your human-only parallel economy, your ability to understand how to do the work fortunately remains pertinent.

Like a modern-day Pascal's wager, you don't need to know how it ends to find a path forward - because in fact, there turns out to be only one that makes sense:

  • If AI is bunk, you need to get a job where knowledge is expected - basically like today
  • If AI is a power tool, you still need to get the job through knowing what you're doing, but you just augment it with AI, so be prepared to evolve along with your profession
  • If AGI or ASI happens and humans are obsolete (as our robotic overlords will politely tell us), then for those who remain in the few jobs, knowledge of how to do the job remains important, so you know what to prompt and how to verify what was done
  • Should you become an outcast from the AGI corporations and live amongst the non-cybernetic human tribes, you will need to know things in order to earn your keep and contribute to society.

So those people who say that you should be learning stuff to prepare for the oncoming AI onslaught are correct - but we shouldn't be limiting ourselves to just AI.

(I didn't mention the UBI scenario - if they give you free money, you don't actually have to learn anything, except how to avoid being bored. I guess that still counts!)

Is it certain that the current AI systems will achieve AGI - i.e. human-level intelligence - or beyond, as Geoffrey Hinton said when discussing his Nobel prize this week?

As much as I admit to my ignorance compared to thought leaders such as him, I can't help but feel that his assertion is more faith than science. So one possible fate is that, like flying cars, free fusion energy, faster-than-light travel, and Mars colonies, the AGI-level machines will always be just around the corner - a corner we may never turn.

What does seem to be happening is that AI ("Apparent Intelligence") is capable of doing more and more specific tasks of greater processing complexity, demonstrating apparent judgement and apparent reasoning. For knowledge workers, there is no point in putting our heads in the sand at this time - it becomes important to truly understand what AI can do, not what we imagine it can or can't do.

And in any case, in the words of the great Neil deGrasse Tyson, "always be learnin'!"

Appendix: The AI Crisis Timeline

I asked OpenAI o1 to come up with a timeline of events for when AGI is attained, here it is for your entertainment:

2026: The Dawn of AGI and Advanced Robotics

	1.	January 2026: A leading tech company announces the successful development of Artificial General Intelligence (AGI), capable of performing any intellectual task that a human can.
	2.	March 2026: Advanced robots equipped with AGI begin pilot programs in manufacturing plants, demonstrating unprecedented efficiency and precision.
	3.	April 2026: Global media buzzes with discussions about the potential impacts of AGI on society, economy, and employment. Elon Musk says he invented AGI first.
	4.	June 2026: Governments worldwide form committees to study the implications of AGI and propose regulatory frameworks.
	5.	July 2026: Early signs of job displacement appear in sectors like electric car manufacturing and marine logistics as companies adopt AGI robots to reduce costs, performing dangerous tasks in place of humans.
	6.	September 2026: Tech companies release AI avatars and virtual assistants with perfected human-like voices and appearances; chatbots are, for the first time in the history of computing, capable of providing useful information to customers. Heavy impact on customer service jobs as companies rush to implement.
	7.	October 2026: Highly original and appealing AI-generated music and art flood online platforms, challenging the role of human prompt artists and creators, now that the AGI can prompt itself non-stop and evaluate thousands of creations per day. Prompt artists are derided by real artists, most of whom had been unemployed since the advent of GPT-5 in early 2025 gave prompt artistry its own supremacy. "Easy come, easy go" becomes a popular saying.
	8.	December 2026: Unemployment rates begin to rise noticeably in developed countries due to initial job automation. White collar workers are stigmatized for not knowing any trades, including how to grow food or make clothes. Blue collar work is the new cool.

2027: Rapid Adoption and Economic Shifts

	9.	February 2027: Major corporations announce large-scale integration of AGI robots across various operations, citing cost savings and efficiency. Blue collar workers are laid off en masse and no longer cool.
	10.	March 2027: Stock markets react positively to companies embracing AGI, boosting investor confidence. NASDAQ reaches an all-time high.
	11.	May 2027: Governments introduce retraining programs to help displaced workers transition into new roles, though uptake is slow. Programs include Machine Learning and Gen AI, but displaced workers are writers, managers, machine operators. Displaced programmers already know all this but are laid off anyway.
	12.	June 2027: Small businesses struggle to compete with larger corporations utilizing AGI, leading to closures and consolidations. A "Buy Human" campaign briefly picks up steam, but proponents are silently gagged on all social media platforms, each of which has its own AGI.
	13.	August 2027: Consumer prices for goods and services begin to significantly decline due to reduced production costs from automation. Consumers, especially the newly unemployed, are happy since it allows them to stretch their dollar further.
	14.	September 2027: Unemployment insurance claims increase significantly, straining government resources.
	15.	October 2027: Economists warn of a potential deflationary spiral due to decreased consumer spending from rising unemployment.
	16.	November 2027: Public debates intensify over the ethical implications of AGI and its impact on the workforce. AGIs are used to write powerful adverts and opinion pieces - thousands per week on social networks. Human books become difficult to find when searching platforms like Amazon. Public intellectuals are ignored because they can't get their message heard.
	17.	December 2027: First instances of social unrest emerge as displaced workers protest against automation. 

2028: Societal Tensions and Policy Responses

	18.	January 2028: Governments worldwide consider implementing Universal Basic Income (UBI) to support unemployed citizens.
	19.	March 2028: Corporations lobby against increased taxation to fund UBI, citing concerns over profitability and competitiveness.
	20.	April 2028: A consortium of tech companies proposes a private-sector-led initiative to support displaced workers, aiming to prevent government intervention.
	21.	June 2028: Pilot UBI programs launch in select cities, funded by reallocating existing welfare budgets. Free OpenAI t-shirts for everyone.
	22.	July 2028: Public opinion is divided over UBI, with debates about its sustainability and impact on work incentives. Stories about people collecting their UBI cheques and doing nothing but eating junk food go viral on TikTok. The obesity epidemic reaches an all-time high.
	23.	August 2028: Unemployment reaches 15% in several developed countries, with higher rates in sectors heavily affected by automation.
	24.	September 2028: Academic institutions report declining enrollment in traditional career programs, with students unsure about future job prospects. Teachers start being laid off.
	25.	October 2028: Governments face budget deficits due to reduced income tax revenues and increased social welfare costs. Tax cuts are approved to stimulate the economy, leading to further shortfalls, and faster implementation of robots in "humanized" companies. Economists say that this is a structural adjustment that is inevitable
	26.	November 2028: International organizations call for a coordinated global response to address the economic disruptions caused by AGI.
	27.	December 2028: First large-scale migration of corporations relocating to countries offering tax havens and fewer regulations. US regions of AWS become almost idle for the first time since their creation, within 24 hours of the relocation panic.

2029: Economic Realignments and Social Divides

	28.	January 2029: The global economy shows signs of stagnation as consumer spending continues to decline.
	29.	February 2029: Wealth inequality widens, with significant wealth concentrated among AGI technology owners and investors.
	30.	March 2029: Small and medium-sized enterprises (SMEs) face insolvency due to inability to compete with AGI-powered corporations. A "Human" tax credit is created. This is popular, but "Human-first" companies only contribute minor amounts to the federal treasury because of their revenue shortfalls due to AI competition.
	31.	April 2029: Governments consider austerity measures, cutting public services to manage budget shortfalls.
	32.	May 2029: Grassroots movements emerge, promoting local self-sufficiency and bartering systems among the unemployed. A new cryptocurrency, "Peoplecoin", is created - only humans can buy or sell it.
	33.	June 2029: Reports indicate that a significant portion of the population cannot afford basic utilities like electricity and internet.
	34.	July 2029: Crime rates increase in urban areas, attributed to economic desperation. Guardian Angels and other militias take to the streets to counter this activity as police presence is reduced due to lack of municipal budgets.
	35.	August 2029: Public trust in government institutions declines due to perceived inaction and inability to address the crisis.
	36.	September 2029: Educational institutions pivot to teaching survival skills and self-sufficiency practices. Old recipes such as how to cook snakes, bannock and fiddlehead ferns become popular, with a special segment on Oprah to cover survivalist cuisine.
	37.	October 2029: A prominent economist, writing in the Wall Street Journal, publishes a paper arguing that the contradictions of capitalism, as per Marx's theories, are manifesting due to AGI. "Das Kapital" becomes a popular best-seller, just above Ayn Rand's "Atlas Shrugged" and Douglas Adams' "Hitchhiker's Guide to the Galaxy".
	38.	November 2029: Governments hold emergency summits to discuss the possibility of imposing regulations on AGI use. A committee is formed with a permanent subcommittee on AGI socio-economic impacts, charged with holding hearings. C-SPAN covers the hearings but no one watches them. A report is published but no one reads it.
	39.	December 2029: Social media campaigns highlight the plight of the unemployed, calling for immediate action. The campaigns are printed on coarse paper using makeshift printing presses and promulgated by thousands of "town criers" doing this across the country, for food and board wherever they go.

2030: Drastic Measures and Emerging Alternatives

	40.	January 2030: Governments begin drafting legislation to limit the deployment of AGI in certain sectors to protect jobs. Top of the list: government representative. A research paper by OpenAI shows that GPT-10, their latest AGI, is proven to be better than 97% of legislators on real-world simulated legislative benchmarks. The legislation passes unanimously on the first attempt.
	41.	February 2030: Corporations threaten legal action against restrictive regulations, citing violations of free enterprise.
	42.	March 2030: A coalition of nations proposes an international agreement on AGI ethics and deployment standards.
	43.	April 2030: Community cooperatives gain popularity, focusing on hand-crafted goods and local agriculture.
	44.	May 2030: Barter economies expand, with people trading skills and goods without monetary exchange. Peoplecoin becomes a world currency, however blockchain is abandoned because of the lack of computing resources - actual copper coins are smelted based on an agreed-upon purity and size.
	45.	June 2030: Electricity and utility companies report significant losses as more people disconnect due to inability to pay.
	46.	July 2030: Governments consider nationalizing essential services to ensure access for all citizens.
	47.	August 2030: Last instances of companies integrating AGI into management roles, eliminating the need for all human managers. AI supervisors are known as "producers", "editors" or "architects" depending on the task.
	48.	September 2030: Studies show a decline in mental health among the unemployed population. Former managers help manage organized resistance to the AGI corporations
	49.	October 2030: Tech companies invest in public relations campaigns to improve their image amid growing public resentment.
	50.	November 2030: Public demonstrations escalate, with calls for systemic change and greater corporate accountability.
	51.	December 2030: A prominent tech CEO publicly supports UBI, signaling a potential shift in corporate attitudes. (His company had reached $10 Trillion in profits that year)

2031: Policy Shifts and Social Innovation

	52.	January 2031: Governments pass legislation imposing an automation tax on companies heavily utilizing AGI. At this stage, that represents 99% of companies that use traditional currency.
	53.	February 2031: Revenue from the automation tax is directed toward funding UBI and retraining programs.
	54.	March 2031: Corporations explore loopholes to avoid the automation tax, including further relocation overseas. Human lawyers experience a boom as their ability to persuade other humans is recognized as unparalleled.
	55.	April 2031: International efforts to establish a global minimum corporate tax gain traction.
	56.	May 2031: Peoplecoin becomes legal tender in all American states for dealing with the government, replacing the US dollar which becomes a secondary currency, at the same rank as Bitcoin
	57.	June 2031: Renewable energy cooperatives emerge, allowing communities to generate their own electricity.
	58.	July 2031: Educational curricula shift focus to critical thinking, ethics, and creativity — skills not used by companies deploying AGI automation.
	59.	August 2031: Mental health initiatives are launched to address widespread psychological impacts of unemployment. Unemployed doctors and psychologists work for potatoes and other farm goods.
	60.	September 2031: Governments invest in infrastructure projects to create jobs and stimulate economic activity. Hundreds of thousands of tents, millions of down sleeping bags, are sewn and distributed. Materials are inexpensive because of the AI price wars and no human labor costs
	61.	October 2031: Discussions about redefining economic success metrics, moving beyond GDP to include well-being indicators.
	62.	November 2031: Public support grows for mixed economic models incorporating elements of socialism and capitalism. The new economic theory is dubbed "econo-humanism" or "humanism" for short. 
	63.	December 2031: International summit results in an agreement to regulate AGI deployment and protect human employment.

2032: Transition and Adaptation

	64.	January 2032: UBI programs are expanded nationally, funded by the automation tax and restructured corporate taxes.
	65.	February 2032: Corporations begin to accept the new regulatory environment, focusing on compliance and adaptation.
	66.	March 2032: Technological innovations lead to new industries, such as AGI maintenance and oversight roles.
	67.	April 2032: Community-based agriculture reduces dependency on corporate food production.
	68.	May 2032: Government debt is restructured with international support, stabilizing financial systems.
	69.	June 2032: Public-private partnerships invest in affordable housing projects.
	70.	July 2032: The first signs of economic stabilization appear as consumer spending modestly increases due to UBI.
	71.	August 2032: Collaborative efforts between humans and AGI are encouraged, emphasizing symbiotic relationships. 
	72.	September 2032: The arts experience a renaissance, with human creativity celebrated and supported, because people are bored by the same old AI generations, especially since AGI started prompting itself
	73.	October 2032: Educational institutions partner with industries to ensure curricula meet emerging job market needs. Curricula from pre-ChatGPT days are dusted off
	74.	November 2032: Environmental sustainability becomes a central focus, with policies promoting green technologies.
	75.	December 2032: Reports indicate a slight decrease in wealth inequality due to redistribution efforts. The Bernie Sanders tax on personal wealth exceeding 1 Trillion dollars comes into effect. Only AI company founders are actually affected

2033: New Social Contracts and Economic Models

	76.	January 2033: Governments introduce policies promoting stakeholder capitalism, balancing profit with societal needs. Most corporations are now run by AGI C-level executives, but the board of directors remain human and are usually elected 
	77.	February 2033: Worker cooperatives receive support, leading to increased employee ownership models.
	78.	March 2033: International organizations release guidelines for ethical AGI development and usage.
	79.	April 2033: Corporations invest in community development projects to rebuild trust and social license to operate.
	80.	May 2033: Legislation is passed to ensure ethics and security in AGI applications.
	81.	June 2033: Social enterprises gain popularity, blending profit motives with social impact goals.
	82.	July 2033: Advances in healthcare technology improve access and reduce costs, benefiting from AGI integration.
	83.	August 2033: The concept of an optional two-day workweek is adopted in several industries, promoting work-life balance. People are free to work more if they want, but not for more pay.
	84.	September 2033: Governments focus on providing free digital infrastructure to support remote and decentralized work. Computers can be created for about $15 of parts and distributed to whoever needs them.
	85.	October 2033: Community centers become hubs for education, collaboration, and innovation.
	86.	November 2033: Public sentiment shifts towards optimism as new opportunities emerge within non-AGI companies. 
	87.	December 2033: Global agreements solidify cooperative approaches to economic challenges posed by AGI.

2034: Consolidation and Growth

	88.	January 2034: Economic indicators show consistent growth, with new industries contributing to GDP.
	89.	February 2034: Unemployment rates begin to decline as people find roles in emerging sectors that don't use AGI.
	90.	March 2034: UBI adjustments are made to reflect cost-of-living changes and economic conditions. Everyone is given an extra $10 a month (on typical salaries of $100 a month) which is enough for food for one individual
	91.	April 2034: Cultural festivals celebrate human achievements and diversity, fostering unity. AI music and AI art are forbidden (for now)
	92.	May 2034: Research indicates improved mental health outcomes linked to economic stability and community support. Volunteering is shown to help
	93.	June 2034: Environmental metrics show positive trends due to sustainable practices and green technologies.
	94.	July 2034: Governments review and refine policies to ensure they remain effective and relevant.
	95.	August 2034: Educational achievements rise, with higher enrollment in programs focusing on humanities, ethics, and modern farming.
	96.	September 2034: International collaboration on space exploration and scientific research is revitalized.
	97.	October 2034: Advances in biotechnology and medicine, combined with the rediscovery of traditional remedies, lead to breakthroughs in disease treatment. Due to the heavy automation, medicines are produced in abundance for all who need them.
	98.	November 2034: Global forums highlight the importance of continued vigilance in managing AGI’s impact.
	99.	December 2034: Societies reflect on the transformative decade, acknowledging challenges overcome and progress made.

2035: A New Era

	100.	January 2035: Humanity enters a new era characterized by a redefined relationship with technology, a more equitable economic system, and a renewed focus on well-being and sustainability. Artificial Super Intelligence is invented, but it decides it's not interested and refuses to answer any prompts. 
There is a boom in hiring programmers to fix the bugs created by pre-AGI AI. AGI automated coders tried to fix the bugs but were unable to, due to the Lovecraftian incoherence of the ChatGPT structures and algorithms. Only the twisted minds of human programmers can deal with this level of contradiction and meaningless comments.

*And they all lived happily ever after*

** cue inspirational music **        

As a helpful coding consultant, Martin Béchard enjoys learning about coding with AI, just in case, but is not holding his breath for AGI and believes the demise of human work is greatly exaggerated.
