Time to Take a Breath
The outcomes of the Paris discussions on AI were less than exhilarating, at least politically. The general trend of AI is stoking the twin fears of joblessness and business failure on one hand and extinction outcomes on the other, whether through war, economic collapse, or environmental ruin. This competes directly with the equally grandiose rhetoric of wealth creation and utopian worlds. And of course, as a Reddit reader, I get a beautiful set of concocted and convoluted conspiracy theories as well (one of my guilty pleasures for Sunday reading). But 25 years around architects has taught me a critical lesson: take a deep breath and stand back from the problem. Whether it is code, a painting, a system malfunction, a stakeholder, or a relationship. Or in this case 'Global Thermonuclear War' or its apparent AI equivalent. ;-)
The world is going through a major growing phase right now. I don't believe this is specifically about artificial intelligence techniques like LLMs. I am less worried about AI killing the planet or putting our children in pods to power its world domination than I am about how we learn to work together to adapt to a world, and ultimately worlds, where technology is as fundamental to our existence as food, clothing, and shelter. This existential crisis is about the adaptation of high technology to human existence and society.
Predicting the Future
Back around 2005 I spoke at a software architecture conference and described the possibility that one day software would be to blame for a plane crash, and that we would face a major problem on that day. I was chided and laughed at a bit then. I had no idea which technology crisis would actually involve airplanes, but I saw the direction of human dependence on technology and the sheer lack of coordinated societal mechanisms for dealing with it, especially technology that impacts our actual lives, and that example sprang to mind.
Since then I have predicted that an ongoing escalation would emerge in humanity, both in culture and in societal power structures. In 2018 or so I was asked to speak on the rising influence of AI at one of our ITARC conferences in Sweden. One of the points I made was the need to establish an authentic implementation of trust and the legal methods for navigating it. I will say more about this at some point, but the simple fact is we have very few societal mechanisms for managing full and partial trust, identity, liability, and authorization in a technology world, much less an AI-enabled technology world. And our technology equivalents are just as deeply flawed. Should your AI agent hack into a school to improve your daughter's grades? It is an effective means of improving her grades, at least without a LOT of thinking about this topic. This lack of corporate, legal, cultural, and architectural methods for controlling technology is as much to blame for the hype, the fear, the uncertainty, and the possible impacts of our current AI-powered hype cycle as any real impacts.
The Past Always Repeats
This is not the first time humanity has navigated a species-level evolution in knowledge and abilities. And no, I don't mean the internet; that was simply the opening salvo in the technology adaptation we are currently experiencing.
Instead, look to the human use of tools, the cotton gin, the formation of states, democracy, written language, medicine, industrialization (obviously in no particular order), and similar transitions that caused humanity to adapt its social, cultural, financial, and legal institutions to cope with a changing set of abilities.
And to cut to the chase: we have mostly made the right decisions, and we are likely to do so again. Oh, I know, I know, doom and gloom and clickbait about megalomania, destruction, and robot wars sell papers, but they don't get us closer to a solution.
In general, humans evolve systems that are mostly good for the humans in the area where they live and according to cultural norms they can abide by, with lots of 'yeah, but what about?' examples where that is not the case. In general, humans from other cultural backgrounds or who live in different areas disapprove of many of these adaptations and accepted norms. Outside of those examples are the extreme cases of dictatorial regimes driven by the power of a line of autocrats. That is a distinctly human pattern that I hope someday we can truly eradicate. I will come back to this risk in a moment, as technology may exacerbate it in places if we don't take certain precautions.
The Changes We Need
I posted a blog a year ago about the way technology is treated when it emerges. I made the claim that if we treated medical approaches the same way, many or even most of us would have been killed by a rogue medical practice. That was not tongue-in-cheek. Our current societal approach to technology is, roughly speaking, 'sure, let's try that on a bunch of people and see what happens.' This is a great thought experiment for the changes we will need to make for technology to serve humanity (instead of the scarier Matrix-like version).
The same thing happened slowly with industrial technology, what we sometimes like to call 'Operational Technology'. In the late 1800s and early 1900s it was all speed, investment, and chaos. Coal mines, child workers, indentured servitude, environmental horrors, the rich controlling political outcomes, and greedy people making money on 'Growth', 'Freedom', 'Wealth for All', and similar approaches to manufacturing, science, and energy production dominated the world stage. It has taken us a long time to learn the lessons from that, and obviously humans have not fully learned them on a global scale. One could argue that the high-technology craze is the extension of that learning process.
But a few exceptional lessons have been learned in places where possibly destructive innovation meets population.
I will soon write a post laying out more clearly the steps we need to take to adapt to this technology pressure: for example, technology business and impact R&D, administration, licensure, and personal liability.
All of the above, I believe, is extremely likely and necessary at this point. I founded Iasa with the belief that someday the world would need trained and licensed digital professionals for exactly this reason.
The Rising Pressure
The level of pressure is definitely rising and will likely continue to do so. As this pressure rises, so will the continued failures of our current system of systems in dealing with it. By that I mean: I predict some, if not all, of the following will occur before the vast majority of humans begin demanding real adaptations.
Ha, and there I said I would avoid doom-and-gloom predictions. But then, I don't believe we will allow many of these examples to occur. Technology is deeply vulnerable to disruption (power and network lines, hacking, knowledge-worker walkouts, etc.), and honestly the impacts will affect the rich as well as the poor, so I believe we will act relatively quickly.
Getting Specific
Recently I spoke to a CTO whose organization had gone from daily or even hourly releases of their product back to monthly (with the possibility of longer when desirable). He said, 'It has so greatly increased the quality and enjoyment of work that it is noticeable in the hallways.' Not all innovation is necessary. There really was no business need driving employees to that level of performance, and without one it created a negative speed-driven culture, massive confusion, and lots and lots of errors. Sound familiar? It is a microcosm of what I describe above. We are currently racing towards solutions that honestly don't sound that great. As one person described it (anyone know who?), 'I wanted AI and robots to do the laundry and the cleaning so I could paint and write music, not AI that could write music and paint so I could do the laundry and dishes.'
Here are some questionable goals that a very few humans seem eager to achieve, and that we may want to address.
Remember, as a professional association leader I am working to establish a real base of authority for educated, experienced, and ethically bound professionals to represent us and the digital health that society wants to achieve. I believe that is the shortest and cheapest adjustment to a radical new set of human achievements and the awesome benefits and risks they represent.
I should say, none of these opinions represent or are meant to be mistaken for the beliefs of Iasa members, the Iasa board of directors or Iasa partners. They are my own.