Enterprise architecture roadmap: Applications in the AI era
As-is enterprise architecture requires re-examination considering recent advances in AI. Such an examination goes far beyond technology into the domains of philosophy, biology, ethics, and astronomy. This summary charts one such examination, scary and wondrous, and attempts to baseline the conversation so that we might discuss the details in future.
It is hard for me to overstate the precipice we sit on as an industry. During my 35 years in enterprise software, I thought the PC was cool, the internet was a revolution, and the cloud was awesome, but all these technologies are just enabling foundations to the main act - AI. AI is the next step function along the history of human technology innovation - fire, the wheel, electricity, and the splitting of the atom.
I last wrote about AI in Sept 2020 with a piece whose working title was “The last of the coffee making apes.” In that article, I wrote of the emergence of an enterprise consciousness, and at that time my projections of this reality were far off from where they lie now. Much of the current discussion revolves around artificial general intelligence (AGI) and GPT. Arguing over whether we have achieved AGI, whilst important, should not overshadow the immediate impact of AI and the actions we should take in response.
As I look around at much of the day-to-day business of enterprise software I feel like Sarah Connor from Terminator when she screams at her disbelieving captor
“You're already dead, Silberman. Everybody dies. You know I believe it…”
In the immediate term, this death is metaphorical. The advice I would give you is to stop whatever you are doing right now and forget everything you believe about your software and operating models. The half-life of all this stuff will now be measured in months and years, not decades. The change is already underway and will accelerate very quickly.
A new application architecture
Re-imagine your application architecture around AI as the core component. From a capability perspective, this means you will need access to both private and public AI models. You will then need a mixer that can take both public and private inputs and output business value, whether that be customer advice or a new program for your finance system. This mixer will obscure all your IP, including customer data, from public models - a kind of model firewalling.
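As a minimal sketch of this mixer-plus-firewall idea: route sensitive work to a private model, and scrub internal identifiers before anything crosses the boundary to a public one. The model clients and identifier patterns below are hypothetical stand-ins, not a real API.

```python
import re

# Example internal identifier patterns (hypothetical): deal IDs and account IDs.
SENSITIVE = re.compile(r"\b(ACME-\d+|ACCT-\d+)\b")

def private_model(prompt: str) -> str:
    """Stand-in for a call to an in-house, privately hosted model."""
    return f"[private] {prompt}"

def public_model(prompt: str) -> str:
    """Stand-in for a call to a public model API."""
    return f"[public] {prompt}"

def firewall(prompt: str) -> str:
    """Obscure internal identifiers before the prompt leaves the boundary."""
    return SENSITIVE.sub("[REDACTED]", prompt)

def mix(prompt: str, contains_ip: bool) -> str:
    """The mixer: sensitive work stays on the private model; everything
    else may go public, but only after the firewall has scrubbed it."""
    if contains_ip:
        return private_model(prompt)
    return public_model(firewall(prompt))
```

In practice the routing decision would itself be a classifier rather than a boolean flag, but the shape of the layer - route, redact, then call - stays the same.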
Our transactional systems can stay for now, as GPT-4 is not great at math, but the primary interfacing of these systems, together with all our knowledge systems, should be hardwired directly into your private models. Large language models will be great for translation, integration, and ingestion of data of all kinds. These private models will be more cost-effective and specialized than large public ones. Private models will be our most precious jewels, housing our differentiated digital DNA.
This change in the application level will drive further changes in the infrastructure layer as the true characteristics of these new applications emerge. For now, your existing infrastructure will do.
Did I mention you will also need to rebuild your systems of engagement? Voice, natural language, and imaging will be increasingly used as multi-modal models gain traction. You get the picture. I would get GPT to draw me a diagram, but I wanted to at least enjoy my last AI-free post (spell check excepted).
Watch for cracks
Alongside the right data and training, the most important discipline with models is to think about safety and reliability. In the next year, we will see increasing sophistication in the levels of fraud with deep fakes and sentiment manipulation being universally available to bad actors via AI. There is a big responsibility to protect the vulnerable and this is where technologies like model watermarking, data lineage, and the model firewalling I have already touched on will become important. It starts with securing our customer data and intellectual property from leakage into public models.
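Data lineage can start as something very concrete: a provenance record per training document, so we can later answer "which data shaped this model, and were we permitted to use it?" A minimal sketch, with hypothetical field names:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class LineageRecord:
    """Provenance for one training document: enough to answer, later,
    which data shaped a model and whether we had the right to use it."""
    source: str        # where the document came from, e.g. a system name
    content_hash: str  # fingerprint of the exact content used in training
    consent: bool      # were we licensed/permitted to train on it?

def record_lineage(source: str, text: str, consent: bool) -> LineageRecord:
    """Fingerprint the content so the record survives copies and renames."""
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    return LineageRecord(source, digest, consent)
```

A real lineage system would also track timestamps, transformations, and which model versions consumed each record, but the hash-plus-consent core is where leakage protection begins.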
Another important discipline will be management and monitoring. At this point, we cannot debug and trace errors in models. When we detect problems in a biological infant, we can develop strategies for managing those problems. Models are no different in this regard. Today we call these hallucinations, but we need to be more scientific in our understanding and management of them. Improved training should reduce hallucinations, and developing monitoring, debugging, and control capabilities will better equip us to direct model behavior.
There is a tradeoff between time-to-market and quality when we play in competitive markets and mistakes get made. We are human! Things break, bots say the wrong thing, and airplanes will regrettably fall from the sky just as they do today. We must learn from these mistakes and build these learnings into our reliability and safety models to a level that eventually exceeds existing aircraft and nuclear standards.
What about me?
It is impossible not to start iterating through the unlimited number of new opportunities this technology will enable and the benefits it will bring to humankind. But as we examine the use cases one cannot ignore the potential negative human impacts as well.
How will this technology impact you? My guess is, over time, a lot! Roles will change; friends who were experts in their fields may need to retool; your children may have to rethink their careers. Up until a month or so ago, I might have steered my high-school-aged kids onto the path of being a programmer; now I would tell them to think twice. Perhaps being a tester is better! New criteria should be developed for future choices, roadmaps, staffing plans, and training.
Today’s models are young, and we will become teachers and parents to them, whilst at the same time harnessing their eventual superhuman abilities for our own utility. To facilitate this, we will build the infrastructure around them and should instill them with values and goals that will serve all of us as they grow in capability.
As I have been thinking deeply about this technology, I have been through a roller coaster of emotions. In the first instance, it is an easy trap to fall into black-and-white thinking about AI. For me, starting the journey of understanding has helped me find at least an uneasy readiness for the massive change on the horizon. Once we can help ourselves in this way, we can help others make the transition.
Buckle up for hyper-drive.
All we have talked about up until now is what we need to do before takeoff.
A stated goal of prominent model builders is to build models that can make better models. The self-improvement of intelligent models will inevitably lead to an intelligence explosion. There is a clear existential threat as digital AI develops into a superintelligence whose capability will far outstrip the capabilities of our biological intelligence. The work that we do now in training and safety will have a significant impact on how things go at this point.
The forces of commerce and curiosity will not allow for a slowdown but should be tempered with the implementation of regulation with a focus on reliability and safety. I think about how the post-war generation must have viewed the specter of nuclear war and am grateful that this generation put in place a regime that has mostly managed the harmful potential of this technology.
Our cosmic destiny
To confront the true impact of AI one is inevitably led to the big questions. What does it mean to be human? Can digital life suffer? What is our destiny? On the last question, by best estimates, a short 5 billion years remain until Earth can no longer sustain biological and digital life. If we follow the laws of natural selection, we will need to adapt to survive, so that our descendants, trillions of years from now, can look back at their primitive ancestors and be inspired by how we used intelligence to eventually populate the known universe and bring the joy and sorrow of life to trillions of souls, both biological and digital.
Bringing it all back home
I realize this is a lot to take in for some and beyond your day job, but the genie is out of the bottle. The best advice I can offer is once again from Sarah Connor -
"Every day from this day on is a gift. Use it well."
If you made it this far, I assume this article was of interest, so please let me know what you think in the comments and repost.
Thank you,
Ajd
#enterprisearchitecture #aiarchitecture #apis
Whilst private AI models may appear more cost-effective, since the domains they cover are more specialised and narrow, much more effort will have to go into achieving a high level of accuracy in the answers (especially in critical domains such as law or medicine). You don’t want an LLM hallucinating about a medical condition! I realised that a few years ago when building a specialised ML solution based on IBM Watson for a university. Granted, the sophistication of AI has accelerated since then, but the fundamentals of solutions like GPT or Watson are strikingly similar.
Andrew, thought-provoking article. So where does one allocate capital?
Good article, Andrew. You make an interesting point about private and public models being blended to achieve business outcomes. How will we ensure models are segregated in such a way that customer data, PII, financials, etc. don't get exposed inadvertently? Significant challenges ahead, and that's great.
I think you're spot on, Andrew. The certification of models will also become critical and will be very different from how educators certify people today. This stems from the lineage or provenance of the learning data and, conversely, the need to undermine the models of others, especially for the military.
I agree EA does need a reframe. As digitisation increases and evolves, and event-driven architecture becomes the norm, any architecture change will simply be an event. Big questions do need to be asked; the unfortunate thing is we are using big data to address those big questions. Big data is about the past. That's fine if you can confidently say the future is going to be similar to the past. With what we know and what's on the horizon, there is an opportunity to do the reframe. The question is: will we?