Office Help

In a previous article, I kicked off an idea about dealing with AI integration into legacy systems. Specifically, I stated that "AI", whatever it may mean to you or in your world, as a capability of your organisation, must deal with the problem of "interoperating with organisations' legacy systems while implementing their business rules and tying it back down to performance, security, governance [...] different topic for another post".

This is that 'another' post. But before I begin, a minor digression - there is a well-known business-tech-dev tension: maximising profit from the existing legacy system vs. developing new tech that replaces the legacy system and, with it, the business model that hinges on it. This tension is not easily resolved. Kodak pinned their hopes on film; Blockbuster on rental fees. Both are out of business.

Rest assured that resolving it is also usually actively opposed, out of concern that, in the worst case, the business may not survive the required business model adjustment, while in the best case it will have to develop a new business model out of the ashes of the old one and, only then, once resurrected, re-do the hard work of understanding that business model. Rest assured, too, that this is precisely what every business will undoubtedly have to do, whether it would like to or not; the only thing that can be controlled is the pace at which the adoption is managed. Some will achieve their own pace, others will have a pace thrust upon them (and yet others will attempt to set their own pace and mismanage the attempt).

The reason I can be so carefree, untroubled, nonchalant, and blasé about AI disruption is that we are a software company. And software companies - whether licensed product delivery studios and conglomerates, online-based and heavily own-tech-invested enterprises, or consultancy firms like ours - do not suffer much from this problem (and certainly not as much as other businesses). We do not feel the disruption as an existential threat because the underlying operational model of the business already incorporates resilience to technological advancement. Software companies have simply learned to expect and adapt to technological game-changers: learn the new rules of the game, then convert that understanding into value-add sales that enhance products, provide additional services, or increase productivity (or, in our case, pass on to our clients the knowledge of how to reap these benefits by disassembling and re-assembling business processes to accommodate the new rules of the game). In other words: increase the efficacy of business processes in order to increase the overall efficiency of the business.

Which brings us back to AI and legacy system integration. This engineering challenge, as complex as it is, is made even more so by what seems to be an even bigger problem - the problem of organisational transformation. Do not mistake my statement for glibness, or for downplaying the enormous task of engineering and delivering a software system that underpins AI integration with existing legacy systems.

I am not attempting to diminish the overall effort that the raw engineering poses. Far from it. Model training, integration implementation, delivery of a secure, usable, working software solution, and system maintenance over the full lifecycle is a hugely complex undertaking. I am merely illustrating the enormity of the challenge of doing all of that in the face of political agendas actively in opposition (more on that in another post). The complexity increases and compounds as a result.

The overall task of delivery in the face of such (unnecessary?) hurdles is huge compared to simply doing R&D, writing code and managing delivery of spec-ed and tested software without being actively opposed while doing that, only that, and nothing but that, correctly. Given how detrimental this opposition is to the delivery process and the final outcome, one would think to ask - what underpins it? What is the root cause?

As mentioned earlier, in part it is a valid concern (risk around business operational survival), in part it is resistance to change (sluggishness to learn something new) but, for the most part, it is fear of the unknown.

Because - what is AI anyway? A bunch of consultants using $10 words that don't mean anything to the day-to-day operations of a business?

I would say the answer to that is - yes. In part, that is precisely what it is. That is not all it is, but in part it is that too, yes. Tackling something new warrants new concepts, and new concepts warrant new words. And the new words are related to the old words only in the sense that they share the same syllables. Like model. And training. Or neural network. Or integration? Come on. You must have heard that one before.

Yes, these words don't map 1:1 to day-to-day operations. Here is where business SMEs should come in to help fill that gap. Or enterprise architects. Whoever is on hand. In any case - someone has to, because not understanding the lingo is no excuse for not attempting to understand the overall impact of the mechanism.

What else could this AI be? A major robot outage that could deplete cashflow in mere seconds if implemented carelessly? To be sure. In circumstances where it is left to manage fund-related things with no checks. And then implemented carelessly. Absolutely. Under those conditions it can do that damage.

My 2c on that would be to not place it in a position where it can do that damage, put checks in place and implement it all with a lot of care (I would say that applies to all software systems but whatever).

What else could it be? It could be a new revenue stream. Or an enhancement to an existing revenue stream. Or a way to mitigate risk and reduce costs. Depends how you think about it, I suppose. Shameless plug: get in touch if you need assistance with this; we would love to help.

But what is it actually? It's just tech. It is new tech but it is just tech.

Or, in technical terms - a hybrid AI-Enabled Middleware Platform (AIEM) integrating AI capabilities with existing enterprise legacy systems, involving the following:

- AI-Powered Middleware Layer: uses a combination of REST APIs, event streams or message queues, and ETL pipelines to integrate with legacy databases, mainframes, document repos and ERP/CRM systems.
- Neural Network-Based Agents: use deep learning models to predict operational inefficiencies and recommend optimisations.
- AI Model Repository: hosts pre-trained neural network models; and/or AI Model Service Integration: uses other AI services, either on-prem or cloud-based SaaS.
- Data Integration Hub: extracts structured/unstructured data from legacy databases, enterprise service buses, and file-based data stores.

Optional extras may also include:

- AI Explainability & Monitoring: uses SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) for transparency in AI decision-making.
- Security & Compliance Layer: implements AI governance frameworks to ensure compliance with regulations.

Use case examples include, but are not limited to: predictive maintenance, fraud detection, customer service automation, intelligent document processing and chatbot automation.
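To make this slightly less abstract, here is a minimal, self-contained sketch of one slice of such a platform: a predictive-maintenance flow that extracts records from a legacy source, transforms them into features, and scores them. Everything in it is a hypothetical stand-in - the SensorReading record, the stubbed extraction step, and the toy scoring heuristic standing in for a trained neural network - not a reference implementation.

```python
# Minimal sketch of an AI-enabled middleware slice (illustrative only).
# Assumptions: hypothetical legacy records, a toy scoring heuristic in
# place of a trained model, and a predictive-maintenance use case.

from dataclasses import dataclass
from typing import List


@dataclass
class SensorReading:
    """One record extracted from a legacy system (e.g. via REST/ETL)."""
    asset_id: str
    temperature_c: float
    vibration_mm_s: float


def extract_from_legacy() -> List[SensorReading]:
    """Stand-in for the Data Integration Hub: in reality this would pull
    from a legacy database, message queue or file-based store."""
    return [
        SensorReading("PUMP-01", 71.5, 4.2),
        SensorReading("PUMP-02", 58.0, 1.1),
    ]


def transform(reading: SensorReading) -> List[float]:
    """ETL step: convert a legacy record into model-ready features."""
    return [reading.temperature_c / 100.0, reading.vibration_mm_s / 10.0]


def score(features: List[float]) -> float:
    """Stand-in for the AI Model Repository / model service call.
    A real system would invoke a trained neural network here."""
    # Toy heuristic: weighted sum of normalised features.
    return 0.6 * features[0] + 0.4 * features[1]


def middleware_pipeline() -> None:
    """The middleware layer in miniature: extract -> transform -> score -> act."""
    for reading in extract_from_legacy():
        risk = score(transform(reading))
        if risk > 0.5:
            # In production this might raise a work order in the ERP/CRM.
            print(f"{reading.asset_id}: maintenance recommended (risk={risk:.2f})")
        else:
            print(f"{reading.asset_id}: OK (risk={risk:.2f})")


if __name__ == "__main__":
    middleware_pipeline()
```

The point is the shape, not the code: each component listed above maps onto one of these steps, with the hard work hiding inside the extraction and the model.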

In other words, it's a software system that integrates with your legacy systems/data. With AI in it.

Fear of the unknown may be dealt with by knowing. In trying to understand the moving pieces which underpin the above-mentioned mechanism, it helps to understand that this would-be software system relies heavily on data. It can and should deliver AI capabilities, but only if it has been trained with enough data made available to do so. There is a process involved in making data available. Obtaining the data points from storage, converting the data into a format that allows for neural network model training, running the training programs, testing the model, deploying it and maintaining it - all of that hinges on data being available, and there is plenty of known and well-documented work to be done there, too. A lot of work, to be sure. Data needs to be accessed, made available for processing (aka model training), and stored and handled securely.
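As an illustration of that lifecycle in miniature, the sketch below assumes scikit-learn and entirely synthetic data; the features, labels and choice of a small MLP classifier are made up for the example, not drawn from any real system.

```python
# Miniature illustration of the obtain/convert/train/test/deploy loop.
# Assumptions: scikit-learn is available and the data is synthetic;
# a real pipeline would source features from the legacy data stores.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(seed=42)

# "Obtaining the data points": synthetic stand-ins for legacy records.
X = rng.normal(size=(500, 4))                   # e.g. normalised sensor features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # e.g. failure / no-failure label

# "Converting the data" and holding out a test set before training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# "Running the training programs": fit a small neural network model.
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=42)
model.fit(X_train, y_train)

# "Testing the model": check accuracy on data it has not seen.
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

# "Deploying the model" would typically mean serialising it and serving
# predictions behind the middleware layer; "maintaining" it means
# repeating this loop as new data arrives and the old model drifts.
```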

These are all, without exception, solvable complexities which pale in comparison with the complexity of getting people to understand that their business model has gone the way of Kodak and Blockbuster, and that they are now entering the next and final phase - an increasing share of a shrinking market: the market of anyone looking exclusively for services or products from a business that ignores the Next Big Thing.

I am old enough to remember a world in which no business had a website. And no business needed one - until, that is, every business needed one. Even though businesses continued to run and operate quite successfully without the internet, eventually all of them embraced the internet-based model. There are many people around today who remember a time when every business could be run without a single computer. Not a single one. An entire business. Those people have now seen this disruptive pattern enter the marketplace and repeat itself three times in their lifetime: once with the introduction and eventually ubiquitous adoption of PCs to help run a business, a second time with the introduction and eventually ubiquitous adoption of the internet to help run a business, and now, for the third time, with the introduction and, perhaps soon, ubiquitous adoption of neural networks to help run a business.

Why do businesses which adapt to the Next Big Thing (like automobiles) remain in business, while businesses which cling to the Next-Big-Thing-less economy (like manufacturing buggy whips) do not?

Because businesses that adopted what would, at a given point in time, be considered novelty bells and whistles evolved around those innovations and, in so doing, became more competitive. Got more customers. Made more sales. Ran their businesses more effectively.

No other reason.
