The "Code-Rich" Organization: How  Automatic Code Generation Will Revolutionize Everything

The "Code-Rich" Organization: How Automatic Code Generation Will Revolutionize Everything

The idea that software could soon become an almost unlimited resource heralds a transformation of the whole IT world. "Software is eating the world", but what if the world could be fed with a flow of almost unlimited, fast, good-quality software? The expression "software is eating the world" gained popularity after Marc Andreessen of the well-known investment firm Andreessen Horowitz coined it in 2011, and it has held true so far: the world's largest and wealthiest companies are either software companies or companies that invest heavily in software. Even more traditional sectors, where software is not the primary product, recognize the crucial role it plays in their organizations. This massive investment in software is becoming even harder to track as digital transformation further blurs the line between IT and business.

Software is a pivotal success factor: from automating business processes to serving digital customers around the clock, software lies at the heart of transformative processes in complex organizations. Even a small change in business logic is largely contingent on modifications to software systems, including core systems, APIs, user interfaces, and so on.

However, despite its significance, software is often costly and very difficult to modify. Organizations incur software costs in a number of ways:

  1. Option "buy": purchased software (licensed or SAAS) comes with initial license costs, but additionally and often unaccounted for there are the hidden costs that come with its adaptation to fit specific organizational requirements. Enterprise software offers customization options but they aren't limitless and require regular maintenance. While good SAAS and on-premise software are flexible enough to be adapted or extended to mitigate their original limitations, this can't change the fact that off-the-shelf software was not written specifically for our case and with our business logic in mind. More often we end up having to adapt our way of working to the Software, as it happens with the CRM and ERP.
  2. Option "build": for internally developed software, costs can quickly escalate. From clarifying initial requirements to ensuring a robust solution architecture, a multitude of potential issues can arise during development phase . The software must follow a specific lifecycle, involving updates, patching of critical vulnerabilities etc. According to many studies, modern software development sees 70% to 90% of its necessary budget dedicated to maintenance.

Whether organizations decide to buy their software or build it internally, the cost and challenge of change always persist. This difficulty explains why modern enterprise IT tends to be "code-thin": even the most advanced organizations can't model every process, due to cost constraints and complexity. Often only the most important and critical processes, the skeleton, are modeled in software. Other processes are left manual because of high automation costs, sacrificing potential unseen improvement margins, the visibility of certain processes in organizational KPIs, and data optimization opportunities. This is a necessary compromise given the current challenges of software development, but... what if this were no longer the case? What if software were cheap, abundant, quick to deliver, and easy to write, replace, and adapt?

With Large Language Models trained on software repositories, like OpenAI Codex, Replit Ghostwriter, Hugging Face's StarCoder, and Amazon CodeWhisperer, we can dynamically generate the code we need when we need it. What if migrating from an obsolete stack to a supported one were as simple as copying and pasting text and asking the model to rewrite it? What if everybody could generate code for new projects, or test cases, on the fly with a simple prompt?
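To make this concrete, here is a minimal sketch of what such a migration request could look like in practice, assuming the OpenAI Python client and an illustrative legacy VBA snippet; the model name, prompt, and legacy code are examples of the idea, not a prescribed implementation.

```python
# Hedged sketch: asking a code-trained LLM to port a legacy function to a new stack.
# Assumes the OpenAI Python client is installed and OPENAI_API_KEY is set in the
# environment; the legacy snippet and target language are purely illustrative.
from openai import OpenAI

client = OpenAI()

legacy_code = """
Private Function Vat(amount As Double) As Double
    Vat = amount * 0.22
End Function
"""

prompt = (
    "Rewrite the following legacy VBA function as an idiomatic, type-annotated "
    "Python function with a short docstring:\n" + legacy_code
)

response = client.chat.completions.create(
    model="gpt-4o",  # any code-capable model would do here
    messages=[{"role": "user", "content": prompt}],
)

# The proposed port still needs human review before it reaches production.
print(response.choices[0].message.content)
```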

That's the direction in which code-trained LLMs are moving. Although Large Language Models are not without their flaws, they have a unique advantage when writing code compared to natural language: code is a formal language and, as such, its structure can be formally verified by the system itself. We still have to confirm that the generated code adheres to the requirements, but using the first results as a stub and then finalizing the business logic ourselves, or asking the coding LLM to fix the problems we encounter, can significantly speed up delivery time.
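As a small illustration of that advantage, the sketch below, which assumes the generated output is Python, uses the standard library to check that a candidate snippet is at least structurally valid before anyone spends review time on it; it says nothing about whether the code meets the requirements.

```python
# Minimal sketch: structural (syntactic) validation of LLM-generated Python code.
# Parsing only confirms the text is well-formed code, not that it is correct
# with respect to the business requirements.
import ast

def is_syntactically_valid(source: str) -> bool:
    """Return True if the generated source parses as valid Python."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

generated = "def add(a: int, b: int) -> int:\n    return a + b\n"
print(is_syntactically_valid(generated))  # True: safe to hand over to tests and review
```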

The prediction is that software will no longer be scarce or difficult to manage, and software projects will cease to be a "black art", as Steve McConnell famously described estimation in his book Software Estimation. The process will become quick, manageable, and predictable. LLMs will offer IT professionals a broader toolset to meet business needs. Business people could simply explain with a prompt what they expect to get and then iteratively refine it. During the OpenAI presentation of GPT-4, we saw Greg Brockman sketch the layout of a website on a napkin and let GPT-4 generate an entire HTML site on the fly. This could already be a great starting point for many IT projects: a first skeleton of our idea that we can then iteratively extend, in line with the best practices of agile methodologies.
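A hedged sketch of that iterative loop follows: a first prompt produces a rough HTML skeleton, and a follow-up message refines it, much like an agile increment. The client, model name, and prompts are assumptions for illustration only.

```python
# Hedged sketch: prompt-and-refine loop for a first project skeleton.
# Model name and prompts are illustrative; the output still needs human review.
from openai import OpenAI

client = OpenAI()

history = [{
    "role": "user",
    "content": "Generate a single-page HTML layout for a small bakery: "
               "header, product grid, and contact form.",
}]

first_draft = client.chat.completions.create(model="gpt-4o", messages=history)
history.append({"role": "assistant",
                "content": first_draft.choices[0].message.content})

# Iteration: refine the existing skeleton instead of starting from scratch.
history.append({"role": "user",
                "content": "Good start. Add a navigation bar and make the grid responsive."})
second_draft = client.chat.completions.create(model="gpt-4o", messages=history)

print(second_draft.choices[0].message.content)  # refined skeleton for the next review
```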

In the future, the role of developers will shift toward software architecture: understanding needs and collaborating with AI to achieve the best possible solutions for internal and external customers. Skilled software developers will remain crucial, because many components will still require manual development and/or deployment to implement the right logic, but the advent of coding LLMs will drastically streamline the whole process. Many modern IDEs already include code generators and pre-built templates that speed up software engineering, yet these are tools that require specific knowledge and experience to be used properly. With LLMs, code generation becomes much easier, because it can be triggered by a simple description in natural language.

This transformation is inevitable. Skilled developers are very hard for organizations to find and retain, and this scarcity is already hindering throughput. However, it will not be a matter of cost reduction, but of productivity enhancement. Early studies from non-IT lines of business already show a significant productivity increase through LLMs, particularly among low performers, who seem to quickly reach the performance of their more skilled colleagues. Productivity inside organizations will be massively boosted. Despite this increase in productivity, the employment rate in IT is not expected to drop; instead, the same number of people will deliver much more. Market pressure will eventually come from competitors: if we fail to employ LLM code assistants, our competitors will outperform us by developing the same number of features in a fraction of our time and at a fraction of our cost. Once all companies reach this level, the enterprises that automate more processes, and thus cover more use cases through software, the "code-rich" companies, will serve their customers better and outperform their "code-thin" counterparts. That is the pressure we will receive from the market.

This pressure, together with the push to become data-driven, will further drive enterprises to be "code-rich" instead of "code-thin", enabling more structured digital interactions that can be analyzed for operational improvement. No longer will we need to choose between compromise-ridden off-the-shelf software and costly, custom-built software. We will be able to quickly obtain tailor-made software specific to our needs, with a predictable and convenient price tag and far fewer lifecycle challenges.

This shift will have far-reaching implications, not just for IT departments, but for enterprises and organizations as a whole. The consequences for the software industry will be radical as well. By embracing this new era of AI and LLM-based automatic code generation, we are ushering in a future where software is no longer a scarce resource but an abundant, easily managed one that can revolutionize IT as we know it and make it "code-rich". Code will not just eat the world: it will be abundant, it will be cheap, it will finally be everywhere.

Jaroslav Pantsjoha

Associate Director | Google Cloud CoP Lead | AppMod & Data Architecture

8 months ago

Great read to stumble across, and it still stands the test of exponential time. GenAI is eating the software which is eating the world. And I'm starting to think the next challenge will be factoring #GenAI's disruptive nature into business strategy and planning. Can anyone even have a 2-3 year plan? In the age of "abundant, generated software", the competitive edge, moat and IP may be more challenging to maintain and develop. Every organisation IMHO needs to realise that GenAI upskilling and updated internal processes to effectively leverage the technology - the earlier the better - is the race many find themselves in, to build in business resilience for the years to come. More on that in my post here https://www.dhirubhai.net/pulse/need-frequent-upskill-age-genai-jaroslav-pantsjoha/ - you're welcome to read through.

Paola Bonomo

Non Executive Director, advisor, investor

1 year ago

I have a question for you: today, a mid- to senior-level developer can check and test the code output from an LLM, because that developer has the expertise required to spot flaws or unexpected behaviors and make and test hypotheses about how to fix the issues, or direct the LLM to fix them. Once there are no junior developers to go through the apprenticeship of writing lots of code, though, how does one become a mid- to senior-level developer? What will the apprenticeship look like? Airline pilots use a lot of automated procedures but are still trained for a fully manual landing in an emergency. How will developers be trained?

Paolo Magrassi

High-tech analyst. Ex physicist.

1 year ago

I subscribe verbatim. Problem is, I am personally likely to have written those words 25 years ago. Recall, indeed, that the promise of generated code derived from natural language is at least that old, or more. The fury around AI code generation is hot, and for good reason. Just let's not overestimate the quickness of the 'final' result. There is little correctness in 2023 LLMs. They make up things and hallucinate. (E.g., it can be shown that they can be used as Turing machines but the prompt engineering effort is immense). Getting to using them to develop <business software> that does not need another 75% of effort to be tested (like today) will not be a piece of cake.
