From Backend-Code to Abstract Prompt: What will the adaptation of LLM-Orchestration-as-Code mean for software development?
Vlad Larichev
Industrial AI Lead @Accenture IX | Software Engineer | Keynote Speaker | Research Enthusiast | Building with LLMs since 2020 | Helping Industries Scale AI with Impact - Beyond the Hype and Marketing Fluff.
It seems that we are getting closer to using LLMs as a backend for our applications - here's why it might be reasonable and exciting:
This week, OpenAI announced an update to their API which, in my opinion, highlights a direction for the usage of LLMs that hasn't received enough attention.
Short Intro for non-developers:
What is JSON? JSON is a text format, which is like a universal language for computers. It's a simple format used to organize and share data in a way that's easy for both humans and machines to read and write.
How do developers use it? Developers use JSON to send information between different parts of a software system, like from a server to a website or between different programs. It's especially popular because it's lightweight and easy to understand.
Why is consistent formatting important? An application expects data in a specific JSON format in order to process it. If the format (or "language") changes unexpectedly, say with missing pieces or a different arrangement, the application might not understand it. This mismatch can cause the application to malfunction or even crash, because it doesn't know how to handle the unexpected data.
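To make this concrete, here is a minimal Python sketch (the field names are invented for illustration): the second payload is still perfectly valid JSON, but because one key was renamed, an application that relies on the original key breaks.

```python
import json

# A well-formed payload the application expects: "item" and "quantity" keys.
expected = '{"item": "milk", "quantity": 2}'
order = json.loads(expected)
print(order["quantity"])  # → 2

# The same data with a renamed key: valid JSON, but the application
# no longer finds the field it relies on and raises a KeyError.
changed = '{"item": "milk", "amount": 2}'
order = json.loads(changed)
try:
    order["quantity"]
except KeyError:
    print("unexpected format: missing 'quantity'")
```

This is exactly the fragility that made early "LLM as backend" experiments crash: the model's output was usually close to the expected format, but "close" is not good enough for a parser.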
Last year, many developers attempted to use the OpenAI API as a backend server for their applications, trying to force GPT to provide responses in strictly defined formats.
Well, unsurprisingly, this approach led to challenges: the responses were not reliable, and slight changes in structure could cause the application to crash.
However, this approach also revealed new capabilities: backends could dynamically adapt to API calls from the frontend that weren't explicitly programmed in the backend.
For example, in one of the experiments, a grocery app exposed only a couple of simple cart functions, and the system could still dynamically reason about a user request and execute a command on the shopping cart that had never been explicitly programmed.
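A toy Python sketch of this idea, with hypothetical cart functions standing in for the app's real ones: the LLM's only job is to translate a free-form request into a structured call, which a plain dispatcher then executes.

```python
import json

# Hypothetical cart functions the backend exposes (names invented here).
cart = {}

def add_item(item, quantity=1):
    """Add a quantity of an item to the cart."""
    cart[item] = cart.get(item, 0) + quantity

def remove_item(item):
    """Remove an item from the cart entirely."""
    cart.pop(item, None)

FUNCTIONS = {"add_item": add_item, "remove_item": remove_item}

# Stand-in for what the LLM might emit after reasoning about a
# request like "put two apples in my cart":
llm_output = '{"function": "add_item", "args": {"item": "apples", "quantity": 2}}'

# The dispatcher itself contains no product logic at all.
call = json.loads(llm_output)
FUNCTIONS[call["function"]](**call["args"])
print(cart)  # → {'apples': 2}
```

The interesting part is what is *not* in the code: every phrasing of the user's intent that maps onto these two functions is handled without another line of backend code.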
Another amazing example is Mate Marschalko's article from January 2023 (!), where he shows how to use an LLM as a dynamic backend for Siri and smart-home automation.
Don't get me wrong: as a full-stack developer, this approach sounds very unusual and unreliable to me. But stepping back, I don't see why it shouldn't be possible, or even best practice in some areas in a few years' time, to teach the backend with a prompt instead of hard-coding each variation of the repetitive calls.
OpenAI launching "Structured Outputs"
An important step in this direction is OpenAI's introduction of "Structured Outputs" in their API this week, a feature that ensures model outputs reliably conform to developer-supplied JSON Schemas.
This advancement addresses the challenge of generating consistent structured data from unstructured inputs by guaranteeing that outputs match specific schemas, opening the way for using LLMs as a reliable backend.
Generating structured data from unstructured inputs is one of the core use cases for AI in today’s applications. Developers use the OpenAI API to build powerful assistants that can fetch data and answer questions via function calling, extract structured data for data entry, and build multi-step agentic workflows that allow LLMs to take actions.
Until now, we had to work around the limitations of LLMs in this area using open-source tooling, prompting, and repeated request retries to ensure that model outputs matched the formats needed to interoperate with our systems. Structured Outputs solve this problem by constraining OpenAI models to match developer-supplied schemas and by training models to better understand complex schemas.
It's particularly useful for applications requiring precise data formats, such as data entry or multi-step workflows, and it supports function calling across various models.
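As a rough sketch of how this looks in practice (the schema fields and the sample response below are invented for illustration, and the actual API call requires the `openai` package plus an API key, so it is shown only as a comment): you describe the required shape once as a JSON Schema, pass it via `response_format`, and the application can consume the response without defensive retries.

```python
import json

# The JSON Schema every model response must conform to.
# The field names here are our own example, not part of the API.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "shopping_item",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "item": {"type": "string"},
                "quantity": {"type": "integer"},
            },
            "required": ["item", "quantity"],
            "additionalProperties": False,
        },
    },
}

# With Structured Outputs, a call along these lines returns JSON
# matching the schema above (not executed here, needs an API key):
#
#   completion = client.chat.completions.create(
#       model="gpt-4o-2024-08-06",
#       messages=[{"role": "user", "content": "Add 2 liters of milk"}],
#       response_format=response_format,
#   )
#
# The application can then parse the content without retry loops:
raw = '{"item": "milk", "quantity": 2}'  # stand-in for a model response
data = json.loads(raw)
print(data["item"], data["quantity"])
```

Compare this with the pre-Structured-Outputs workflow: prompt engineering to beg for valid JSON, then validation and retries when the model drifted from the format.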
Looking to the future: from low-level prompting to high-level orchestration?
LLMs are still not perfect, but when used correctly, they are extremely powerful and flexible building blocks for complex problems. Starting each project with one single block, however, is not the best way to go.
In 2024, frameworks like LangChain, AutoGen, and LangGraph are changing LLM workflows and agent orchestration by offering modular tools and scalable workflows for complex AI applications. These frameworks enable AI agents to collaborate, use tools, and handle tasks with persistence and context, making them ideal for production-ready applications. Additionally, evaluation frameworks like DeepEval are crucial for assessing the performance of multi-agent systems.
As the community creates more workflows and frameworks to simplify the usage of LLMs and incorporates best practices, we will likely see the same transition and increase in quality that we saw in web development with the rise of the first frameworks.
In the world of LLMs, we are rapidly moving from low-level code approaches to higher levels of abstraction, week by week. To better understand this trend, let's compare the path software development has taken with the trends we can expect in LLMs and GenAI on that basis.
Programming Languages: From Low-Level Code to High-Level Abstractions
The development of programming languages has seen a dramatic shift over the decades, moving from low-level code that directly manipulates hardware to high-level abstractions that allow developers to focus on solving business problems rather than managing technical details.
The Evolution of LLMs for Developers: From Manual Prompting to Orchestrated Agent Networks
Similarly, the usage of LLMs in software development will undergo a transformation, from manually prompting a single assistant to orchestrating networks of agents that work together to solve complex tasks.
The parallel journey and what we learn from it for the near future
Both the evolution of programming languages and the evolution of LLM tooling share a common trajectory: moving from manual, detailed control to high-level abstractions that streamline the development process. In both cases, the goal is to reduce developers' cognitive load, allowing them to focus on innovation and problem-solving rather than getting bogged down in the minutiae.
As we continue to advance in both fields, we can expect even more sophisticated tools and frameworks that further abstract complexity, enabling developers to build even more powerful applications with less effort. Whether it’s defining infrastructure in code or orchestrating a team of LLMs to complete a task, the future of development is one of increasing automation, best practices, and high-level abstractions.
Just as higher levels of abstraction have driven the evolution and quality of traditional programming, the increasing sophistication of LLM solutions, like using LLMs as a reliable backend, will unlock new possibilities, streamline development, and elevate the standards of AI-driven applications.
Exciting journey ahead with Christof Horn , Pankaj Sodhi , Kathrin Schwan , Erik Soelaksana , Laura Mosconi , Thomas Reisenweber , Nick Rosa , Irina Adamchic, PhD , Ramon Wartala , Igor Aranovsky , Alexander Herttrich and many others.