From Backend-Code to Abstract Prompt: What will the adoption of LLM-Orchestration-as-Code mean for software development?

It seems that we are getting closer to using LLMs as a backend for our applications - here's why it might be reasonable and exciting:

This week, OpenAI announced an update to their API's function calling, which, in my opinion, highlights a direction for using LLMs that hasn't received enough attention.


Short Intro for non-developers:

What is JSON? JSON is a simple text format, something like a universal language for computers, used to organize and share data in a way that's easy for both humans and machines to read and write.

How do developers use it? Developers use JSON to send information between different parts of a software system, like from a server to a website or between different programs. It's especially popular because it's lightweight and easy to understand.

Why is consistent formatting important? An application expects data in a specific JSON format in order to process it. If the format (or "language") changes unexpectedly, say pieces are missing or arranged differently, the application might not understand it. This misunderstanding can cause the application to malfunction or even crash, because it doesn't know how to process the unexpected data.
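
To make this concrete, here is a minimal sketch (the field names are made up purely for illustration) of how a small piece of backend code breaks when the JSON it receives doesn't match the format it expects:

```python
import json

# The application expects a response like: {"product": "...", "quantity": ...}
expected_response = '{"product": "apples", "quantity": 3}'

# The same information, but with a renamed field and a missing one
unexpected_response = '{"item": "apples"}'

def add_to_cart(raw: str) -> None:
    data = json.loads(raw)
    # Raises KeyError if "product" or "quantity" is missing or renamed
    print(f"Adding {data['quantity']} x {data['product']} to the cart")

add_to_cart(expected_response)    # works as intended
add_to_cart(unexpected_response)  # crashes with KeyError: 'product'
```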


Last year, many developers attempted to use the OpenAI API as a backend server for their applications, trying to force GPT to provide responses in strictly defined formats.


Well, unsurprisingly, this approach led to challenges, as the responses were not reliable, and slight changes in structure could cause the application to crash.


However, this approach also revealed new capabilities: backends could dynamically adapt to API calls from the frontend that weren't explicitly programmed in the backend.


For example, in one of the experiments, a grocery app exposed only a couple of simple, explicitly defined functions, yet the system could dynamically reason about a user's request and execute a command in the shopping cart that had never been programmed into the backend.


Another amazing example is the article by Mate Marschalko from January 2023 (!), where he shows how to use an LLM as a dynamic backend for Siri and smart-home automation.

ChatGPT in an iOS Shortcut — Worlds Smartest HomeKit Voice Assistant | by Mate Marschalko | Medium

Don't get me wrong: as a full-stack developer, this approach sounds very unusual and unreliable to me. But looking at it from the outside, I don't see why it shouldn't be possible, or even become best practice in some areas within a few years, to teach the backend with a prompt instead of hard-coding every variation of the repetitive calls.

OpenAI launches "Structured Outputs"

An important step in this direction is OpenAI's introduction of "Structured Outputs" in their API this week, a feature that ensures model outputs reliably conform to developer-supplied JSON Schemas.

This advancement addresses the challenge of generating consistent structured data from unstructured inputs by guaranteeing that outputs match specific schemas, opening the way for using LLMs as a reliable backend.

Generating structured data from unstructured inputs is one of the core use cases for AI in today’s applications. Developers use the OpenAI API to build powerful assistants that can fetch data and answer questions via function calling, extract structured data for data entry, and build multi-step agentic workflows that allow LLMs to take actions.

Until now, developers had to work around the limitations of LLMs in this area using open-source tooling, prompting, and retrying requests repeatedly to ensure that model outputs matched the formats needed to interoperate with their systems. Structured Outputs solves this problem by constraining OpenAI models to match developer-supplied schemas and by training the models to better understand complex schemas.

On evaluations of complex JSON schema following, the new model gpt-4o-2024-08-06 with Structured Outputs scores a perfect 100%. In comparison, gpt-4-0613 scores less than 40%.

It's particularly useful for applications requiring precise data formats, such as data entry or multi-step workflows, and it supports function calling across various models.
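
To give a feel for what this looks like in code, here is a minimal sketch based on the pattern OpenAI published with the announcement; the schema and the prompts are my own illustrative assumptions, not taken from the original examples:

```python
from openai import OpenAI
from pydantic import BaseModel

client = OpenAI()

# Hypothetical schema for the grocery-app scenario above
class CartAction(BaseModel):
    action: str        # e.g. "add" or "remove"
    product: str
    quantity: int

completion = client.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",
    messages=[
        {"role": "system", "content": "Translate the user's request into a cart action."},
        {"role": "user", "content": "Put three bottles of milk in my basket."},
    ],
    # The model output is constrained to this schema
    response_format=CartAction,
)

cart_action = completion.choices[0].message.parsed  # a CartAction instance
print(cart_action.action, cart_action.product, cart_action.quantity)
```

The important difference to last year's workarounds is that the parsed result is guaranteed to match the schema, so the rest of the backend can rely on it the same way it relies on a conventional, hard-coded endpoint.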

Looking to the future: from low-level prompting to high-level orchestration?

LLMs are still not perfect, but when used correctly, they are extremely powerful and flexible building blocks for complex problems. But starting each project with one single block is not the best way to go.

In 2024, frameworks like LangChain, AutoGen, and LangGraph are changing LLM workflows and agent orchestration by offering modular tools and scalable workflows for complex AI applications. These frameworks enable AI agents to collaborate, use tools, and handle tasks with persistence and context, making them ideal for production-ready applications. Additionally, evaluation frameworks like DeepEval are crucial for assessing the performance of multi-agent systems.
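
To show the underlying idea without committing to any particular framework's API, here is a deliberately simplified, framework-free sketch of what agent orchestration means in code: specialized agents are just prompted roles, and an orchestrator decomposes a high-level goal and routes the intermediate results between them (all function names and prompts here are hypothetical):

```python
def call_llm(system_prompt: str, user_input: str) -> str:
    """Stand-in for a real LLM call (e.g. via the OpenAI client above)."""
    return f"[{system_prompt}] -> {user_input}"

def research_agent(question: str) -> str:
    # A role specialized in gathering facts
    return call_llm("You gather facts and cite sources.", question)

def writer_agent(facts: str) -> str:
    # A role specialized in turning facts into a readable result
    return call_llm("You turn bullet-point facts into a short report.", facts)

def orchestrate(goal: str) -> str:
    # High-level goal in, finished artifact out; the intermediate steps are
    # delegated to specialized agents instead of being hard-coded upfront.
    plan = call_llm("Break the goal into research questions, one per line.", goal)
    facts = "\n".join(research_agent(q) for q in plan.splitlines() if q.strip())
    return writer_agent(facts)

print(orchestrate("Summarize how Structured Outputs change LLM backends."))
```

Frameworks like LangGraph or AutoGen add the parts this sketch leaves out: state, persistence, retries, tool use, and the evaluation hooks needed to trust such a system in production.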

As the community creates more workflows and frameworks to simplify the usage of LLMs and incorporates best practices, we will likely see the same transition and increase in quality that we saw in web development with the rise of the first frameworks.

In the world of LLMs, we are rapidly moving from low-level code approaches to higher levels of abstraction, week by week. To better understand this trend, let's compare the path software development has taken with the trends we can expect in LLMs and GenAI on that basis.



Programming Languages: From Low-Level Code to High-Level Abstractions

The development of programming languages has seen a dramatic shift over the decades, moving from low-level code that directly manipulates hardware to high-level abstractions that allow developers to focus on solving business problems rather than managing technical details.

  1. Low-Level Code: In the early days of computing, developers wrote code in assembly language or machine code, which required a deep understanding of the hardware. This was a painstaking process, as every operation had to be explicitly defined, leaving little room for error and innovation.
  2. High-Level Languages: As the complexity of applications grew, so did the need for more expressive languages. High-level languages like C, Python, and Java emerged, abstracting away the hardware specifics and allowing developers to write code that was more understandable and maintainable. This shift empowered developers to focus on logic and functionality rather than the intricacies of memory management and CPU instructions.
  3. Frameworks and Libraries: The next major evolution was the advent of frameworks and libraries. These tools encapsulated common functionalities, allowing developers to build applications faster and more efficiently. Frameworks like React, Angular, and Next.js enabled developers to build sophisticated web applications with less boilerplate code, while still adhering to best practices.
  4. Infrastructure as Code (IaC): As applications became more distributed and complex, managing infrastructure manually became a bottleneck. The rise of Infrastructure as Code tools like Kubernetes, Terraform, and CloudFormation allowed developers to define their infrastructure declaratively. Developers could now describe their desired state in code, and automated tools would handle the provisioning, scaling, and management of resources, incorporating best practices and reducing human error.

The Evolution of LLMs for Developers: From Manual Prompting to Orchestrated Agent Networks

Similarly, the usage of LLMs in software development will undergo a transformation, from manually prompting a single assistant to orchestrating networks of agents that work together to solve complex tasks.

  1. Manual Prompting: Initially, interacting with LLMs involved manually crafting prompts to generate responses. While powerful, this approach was often hit-or-miss, as developers had to iterate on prompts to get the desired output. The potential of LLMs was evident, but the process was labor-intensive and required a deep understanding of how to communicate effectively with the model.
  2. Function Calling and Structured Outputs: As developers sought more control over the outputs, features like function calling and structured outputs were introduced. These allowed developers to enforce stricter formats and schemas, making LLMs more reliable for specific tasks. This was akin to moving from low-level code to high-level programming languages, where the focus shifted from managing the model's quirks to leveraging its capabilities more effectively.
  3. Orchestrated Agent Networks: The next evolution in LLM development is the orchestration of multiple agents working together. Rather than relying on a single assistant, developers can now design systems where multiple LLMs collaborate to solve complex tasks. These agents can communicate, delegate tasks, and operate in a coordinated manner, much like microservices in a distributed application. This approach allows for solving more sophisticated problems, where each agent can specialize in a particular aspect of the task.
  4. High-Level Abstracted Commands: Just as Infrastructure as Code abstracted away the complexities of managing resources, the development of LLM orchestration tools allows developers and users to give high-level, abstracted commands. Instead of micro-managing each agent, developers define the overall goals and constraints, and the LLMs figure out the details, incorporating best practices to achieve the desired functionality. This represents a shift from low-level prompt engineering to high-level task management, where the focus is on what needs to be done rather than how to do it.
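
What such a high-level, abstracted command could look like is still an open question; the snippet below is a purely hypothetical sketch of the direction, in the spirit of declarative Infrastructure-as-Code definitions, and not any existing tool's syntax:

```python
# Hypothetical "LLM-orchestration-as-code" definition: the developer states
# goals and constraints; an orchestrator (not shown) decides which agents,
# prompts, and tools are needed to satisfy them.
support_workflow = {
    "goal": "Answer customer emails about order status",
    "constraints": [
        "never promise a delivery date",
        "escalate refund requests to a human",
    ],
    "agents": {
        "classifier": {"task": "label the intent of the incoming email"},
        "responder": {"task": "draft a reply using the order database"},
    },
}
```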

The parallel journey and what we learn from it for the near future

Both the evolution of programming languages and the way developers use LLMs share a common trajectory: moving from manual, detailed control to high-level abstractions that streamline the development process. In both cases, the goal is to reduce the cognitive load on developers, allowing them to focus on innovation and problem-solving rather than getting bogged down in the minutiae.

As we continue to advance in both fields, we can expect even more sophisticated tools and frameworks that further abstract complexity, enabling developers to build even more powerful applications with less effort. Whether it’s defining infrastructure in code or orchestrating a team of LLMs to complete a task, the future of development is one of increasing automation, best practices, and high-level abstractions.


Just as higher levels of abstraction have driven the evolution and quality of traditional programming, the increasing sophistication of LLM solutions, like using LLMs as a reliable backend, will unlock new possibilities, streamline development, and elevate the standards of AI-driven applications.



Exciting journey ahead with Christof Horn , Pankaj Sodhi , Kathrin Schwan , Erik Soelaksana , Laura Mosconi , Thomas Reisenweber , Nick Rosa , Irina Adamchic, PhD , Ramon Wartala , Igor Aranovsky , Alexander Herttrich and many others.

Amit Dev

President, DigiValley.AI Group

7 months ago

Many big points come to mind from what is hinted at in this post, and my appreciation goes to the author.

Joseph Abraham

Founder and Principal Analyst at AI ALPI (Driving Intelligence Acceleration for HR Leaders), Chief Business Officer at Xerago and Thought Cred (Driving Thought Intelligence Acceleration for Enterprises)

7 months ago

Interesting take! Comparing LLMs to software evolution makes you think. Vlad Larichev how do you see LLMs changing the way developers work in the near future?
