The citizen developers are coming. But what’s still needed for GenAI to disrupt no-code tech? (part 2)
Tim Wagner
Vendia co-founder and CEO | Building the Next Generation of Serverless and Blockchains | Kind Human
This article was originally published on Medium.
Part one of this article series explored how no-code platforms work today and why GenAI will fundamentally change how these tools are developed, unlocking the next wave of IT potential. In part two, we examine what progress needs to be made before GenAI can fuel the coming generation of citizen developers.
Here, we discuss the progress that needs to first be made in the following three areas: prompt engineering, domain-specific training, and better support for shared data. Let’s look at each of these in turn.
“Alexa, make me an app” – Prompt engineering for business app development
The first problem we need to address is the expression of the problem statement. Let’s say you’re in procurement and you need an app to help you optimize inventory holdings against order frequency. How do you tell a no-code platform what you want? For all its many failings, the classic drag-and-drop no-code UI approach is at least a way to structure a conversation between an information worker and an app generation platform without relying on a programming language.
With GenAI, the “prompt”—the instruction someone gives the AI—is the equivalent of a professional developer writing code or of the UI in existing no-code frameworks. So, what, exactly, should that prompt be? “Alexa, make me an app to solve my business challenges” is too vague, and “Alexa, write a for loop in JavaScript over the Z variable” is way too specific to expect from a citizen developer.
This issue is one reason why coding-specific LLMs to date have focused more on making professional developers more productive than on turning non-developers into developers: Today’s LLMs still require deep expertise to phrase the question (and to understand the answer!).
Offerings such as Amazon CodeWhisperer, GitHub Copilot, and the amusingly titled OverflowAI from StackOverflow are primarily intended to help people who already understand the vocabulary and problem space of software development. They’re essentially Google searches on steroids more than they are ways to write net-new applications from scratch or to extend or rewrite existing apps. Even knowing how to ask the right questions, let alone interpreting the answers, requires some level of training and experience.
In seeking a better way to express business needs, we need to return to how humans communicate with each other, in the form of specifications and requirements gathering. Even before computers were a thing, program and product managers were asking frontline business professionals what they needed in terms of processes, forms, or other kinds of help to get their job done and/or optimize the business.
Modern-day equivalents of these conversations often happen in the form of application development specs, where a product manager interviews the target users to learn what’s needed, reviews early ideas for approval, and then monitors post-launch outcomes to continuously improve and optimize the application based on ongoing feedback.
This same cycle, expressed in the very same way, must also happen inside a GenAI-powered no-code platform.
In more technical terms: We need to encode the role of the product manager—soliciting requirements, asking follow-up questions, proactively getting feedback on proposed outcomes, etc.—as a conversational workflow inside the LLM. Meaning, app development isn’t a one-and-done “Alexa, build me an app” request, but an ongoing process and conversation between the AI-powered platform responsible for creating and maintaining a business application and the users of that app.
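To make that concrete, here is a rough sketch of what such a requirements-gathering loop might look like, assuming a placeholder call_llm function that stands in for whatever chat-completion API the platform actually uses; the prompt wording and the READY convention are illustrative, not any vendor's behavior:

# A minimal sketch of the "product manager in the loop" conversation, assuming
# a hypothetical call_llm helper; not tied to any particular LLM provider.
REQUIREMENTS_PROMPT = (
    "You are acting as a product manager for a no-code platform. "
    "Interview the user about the business app they need, asking one clarifying "
    "question at a time about goals, data sources, and users. When you have "
    "enough detail, reply with the word READY followed by a written spec."
)

def call_llm(messages: list[dict]) -> str:
    """Placeholder for a real chat-completion call to your LLM provider."""
    raise NotImplementedError

def gather_requirements() -> str:
    """Run a multi-turn interview until the 'product manager' produces a spec."""
    messages = [{"role": "system", "content": REQUIREMENTS_PROMPT}]
    while True:
        reply = call_llm(messages)
        if reply.startswith("READY"):
            return reply.removeprefix("READY").strip()  # the generated app spec
        print(reply)                      # the follow-up question for the citizen developer
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": input("> ")})

The point isn't the specific code; it's that the interview loop of asking, listening, refining, and confirming becomes part of the platform itself rather than a meeting on someone's calendar.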
Today’s AI chatbots, even code-specific ones, aren’t yet ready to take on this role, but they’re getting close. Some thoughtful prompt-engineering investments from an innovative startup or a motivated big company will inevitably push this over the finish line in the next year or two.
Making your business the AI’s business – domain-specific training
Clearly we need not just “a programmer in a box” but actually “a programmer and a product manager together” in a box. But even that isn’t quite enough to empower our new generation of citizen developers. To make that prompt engineering meaningful and the outcomes it generates powerful and effective, the GenAI needs to understand not just coding but business, and not just any business but the company’s specific business.
Let’s look at an example. Suppose a business analyst at a retailer like Walmart or Target wants to look at optimizing inventory holdings as a percentage of sales, including which categories to initially focus on. Computing the inventory turnover ratio (ITR), the result of dividing the cost of goods sold (COGS) by average inventory holdings in constant dollar terms, and then viewing this on a regional or category basis as a fraction of sales is something that an analyst would intrinsically understand. But if these terms and formulas are a blur to you, imagine an untrained AI trying to process them…it would be just as lost.
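If it helps to see the arithmetic, here is the same calculation with made-up numbers (the figures are purely illustrative):

# Worked example of the inventory turnover ratio (ITR) described above:
# ITR = cost of goods sold (COGS) / average inventory, in constant dollars.
cogs = 1_200_000.0                       # cost of goods sold over the period ($)
inventory_start = 260_000.0              # inventory value at the start of the period ($)
inventory_end = 340_000.0                # inventory value at the end of the period ($)

average_inventory = (inventory_start + inventory_end) / 2    # 300,000
itr = cogs / average_inventory                               # 4.0 turns per period

sales = 2_000_000.0                      # net sales over the same period ($)
inventory_as_pct_of_sales = average_inventory / sales * 100  # 15.0%

print(f"ITR: {itr:.1f} turns, inventory = {inventory_as_pct_of_sales:.1f}% of sales")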
To be effective, GenAI for business applications needs to know a helluva lot about business. In addition to being trained on code and algorithms, it needs an MBA: both general business terms, such as understanding what COGS means, and how to compute formulas like ITR. It also needs sector-specific knowledge, such as how inventory value is calculated for retail stores (Average over a quarter? End-of-quarter holdings? Something else?).
Plus, it requires information specific to the individual company, including details like where information about COGS, inventory, and so forth actually lives in the company’s databases and how to get access to it.
In short, an effective replacement for a no-code platform needs at least the following in its training set:
- General software knowledge: code, algorithms, and common application patterns.
- General business knowledge: terms and formulas such as COGS and ITR.
- Sector-specific knowledge: for example, how a retailer values its inventory.
- Company-specific knowledge: where data like COGS and inventory actually lives, and how to get access to it.
Today, no one system has all of this in a single place, although we’ve seen successful GenAI prototypes for each of these capabilities in isolation. This tells us that bringing them together is just a matter of time—not a fundamental limitation of GenAI as an approach.
RAGging on your AI results
Now, not all of the categories above will make it into generic LLMs, such as the ones that power public chatbots, because businesses won’t want their private data or company details like org charts to leak out. Yet these sources are still necessary to make the GenAI piece work. So, how do we square that circle?
Let’s suppose that prior to generating an app, our procurement engineer wants to ask a simple question like, “How many pillows do we have in inventory in the eastern region of the US?” A generic AI chatbot wouldn’t know the answer because it didn’t have access to (and thus wasn’t trained on) that retail company’s private data.
Enter RAG, an acronym for retrieval-augmented generation. To make a query like the one above work, the GenAI platform needs to turn it into a multi-party workflow along these lines (sketched in code below):
1. Interpret the question and identify the data it requires (pillow counts, eastern US region).
2. Retrieve that data from whichever system of record holds it, whether the company’s own database or a partner’s.
3. Augment the LLM’s prompt with the retrieved results.
4. Generate a natural-language answer grounded in that data.
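Here is a minimal sketch of those steps, assuming the inventory data sits in a SQL database the platform is allowed to query; the schema, retrieve_inventory, and call_llm are hypothetical stand-ins:

# A minimal RAG sketch for the pillow-inventory question.
import sqlite3

def retrieve_inventory(db_path: str, category: str, region: str) -> int:
    """Retrieval step: pull the private data the base model was never trained on."""
    with sqlite3.connect(db_path) as conn:
        row = conn.execute(
            "SELECT SUM(quantity) FROM inventory WHERE category = ? AND region = ?",
            (category, region),
        ).fetchone()
    return row[0] or 0

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call."""
    raise NotImplementedError

def answer_question(question: str, db_path: str) -> str:
    """Augment the prompt with retrieved facts, then generate the answer."""
    quantity = retrieve_inventory(db_path, category="pillows", region="US-East")
    prompt = (
        "Using only the data below, answer the question.\n"
        f"Data: pillows in US-East inventory = {quantity}\n"
        f"Question: {question}"
    )
    return call_llm(prompt)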
While GenAI systems with RAG capabilities are only just starting to appear, they’ll quickly become the norm for business purposes for all the reasons discussed. To make this work, however, we need to ensure that the underlying data is actually present in the database, which brings us to our third requirement.
Your data has left the building: How shared data solutions will empower no-code outcomes
Notice how so much of the conversation about GenAI is actually a conversation about data? Training LLMs requires data, RAG is all about dynamically querying data, and the generated applications themselves are going to produce and consume—you guessed it—more data.
Now here’s the rub: Companies such as Amazon have said that 80 percent (or more!) of mission-critical data lives outside their four walls—sitting with suppliers, merchants, logistics partners, SaaS companies, you name it. Corporate data that previously resided in a company’s private data center is now distributed across hundreds of partners, public and private clouds, and dozens, if not hundreds, of third-party SaaS apps.
These trends are only accelerating, with CIOs and CDOs becoming less stewards of corporate data and more conductors of a large data orchestra: responsible for the outcome without actually playing the instruments.
Training LLMs for no-code app generation, powering RAG-based construction methods, and running the apps will require access to all data related to the business—not just the data a company owns—which must be accurate, complete, and up to date regardless of source. In this way, modern GenAI-powered applications are inherently multi-party data automation apps and must also navigate all the complexity that this implies. If and when these apps need to generate and update data (which they will), the problem only grows hairier.
Modeling and sharing data will take on an even more business-critical role as more information workers become data producers and consumers without the usual gatekeeping by software and data teams to help manage it. Critical concerns such as which fields represent PII, what data can be shared externally, and whether to redact or obfuscate sensitive data now lie in the hands of an AI model.
This means it needs to be really, really good at understanding the underlying entitlement and infosec policies of the company in order to protect sensitive assets from accidental exposure while still enabling business apps to be created faster than ever.
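What might “really good at understanding entitlement and infosec policies” look like in practice? At minimum, something like a field-level sharing policy that gets enforced before any record leaves the building. The field names and rules below are hypothetical:

# A minimal sketch of a field-level sharing policy. Field names and rules are invented.
SHARING_POLICY = {
    "customer_email": {"pii": True,  "share_externally": False},
    "order_total":    {"pii": False, "share_externally": True},
    "ship_region":    {"pii": False, "share_externally": True},
}

def redact_for_partner(record: dict) -> dict:
    """Return only the fields policy allows to leave the company, masking any PII."""
    shared = {}
    for field, value in record.items():
        rule = SHARING_POLICY.get(field, {"pii": True, "share_externally": False})
        if not rule["share_externally"]:
            continue                      # drop fields that must stay internal
        shared[field] = "***" if rule["pii"] else value
    return shared

print(redact_for_partner(
    {"customer_email": "a@example.com", "order_total": 42.5, "ship_region": "US-East"}
))
# -> {'order_total': 42.5, 'ship_region': 'US-East'}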
The good news is that unlike generating no-code applications, sharing data is something that a data platform offering can already provide. Distributed ledgers—notably multi-tenanted cloud-scale ledgers—can track ownership and provide an auditable trail of who-shared-what-with-whom, as well as share the data payload itself when necessary.
Unlike traditional APIs, which are akin to dumb “garden hoses” that simply carry whatever’s put inside them, modern data-sharing and automation platforms are capable of synchronizing data accurately across any combination of companies, clouds, geographies, and tech stacks. This ensures that all parties and systems view a single, unified, and complete “golden chain” record, whether it describes customers, supplies, transactions, or other business data.
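As a toy illustration of that auditable trail, here is a hash-chained log of who shared what with whom, where each entry commits to the one before it so every party can verify the same history; this sketches the idea, not any particular ledger product:

# A minimal hash-chained audit log of share events.
import hashlib, json, time

def append_share_event(log: list[dict], owner: str, recipient: str, payload: dict) -> dict:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "owner": owner,                 # who shared the data
        "recipient": recipient,         # whom it was shared with
        "payload": payload,             # the data (or a reference to it)
        "timestamp": time.time(),
        "prev_hash": prev_hash,         # links this entry to the previous one
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

log: list[dict] = []
append_share_event(log, "retailer", "supplier-A", {"sku": "PILLOW-01", "on_hand": 1200})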
Bottom line? Combining distributed ledger technology with appropriately trained GenAI and RAG-based optimizations will undoubtedly deliver a powerful end-to-end platform for creating next-gen no-code solutions.
Does this Brave New World still have developers in it?
Citizen developers are coming, because they must—the economics demand it. The tools of their trade, reimagined no-code platforms based on distributed ledgers and GenAI expertise, are going to change the IT landscape in countless ways to unlock a new wave of business benefits and value.
This new generation will sweep away the limitations of the past, where no-code platforms focused more on preventing users from doing the wrong thing than on empowering them to create whatever they need. Instead of restrictions and guardrails, LLMs and AI training will embody the same expertise and capability as a real human developer.
So, does this mean developers are eventually going away? In a word: No.
If you look at any major computer-related advancement over the past 75 years—the invention of the transistor, personal computers, the Internet, the public cloud, etc.—none of these transformative changes put developers out of a job. On the contrary, new innovations only increase the underlying demand for developers.
But we can expect to see an evolutionary shift in jobs, especially when it comes to more mundane development tasks. Similar to how data platforms such as Snowflake and Databricks gave rise to a massive generation of workers with “data” in their titles (data analysts, data operators, data scientists, etc.), we’ll likely see more new job titles related to GenAI and data sharing.
For example, web app developers won’t disappear but may be partially replaced by “prompt engineers” or “RAG engineers” who help businesses style and tune their GenAI platforms for better outcomes. Data operators, especially those who specialize in intercompany, multi-party, and shared data, are also set to become a thriving subsegment of the data market.
Unlocking these revolutionary business transformations requires solving some incremental problems, including making GenAI prompting behave more like a conventional product manager, as well as building better infrastructure to incorporate a company’s domain-specific information into an LLM knowledge base.
Researchers, startups, and companies like Amazon, Google, and Microsoft are already hard at work delivering these capabilities commercially. Better prompt engineering, RAG, and multi-party data sharing will together unlock a transformation as revolutionary as the public cloud…and maybe even bigger!
All this to say: Even if you didn’t stay at a Holiday Inn Express last night, you’ll soon wake up capable of creating unique, powerful, secure, and safe business applications for your company—regardless of your background, job title, or technical skills.
Welcome to the brave new world, citizen developer!
How can your organization safely embark on this new GenAI frontier? Check out our eBook, “Unlocking the promise of generative AI,” for actionable measures you can take to keep sensitive data secure.