Exploring the ‘Final Frontier’ of End-User Programming

How do you create a world where anyone can build, test, and deploy their own software from scratch, using only requirements expressed in language, images, video and audio?

That’s the question my co-authors – Diana Robinson (PhD candidate), Christian Cabrera (Research Associate), and Neil Lawrence (DeepMind Professor of Machine Learning), all researchers at Cambridge University, together with Lars Mennen (co-founder and CTO of Cogna) – and I tackled in a recent paper: ‘Requirements are All You Need: The Final Frontier for End-User Software Engineering’.

The software development pipeline currently relies on product managers and user researchers acting as intermediaries between users and engineers. They identify unmet needs in groups of users, and then help define software that can meet those needs. As my co-author Diana points out, the problem with this approach is that you’re essentially designing a product for the lowest common denominator – flattening specific user needs into basic, generalised requirements that you can build for.

The paper explores a new paradigm for software development: a future where the development pipeline is driven directly by users, assisted by AI, with efficient oversight from professional engineers, also assisted by AI. This approach creates a one-to-one relationship between users and software, allowing any user to create applications tailored specifically to their needs. The key to this vision is natural-requirement driven synthesis, where users can fully express their needs through natural language, images, audio, or video demonstrations, and have AI synthesise a hyper-customised, precision application that meets them.

This is exactly what we’re working towards at Cogna. But with it comes the challenge of helping users imagine what can be built. If you’ve never built software before, it’s hard to know what visualisations and interactions are possible. My co-author Diana points out that people are often led by popular culture and movies in terms of what's possible – and those expectations are not necessarily based in fact. So how do we guide users to fully articulate their requirements in a way that can be built and deployed?

To address these challenges, we outlined a research agenda in the paper, focusing on three key areas:

  1. Expressing user requirements: Helping users imagine and articulate their requirements through natural language, images, audio, and video demonstrations.
  2. Generating meaningful tests: Developing methods to automatically generate tests that ensure the software meets user requirements.
  3. Responding to dynamic requirements and environments: Creating systems that can adapt to changing requirements and environments automatically, while keeping the user in control of the software’s evolution.

Consider an example of a pilot-tracking app for a US airline company. The Federal Aviation Administration (FAA) requires pilots to complete a flight review every 24 months and to perform at least three takeoffs and landings within the last 90 days to carry passengers. So the requirements for this app could be quite short: 1) allow pilots to log their flight reviews, 2) give access to flight logs showing takeoffs and landings, 3) show a dashboard of pilot status, 4) raise alerts about expired requirements, and 5) allow the booking of review sessions. Just with those basic requirements, the airline could use AI to synthesise a custom app tailored to its exact needs.
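To make the currency rules concrete, here is a minimal sketch of the logic behind requirement 4, using the FAA rules above. This is an illustration only, not the synthesised app itself, and all the function names are hypothetical:

```python
from datetime import date, timedelta

FLIGHT_REVIEW_MONTHS = 24   # FAA flight review interval
CURRENCY_WINDOW_DAYS = 90   # window for takeoff/landing currency
REQUIRED_LANDINGS = 3       # landings needed to carry passengers

def review_expired(last_review: date, today: date) -> bool:
    """True if the flight review is overdue (24 calendar months, approximated in days)."""
    return (today - last_review).days > FLIGHT_REVIEW_MONTHS * 30

def currency_lapsed(landing_dates: list[date], today: date) -> bool:
    """True if fewer than three takeoffs/landings fall within the last 90 days."""
    cutoff = today - timedelta(days=CURRENCY_WINDOW_DAYS)
    recent = sum(1 for d in landing_dates if d >= cutoff)
    return recent < REQUIRED_LANDINGS

def pilot_alerts(last_review: date, landing_dates: list[date], today: date) -> list[str]:
    """Requirement 4: collect human-readable alerts for expired requirements."""
    alerts = []
    if review_expired(last_review, today):
        alerts.append("flight review expired")
    if currency_lapsed(landing_dates, today):
        alerts.append("passenger-carrying currency lapsed")
    return alerts
```

A dashboard (requirement 3) would then simply call `pilot_alerts` for each pilot and surface the results.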

Part of the beauty of precision software is how simple it can be. Unlike general tools, with overwhelming options and functionalities, precision applications like this just do exactly what the user needs. Think about spreadsheets. I love them, but a lot of people are understandably intimidated by them – a zillion different buttons and a zillion different formulas you might write. The promise of natural-requirement driven end-user programming is software that is more accessible and user friendly.

But how will the user know that the software meets their needs and actually works? Given the ambiguity of natural requirements and the various edge cases that could crop up, testing is vital. AI can help here as well. AI could generate tests to cover all possible scenarios, and analyse them more quickly and thoroughly than a human. The trickiest part is presenting these results in an understandable way to the user, so they can validate and refine their requirements.
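As a toy illustration of the boundary cases such generated tests should probe, consider the 90-day currency rule from the pilot example. The rule function and its inclusive-window assumption are hypothetical, not taken from the paper:

```python
from datetime import date, timedelta

def landings_current(landing_dates, today, window_days=90, required=3):
    """FAA-style currency rule: at least `required` landings within the window (inclusive)."""
    cutoff = today - timedelta(days=window_days)
    return sum(1 for d in landing_dates if d >= cutoff) >= required

today = date(2024, 6, 1)

# Boundary: a landing exactly 90 days ago still counts under an inclusive window.
edge = [today - timedelta(days=90)] * 3
assert landings_current(edge, today)

# Boundary: one day past the window, and currency lapses.
stale = [today - timedelta(days=91)] * 3
assert not landings_current(stale, today)

# Boundary: recent landings, but one short of the required three.
assert not landings_current([today, today], today)
```

Whether the window is inclusive or exclusive is exactly the kind of ambiguity in a natural-language requirement that a generated test surfaces for the user to confirm.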

What happens when the requirements change? Maybe the underlying platform or API updates, or the FAA changes its requirements for pilots. We already have existing methodologies to handle the former. Say a Python library changes. It should be simple enough to automate updating the code, re-running the test suite, and repairing any errors. For the latter, right now, users would manually need to re-synthesise the app with the new requirements, but I think we’ll soon see AI able to automate this process by monitoring relevant sources and adapting the software accordingly. There’s an open question, however, about how you tell the user about these updates. You might not need to bother them about updates of the first sort, but you’d want to make them aware of requirement changes so they remain in control of their software and how it’s evolving.
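The dependency-update workflow described above can be sketched as a simple loop. The `upgrade`, `run_tests`, and `repair` hooks are hypothetical stand-ins for whatever tooling performs each step:

```python
def update_and_verify(upgrade, run_tests, repair, max_attempts=3) -> bool:
    """Apply a dependency upgrade, then test and repair until the suite passes.

    upgrade:   callable that applies the library update
    run_tests: callable returning True when the test suite passes
    repair:    callable that attempts an automated (e.g. AI-driven) fix
    """
    upgrade()
    for _ in range(max_attempts):
        if run_tests():
            return True   # suite green: safe to redeploy
        repair()          # attempt an automated fix, then retest
    return False          # give up and escalate to a human engineer
```

The same loop structure would apply to requirement changes, with re-synthesis in place of `repair` – the difference being that, as noted above, the user should be told about it.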

So we’re still early in the journey, but this is the vision we’re working towards. It’s exciting to imagine a future where everyone can define and build the software they need. As my co-author Diana has said, it could lead to solutions that don’t even resemble traditional apps. Users will inevitably surprise us, experimenting and building software in new ways, creating applications – or entirely new forms of software we can’t yet imagine.

This paper looks further ahead than what we can currently deliver – and has been accepted for the Software Engineering in 2030 Workshop later this year, which hopes to define a roadmap for the next decade – but it charts a path to get where we want to go. And engaging in this sort of research creates a flywheel. Today, we’re working with customers to understand their requirements, help them articulate these needs, and then synthesising them into software. We’re generating a lot of data about the problems customers actually want to solve, which informs both our development and further research.

Collaboration with researchers at Cambridge University, with their expertise in human-computer interaction and autonomous AI systems, is instrumental. By addressing the research questions posed in the paper, we can gradually incorporate these insights into what the team at Cogna are building – and get closer to making AI-powered end-user software engineering the norm, transforming how we all build and use software.

Read the paper here.

Muhammad Hamza Shahbaz

Connecting Businesses with AI Solutions | Digital Marketing Strategist | Empowering Growth at Centrox AI

9 months

What an inspiring vision! I'm curious, what challenges do you anticipate in making this a reality for everyday users?

Jie Zhang

Lecturer (Assistant Professor) at King's College London

9 months

Great vision and insights indeed! And cannot agree more! Will you attend FSE 2024 for the SE in 2030 workshop you mentioned in the post?

Jasmin Jahić

Director of Studies in Computer Science at Queens' College, Cambridge

9 months

Interesting. I like the following example. Imagine someone owning a bakery and they want to promote their business and have software to manage their store. How deep is the technology gap? Asking that person to get a domain, hosting, and a cloud solution for their software might leave us talking to a wall. So, there has to be someone in between.

Elena Simperl

Professor of computer science at King's College London and director of research at the Open Data Institute, @esimperl.bsky.social on Bluesky

9 months

This is very insightful. Working on a similar thing at the moment to design knowledge bases...

More articles by Andy Gordon
