Embrace the low-coders and the no-coders (and perhaps even the GPTers)

In the early 1950s, there was a problem with programming. Digital computers offered the promise of automation and innovation: the press was full of reports about the wonders of ‘electronic brains’. But it had become apparent that just having computers was not enough: to do useful work, they had to be programmed, and programming turned out to be hard.

It’s important to remember what programming meant in those early days. It did not mean opening up an IDE: there were no IDEs, there were no text editors, there weren’t even any screens. It did not mean importing libraries, or entering commands in a language which looked like English. It meant breaking down every problem into mathematics, and then breaking the maths down into basic arithmetic and atomic logic. The primary productivity innovation was the creation of assembly languages: symbols and mnemonics to make it easier to shuttle numbers in and out of memory and perform operations on them - but even these languages were only one step away from the physical hardware.

The computer pioneer, Grace Hopper, had an idea which would change the nature of programming: she realised that it was possible to use computers to automate part of the process of programming. She observed that programmers routinely built standard subroutines to do common tasks. Why not treat those subroutines as commands, and write software that read those commands, looked up the subroutines and integrated them into the machine code? This process of pulling together existing subroutines became known as compiling, and the software that did the work became known as a compiler.
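A toy sketch (all names and subroutines invented for illustration, not historical code) of the lookup-and-splice process Hopper described: each high-level command is a key into a library of pre-written subroutines, and 'compiling' simply expands the commands into the stored code.

```python
# Hypothetical library of pre-written subroutines, keyed by command name.
SUBROUTINES = {
    "READ":  ["LOAD_INPUT"],
    "ADD":   ["LOAD A", "ADD B", "STORE A"],
    "PRINT": ["LOAD A", "OUTPUT"],
}

def compile_program(commands):
    """Expand each command into its stored subroutine, in order."""
    code = []
    for cmd in commands:
        if cmd not in SUBROUTINES:
            raise ValueError(f"unknown command: {cmd}")
        code.extend(SUBROUTINES[cmd])  # splice the subroutine into the output
    return code

print(compile_program(["READ", "ADD", "PRINT"]))
```

Real early compilers also had to fix up memory addresses and jumps between the spliced pieces, but the essence is the same: the machine, not the human, assembles the final program.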

You might imagine that this idea was greeted enthusiastically, and that programmers would quickly start figuring out how to build and enhance compilers. In practice, while Hopper did find a community of collaborators, she also met with scepticism and resistance.

Some of this initial resistance was justified and data-driven. At a time when the number of computers in the world was small, run time on those computers mattered. Productivity analysis on early compilers showed that, while they saved time in developing programmes, those programmes ran more slowly. When the extra time was multiplied across many iterations of the code, the programming productivity benefits vanished. Human-generated code was more efficient than machine-generated code. But this was only at the start: subsequent iterations of compilers, aided by improvements in computer performance, soon eliminated this human advantage.

Yet Hopper still met resistance, this time driven by belief, behaviour and habit, rather than data. It was difficult for some programmers, who had learnt the peculiarities of their machines, to accept that those peculiarities could be abstracted away by a piece of software. As Hopper observed in a 1976 interview, “Well, you see, someone learns a skill and works hard to learn that skill, and then if you come along and say, ‘you don’t need that, here’s something else that’s better,’ they are going to be quite indignant.” To those people, programming using a compiler was somehow not ‘real programming’.

Today it seems unthinkable to build software in a world without compilers or interpreters. Most programmers (including me) would have no idea how to start writing code that operated at the assembly language or machine level. Through her work on compilers, Grace Hopper became Director of ‘Automatic Programming’. Today, we would not call the field ‘automatic programming’: to us, it is just programming.

However, some of the scepticism and resistance to productivity tools remains with us. High-level languages of the type that Grace Hopper helped invent through her work on COBOL have been remarkably successful. Even though COBOL has long been out of fashion for new development, the use of English-like commands and a range of familiar logical constructs have enabled us to build a digital world. And yet, there are other ways of building software.

Low code and no code solutions are marketed at end users, with the proposition that they will enable people who are not technical experts to automate aspects of their work. They can assemble logic and create rules using tables and graphical interfaces. Most such products now come with an extra injection of AI. Some are sold with the promise that, with these tools, you can bypass the killjoys in the IT department, who will ask lots of awkward questions about security, resilience, maintainability and costs. Your users can become ‘citizen developers’.

Many of us in the technology profession, particularly those who have been writing code for many years, shudder at the prospect of low code and no code solutions built by end users. We feel uneasy at the term ‘citizen developer’. And sometimes with good reason: most of us have, at some point in our careers, been passed ‘systems’ built by someone with limited technical expertise but a lot of enthusiasm, and asked if we can ‘just’ make them scale 100x, or integrate them with our production database, or make them publicly available on the Internet. We shrug our shoulders and suck our teeth, knowing that ‘just’ doing those things will require a full rewrite.

However, I think that, when we feel the temptation to react in this way, we should look in the mirror and ask ourselves: are we behaving like Grace Hopper’s enthusiastic crew of collaborators, who help make efficient compilers a reality? Or are we reacting like the sceptical assembly and machine code developers, defensive of our craft, and hostile to tools that abstract away our skills?

Perhaps the right way to respond to low code and no code tools is to take the label of ‘citizen developer’ seriously. We often complain that there aren’t enough developers in the world, and we have a whole population of people who have just volunteered to take on this role. Rather than sighing at their efforts to build systems that aren’t reliable, scalable or secure, maybe we should take the opportunity to teach these disciplines, and embrace them as part of our community.

This question seems particularly pertinent as we enter an era of AI-generated code. Some of this code will be terrible (after all, it is trained on code that already exists, much of which is terrible) and we do not yet know its long-term impacts on readability and reliability. But we can’t (and shouldn’t) wish this innovation away: to engage is better than to ignore.

The history of programming is a history of pulling ourselves up by our bootstraps, from circuits, to machine code, to assembler, to high-level languages, to frameworks, to low code and no code platforms, and now to AI. We have had to learn how to live and practise well at each layer of abstraction: we now have some more learning to do.

(Views in this article are my own.)

I am not a coder so cannot comment on the technical merits of LCNC. However, the concept of the Citizen Developer intrigues me, and I am interested in why an organisation would entertain it. Is this purely for economic gain (speed, efficiency, scalability etc.)? Or is there actually value in getting "other" people, let's say your consumers, to shape your products (apps and services) for you?

Sean Alderson-Claeys

Associate Architect at HSBC Global Banking and Markets

3 months

Like many of those reading and commenting on this really interesting article, I spent years honing my skills, from my young ZX80 experiments to 8085 assembly and on to numerous languages like C, C++, Java, Perl... and more recently Python. IDEs have evolved to become much more intuitive, and whilst I am an LCNC sceptic, it is the natural direction of travel. In the mid-to-late nineties I worked with a tool that generated VB code from UML: the concept was to capture the requirements in UML, from which the base code would be generated. In many places it would just generate code stubs where the programmer would fill in the detail, and it sometimes struggled with reverse engineering (generating UML from hand-cranked code), but you could foresee what was to come. I see LCNC primarily as a response to the continued proliferation of end-user computing (Excel) in an era of increasing data concerns. Many end users see IT as a slow-moving beast; if they want to do something at pace, they try to do it themselves. LCNC is a means for end users to continue to have control in a controlled environment. Professional developers will still be needed to do the "heavy lifting", but interviewing will need to change and adapt to cater for, and differentiate, LCNC developers.

Jonathan Idle

OutSystems, Camunda and Pegasystems experts delivering for public & private sector customers throughout the UK and Europe from our off-shore CoEs.

3 months

Great article, David. Tech never stands still, and embracing low code (of which, for me, there are two distinct flavours depending on platform: citizen and professional) is a great way to increase productivity.

Sam Hope-Evans

Senior Solutions Architect at SoundCloud - ex GitHub/Microsoft

3 months

Interesting article & history, David Knott. Low code/no code tools certainly have their justified place in businesses. They can enable a culture of citizen devs outside of the IT dept (AKA the pro devs?). This is great for shifting left some tech skills and empowerment to keen business users, and for encouraging a growth mindset in tech. It can also cover those smaller projects which IT rarely has bandwidth for. Launching within an IT-dept-led initiative, with guardrails and mentors, should keep it on track when it comes to scalability and security, whilst building community & collaboration across the org.



More articles by David Knott