The Death of Linux
I was handed my first set of Slackware floppies around 1994 - about 20 years ago. At the time I was an aspiring Eagle Scout, and for some reason the local computer science students helped out at our troop.
Mosaic, one of the first web browsers and the one largely responsible for popularizing the internet, was released around that time.
Back then, if you wanted to get online on Linux you were responsible for compiling software that usually didn't compile and tweaking the configs for everything. Chances are your modem wasn't supported either, which was extremely frustrating for an 11- or 12-year-old who didn't know how to program yet. Let's not even talk about installing the X Window System. Enter the man pages and my eternal struggle to understand how a computer works. (Hint: no one really knows.)
Fast forward to the late 90s and early 00s and we had the first tech boom. Once the exuberant markets had settled down, the engineers got back to work designing new systems that would be easier to manage and able to handle the ever-increasing load.
Even though the concept of virtualization is rather old, it wasn't until VMware released its virtualization software - and Xen released theirs later, in '03 - that the idea vaulted into the mass consciousness. Not too many people knew it, but this was the beginning of the end of the monolithic kernel. Andrew Tanenbaum and his acolytes had won. Maybe not in the way they imagined, but the death march was on.
It wasn't until later that I truly understood why the kernel was doomed. It turned out not to be a political or technical argument at all. It was only the economics of software development at play.
Later on, in the mid 00s, a company out of Seattle started leasing out unused server capacity in its datacenters. That company was Amazon. Once again, little did anyone know, but in less than a decade this decision would destroy the concept of manually managing your own servers: most businesses went from racking their own physical machines to working with virtual ones. Capex and opex were not, at the outset, the reason it was done. It was simply more practical to work with your servers programmatically, through software. Of course, that adoption led to massive capex and opex reductions later on.
Now anyone could launch a server - or a thousand - in five minutes.
Later on, in the late 00s, another interesting thing happened. A company called Heroku started offering fully managed virtual servers. That is - you didn't need *any* operations people. At the time I personally thought it was completely insane that a company would pay for such a service. The idea that most engineers out there didn't take care of their own systems was foreign to me.
Turns out, as I learned when I became responsible for recruiting said engineers, most engineers might be able to code well enough, but very, very few can both code well and take care of servers - let alone manage them in bulk.
In the past few years a tsunami of projects and companies has formed around the realization that it's not the servers that need to be dealt with - it's the applications using them. The servers are merely raw resources, not homes for your applications.
An entire cottage industry - the container ecosystem - has spawned countless companies in the past year ALONE, the result of tens of thousands of engineers coming to the realization that something had to be done.
The server must die.
So engineers, being who they are, tried to automate the problem away. Noble intentions, but countless problems sprang up, resulting in an influx of new companies willing to take care of them. Everything from security to networking to simply getting software from one computer to another became problems the new container paradigm brought to the fore - the very problems it purported to solve.
That is how big this problem really is.
The evolution continues to play out, but we are not done yet.
Today it is quite common for large tech companies to employ ONE person to manage the operations of over 10,000 servers. One person - 10k servers. If your project is just starting out you might be on a single free micro EC2 instance, but as you scale you'll have to add more servers and employ more, and more experienced, engineers.
What happens when John Stenbit's vision of every bullet having an IP address comes to life?
We are long past the days of being able to simply SSH into a host. We don't name our servers after Greek and Roman gods anymore. They aren't pets - they are cattle, and even that metaphor doesn't quite work anymore.
While all this activity was going on under our noses, something else was happening too.
Marc Andreessen's insight that "software is eating the world" was dead on. (The same Marc who co-authored the aforementioned Mosaic browser.) Now there is no more "tech industry". There is no "IT Department". The music companies were the first to realize it, fight it, and LOSE. Slowly but surely every single industry is being overwhelmed and forced under the mighty steamroller of Silicon Valley. Nothing is safe. Not your babysitters, not your taxis, not your lunch, not your banks. Even your mighty government is close to going the way of the dodo bird.
Here come the geeks.
Everything is software. Everyone is a part of this now whether they like it or not. The luddites can smash as many buses as they like - it won't change anything. The singularity already came and it stole your lunch too.
The effect was that every company, in order to grow, now had to hire large teams of software engineers. This meant bigger applications and more problems to go with them. Traditional tech organizations knew about this first, and so two-pizza teams and service-oriented architecture (microservices) began their evangelistic crusade.
This, of course, exacerbated the existing problems of managing those pesky servers.
The traditional monolithic operating system (Linux) is not a good fit for our future. It was designed for real hardware and for multiple users "logging in", and it is filled to the brim with security holes for unscrupulous governments and bored 12-year-olds.
Unikernels, sometimes called library operating systems, are its replacement. They already exist for the JVM (https://osv.io/), Erlang (https://erlangonxen.org/), Haskell (https://github.com/GaloisInc/HaLVM), Go (https://lsub.org/ls/clive.html), OCaml (https://www.openmirage.org/), and others.
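To make the idea concrete, here is roughly what a minimal MirageOS unikernel looks like - a sketch based on the "hello world" from the MirageOS tutorial, assuming the Mirage 2.x API of the time (the `Unikernel.Main` and `hello` names follow the tutorial's conventions). The entire "operating system" is two OCaml files: the application itself, and a config describing which OS libraries to link it against.

```ocaml
(* unikernel.ml - the entire application. There is no shell, no users,
   no login: just this code linked against the OS libraries it needs. *)
open Lwt

module Main (C : V1_LWT.CONSOLE) = struct
  (* start is the unikernel's entry point; the console is handed in
     as a module parameter rather than discovered at runtime. *)
  let start c =
    C.log_s c "hello from a unikernel" >>= fun () ->
    Lwt.return_unit
end
```

```ocaml
(* config.ml - declares the pieces the unikernel is assembled from. *)
open Mirage

let main = foreign "Unikernel.Main" (console @-> job)

let () = register "hello" [ main $ default_console ]
```

Running something like `mirage configure --xen` followed by `make` (the exact invocation varies by Mirage release) then produces a single-purpose bootable VM image measured in kilobytes rather than gigabytes: the application and the kernel are the same artifact.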
The current container paradigm is not sufficient - it is merely dewdrops on a leaf when what we need is an ocean to drink from.
The software is still nascent: the most mature projects target more esoteric, less widely used languages, such as OCaml with MirageOS. There's also quite a bit of pain for developers used to traditional POSIX systems, but make no mistake about it - if you are writing software professionally in five years, it'll be on a unikernel.
Linux died a long time ago. It's time to move on. It's time for the unikernel.