The cresting wave of computer science
In 1936 Alan Turing created something magical: a mathematical abstraction we now call the "Turing Machine". It could solve all theoretically computable problems. He threw that tiny abstraction into a vast pool of unexplored mathematics, and it caused a ripple.
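For readers who have never seen one, here is a minimal sketch of a Turing machine in Python: a tape, a read/write head, and a table of transition rules. The toy machine, its states, and its rules are my own illustrative choices, not anything from Turing's paper.

```python
# A minimal Turing machine simulator: a tape, a head, and a transition table.
# This toy machine flips every bit on the tape and then halts.
from collections import defaultdict

def run_turing_machine(tape, rules, state="start", halt="halt", max_steps=10_000):
    """rules maps (state, symbol) -> (new_symbol, move, new_state)."""
    cells = defaultdict(lambda: "_", enumerate(tape))  # an "infinite" tape of blanks
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        new_symbol, move, state = rules[(state, cells[head])]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(v for _, v in sorted(cells.items()) if v != "_")

# Flip 0s and 1s until the blank symbol "_" is reached, then halt.
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("1011", flip_rules))  # -> 0100
```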
From ripple to wave, the first 80 years
World War II accelerated the ripple when it found an important application: defeating the Nazis. The German engineers had designed an encryption device called the Enigma Machine. It could take any text message and scramble it so that it could not be understood if intercepted. Turing knew that his machine could defeat the Enigma machine, and set about doing so.
By the end of the war, John Von Neumann had designed the precursor to all modern computers based on Turing's model. The innovative ripple Turing had initiated was a fully fledged self-sustaining wave.
When I began studying Computer Science at the University of Manitoba in the early 1980s, the wave was already huge. Computers were fast and had large memories. I had access to an 8-core machine with a clock rate in the megahertz range and 16 MB of RAM, or at least the meagre fraction of CPU seconds doled out to undergrad computer science nerds. It was an Amdahl mainframe, a clone of the IBM 370 series. We could feed it stacks of punched cards, or use a monochrome monitor and keyboard.
For the next several decades, the wave grew, and I tried to stay on top. What I knew when I graduated in 1985 was already obsolete by 1990. I learned more and faster in industry than I ever could have in academia. That is because I care about applications.
Mathematical models are wonderful things...to mathematicians. Nobody else cares. Solutions to abstract problems like correct thread forking and joining, synchronizing access to shared storage, etc. are incredibly dull to anybody but a computer scientist. Applying these tools to real-world problems allows us to make the world better in a way that improves people's lives tangibly. Now that is fulfilling.
The biggest challenge for a practicing computer scientist during the first three decades of my career was managing tradeoffs. This is so well understood by engineers that it is a cliche. You can always trade memory for computational speed, or vice versa, for example. You could get more of both if you spent a fortune.
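As a tiny illustration of one side of that trade, here is a minimal sketch assuming nothing beyond the Python standard library: memoization spends memory on a cache to avoid recomputation. The Fibonacci example is my own choice, purely for illustration.

```python
# Trading memory for speed: cache (memoize) results so repeated work becomes
# a table lookup. More memory consumed, far fewer CPU cycles burned.
from functools import lru_cache

@lru_cache(maxsize=None)       # unbounded cache: maximum memory, maximum speed
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(200))                # instant with the cache; hopeless without it
```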
I am happy to say I became very proficient at managing these tradeoffs. I designed many systems, with the help of great colleagues, that solved incredibly complex problems. Many projects in my career did not succeed. However, this was never, not even once, because we couldn't build it. This is unsurprising, since Turing proved it could be done in 1936.
After 30 years of trying to stay atop the wave of the computer revolution, I was getting tired of swimming. Then I learned cloud services. When I had an opportunity at Irdeto to take a selection of AWS courses and build a solution with a top notch team, all the pieces of the puzzle came together.
I didn't realize it at the time, but this was an existential crisis for my computer scientist's mind. The basic problems were solved before I was born. The engineering problems were now solved. Tradeoffs become somewhat irrelevant when you have effectively infinite storage, CPU horsepower that ramps up as needed, and the ability to interface with anything in the world that speaks internet protocol, at light speed. And by the way, for nearly free.
You would think that is a good thing. However, I had spent my adult life assembling the puzzle in my head. Now it was complete, and I didn't know what to do next. The wave of computer innovation had completely engulfed me. So I did the obvious thing and retired.
Then, by accident, something wonderful happened to me: I discovered a fascinating family of unsolved problems in music. Now I know that as long as I am able, I will be applying the beautiful inventions of Turing, Von Neumann, and, yes, Bezos to solving these problems. That is why I started Not a Recipe Software.
I feel like I am back on top of Turing's enormous wave, and I can see farther than ever before. The wave is unstoppable and accelerating, and impacting every single aspect of human experience. However, like the famous wave in Katsushika Hokusai's print, it is both beautiful and ominous.
The Crest
The three hard problems of computer architecture are:
1) How to get enough memory to model all your data. Turing's machine had an infinite tape. This problem has been receding at an ever-increasing pace, and with cloud computing it has effectively disappeared. S3 is an infinite tape for all intents and purposes (sketched below, after this list).
2) How to run a bunch of related but different tasks simultaneously so you finish sooner. This is solved by multiple-instruction, multiple-data (MIMD) multiprocessing (also sketched below). Just running a bunch of Von Neumann computers in parallel gives you this. Even the old U of M Amdahl had this capability. With cloud services, you can spin up as much MIMD capacity as you need, faster than you can think. You just need to be skilled in the art.
3) How to process huge amounts of similar data in parallel with the same algorithm. Generating really good computer graphics is an example everyone has seen. Obviously GPUs solve this class of problems. GPUs are an example of single-instruction, multiple-data (SIMD) multiprocessing (the third sketch below). GPUs are also available in the cloud, ready to tackle any problem we can imagine.
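To make these three problems concrete, here are minimal sketches in Python, one per item. The bucket name, task functions, and data shapes are my own illustrative assumptions, not anything prescribed by AWS or the libraries involved.

For item 1, S3 treated as an effectively unbounded tape, using the standard boto3 client:

```python
# Storage: every "cell" of the tape is an S3 object keyed by its index.
import boto3

s3 = boto3.client("s3")
BUCKET = "my-infinite-tape"  # hypothetical bucket; create it before running

def write_cell(index: int, value: str) -> None:
    s3.put_object(Bucket=BUCKET, Key=f"tape/{index}", Body=value.encode())

def read_cell(index: int, blank: str = "_") -> str:
    try:
        return s3.get_object(Bucket=BUCKET, Key=f"tape/{index}")["Body"].read().decode()
    except s3.exceptions.NoSuchKey:
        return blank  # unwritten cells read as blank, just like Turing's tape

write_cell(42, "1")
print(read_cell(42), read_cell(43))  # -> 1 _
```

For item 2, MIMD: several unrelated tasks running at once on separate processes (or, in the cloud, on separate machines):

```python
# MIMD: different instructions applied to different data, simultaneously.
from concurrent.futures import ProcessPoolExecutor

def render_report(name):  return f"report:{name}"
def compress_logs(day):   return f"logs:{day}:compressed"
def index_documents(n):   return f"indexed:{n}"

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        futures = [
            pool.submit(render_report, "Q3"),
            pool.submit(compress_logs, "2021-06-01"),
            pool.submit(index_documents, 10_000),
        ]
        for f in futures:
            print(f.result())
```

And for item 3, SIMD: one instruction applied to millions of pixels at once. NumPy vectorizes this on the CPU; the same expression maps naturally onto a GPU:

```python
# SIMD: the same brightness/contrast adjustment applied to every pixel.
import numpy as np

frame = np.random.rand(1080, 1920, 3)               # one HD frame, values in [0, 1]
adjusted = np.clip(frame * 1.2 + 0.05, 0.0, 1.0)    # one instruction, two million pixels
print(adjusted.shape, adjusted.max())
```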
So all the solutions to the classical problems of computer science exist for nearly free, and they can scale on demand. This effectively means that any problem we can define mathematically can be solved. Now.
It will take humankind time to solve the vast body of problems that have not yet been addressed by combining these powerful tools. However, it will shock most of us how quickly it occurs. I am predicting that the set of interesting problems that can be solved by non-quantum computers will be solved in less than 10 years.
At this point, some people are smirking, because they know that artificial intelligence is getting sophisticated, but it is a long way from doing everything perfectly. You may notice that this is the first time I've mentioned AI in the article.
That's because AI is a fascinating distraction from the real revolution that will come in the next few years. The real revolution will come from combining pure math with the incredible capabilities now at our disposal: effectively infinite storage, effectively infinite SIMD and MIMD processing, and the boundless communication of the internet.
Any problem that can be modeled as a vector space can be solved efficiently by a GPU array. TensorFlow provides a convenient way to build SIMD software for arbitrary applications.
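As a small illustration of that claim, here is a minimal sketch that uses TensorFlow for plain numerical work with no neural network anywhere; the matrix sizes are arbitrary assumptions on my part.

```python
# TensorFlow as a general SIMD engine rather than an AI framework: dense
# linear algebra, expressed once and executed in parallel (on a GPU if one
# is visible, otherwise on the CPU).
import tensorflow as tf

a = tf.random.uniform((4096, 4096))
b = tf.random.uniform((4096, 4096))
rhs = tf.random.uniform((4096, 1))

product = tf.matmul(a, b)           # classic matrix multiplication
solution = tf.linalg.solve(a, rhs)  # solve a linear system, no "learning" involved

print(product.shape, solution.shape)
```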
Currently it is being used for AI, but that is only a limitation of imagination. Nobody says you need to use opaque and inefficient learning algorithms to solve a problem. Many of the problems solved by current neural network AI would be solved orders of magnitude more efficiently if a mathematician merely modeled them.
For example, chess is a problem in vector space (a minimal encoding is sketched after this list):
- the board has 2 dimensions
- the game has 1 time dimension
- each of the 32 pieces on the board traces a path through this 3D space, from an initially defined position until that piece is taken, or the game ends
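Here is a minimal sketch of that encoding: each of the 32 pieces traces a path through (file, rank, time), so a whole game becomes one small tensor that SIMD hardware can process in bulk. The piece numbering and the sample move are my own illustrative assumptions.

```python
# Chess as trajectories in a vector space: positions[t, p] = (file, rank) of
# piece p after ply t, with (-1, -1) marking a captured piece.
import numpy as np

MAX_PLIES, PIECES = 200, 32
positions = np.full((MAX_PLIES, PIECES, 2), -1, dtype=np.int8)

positions[0] = [(f, r) for r in (0, 1) for f in range(8)] + \
               [(f, r) for r in (6, 7) for f in range(8)]   # starting layout

positions[1] = positions[0]
positions[1, 12] = (4, 3)    # hypothetical opening move: the e-pawn advances

# One vectorized instruction over every piece and every ply:
displacement = np.abs(np.diff(positions.astype(np.int32), axis=0)).sum(axis=2)
print(displacement.shape)    # (199, 32): per-ply movement of every piece
```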
The implications of this observation are massive in many areas. I will name a few of them that will impact all of us significantly, and soon:
Medical research will accelerate dramatically over the next decade for many reasons, one of which will be this technology. Protein folding is a fine example of a problem which has been solved. Applying the same approach with increasing computation will allow us to explore, understand, and manipulate the epigenome in the near future.
Security software will face a threat that is orders of magnitude greater than it is designed for. Most cryptography from the Enigma machine forward is simply based on creating a problem that takes a Von Neumann (or single-instruction, single-data, SISD) machine a long time to crack. An algorithm with a 4096-byte key is just an Enigma with 4096 dials, where each dial has 256 settings.
My question is: do the hackers know they are not supposed to use SIMD? What is to stop them from setting up an AWS account and using an array of 10K GPUs, each with 100 processing cores? If the prize is worth it, the budget will certainly be there. Which brings me to....
Cryptocurrencies should probably only be held by security experts until they are secured by international agreement among the financial establishment. The huge and thriving crypto-market is like the wild west right now. If someone robs the bank where you maintain your account, too bad for you. That is if the bank itself isn't run by crooks.
I am not willing to believe every fly-by-night crypto-store company is capable of overcoming the threat I outline above. Furthermore, I am confident they will not make you whole if your Bitcoin disappears along with that of all their other depositors.
If you must keep Bitcoin or other cryptocurrencies as a store of value in the long term, I urge you to store them in an offline, serialized form impervious to internet attack. If you don't know what that means, you are playing a game you don't understand.
Ultimately, banking and cryptocurrency will be made more secure by this technology once the encryption algorithms themselves use the SIMD approach. To defeat the problem, we simply need to make encryption hard for SIMD to crack; being hard for SISD is no longer enough.
Robotics and autonomous vehicles will be revolutionized by algorithms that are orders of magnitude more efficient than AI, completely verifiable by formal methods, and transparent.
AI itself will be revolutionized. The neural network approach has borne amazing fruit, but it is overused. Often AI applications seem to be learning something that could simply be calculated, if anybody bothered to figure out the formula. It is just a matter of getting the functional decomposition correct and assigning the right small parts to the neural network, rather than the whole problem. From a design perspective, this shines more light on the problem. As the opaque AI portions of the solution get smaller, the well-understood parts get larger. Eventually, the tiny bits of AI should be able to invent their own transparent replacements.
Music. Over the next few years I and a small group of colleagues at Not a Recipe Software will demonstrate what is possible with cloud computing and mobile apps using the techniques I describe above. Music applications that nobody dreamed possible will be available soon. The reason that I am so confident in this audacious claim is simple: Alan Turing proved it in 1936.
You can see the Turing Machine in the keyboard image above. The rest is just math...or as the textbooks say "left as an exercise for the reader".
Quantum wave
It may take a decade or more before quantum computing becomes practical. The ripple that will become the second great wave of computer science has begun. By the time it impacts us, the first wave will have overwhelmed everything in its path.
This means a lot of solved problems, and a lot of advancements in the wellbeing of humans everywhere. It also means a lot of pain. Transition is never easy, especially for those who get left behind.
If you are a software professional or executive, don't let the current wave overwhelm you. Take some cloud computing courses and get on top of it. The view is much better than from under the growing white curl. When the wave breaks, you definitely want to be on a surfboard, not floating in an inflatable life jacket.
On one last personal note, my dear departed friend and colleague Lejla Eminovic would have understood all this in the span of a single cigarette. She would have wanted another to think about the implications, though.
Comments
Software/Security/Data
This is an interesting article, especially for me with a different mindset (I would consider myself a student of C. Shannon rather than A. Turing). In my limited experience, trade-offs are everywhere, and some of the most beautiful trade-offs are actually expressed in formulae (e.g., the work done by David Tse for communications). I am not sure the trade-off between computing resources is really gone if money is in the loop. Back at the very beginning of cloud computing, the main motivation was actually saving money. The same goes for the wave of technologies using the term Software-Defined. "Many of the problems solved by current neural network AI would be solved orders of magnitude more efficiently if a mathematician merely modeled them." I am not sure that is the case. Much recent AI-related work was actually done by mathematicians, and I don't see much breakthrough, at least in some subareas (such as reinforcement learning) that I pay attention to.
Retired and busy - Youth Development Cycling Coach and Volunteer
Hmm... Math and Engineering... My claim is that the Turing Machine is the foundation of theoretical computer science for engineers, since Turing Machines require state and are easily implementable in hardware. But Church's Lambda Calculus is the foundation of theoretical computer science for mathematicians: everything is a function, and so it's stateless. (Alonzo Church was Turing's thesis advisor, and Turing Machines have been mathematically proven to be computationally equivalent to the Lambda Calculus.) Lambda Calculus, in my subjective view, is much more mathematically pure and beautiful than Turing Machines. But then, I can build a (finite) Turing Machine in my basement out of mechanical parts! Since computers are glorified implementations of Turing Machines, one can say that computer languages compile to Turing Machines. Except maybe Haskell. Haskell uses System F, a typed Lambda Calculus, as its intermediate language, and compiles to that. But then, it too must enter the "real world", and the System F is compiled down to run on a real machine.
Retired and busy - Youth Development Cycling Coach and Volunteer
I've always been amused by how the term "AI" is used to refer to a class of problems we think require human-like abilities, but only until the problem gets solved and widely deployed. Face recognition was magical AI until every photo app on the planet started recognizing faces in photos and automatically creating photo albums based on their contents. Now it's "Meh, ho-hum, just another computer algorithm". As I type this comment, Grammarly (written in Lisp!) is automatically marking what I write and suggesting grammar improvements... more "AI magic" that has become "ho-hum".
Technical Writer
NP-Complete or NP-Hard used to mean "not-programmable, hard". To a 2020s cloud architect it means "no-problem, complete". It has one timeless meaning though: "nerdy and pretentious", at least according to my wife ;-).