THE SUN HAS GOT ITS HAT ON
The original neural network was developed for the US military in 1958. It was called the Perceptron, and it was a physical machine rather than a ‘computer’ as we understand it today. It was a grid of 400 light-detecting cells wired up to adjustable connections whose strengths were tweaked with every run, akin to neurons, and it was designed specifically for image recognition. Steampunk AF.
Designed by Cornell psychologist Frank Rosenblatt, it was an early indicator of the potential for artificial intelligence and set the ball rolling for future developments.
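For the curious, Rosenblatt's learning rule is simple enough to sketch in a few lines. The original machine did this with motors and potentiometers rather than code; what follows is a minimal modern-Python illustration of the idea, with tiny two-cell ‘photos’ standing in for the 400 photocells:

```python
def train_perceptron(samples, epochs=20, lr=1.0):
    """Rosenblatt-style perceptron training.

    samples: list of (inputs, label) pairs with label in {0, 1}.
    Returns learned weights and bias.
    """
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for x, label in samples:
            # Fire if the weighted sum of inputs crosses the threshold.
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            predicted = 1 if activation > 0 else 0
            # Nudge each weight toward the correct answer: a wrong output
            # shifts every weight in proportion to its input.
            error = label - predicted
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learn a trivially separable pattern: "fire when the first cell is lit".
data = [((1, 0), 1), ((1, 1), 1), ((0, 1), 0), ((0, 0), 0)]
w, b = train_perceptron(data)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
print([predict(x) for x, _ in data])  # prints: [1, 1, 0, 0]
```

The whole trick is the error-times-input nudge: on data that a straight line can separate, this rule provably converges to a correct set of weights.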
Looking to the future, Rosenblatt asserted, "[Perceptron is] the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence."
Indeed, many of the early experiments in proto-AI were driven by military needs. Sometime later, the US Army wanted a better way to find enemy tanks hidden in a forest. Camouflaged tanks tucked among trees and bushes blend in almost perfectly, so anything that made them easy to spot would be a decent advantage.
The generals called in the boffins with their new-fangled computers to devise a program to recognise the tanks. So they hid some tanks in the forest and took photos of them. Then they took more photos of the empty forest. Next, they showed these photos to a program designed to learn like a human brain, a more advanced ‘neural network’.
Now, this nascent AI doesn't know about tanks, forests, or colours. Its image recognition abilities just ‘know’ that some photos have something important and some don't. It looks at the pictures and tries to find what's different.
After a while of training, it got pretty good at spotting the hidden tanks in the photos. This was tested by repeatedly showing it new pictures of tanks and empty forests that it had never seen before. Amazingly, the computer picked the right ones.
Boom.
But when they wheeled out the computer into the real world to find hidden tanks, it failed miserably. It was no better than random. What had gone wrong?
It turns out that all the tank photos used to train the AI were taken on sunny mornings, while the photos of the empty forest were taken on a cloudy afternoon.
It wasn’t the machine’s fault. It worked perfectly well; it’s just that it had actually been trained to detect whether it was sunny or not. The presence of tanks was irrelevant.
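The tank story is likely apocryphal, but the failure mode it describes is very real: a model latching onto a spurious correlation in its training data. Here's a toy reconstruction (hypothetical data and made-up numbers, not the original experiment). Each ‘photo’ is boiled down to two features, average brightness and a crude tank-shape score, and in the training set brightness and tanks are perfectly confounded:

```python
import random

random.seed(0)  # fixed seed so the sketch is repeatable

def photo(tank, sunny):
    """A 'photo' reduced to two features: brightness and a tank-shape score."""
    brightness = random.gauss(0.8 if sunny else 0.3, 0.05)
    tank_score = random.gauss(0.7 if tank else 0.2, 0.05)
    return (brightness, tank_score), tank

# Confounded training set: every tank photo is sunny, every empty one cloudy.
train = ([photo(tank=1, sunny=True) for _ in range(50)]
         + [photo(tank=0, sunny=False) for _ in range(50)])

def best_single_feature(samples):
    """Pick the one feature and threshold that best separate the labels
    (a stand-in for whatever shortcut the real network found)."""
    best = None
    for f in (0, 1):
        for x, _ in samples:
            t = x[f]
            acc = sum((s[0][f] > t) == bool(s[1]) for s in samples) / len(samples)
            if best is None or acc > best[2]:
                best = (f, t, acc)
    return best

feature, threshold, train_acc = best_single_feature(train)

# Deployment: sunshine and tanks now vary independently of each other.
test = [photo(tank=random.random() < 0.5, sunny=random.random() < 0.5)
        for _ in range(200)]
test_acc = sum((x[feature] > threshold) == bool(y) for x, y in test) / len(test)

print(f"shortcut feature: {['brightness', 'tank score'][feature]}")
print(f"training accuracy: {train_acc:.0%}, deployed accuracy: {test_acc:.0%}")
```

Because brightness separates the confounded training set perfectly, the learner grabs it first and never needs the tank feature at all. Out in the field, where sunshine and tanks vary independently, accuracy falls to roughly a coin flip: exactly the "no better than random" outcome in the story.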
How do we know what a machine knows, when it ‘knows’ nothing?
Even our best AI boffins are still not exactly sure ‘how’ machine learning does what it does; we can only see the results.
Increasingly, we're seeing statements from corporate mouthpieces about the adoption of ‘AI-first’ policies. Find an AI before you hire people, in essence. If the AI can't do it, build one that does. Only if there's no AI do we hire people.
It all sounds efficient, but it's a nightmare scenario for HR. How will they conduct their useless unconscious-bias training, Myers-Briggs tests and the rest? Sure, it's all a bit of fun, and at least it gives them some idea of what's happening in their human employees' heads.
But what kind of Halloween party does the black box want?