Where it all began
(an entry in the "how we got here" category)
I have written a bit in the past about Frank Rosenblatt, the Cornell professor who developed one of the earliest known computer simulations of a neuron. In many ways, the story of Rosenblatt's research marks the "where it all began" moment for what we today call generative AI.
Of course, Alan Turing gets the credit for initially stimulating interest in the topic of machine intelligence, having been the first to pose the question of whether machines can think in his 1950 paper "Computing Machinery and Intelligence." It opens with the words:
"I propose to consider the question, 'Can machines think?'"
That paper in turn stimulated John McCarthy to propose, in 1955, a summer research project on artificial intelligence at Dartmouth College, ultimately held in the summer of 1956. It was in this proposal that McCarthy coined the phrase "artificial intelligence," writing:
We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.
But it was Rosenblatt, a researcher with a wide array of interests spanning both biology and computation, who first set out to engineer a working thinking machine. He had started his career as a psychologist but went on to work at the Cornell Aeronautical Laboratory on computational systems funded by the US Office of Naval Research. His first demonstration of a computer simulation of a neuron came in 1958, at which time Rosenblatt wrote:
“Stories about the creation of machines having human qualities have long been a fascinating province in the realm of science fiction. Yet we are about to witness the birth of such a machine – a machine capable of perceiving, recognizing and identifying its surroundings without any human training or control.”
Rosenblatt went on to build a machine (called the Mark I Perceptron) dedicated to the task of simulating neurons, along with a mathematical learning process that we continue to expand on today. To understand the concept of a perceptron, let's start with our current understanding of how the brain's neurons work.
Neurons are specialized cells in the brain that transmit information through chemical and electrical signals. The process begins when a neuron receives input from other neurons. These inputs can be either "excitatory" or "inhibitory." Excitatory inputs increase the likelihood that the neuron will generate an electrical signal, called an action potential, while inhibitory inputs decrease this likelihood.
When the neuron receives enough excitatory inputs, it generates an action potential, which travels down the neuron's axon. At the end of the axon, the action potential triggers the release of chemicals called neurotransmitters. These neurotransmitters travel across a small gap, called a synapse, and bind to receptors on the dendrites of other neurons.
This process is how neurons communicate with each other. Through this cooperative activity and the formation of neural networks, the brain is able to process and store information. This includes tasks like learning, remembering, sensing, and controlling the body's movements.
A perceptron is a simple artificial version of this process, made up of just a single layer of connections. It can be thought of as an algorithm that takes a set of inputs, weighs them, and produces an excitatory or inhibitory response, much as a neuron does. For example, it could be used to decide whether the animal in a picture is a dog or a cat. The perceptron makes a prediction, say that the animal is a dog, and if the prediction is wrong, it adjusts its weights based on the error it made so as to make a better prediction next time. Over thousands or millions of predictions and corrections, it becomes more accurate and reliable in its classifications, as the sketch below illustrates.
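To make that predict-and-correct loop concrete, here is a minimal sketch in plain Python. The update shown is the classic perceptron learning rule; the two features and the tiny "dog vs. cat" dataset are invented purely for illustration.

```python
# A minimal perceptron sketch. The update below is the classic
# perceptron learning rule; the features and data are hypothetical.

def predict(weights, bias, inputs):
    # Weighted sum of inputs, then a hard threshold: fire (1) or not (0).
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def train(samples, labels, epochs=20, lr=0.1):
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for inputs, label in zip(samples, labels):
            error = label - predict(weights, bias, inputs)
            # Nudge each weight in proportion to its input and the error.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Hypothetical features per picture: (ear pointiness, snout length).
# Label 1 = dog, 0 = cat.
samples = [(0.1, 0.9), (0.2, 0.8), (0.9, 0.2), (0.8, 0.1)]
labels = [1, 1, 0, 0]
weights, bias = train(samples, labels)
print(predict(weights, bias, (0.15, 0.85)))  # expected: 1 (dog)
```

Each wrong prediction shifts the weights slightly toward the correct answer, which is why, given enough labeled examples, the classifier gradually settles on a dividing line between the two categories.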
This simple concept is the foundation of today's complex and powerful artificial neural networks used in generative artificial intelligence systems, where multiple layers of connections (multi-layer neural networks) can be trained to recognize complex patterns and solve challenging problems in a wide range of fields.
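As a rough illustration of the "multiple layers" idea, here is a minimal forward pass through a two-layer network, assuming a sigmoid activation and hand-picked weights; a real generative AI system stacks many such layers and learns its weights from data rather than by hand.

```python
import math

# A minimal sketch of a two-layer network's forward pass.
# Weights and biases here are hand-picked for illustration only;
# a trained network would learn them via backpropagation.

def sigmoid(x):
    # Smooth squashing function: maps any number into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def layer(weights, biases, inputs):
    # Each row of weights feeds one neuron in this layer.
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

hidden = layer([[0.5, -0.6], [-0.4, 0.9]], [0.1, -0.2], [1.0, 0.0])
output = layer([[1.2, -0.7]], [0.05], hidden)
print(output)  # a single value between 0 and 1
```

Stacking layers like this is what lets modern networks represent patterns far more complex than any single perceptron could capture.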
Rosenblatt tragically died in a boating accident on his 43rd birthday and never had the opportunity to see how his simple Perceptron device would scale with increased training data and computation to fulfill his original ideas for a "...machine which senses, recognizes, and responds like the human mind."