AI, Startup & Security (Part I)
As ChatGPT (GPT-3.5) goes viral, some see 'a smarter chatbot', some see business opportunities in rising unit productivity, while some see that the world is about to change irreversibly.
If you believe the latter, even the term VUCA (Volatility, Uncertainty, Complexity and Ambiguity) seems too mild. We are used to certainty, and when faced with a huge wave that is about to slap us in the face, we still try to anticipate what might happen, not so much to know how to react as to be at least psychologically prepared.
In this series, I'm writing down my thoughts in four parts. The first part covers the background of AI: where it comes from, why it is emerging right now, and what it might bring.
The second part will be more down-to-earth, sorting out the impacts on our organisations, society, and eventually on us.
While the third part talks about products and the essence of startups, the last part will go deeper into the consciousness, ethics and security aspects of artificial intelligence.
Since some of the content below is more or less tied to academic terms and abstract concepts, it might not be easy to read through, and may sound like the talk of daydreamers or technical lunatics. But trust me: at least for this one, I write what is on my mind, and no ChatGPT was involved in this text :)
ANI, AGI & ASI
Back in 2013, prominent scientists and thinkers on the American side of the Pacific were once again debating the question: when will artificial superintelligence arrive?
In the article 'The AI Revolution: The Road to Superintelligence', Tim Urban defined the terms by the capabilities of the AI:
Artificial Narrow Intelligence (ANI): Narrow or 'weak' AI is an AI that specialises in a single field, such as AlphaGo, which can beat the best human players, like Ke Jie and Lee Sedol, at Go, but nothing else. If you ask it how to make a dish of scrambled eggs with tomatoes, it would never figure it out.
Artificial General Intelligence (AGI): Human-level artificial intelligence. An AGI is an AI that is comparable to humans in every way and can perform any mental task a human can.
Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills."
Where we are now is somewhere between ANI and AGI. In fact, we are surrounded by ANI nowadays; the ANI systems of the last 20 years are among our proudest inventions.
Although today's ANI is not really a 'threat' to our survival, and we sometimes laugh at 'artificial stupidity', every innovation in ANI adds to the journey towards general and super AI. In Aaron Saenz's view, today's weak AI is like the amino acids in the soft mud of the early Earth: inert matter that suddenly became life.
Is the time of AGI coming?
ChatGPT-4 = AGI?
If you have ever heard anything about the underlying logic of ChatGPT, it is essentially a word game: the model guesses the most probable next word based on the preceding text. Most likely the AI does not know what it is talking about; it is more like 'a parrot that is superb at imitation'. That is why many scoff at the idea that 'ChatGPT = AGI': is that all?
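To make the 'word game' concrete, here is a minimal toy sketch of next-word prediction: a bigram model that counts which word most often follows another and always picks the most frequent continuation. This is only an illustration of the principle; GPT's actual mechanism (a Transformer over subword tokens, with learned probabilities rather than raw counts) is vastly more sophisticated.

```python
from collections import Counter, defaultdict

# Toy training corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Return the most probable continuation of `prev` under the counts."""
    return follows[prev].most_common(1)[0][0]

print(next_word("the"))  # 'cat': "the cat" occurs twice, "the mat" and "the fish" once each
```

Feeding each prediction back in as the new context, word by word, is exactly the loop a generative language model runs, just with a far richer model of "what tends to come next".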
But there are two factors that we cannot ignore.
Firstly: this 'word game' is not one that everyone can play.
The discipline of machine learning has existed for half a century. Various learning methods have competed with each other and iterated. For a long time, recurrent neural networks (RNNs) and convolutional neural networks (CNNs) were the acknowledged kings, until the emergence of the Transformer around 2017, which took over the crown thanks to better parallelism and shorter training times. GPT, the current star, is the Generative Pre-trained Transformer, which compresses information to a high degree. Ilya Sutskever, the chief scientist of OpenAI, believes that if you can compress information efficiently, you must have knowledge, otherwise you could not compress it: "you just got to have some knowledge."
For a long time in NLP (natural language processing), people needed to tag the verbs, nouns and adjectives to analyse a sentence. Even so, it could still be hard to figure things out. Take 'Budweiser': is it the name of your old uncle, the title of a film some obscure director made and then shelved in the dust, or a glass of beer? The parser doesn't know. Without this knowledge, natural language processing could never really work.
In Dr. Qi Lu's words: "The only way to make natural language work is to have knowledge. The Transformer has compressed so much knowledge together, and that is its biggest breakthrough."
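The compression-means-knowledge argument can be felt with an everyday compressor: a tool like zlib exploits statistical regularities, so structured English text shrinks far more than random bytes, which have no regularities to exploit. This is only a loose analogy for what a language model does, not OpenAI's actual method.

```python
import os
import zlib

# Structured English text vs. statistically structureless random bytes.
english = b"the quick brown fox jumps over the lazy dog " * 50
noise = os.urandom(len(english))

# Compression ratio: compressed size / original size.
# Lower ratio = more regularity (i.e. "knowledge" of the data) captured.
ratio_english = len(zlib.compress(english)) / len(english)
ratio_noise = len(zlib.compress(noise)) / len(noise)

print(f"english: {ratio_english:.2f}, random: {ratio_noise:.2f}")
```

The English text compresses to a small fraction of its size, while the random bytes barely compress at all: without structure to learn, there is nothing to compress.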
Another point made by a16z, a Silicon Valley VC firm, is that AI will evolve from generating content to understanding it. In their judgement, especially in the B2B sector, AI will move from generating long articles based on simple prompts, to summarising refined insights from massive amounts of information, and then to recommending actions based on those insights.
The question is: if this 'super clever parrot' can already answer everything perfectly, do you still care what its underlying logic is?
And another: if it soon evolves to summarise and refine information on its own, how confident can we still be that it will never develop a consciousness of its own? (I'll go deeper into this topic in Part IV.)
The second factor that we cannot ignore is its speed of evolution.
Exponential Technology Explosion
Tim Urban told a fascinating story of a time-traveler:
Imagine taking a time machine back to 1750 — a time when the world was in a permanent power outage, long-distance communication meant either yelling loudly or firing a cannon in the air, and all transportation ran on hay. When you get there, you retrieve a dude, bring him to 2023, and then walk him around and watch him react to everything. It’s impossible for us to understand what it would be like for him to see shiny capsules racing by on a highway, talk to people who had been on the other side of the ocean earlier in the day, watch sports that were being played 1,000 miles away, hear a musical performance that happened 50 years ago, and play with my magical wizard rectangle that he could use to capture a real-life image or record a living moment, generate a map with a paranormal moving blue dot that shows him where he is, look at someone’s face and chat with them even though they’re on the other side of the country, and worlds of other inconceivable sorcery. This is all before you show him the internet or explain things like the International Space Station, the Large Hadron Collider, nuclear weapons, or general relativity.
This experience for him wouldn’t be surprising or shocking or even mind-blowing—those words aren’t big enough. He might actually die.
But here’s the interesting thing—if he then went back to 1750 and got jealous that we got to see his reaction and decided he wanted to try the same thing, he’d take the time machine and go back the same distance, get someone from around the year 1500, bring him to 1750, and show him everything. And the 1500 guy would be shocked by a lot of things—but he wouldn’t die. It would be far less of an insane experience for him, because while 1500 and 1750 were very different, they were much less different than 1750 to 2023. The 1500 guy would learn some mind-bending shit about space and physics, he’d be impressed with how committed Europe turned out to be with that new imperialism fad, and he’d have to do some major revisions of his world map conception. But watching everyday life go by in 1750—transportation, communication, etc.—definitely wouldn’t make him die.
...So, in order for someone to be transported into the future and die from the level of shock they’d experience, they have to go enough years ahead that a “die level of progress,” or a Die Progress Unit (DPU) has been achieved. So a DPU took over 100,000 years in hunter-gatherer times, but the post-Industrial Revolution world has moved so quickly that a 1750 person only needs to go forward a couple hundred years for a DPU to have happened.
This pattern—human progress moving quicker and quicker as time goes on—is what futurist Ray Kurzweil calls human history’s Law of Accelerating Returns.
He believes that the progress achieved in the 100 years of the 20th century could be matched in just 20 years at the pace of the early 21st. A few decades later, we could achieve the equivalent of the entire 20th century's progress several times a year, and perhaps eventually once a month.
On the basis of the accelerating returns, Kurzweil believes that humanity will make 1,000 times more progress in the 21st century than it did in the 20th.
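Kurzweil's thousand-fold figure is essentially what compounding looks like. As a hypothetical illustration (the doubling rate here is my assumption for simplicity, not Kurzweil's exact model), suppose each decade delivers twice the progress of the one before:

```python
# Hypothetical illustration: progress doubles every decade
# (an assumed rate for illustration, not Kurzweil's exact model).
decade_progress = 1.0  # progress 'units' delivered in the first decade
total = 0.0
for _ in range(10):    # ten decades = one century
    total += decade_progress
    decade_progress *= 2

# A constant rate would deliver 10 units over the century; compounding
# delivers 2**10 - 1 = 1023, roughly the thousand-fold figure cited.
print(total)  # 1023.0
```

The point is not the exact doubling rate but the shape of the curve: any steadily accelerating rate turns linear intuition into a wild underestimate.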
Sounds crazy, doesn't it?
If the term 'edge of change' sounds a bit abstract, take a look at the picture below. What does it feel like to stand there?
As Tim Urban explained, it seems like a pretty intense place to be standing. But then you have to remember something about standing on a time graph: you can't see what's to your right. So here's how it actually feels to stand there:
Yes, Vernor Vinge, Ray Kurzweil and many other scientists support the idea of an exponential technology explosion. Jeremy Howard, an expert in machine learning, showed the following graph in his 2014 TED talk:
These people believe that ASI will happen in the near future because of exponential growth. Machine learning is still slow right now, but it will become unimaginably faster in the next few decades.
If the Law of Accelerating Returns holds, then congratulations, everyone: if we manage to survive another few decades, it is almost set in stone that we will live to see ASI.
Of course, not everyone agrees. Others, such as Microsoft co-founder Paul Allen, psychologist Gary Marcus, NYU computer scientist Ernest Davis, and tech entrepreneur Mitch Kapor, believe that thinkers like Kurzweil underestimate the difficulty of AI, and that we are still quite far from ASI.
A third camp, including Nick Bostrom, sees no reason for either side to be so confident, arguing that ASI may arrive in the near future, or may not arrive for a long time.
A fourth group, represented by philosopher Hubert Dreyfus, belongs to none of the three camps and argues that all of them are naive: there will be no such thing as ASI at all.
Bostrom conducted a survey of hundreds of AI experts, asking "When do you predict AGI will be achieved?". Respondents were asked to give an optimistic, a neutral and a pessimistic estimate. The following statistical conclusions were drawn:
What was Kurzweil's own answer? 2029.
By the way, the above discussion took place in 2014.
Today, in 2023, ChatGPT (powered by GPT-3.5) is officially on the stage.
The discussion is over. The curtain is pulled back: welcome to the era of AGI.
Why us, Why now
As Part I of this article draws to a close, I suddenly feel somewhat speechless. I had a talk with my business partner, and we genuinely feel grateful to be living in this age. Let's switch to an easier topic: looking back to ask whether it is a coincidence that all of this is happening to us right now.
If you have read the book 'Fooled by Randomness', it argues that the entire human civilisation is a coincidence. The curtain of AGI is gradually rising, standing on the accumulations of our past.
Talent
If there were a few names to remember in this field, I would start with Rumelhart, 'the Grandfather'.
Rumelhart, together with Hinton and Williams, published the landmark 1986 paper that popularised the backpropagation algorithm for training neural networks, and with McClelland he led the Parallel Distributed Processing group. Without him, and without the prize later named after him, there would hardly be Hinton, now known as the Godfather of machine learning. Without Hinton, there would hardly be Ilya Sutskever, his student and the aforementioned co-founder and chief scientist of OpenAI, and naturally no ChatGPT as we see it now.
In 2012, Hinton led Ilya Sutskever into the ILSVRC (ImageNet Large Scale Visual Recognition Challenge) and, using a CNN, completely outperformed the rest of the competitors.
And one of the organisers of this competition was Fei-Fei Li.
Fei-Fei Li is an ethnic Chinese scientist who immigrated to the United States with her parents in her teens. From a girl who could hardly speak English, she grew into a legend of AI in the US, winning just about every academic honour imaginable in the field.
One of her most renowned achievements was creating ImageNet, a database of 15 million named and labeled images (annotated with the help of Amazon Mechanical Turk), which flung open the gates of big data and deep learning with a kind of violent aesthetics. She then organised the ILSVRC, which established a recognised benchmark: instead of playing alone, machine vision teams could finally sit at one table to compete and communicate.
It would be fair to say that without the ILSVRC organised by Fei-Fei Li, the algorithm of Hinton's team would have had no stage on which to shine, and might still be buried in the ocean of literature. If Hinton's team are the fathers of deep learning, Fei-Fei Li should be considered the midwife, or the mother.
Computing Power
Moore's Law should be well known by now. Nvidia had been dozing in the gaming graphics card sector for years, until it met Bitcoin.
Crypto mining enabled Nvidia to grow fast and improve its technology. Although Ethereum switched from Proof of Work to Proof of Stake in 2022, Nvidia was lucky enough to meet the rise of AI just in time, and the computing power of its GPUs remains in high demand.
Without the boost in computing power driven by the crypto market, compute would have remained a hard bottleneck for artificial intelligence.
Capital
It is a long road from science to technology, then to engineering, and finally to commercialisation. The breakthrough could come tomorrow, or it could come 500 years from now.
When people see how OpenAI received a $10 billion investment from Microsoft, it is actually survivorship bias that ignores how the company spent the previous six years. Apart from the 'generosity' of Elon Musk, much is also owed to OpenAI's current CEO, Sam Altman, who previously ran Y Combinator, the leading startup accelerator in Silicon Valley.
Data
The rise of Web 2.0 grew from the soil of the mobile internet. In China especially, waves of mobile startups have generated enormous volumes of data in the past years. The government has even listed data as one of the five factors of production, in parallel with labour, land, capital and technology.
However, note that according to W3Techs statistics cited on Wikipedia, Chinese-language content makes up only around 1.5% of the internet, for reasons everyone knows.
It would be a shock for Chinese readers to imagine what this suggests: ChatGPT is learning from an amount of data beyond imagination. With new data constantly produced online every day, ChatGPT learns every second like a wolf, tirelessly crawling every scrap of it.
Data is the most valuable asset the mobile internet age has left to the AI age. Without it, there would be no ChatGPT.
That concludes the first part of this article. Let's sit back, grab a drink and take a deep breath before moving on to the second part, where we will get more down to earth: the foreseeable changes and influences, and some brainstorming about new patterns of startups.
.
.
.
To be Continued