Dangers of oversimplification
In 2021, Sam Altman wrote an essay, "Moore's Law for Everything", which gives some insight into his thinking on how AI will change the world for the better. It makes many points that I think reflect too simplistic a view of the world, and I will quote a few paragraphs to illustrate. He writes:
Under the above set of assumptions (current values, future growth, and the reduction in value from the new tax), a decade from now each of the 250 million adults in America would get about $13,500 every year. That dividend could be much higher if AI accelerates growth, but even if it’s not, $13,500 will have much greater purchasing power than it does now because technology will have greatly reduced the cost of goods and services. And that effective purchasing power will go up dramatically every year.
Firstly, the drive for productivity has been going on for decades, and AI is just the newest tool for improving it. We have also had AI improving productivity for 20+ years, even if it was not GenAI, yet we have not seen this reduction in costs because of the many other factors he fails to account for. Meanwhile, $13,500 a year can't buy you much in today's America: average rent in New York City is over $3,000 a month, which is more than $36,000 a year, nearly three times the proposed dividend. What can you do with $13,500 a year?
The Knowledge Illusion
In the excellent book "The Knowledge Illusion: Why We Never Think Alone", the authors, Steven Sloman and Philip Fernbach, discuss exactly this phenomenon of overestimating what we understand.
The Knowledge Illusion explores two key ideas. The first is that we all overestimate our knowledge. The second is that the knowledge that we access in making decisions does not just reside in our heads but also outside our heads — in our bodies, our environment, and other people in the communities in which we live and participate.
In one humbling experiment, people were asked to evaluate how well they understood how a zipper works. Most people confidently replied that they understood it very well — after all, they use zippers all the time. They were then asked to explain how a zipper works, describing in as much detail as possible all the steps involved in the zipper’s operation. Most had no idea. This is the knowledge illusion. We think we know a lot, even though individually we know very little, because we treat knowledge in the minds of others as if it were our own.
My take on Sam Altman's views
Sam Altman falls into this very trap: thinking he understands economics, psychology, and more because he is the CEO of OpenAI, the company that has created the best AI model. He fails on multiple counts; I wonder whether he understands AI itself. He lays out a grand economic vision for the USA in which everyone will get $13,500 a year because his AI is going to displace all jobs:
In the next five years, computer programs that can think will read legal documents and give medical advice. In the next decade, they will do assembly-line work and maybe even become companions. And in the decades after that, they will do almost everything, including making new scientific discoveries that will expand our concept of “everything.”
His assumption that mankind just wants to sit around doing nothing is shocking; it is why he suggests AI will take over everything. But AI workers don't pay taxes or vote, so how would governments function? Even the richest people in the world today find meaningful work and don't just sit back.
He goes on to write:
We should therefore focus on taxing capital rather than labor, and we should use these taxes as an opportunity to directly distribute ownership and wealth to citizens. In other words, the best way to improve capitalism is to enable everyone to benefit from it directly as an equity owner. This is not a new idea, but it will be newly feasible as AI grows more powerful, because there will be dramatically more wealth to go around.
Now I really start to wonder whether he understands AI at all. How will AI enable everyone to hold "equity" in a country? The core idea he proposes is deterministic, making things equitable, yet he claims a probabilistic model will solve it. It's like saying that with enough data GPT will become perfect at math. It can NEVER be, no matter how much data you give it.
In a way, he is good for OpenAI's marketing precisely because he does not understand the complexity of how AI works. But letting him set policy will doom us all. It's not the AI that is going to kill us; it's the people.