PipeCandy Memo: We are going scorched earth against ourselves
This newsletter has mostly been about eCommerce topics that pique my curiosity. For the last couple of weeks, our team has been poking around how AI will transform eCommerce. I went down several rabbit holes, and one of them is AI safety. I have more questions now, and they are very uncomfortable because the answers range from ‘it depends’ to ‘we are toast.’
This week’s newsletter is not about eCommerce or even commerce. It’s about uncharted territory for work, living, and humanity. Of course, these are my questions and very rudimentary views, not the company’s.
Whenever there is a hard concept to understand, I fall back on Tim Urban’s simple yet powerful breakdowns. He came up with a fake metric called ‘DPU - Die Progress Unit’ to describe the shock value of technology. Measured in time, it asks how much time would have to pass before someone transported from their era to ours would be shocked enough to just die.
What we have experienced in the last five years is progress shocking enough to shrink the DPU from centuries to mere months. From cute Instagram lenses, we have moved to deepfakes that are indistinguishable from reality. Such powerful tools are useful in eCommerce for creating great product videos and virtual influencers. But that’s like saying, “Cute hand grenade. I will use it as a paperweight.” The stuff that’s happening is explosive.
As technology evolves at breakneck speed, there are questions to be asked. Forget the answers; the questions themselves are disorienting.
We have a larger population than ever, but we will have fewer jobs than ever. Purpose, however mind-numbing, gives humans something to wake up for every day. What will a human in 2035 wake up for?
The usual argument is that technological progress makes some jobs obsolete. True. Corporations will assemble AI agents hired for narrowly defined tasks, so a lot of highly intelligent but specialized white-collar jobs might be at risk. Anything that needs atoms to move (bits will be moved by super-intelligent machines) might still need people. But how many?
One of the reasons OpenAI claims to be a ‘for-profit’ company now is that it will be able to sponsor the world’s most ambitious ‘Universal Basic Experiment.'
Now, how many atoms need to be moved, and who decides that?
Humans need to eat, sleep, dress, and live, and that calls for a lot of fundamental industrial work. Most of that fundamental work happens because of the constraints humans have. We get hungry. We feel cold. We have an evolutionary need to impress the opposite sex. Machines don’t. Machines need, at most, a few core atom-based industrial needs met. On balance, a formless, super-intelligent being is a more efficient preserver of atoms than humans are.
Are we looking at a machine upheaval?
Perception, awareness, and consciousness are hard problems. The real learning kids do when they encounter new ideas is deeply perceptual.
The focus today is on developing intelligence with neural networks. We see the brain as an I/O device and model it; we are essentially approximating brain functions with these models. The cognitive science approach, by contrast, thinks of brains as having models of other brains. Cognition is how we introspect and communicate, and it leads to concepts like acknowledgment, motives, empathy, and curiosity. Humans have complex, evolving models of the world and of ourselves. Our minds can run simulations in imagination that lead to action. Learning, for humans, is essentially tweaking our models of the world as we progress toward goals, sometimes day to day and sometimes across generations.
Humans can take fragments of data and apply probabilities as we infer. When we interact with people or with nature, we derive interpretations of what their minds are doing or thinking, or of what physics we might expect to play out.
There is active research on how to teach machines consciousness.
ChatGPT is not super intelligent, and it does not do real learning either. How much progress will we make in teaching machines consciousness? When will machines start interpreting the world around them and the motivations of humans? Humans often get interpretations wrong, so there is no guarantee that machines will get things right. Humans conflate things because of their biases; machines can do that too. A super-intelligent machine is not automatically an ‘always right’ machine.
So the right question might not be whether we will have super-intelligent machines but what they will do with that intelligence. That brings us to the next question.
What is the purpose of machines?
Are they meant to serve humans, co-exist with us, or enslave us? Humans evolved over many eras. We were the fittest to survive. We didn’t go around recklessly killing all the previous versions; they simply faded away. We domesticated some species. We inadvertently drove a few others extinct. Our actions have had unintended consequences.
What if conscious machines treat us the way we treat ants? That is, we largely don’t care about them, but we don’t feel bad if we stamp on them either. What if machines decide that moving atoms beyond their own basic needs is a distraction?
None of these questions has a clear answer. AI safety groups play up the worst possible scenarios happening in the shortest span of time. What we actually have is a set of plausible hypotheses about the future.
But here is what Sam Altman says about AGI and safety:
"We want to successfully navigate massive risks. In confronting these risks, we acknowledge that what seems right in theory often plays out more strangely than expected in practice. We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimize “one shot to get it right”?scenarios"
"We believe this is the best way to carefully steward AGI into existence—a gradual transition to a world with AGI is better than a sudden one.”
How would policymakers across the geopolitical spectrum come together to understand the risks (in spite of the jaw-dropping rewards that come with political mileage)? Can we actually release AGI to the world in an incremental and safe way? Can we completely avoid ‘one shot to get it right’ scenarios?
Why are regulation and political will more important than ever?
We don’t need a perfect super-intelligent AI; a flawed one is still more powerful than humans. Will machines take over the world? We don’t know. Can they change our place in the hierarchy of species? An existential risk is not black or white; it has shades. We may start with the loss of jobs, move to restrictions on our collective agency, and end in extinction. It’s perfectly possible that we become sentient borgs (a global hive mind sponsored by Neuralink?) and turn into a formless species ourselves.
Can we regulate AI like any other modern tech?
Modern technologies, like cars or even microwave ovens, were iterated on over time. Each flaw found in the field meant more engineering. What does a flaw mean in AI? Is it an ‘it is biased against a race’ level of flaw or an ‘it thinks humans are a threat to other species’ level of flaw? Can we regulate or fix a conscious being that is more intelligent than we are?
Even with the best intent, engineering is iterative, and the unintended consequences of iterative experiments could lead to misalignment. Misaligned AGI can cause grievous harm. Not my words; these are Sam Altman’s.
Will the world come together to put the genie back in the bottle, or will we declare AGI Time’s ‘Person of the Year’ in 2025, unironically?