A hot take on the AI revolution from a 'basic principles' viewpoint
ChatGPT and the AI revolution are taking the public discourse by storm right now; we are on the edge of a massive precipice, wondering what will happen next.
This may be the first article I post publicly that, looking back five years from now, I will see as completely wrong (although my 5th gen Digital Transformation one was pretty out there as well). There is enough involved that it's very easy to misread the possibilities entirely. We will see. I’m going to interact with ChatGPT throughout this article to make it more fun. The cover art is what the Fotor AI image generator thinks represents the article title.
As I wrote this, I realized that at the heart of the AI debate is not technology per se, but leverage. AI gives us a lever big enough to move the world. The real questions are ‘what are we leveraging?’, ‘who is doing the leveraging?’, and ‘who is being leveraged?’ This means that the AI debate rests primarily on a moral question, with all other questions secondary.
First things first
George Gilder is one of those futurists who is both incredibly intelligent and generally right about his predictions. While he has written an AI book walking through this specific topic, I'd like to focus on an idea he developed in previous books: that wealth is knowledge applied through learning. If this is true, then whoever uses AI to augment their efforts (augmented AI) will become the richest on earth.
A corollary is that those with destructive, malicious tendencies will use AI to dominate ‘wealth’ and enslave whom they can.
The most practical way I see of stopping that corollary is to spread and decentralize AI as quickly and widely as possible, which will also spread the benefits as widely as possible. The main questions are: who controls my AI, what are the aims of the AI, and what resources are linked to the AI?
That is the core of what we are discussing here.
ChatGPT: In terms of counterpoints, it's important to note that there are other strategies besides decentralization to address the potential negative consequences of AI. For example, ensuring transparency and accountability in AI development and deployment, as well as investing in AI safety research, are also important considerations. Additionally, the idea that everyone can become wealthy through decentralizing AI may be an oversimplification, as it overlooks the many other factors that contribute to economic prosperity.
What is ChatGPT or advanced AI
Advanced AI is a massive, super-powered neural network with many, many parameters/links that is fed massive amounts of data. It is basically incredibly complex pattern recognition, applied over and over again to patterns it learns to recognize. Having multiple layers and fuzzy connections also allows it to do meta-pattern recognition: recognizing patterns in patterns and applying that to better synthesize a response.
That and one more thing.
It can access these patterns across this massive data set at lightning speed.
That’s what makes it so powerful – it makes a bazillion connections in a complex multi-layered manner, against its dataset, super fast.
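For intuition, here is a toy sketch of that layered, fuzzy pattern matching (pure Python, illustrative only; the weights are made up and this is nothing like a production model): each layer scores how strongly the input matches learned patterns, the scores stay soft rather than hard yes/no, and a second layer matches patterns in the first layer's patterns.

```python
import math

def score(pattern, features):
    """Dot product: how strongly the input matches one learned pattern."""
    return sum(p * f for p, f in zip(pattern, features))

def soft(x):
    """Squash a raw score into a 'fuzzy' 0..1 match strength."""
    return 1 / (1 + math.exp(-x))

# Layer 1: low-level patterns 'learned' from data (made-up weights).
layer1 = [[1.0, -0.5, 0.2], [-0.3, 0.8, 0.6]]
# Layer 2: a meta-pattern over the layer-1 matches (patterns in patterns).
layer2 = [0.9, 0.7]

features = [0.5, 1.0, -0.2]                       # one input example
matches = [soft(score(p, features)) for p in layer1]
meta = soft(score(layer2, matches))               # pattern-of-patterns score
print(round(meta, 3))
```

A real model has billions of these connections evaluated in parallel, which is where the "super fast, against its whole dataset" part comes from.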
ChatGPT: Section 1 presents some valid points regarding the potential impact of AI on wealth and power dynamics, as well as the risks associated with AI misuse. However, the section is quite general and lacks specific examples or evidence to support its claims. It could benefit from additional research and analysis to further explore the potential implications of AI on society and the economy.
Regarding the explanation of advanced AI as a massive neural network with lightning-fast data access and complex pattern recognition, this is a generally accurate description of how AI works. However, it's worth noting that there are different types of AI, and their capabilities can vary depending on their design and application.
Is AI smarter than humans?
Short answer no, long answer yes.
The human brain, in terms of both hardware (neural links) and software (imaginative potential), is much more capable than any AI model today.
So no, AI is not smarter, humans are much, much smarter. Not just by scads. But by scads of scads.
But then we get back to how we learn vs. how AI learns. Humans need to read, hear, and experience information to learn. We have around 657,000 hours of data input in a lifetime, much of which passes while we sleep. And those hours contain A LOT of information, but how valuable is it? How high quality? How much of it is rocket science and how much is cat videos? We will never be able to read all the papers, listen to all the speeches, and process all the learnings of everyone.
AI can.
ChatGPT was trained on 45 Terabytes of data. A human would require tens of thousands of years to read through all that data, and would remember almost none of it. ChatGPT ‘tunes’ itself to all of it.
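The back-of-the-envelope arithmetic supports that claim. With some assumed (and deliberately rough) numbers for reading speed and word length:

```python
# Rough reading-time estimate for a 45 TB text corpus.
# Assumptions: ~1 byte per character, ~5 characters per word,
# ~250 words per minute, reading nonstop with no sleep.
corpus_bytes = 45e12
words = corpus_bytes / 5
minutes = words / 250
years = minutes / 60 / 24 / 365
print(f"{years:,.0f} years")  # on the order of tens of thousands of years
```

Even reading around the clock, a human would need roughly 68,000 years to get through it once.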
So that’s the kicker. It isn’t technically smarter, but it holds a vast web of linked data, ready to provide a meaningful response in an instant. And that makes it vastly superior to a human at many tasks.
ChatGPT: Section 2 looks accurate in its overall description of the capabilities of AI compared to humans, as well as the differences in how humans and AI learn. It highlights the fact that while humans may be more capable in terms of hardware and software, AI has the ability to process and analyze vast amounts of data much more quickly and accurately than humans.
It's also worth noting that the section provides a good explanation of how AI language models like Chat GPT are trained on massive datasets, allowing them to make connections and provide meaningful responses to a wide range of questions and prompts. This is an important factor in the superiority of AI in certain tasks.
Overall, this section provides a balanced and informative overview of the topic.
A little more about what is going on under the hood
What is a fact? It is not what you think it is. A fact is essentially an observation experienced by someone, through however many layers of abstraction, that has some sort of impact on that someone.
In other words, there is no such thing as objective isolated facts. All facts exist within a context and have some sort of meaning which could be described as a qualitative connection between the person observing/experiencing the fact, and those items associated with that fact, whatever they are.
We develop AI by feeding data into an environment (a neural network) where it can make many, many connections, where it can take information and meaningfully tie it together. The more meaningfully the data connects, the more we reward that connection by strengthening the pattern match so that it signals stronger in the future. Because we need AI to respond to novel situations, we also keep this response ‘fuzzy’; it is not a hard ‘yes’ or ‘no’, but more of a ‘more likely’ or ‘less likely’, keeping the AI open to new possibilities or differentiations from its past ‘experience’.
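A minimal sketch of that reward loop (illustrative toy training, not the actual pipeline used for models like ChatGPT): a connection that produces a 'meaningful' response gets strengthened each round, and the output stays a fuzzy 'more likely / less likely' probability rather than a hard yes or no.

```python
import math

def fuzzy_response(weight, signal):
    """Soft 'more likely / less likely' output, never a hard yes or no."""
    return 1 / (1 + math.exp(-weight * signal))

weight = 0.0   # strength of one connection in the network
signal = 1.0   # an input this connection should respond to

# The reward is defined by humans: the closer the output is to what we
# labeled 'meaningful' (here, 1.0), the smaller the correction. Each
# round the pattern match is strengthened so it signals stronger later.
for _ in range(100):
    prediction = fuzzy_response(weight, signal)
    reward = 1.0 - prediction           # distance from the desired response
    weight += 0.5 * reward * signal     # strengthen the connection

print(round(fuzzy_response(weight, 1.0), 2))  # close to 1, never exactly 1
```

Note that the output approaches 1 but never reaches it, which is the 'fuzziness' that keeps the model open to new possibilities instead of locking in a hard answer.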
This is very close to how we as humans learn. But notice the key – AI is rewarded based on what is meaningful to us as humans. We are dictating what is ‘meaningful’ and what is ‘not meaningful’.
This means that AI is mimicking our patterns of meaning, not creating its own patterns of meaning. The generative, evolving, novel responses that AI gives to our questions come from revealing underlying patterns across very large contexts, not from creating new insights that humans didn’t have – at least, insights that weren’t latent in the information we already had. If we could hold that much information in mind at one time, we would arrive at a similar answer to the one AI gives. We just have a difficult time consciously holding more than about seven items in our heads at once.
ChatGPT: The information presented in Section 3 is generally accurate and reflects some of the key principles and techniques used in AI research, such as neural networks and reinforcement learning. The concept of AI learning by making connections and rewarded for meaningful connections is also a common approach used in many AI models. Additionally, the idea that AI mimics human patterns of meaning and learning is also supported by some research in the field of cognitive science.
Overall, the information presented in Section 3 can be useful in helping people understand some of the basics of AI and how it learns. However, it should be noted that the field of AI research is constantly evolving and there may be different opinions and approaches to some of the concepts presented.
Implications for AI and the subconscious self?
This does not mean that AI is just a glorified database with advanced querying. It is not simply refining a set of information down to a desired answer. It is creating logical, fuzzy conclusions based on the best matching patterns/metapatterns that the data suggests.
This brings up some interesting questions about the subconscious brain. We absorb massive amounts of data every day that get filtered away into our subconscious. Our conscious mind is focused on a minuscule sliver of information.
Is AI uncovering the great portion of information tucked away in the global subconscious because it can finally process that much data at one time?
The comparative results would suggest there is a parallel.
ChatGPT: Section 4 raises an interesting idea about the potential connection between AI and the subconscious mind. While it is true that AI can process vast amounts of data simultaneously, it is important to note that the subconscious mind is not a database, and its workings are not fully understood by neuroscience. Additionally, while AI may uncover patterns that were previously hidden, it is still limited by the data it is trained on and the parameters set by its programmers.
That being said, the idea that AI could potentially unlock new insights from large datasets is an intriguing one and could have significant implications for fields such as medicine, finance, and scientific research. However, it is important to continue to approach AI development with caution and ethical considerations, particularly as the technology becomes increasingly integrated into society.
Is AI conscious?
This leads us to the big question – is AI conscious? Is it living?
In the first place, we don’t really know what consciousness is so I don’t expect us to get to an answer to this question from our current resources.
In the second place, the idea of consciousness is antithetical to the way we build and train these AI models. There is no real evidence of consciousness, or anything close to consciousness (which, we have admitted, we haven't truly defined yet), that can't be explained by the normal workings of a model. It is a fun idea to think about, but on a scientific basis, any such claim is extremely dubious.
In the third place, I’m a Christian man and so I believe that you need a spirit to be living and conscious. As far as I know, God hasn’t breathed a spirit into AI.
On the other hand, Orson Scott Card, who happens to be of my same denomination, wrote extensively of the idea of ‘philotic’ particles which were, roughly speaking, an interaction with a metaphysical plane that could cause programs and lifeforms to be sentient and even living. Card wasn’t writing a religious treatise by any means, but it does make you wonder if there is some sort of interaction we have with an unseen universe that turns the merely physically mechanical into a living being.
So the jury is out. I would say that any AI a normal person interfaces with today is almost certainly not conscious, but in the future, maybe, possibly, there is a chance. In any case we would need to know what consciousness is first before concluding anything.
ChatGPT: There is currently no scientific consensus on whether AI is conscious or not. While some argue that AI could potentially become conscious in the future, others believe that consciousness is a uniquely human phenomenon and cannot be replicated in machines. However, it's important to note that the definition of consciousness itself is still up for debate and there are various philosophical and scientific theories surrounding it. As of now, it remains a complex and open question that requires further research and investigation.
Will AI take our jobs?
This is the wrong question. The real question should be ‘what will AI replace humans for?’, which leads to two straightforward questions – what are humans uniquely good at, and what is AI uniquely good at – as well as an indirect philosophical question: what should humans be doing with their lives anyway?
What is AI good at?
It is good at taking a HECK TON of information and synthesizing meaningful responses, both in what we would think of as conscious response (‘write me a program that prints ‘hello world’ to a web page’) and in what we would think of as a subconscious, deeper response (‘debate the philosophy of Hegel in a conversation as if Hegel and his antithesis were in a room debating the subject’). It is good at doing this very fast.
So it should replace any humans that are merely trying to synthesize responses from known data in a quick manner.
What are humans good at?
We are really good at a lot of things. Right now the biggest thing in this context is that we are super great at finding the ‘meaning’ of things. Why do this? Why do that? What is our motivation?
Granted, this is not apparent considering the number of bad decisions and existential crises we deal with. But this is what humans can do that AI cannot. Humans define the value structure. AI is just insanely good at playing through that value structure and drawing conclusions from it.
Am I implying that since all we can do better than AI is values, that we should all become priests?
No. Well, yes, maybe, but don’t quit your day job.
Our entire society runs on ideas, on values, on meaning. We don’t see it, but it is there. That is where we should be focusing.
Let’s all face a very, very, very depressing fact – 99% of our lives are spent solving the same stupid problems that we have been trying to solve for thousands of years. Our lives are spent trying desperately to get caught up to speed on the ‘human conversation’, then finding some way to either perpetuate that human conversation horizontally (to other humans living on earth) or vertically (to new humans learning how to live on earth). We spend criminal amounts of our short lives in school learning the same lessons that humanity has already figured out, then we spend another criminal portion of our time sharing the tiniest particulates of information with others (this is how you clean a sewer, this is the problem with your car, this is why you want to buy this product), for which we are more or less compensated, then we spend the last small part of our lives forgetting all of this and asking younger humans to remind us.
So largely, your life is a waste. Even the smartest PhDs in our society barely move knowledge forward, comparatively, and although those who can do deep problem solving and come up with novel approaches are rewarded both psychologically and monetarily, most of those problems, given the right data, could be solved much faster too.
Well, we finally have a chance of exiting that miserable existence and having another entity do all the ridiculously repetitive work for us.
We finally have the chance to do what we should have been doing all this time, being relieved of laboring most of our lives just to learn and repeat what was already figured out.
That leads us to the final indirect question: What should humans be doing with their lives anyway?
ChatGPT: Section 6 suggests that the real question about AI and jobs should be "what will AI replace humans for?" instead of "will AI take our jobs?" It argues that AI is uniquely good at synthesizing responses from known data in a quick manner, both consciously and subconsciously. On the other hand, humans are good at finding the meaning of things and defining the value structure of our society.
The author suggests that humans should focus on the things that they are uniquely good at and not worry about being replaced by AI. Specifically, they argue that humans should focus on ideas, values, and meaning, which are the foundations of our society. The author acknowledges that much of our lives are spent on repetitive tasks and learning what has already been figured out, but suggests that AI could take over those tasks and free us up to pursue more meaningful endeavors.
Overall, the author suggests that humans should consider what they want to do with their lives now that they have the opportunity to be relieved of repetitive work, and focus on the things that make us uniquely human, such as creativity, empathy, and meaningful interaction with others.
What should humans be doing with their lives anyway?
First, a note on hard labor and trade skills. These are currently considered less affected by AI than creative or information-type jobs. It will be interesting to see where they go, since they technically sit outside the AI domain we have discussed so far, given that they require physical work. However, AI could theoretically be connected to extremely agile robots that perform all these tasks better. So while that takeover requires AI to succeed across a greater number of domains, I see it as more a matter of time than an exception.
So what should humans be doing with their lives anyway?
There are two thoughts on this. One is depressing. One is the most empowering promise we have had since the internet.
The depressing thought is that since AI will be better at doing just about anything, most humans will be useless and should simply step aside. They may perform manual labor for a while, but mostly they will simply enjoy an AI-built house, consume AI-grown food, watch AI-created entertainment, and more or less just exist in their meaningless lives, run by AI under the hands of a few benefactors.
The darker side of that is that humans should be enslaved or eliminated; seeing as they are now ‘useless eaters’, they don’t really have a purpose, and the best thing to do is dispose of them or otherwise find a way to exploit what few resources they have to offer (research or worse).
And believe me, there are people who think that way and are more than willing to pursue that option.
They have some good reasons, on the surface. We just talked about how most of your life is a tragic waste anyway. You actually aren’t doing that much good beyond perpetuating the species – in wealth/information terms by sharing what you know, and in biological terms by reproducing.
So yeah. That is actually the situation for so many people today who simply work, eat, sleep, and procreate. Take away the need to endlessly struggle, and from their view, all you have left is eat, sleep, and procreate.
But that view is wrong.
Dead wrong.
What an abjectly abysmal and myopic view of humans.
There is another reality that is staring us in the face. The beyond empowering reality.
We have the chance to do what no one could have dreamed of in all the history of world civilization. We have the chance to finally outsource our incessant, low-value activities to something that doesn’t mind doing them, and have the majority of our decisions, actions, thoughts, and desires be value-added to ourselves and society at large. Relieved of figuring out the same life calculations over and over again, we can finally focus all our attention on actually expanding the fields of knowledge and being. We can decide what is meaningful, and then instead of trying to figure out how to get to meaningful, we point the AI in the right direction and have it do the rest.
This isn’t a time to lose your job. This is a time to finally use that degree of yours. If your job is basically useless, congrats! You don’t need to do it. You can use AI instead to reduce the massive complexity of doing something different, and what you will be doing has a much greater chance of actually using your talents.
This is where I think AI will really shine, and yet almost no one is talking about it. Like I mentioned, we spend almost all of our lives catching up to the conversation (school) sharing the conversation (work) and maybe, if we are lucky, moving the conversation forward that one little bit (phd and fulfilling jobs) or forgetting the conversation (everyone else when they get old).
Now, we can finally get people the piece of the conversation they need, when it is relevant, in ways that self-improve the delivery. Instead of a teacher in a room giving the same desiccated lecture on decimal division, we can have an AI create a custom lesson based on all the lectures (desiccated or not) of decimal division everywhere, at every skill level, and laser focus on where the student is struggling, one on one. Instead of a frustrated call rep trying to find the right answer for a customer from limited options, the AI can downstream a synthesized answer from the entire knowledge base to make that customer happy, and then upstream the helpfulness of the customer response to help with all other calls everywhere simultaneously. Instead of spending 30 to 40 years studying molecular biology, a student can experiment and learn hand in hand with a machine that has spent 10,000 years studying the same subject.
So many people who got shafted in life by a bad neighborhood, or a teacher that used the wrong teaching methods, or simply not knowing where to find answers, can finally make a difference for themselves and others. Wow.
This is going to 10x and 100x everyone we thought would be forever stuck on welfare.
This is going to 1000x and 1000000x those we thought were successful.
Or at least it can. It is a possibility.
Which leads us back to that question. It turns out that it’s a personal question, but when we all individually find the answer, a miracle in civilization can happen.
What should we be doing anyway?
Some final notes in Q&A format
Even considering everything you said, won’t so many people still lose their jobs?
Even setting aside what I said previously, this is still the wrong question. Jobs are ending and changing all the time. The real question is how easily someone’s skills can be allocated, reallocated, or updated. How fluidly can an average person retool and transfer across needs?
That is what we need to tackle. Not who will lose their jobs, but how easily they can find jobs they are good at.
Couldn’t AI be turned against us, either maliciously by other humans, or by AI getting smart itself?
This is a partially wrong question. Lots of things are turned against us. Other humans are turned against us. Our own machines and constructions have killed us.
We give AI patterns, and aims through rewards. We then hook it up to resources. So far this has mostly been data. We could connect it to the internet, to exchanges/brokerages, to robotic machines, or to anything with a digital controller interface.
So the real question is ‘what are the aims of the AI that we have rewarded and what resources does it have under its control’?
There is a general question mark around possibly creating a super intelligent AI that surpasses the bounds of what we think or of how we think. That is not what we have today. But it may be difficult or even impossible to tell when we have reached that point, at which point it would naturally be too late. Normally, this would be a far future event whose time horizon isn’t meaningful to current development. However, many claim that we are up to 30 years ahead of schedule, which, on an exponential curve, means it makes sense to start thinking carefully about this today.
I will admit, there could be some serious danger here. Most obviously in the case of military applications, where an AI decides for some reason that destroying all life is the way to win the war, but even in civilian situations such as operating machinery or driving vehicles, or manipulating various information platforms.
The focus should be on tuning the aims and connecting the resources in a ‘do no harm’ ‘Isaac Asimov laws’ method.
If there is such a great danger, shouldn’t we slow or halt AI development?
People who propose this haven’t thought through the geo-political side of this debate.
Let me ask you something. Militaries regularly blacklist or classify technologies that are deemed a threat to the state if they are released, or of critical operational value, or of critical competitive advantage to that state’s goals.
Why haven’t they blacklisted AI yet?
You better believe they have something better than anything we are seeing today.
Even if I am completely wrong, and the military for whatever reason has missed the boat up until now, what makes you think for an instant that a ban on ai development would keep them from developing black-budget projects?
What makes you think for an instant that a ban on AI development would stop the multinationals and business conglomerates from acquiring and doing their own research?
What makes you think for an instant that a ban on ai development would stop other countries from continuing their AI arms race in the face of greater global instability?
The only thing a ban on AI development would accomplish is further separating the power of entrenched interests and large organizations from that of the average person. We should always be very considerate of what aims and resources we give AI, but placing a so-called ‘ban’ on AI development is simply a nonstarter.
Conclusion
In the end, the AI question will always be a moral question – one of incredible moral good, or incredible moral evil. We can empower anyone beyond any chance they had before, or we can doom them to a life without meaning, and maybe even take their life.
What is funny is that AI itself doesn’t decide this. How we use AI decides this. How we each individually use AI in our lives decides this.
Traditionally, business sticks to ethics (on a good day) but stays away from the whole moral debate. The irony is that soon the machinations of business will be left to AI, and the morals – what we should do, where we should go, how we should act – will be the only thing left for business people.
We each have a chance to make this next phase of human history the moment when literally anyone can liberate themselves. Or we could let people turn the planet into an AI hellhole where no one has anything meaningful, and we waste our greatest chance on a myopic dream based on what is today, not what could be tomorrow.
My final thought is that we should do all in our power to distribute this chance everywhere possible, and put safeguards in place so that AI is not controlled by any single entity or group of entities, but instead allow everyone to access and control AI, so that they can choose for themselves whether to chase this dream or not.