AI Safety for normies - the most important century in human existence
Introduction
A few months ago, I moved to Silicon Valley to launch a tech startup, OLY.AI. I live in a house full of PhDs, physicists, mathematicians, computer scientists, and various other rationalists, all hacking away on projects to shape the future. In this article, I'm aiming to share what I've learned, to give you a crash course in everything you need to know about AI safety as a normal person just living your life, a.k.a., a "normie." Buckle up!
Holden Karnofsky, Director of AI Strategy at Open Philanthropy, argues that this century is the most important in human history. He points to our compounding technological acceleration, making the case that it will lay the foundations of a multi-planetary species, one that embraces both organic and synthetic intelligence to prosper throughout the galaxy. In his view, the next 100 years will determine how humans embrace artificial minds, create artificial superintelligence, and use that intelligence to improve and replicate themselves and beyond, ultimately expanding throughout the galaxy. Karnofsky believes what we do today will shape that world disproportionately to any other time. We start here to set the stage for just how much what we do now matters to the future of civilization.
In this article, we'll explore why AI safety is an important subject for everyone to understand on some basic level. Terms like AI Optimist and Doomer, and concepts like AGI and ASI, will be thrown at us constantly over the next few years. As these words enter the Overton window, I'll aim to give you a basic understanding in the hopes that we can begin forming an ethical framework for how we shape our future. Prosperity is achievable but not guaranteed. Vigilance is required.
The bad news:
Meet Eliezer Yudkowsky, a prominent AI safety researcher. In the interview below, at the 4:14 mark, he gets to the heart of his belief that humanity is moving toward its own demise. While some dismiss his perspective as overly pessimistic or "doomer," Yudkowsky emphasizes the lack of a clear scientific roadmap for safely controlling AI as it surpasses human knowledge. It's important not to dismiss what Eliezer is saying. No matter where you sit on the ideological spectrum, he's right that we can't get this wrong. If we do, there will be consequences, consequences that could shape life on earth for all time to come.
The good news
The good news is that today's technology isn't capable of any of this. But that is exactly why we should begin defining boundaries now, so our future world can converge on common viewpoints and avoid catastrophe. Nick Bostrom, author of the 2014 book "Superintelligence," identifies the mapping of language as a fundamental piece of encoding a representation of reality in a logical system. This could give artificial intelligence a baseline for understanding the world as we do, through our own representational system.
"Superintelligence" is now sometimes scoffed at as "2014 thinking" because the modern transformer architecture, developed after its publication, may render some of Bostrom's technical assumptions obsolete. But putting that aside, Bostrom outlines a compelling possibility: an incredible confluence of technologies that will overlap to create a synthesis of human capacity, evolving mankind into its next stage: a marriage of humans and technology that harnesses the advantages of both organic and synthetic systems.
What Do People Mean When They Say AI?
All artificial intelligence is, at bottom, a prediction of something based on previous example data. That's it. Some AI predicts whether you'll like a video, while other AI predicts the words in a sentence to respond to a question you have. The modern breakthrough comes in the form of a new architecture called the transformer. Essentially, the text a model is fed is broken into pieces called tokens. Each token is mapped to a long list of numbers called an embedding, which encodes linguistic context: things like plurals, root words, and other features that relate words to one another. Picture these embeddings as points in a latitude-and-longitude style space, but with hundreds or thousands of dimensions; the distances and angles between points tell the model which words are "nearest neighbors." With all the words mapped this way, the model can estimate, given the words so far, which words are most likely to come next. There's more to it, but if you start with this basic understanding, you're closer to knowing than not.
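For the curious, here's a minimal sketch of the "nearest neighbors in embedding space" idea. The four 3-dimensional vectors below are made up purely for illustration (real models learn embeddings with hundreds or thousands of dimensions); only the cosine-similarity math is the real technique.

```python
import math

# Toy "embeddings": invented numbers, NOT from any real model.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.9],
    "apple": [0.1, 0.9, 0.2],
    "fruit": [0.2, 0.8, 0.3],
}

def cosine_similarity(a, b):
    """How closely two vectors point in the same direction (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest_neighbor(word):
    """Return the other word whose vector is most similar to this word's."""
    return max(
        (w for w in embeddings if w != word),
        key=lambda w: cosine_similarity(embeddings[word], embeddings[w]),
    )

print(nearest_neighbor("apple"))  # -> fruit
print(nearest_neighbor("king"))   # -> queen
```

A real language model does something far richer than this, but the core intuition, meaning as geometry, with similar words sitting close together, carries over.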
Below, Andrej Karpathy, an OpenAI co-founder, gives a more in-depth introduction.
Understanding AGI and ASI
Artificial General Intelligence (AGI) is a term you'll be hearing a lot about in the coming years. While it has been a catch-all phrase for the past decade encompassing a vast range of ideas, it's now starting to solidify into a more specific definition. AGI refers to a type of AI that can map an existing knowledge base and derive new skills, capabilities, and insights from its training data. In simpler terms, some believe AGI could potentially perform tasks as well as the most skilled humans (or better). The simplest way to think about it: it will figure out how to do whatever you ask, regardless of the task. As companies like OpenAI make AGI a core focus, you might be bombarded with marketing for ever-newer versions of this revolutionary technology. Get ready.
Artificial Superintelligence (ASI), on the other hand, is a term being shaped to describe intelligence that surpasses human capability in understanding the world. ASI would theoretically be able to hold the whole of human knowledge simultaneously as it interprets incoming patterns to predict outcomes beyond what humans can. There are other related terms like Safe Superintelligence (SSI) and whole brain emulation, but those are concepts further down the road.
The important takeaway here is that while we aren't there yet (and aren't necessarily close), the rapid advancements in technology suggest that within the next 20 years, we could have capabilities far exceeding our current imagination. By considering the full spectrum of possibilities, we can begin to identify the ground rules for responsible AI development.
Kardashev Scale
In 1964, Soviet astronomer Nikolai Kardashev proposed a measurement of the technological advancement of a civilization based on energy consumption capabilities, now called the Kardashev scale.
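The scale can even be made numeric. Carl Sagan proposed a simple interpolation formula, K = (log10(P) - 6) / 10, where P is the civilization's power use in watts; the power figures below are rough public estimates used only for illustration.

```python
import math

def kardashev_type(power_watts):
    """Sagan's continuous version of the Kardashev scale.

    K = (log10(P) - 6) / 10, where P is power consumption in watts.
    Type I corresponds to about 10^16 W on this formula.
    """
    return (math.log10(power_watts) - 6) / 10

# Humanity's total power use is very roughly 2 x 10^13 W.
print(round(kardashev_type(2e13), 2))  # -> 0.73
print(round(kardashev_type(1e16), 2))  # -> 1.0 (Type I)
```

By this measure, humanity today sits around Type 0.7: impressive, but with a long climb ahead.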
The reason for including the Kardashev scale is to lay a foundation for how we, as humans, have defined the grading scale for civilizations, as we enter an age where artificial intelligence, or eventually ASI, could help humanity achieve a more advanced technological civilization. A multi-planetary Type I civilization, to be specific.
The Godfather of AI:
Geoffrey Hinton, a former AI leader at Google known as the "godfather of artificial intelligence," helped invent the basis of all modern AI systems: the neural network. Hinton, trained in experimental psychology, revolutionized modern computing by bringing ideas about how the human brain learns into computational processes. He left Google in 2023 and is now speaking publicly about his concerns. I'll use three of his key points to give you a sense of his views on the subject.
1. AI war is bad
One unnerving possibility Hinton outlines is that malicious individuals, groups, or nation-states might simply co-opt AI to further their own ends. Hinton is particularly concerned that these tools could be trained to sway elections and even to wage wars. Fighter jets no longer piloted by humans but by AI, armed robotic vehicles, or humanoid robots marching through the streets of far-flung nations in the name of safety, justice, or freedom. Science fiction has seeded us all with some opinion on the matter, but to date, no actual policy has been established.
The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace many college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.
Leopold Aschenbrenner, from his recent series Situational Awareness, outlining the order-of-magnitude (OOM) jumps in compute driving the advance toward AGI.
2. So AI Neo-Capitalism?
Capitalism can be credited with many positive things in our modern world, but it can also function as its own form of modern religion. Establishing what counts as a predatory AI practice, and what penalties companies might incur for completely disrupting labor markets, is an important conversation to have now.
Companies like Waymo look to remove the driver from the equation entirely by offering self-driving taxi services. Aurora looks to automate away freight truck driving. The last US Census recorded 3.5M truck drivers in America, 1.7M ride-share drivers, and 1.5M traditional delivery drivers. If 6.7M people find themselves out of work in the next 5 years, how are we going to reskill these workers to support themselves? Driving is just one example of economic disruption in a single industry. Given what we've established about AGI, it's worth considering thoughtfully, as many other industries could just as easily be disrupted by advances toward AGI.
3. What if ASI wants different things than we do?
The alignment problem, as computer scientists call it, refers to the challenge of ensuring an AI prioritizes the goals we intend for it. You've likely heard the cautionary tale of an AI programmed to make paperclips, which then takes over the world in its relentless optimization for paperclip production. While this is a simplified example, it highlights the importance of alignment. We need to ensure that an Advanced Superintelligence (ASI) prioritizes our goals, not the objectives it might independently determine to be most important for achieving optimal outcomes.
An ASI's ability to perceive patterns beyond human comprehension makes the alignment problem even more critical. Solving this challenge would let us build trust in ASI even when its actions and decisions stem from a vastly different perspective than our own. While we are possibly decades from ASI, solving the alignment problem is widely considered critical to human existence in an ASI age.
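To make the paperclip intuition concrete, here is a deliberately silly toy, not a model of any real AI system. The "factory" names, numbers, and objectives are all invented; the point is just that an optimizer pursues exactly the objective it is given, including everything that objective fails to mention.

```python
# Toy illustration of objective misspecification (the alignment problem
# in miniature). Everything here is hypothetical and for illustration only.

def run_factory(objective, steel, forests):
    """Greedy optimizer: keeps making paperclips while the objective says to."""
    paperclips = 0
    while objective(paperclips, steel, forests):
        if steel > 0:
            steel -= 1        # use the resource we intended
        else:
            forests -= 1      # nothing in the naive objective protects these
        paperclips += 1
    return paperclips, steel, forests

# Misspecified objective: "more paperclips is always better."
naive = lambda clips, steel, forests: steel > 0 or forests > 0

# Closer to what we actually meant: stop before consuming the forests.
aligned = lambda clips, steel, forests: steel > 0

print(run_factory(naive, steel=5, forests=3))    # -> (8, 0, 0)  forests gone
print(run_factory(aligned, steel=5, forests=3))  # -> (5, 0, 3)  forests intact
```

The optimizer isn't malicious in either case; it simply does exactly what it was told. The hard part of alignment is that with a real ASI, "what we actually meant" is vastly harder to write down than a one-line lambda.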
Conclusion
Artificial intelligence will reshape the world much as the internet and smartphones did, though in its own ways. The change is coming, but we're only getting a taste of it today. Tomorrow, another breakthrough could take us by surprise, bringing threats ranging from heightened geopolitical tension to workforce disruption with far-reaching effects on every man, woman, and child. To avoid the catastrophic problems of a more technologically advanced civilization, it's critical we take action now by electing representatives who align with our core beliefs and by pushing today for the ethical frameworks of the future.
Don't be shy about being part of this conversation. Start by listening and understanding first; avoid irrational fear while digging into the details and learning; and question self-interested marketing language objectively but critically. The future has the potential to be everything we've ever hoped for and more, but it's complicated and will require humanity to evolve past many of the shortcomings we've suffered from since the beginning of recorded time. If we can thoughtfully push through this era into the brave new world of AGI and ASI, we can unlock a destiny to be proud of. Realize your personal importance to the future of civilization, and take action in creating a legacy worth having.
Thank you for reading. Be a hero and share your opinions in this brief policy survey about AI ethics. I will publish the results for the broader community to take this conversation to the next level.
Let's build a better future together.
About OLY.AI
OLY.AI offers user-friendly natural language apps that use SaaS data sources to generate real-time metric answers, reports, and alerts. Users can get responses within seconds by speaking or typing their questions, saving valuable time. The world's most innovative small businesses, financial professionals, and bookkeepers use OLY.AI to empower every person in their organization, from the CEO to front-line employees, with data-driven insights, creating metric-driven, win-together business cultures.