p(doom), the AI Alignment Problem, and the Future of your Product
Wei Wen Chen
I write about data management, analytics, artificial intelligence and machine learning. Please connect with me and we will learn and grow together.
Over the past three decades, I've witnessed the evolution of countless theories, models, and predictions in the world of technology and business. At the recent Insight Partners #scaleupai event, George Mathew, Insight's Managing Director, hosted a panel on #responsibleai and closed the session by asking each panelist for their current p(doom) score. While p(doom) is a term that might sound ominous, understanding the potential risks and challenges in our rapidly changing world, and the role of AI in the products we create, is an important topic.
What is p(doom)?
At its core, p(doom) represents the probability of an existential catastrophe or a catastrophic event that could lead to the end of human civilization as we know it. It's a metric that quantifies the likelihood of events that, while rare, have profound implications.
What did the #scaleupai panelists think?
George Mathew ran a lightning round, asking each panelist what their p(doom) was two years ago and what it is now. The answers were interesting:
Other Recent Insights on p(doom)
Recent surveys suggest that the current p(doom) hovers around 5-10%, emphasizing the growing concern within the AI community. This percentage represents the probability that AI could lead to human extinction. The discussions surrounding this topic are not mere speculations but are grounded in real concerns about the rapid advancements in AI and the potential for "misaligned" AI systems.
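These headline figures are typically aggregates of many individual answers. As a minimal, purely illustrative sketch (made-up numbers, not the methodology of any particular survey), here is how a set of individual p(doom) estimates might be pooled:

```python
import statistics

# Hypothetical individual p(doom) estimates, expressed as probabilities
# (invented numbers for illustration, not real survey data)
estimates = [0.01, 0.02, 0.05, 0.10, 0.10, 0.20]

# The arithmetic mean weights every respondent equally
mean_estimate = statistics.mean(estimates)

# The median is often reported as well, since a few extreme answers
# can pull the mean up or down
median_estimate = statistics.median(estimates)

print(f"Mean p(doom):   {mean_estimate:.1%}")   # 8.0%
print(f"Median p(doom): {median_estimate:.1%}") # 7.5%
```

In this toy example both summary statistics land inside the 5-10% band the surveys report, but note how sensitive the mean would be to a single respondent answering 90%.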
Voices from the Industry
Prominent figures in the AI community have expressed their concerns:
Here are some other notable predictions:
The AI Alignment Problem
The challenge with AI is not just about its potential to cause harm but also about ensuring that it acts in line with our complex, nuanced values. AIs, by their nature, will do what you ask but not necessarily what you meant. This distinction is at the heart of the AI alignment problem.
The questions of when we will achieve AGI (Artificial General Intelligence), what p(doom) to assign, and how to measure alignment have spawned many theories and metrics. I found this graphic particularly interesting, as the researchers use formulas based on survey data, plotted on a timeline, for their forecasting.
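To make that idea concrete, here is a minimal sketch of how survey responses about AGI arrival years can be turned into a cumulative curve over a timeline. The data is invented and the calculation is my own simplification, not the researchers' actual formulas:

```python
from collections import Counter

# Hypothetical survey responses: the year by which each respondent
# expects AGI to arrive (invented data for illustration only)
responses = [2030, 2035, 2035, 2040, 2045, 2050, 2060, 2080, 2100, 2100]

counts = Counter(responses)
total = len(responses)

# The cumulative fraction of respondents expecting AGI by each year;
# plotting this against the year produces the kind of timeline curve
# seen in AGI-forecasting graphics.
cumulative = 0
for year in sorted(counts):
    cumulative += counts[year]
    print(f"By {year}: {cumulative / total:.0%} of respondents expect AGI")
```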
Translating Human Desires to AI Logic
The challenges of AI alignment are significant due to the inherent difficulties in translating our fuzzy human desires into the precise, numerical logic of computers. How do we ensure that AIs make decisions in line with human goals and values?
One promising direction is how we proceed with Large Language Models (LLMs) today: have humans provide feedback on AI outputs and use that feedback for retraining. This iterative process can help bridge the gap between human intentions and AI actions, ensuring that the technology serves us rather than the other way around.
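As a rough sketch of that loop (a toy illustration, not how any specific LLM provider implements it; `model`, `collect_human_ranking`, and `update_model` are hypothetical stand-ins): the model proposes several answers, a human ranks them, and each better/worse pair becomes training signal for the next iteration.

```python
import random

def generate_candidates(model, prompt, n=4):
    """The model proposes several candidate answers for one prompt."""
    return [model(prompt) for _ in range(n)]

def feedback_round(model, prompts):
    """One iteration of the loop: generate, collect rankings, retrain."""
    preference_data = []
    for prompt in prompts:
        candidates = generate_candidates(model, prompt)
        # A human ranks the candidates from best to worst
        # (simulated below by a random shuffle).
        ranking = collect_human_ranking(prompt, candidates)
        # Each adjacent (better, worse) pair becomes a training example.
        for better, worse in zip(ranking, ranking[1:]):
            preference_data.append((prompt, better, worse))
    return update_model(model, preference_data)

# --- Hypothetical stand-ins so the sketch runs; real systems replace these ---
def model(prompt):
    return f"answer-{random.randint(0, 999)} to {prompt!r}"

def collect_human_ranking(prompt, candidates):
    return random.sample(candidates, k=len(candidates))

def update_model(model, preference_data):
    print(f"Retraining on {len(preference_data)} preference pairs")
    return model

feedback_round(model, ["Summarize this contract", "Plan a product launch"])
```

The important design property is the loop itself: human judgment keeps re-entering the training process, so the model's notion of "good" is repeatedly pulled back toward what people actually meant.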
Does AI Doom Your Product or Make it Better? You Hold the Key
Rather than just pontificating about p(doom), let's get down to what it means for you if you are in charge of a product or platform.
News flash: 85% of enterprise big data and AI projects fail.
With #ai technology evolving at a breakneck pace and everyone rushing to tap into the power of GPT, how should you think about integrating AI and evolving your product?
A recent analysis from a Reforge blog post shows there’s a delicate balance between the human importance of a challenge and the amount of context necessary for an AI model to solve it.
According to the article, AI products fall neatly on a “Survival Curve.” Each of the examples on the curve is plotted relative to its context.
The AI Survival Curve tells us whether an AI-based solution stands to solve the needs of your users today. As the stakes and optionality increase, models need more human attention to consistently find the best answer.
Products that achieve product-market fit, regardless of AI, do so by focusing on high-ROI use cases that keep the cost of context from rising beyond what is meaningful to their users. This sets them up to start the virtuous cycle of data collection and to expand their product benefit as technology evolves.
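One way to operationalize this is to score candidate use cases on the curve's two axes. Below is a minimal sketch with invented scores and a deliberately crude rule of thumb — my own illustration, not Reforge's actual model:

```python
# Hypothetical use cases scored 1-10 on the two Survival Curve axes:
#   stakes  = how costly a wrong answer is for the user
#   context = how much situational knowledge the model needs to get it right
use_cases = {
    "autocomplete in a text editor": {"stakes": 2, "context": 2},
    "draft marketing copy":          {"stakes": 3, "context": 4},
    "triage support tickets":        {"stakes": 5, "context": 5},
    "recommend medical treatment":   {"stakes": 10, "context": 9},
}

for name, axes in use_cases.items():
    # Crude rule of thumb: as stakes and context cost rise, the model
    # needs more human attention to consistently find the best answer.
    score = axes["stakes"] + axes["context"]
    if score >= 12:
        verdict = "human-in-the-loop required"
    elif score >= 8:
        verdict = "AI-assisted, human reviews the output"
    else:
        verdict = "viable for fuller automation today"
    print(f"{name}: {verdict}")
```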
If you are responsible for your company's product strategy, you will already have AI on your roadmap. If not, remember:
"The earlier you start with AI the better. Otherwise, you might get left in the (digital) dust."
The Way Forward
I've mostly framed the general discussions around p(doom) and AI alignment from an alarmist's perspective. These discussions serve as a reminder of the responsibilities we bear as we advance in the realm of AI. It's essential to approach AI development with caution, ensuring that safety and ethical considerations are at the forefront.
While risks like the AI alignment problem are real, some argue that the likelihood of existential catastrophe may be overstated. After all, new technologies often come with concerns about societal impacts, but we find ways to manage the challenges responsibly over time. For example, concerns surrounded cybersecurity in the early days of the internet and bioengineering in the early days of biotech. Yet through thoughtful governance and cooperation across science, ethics, policy, and industry, we were able to implement safeguards and guidelines to ensure these fields developed positively.
When it comes to AI alignment today, concrete steps can be taken to steer progress in a prudent direction. Setting up ethics boards and advisory councils can provide vital oversight and guidance. Investing in AI safety research and testing fail-safe mechanisms are also important. Fostering broader literacy around responsible AI development ensures alignment considerations remain top of mind. And creating feedback loops between AI researchers, developers, policymakers and the public enables cooperative course-correction. With vigilance and collective determination, we can work to ensure AI fulfills its promise while avoiding pitfalls. If we proactively address alignment now, AI can be a transformative force for us all to flourish within.
As a CPO, I've seen firsthand the transformative power of technology and watched the cycles come and go over my 30-year career.
“With great power comes great responsibility”
For my part, I'm not over-indexing on the general hype around p(doom) and the AI alignment problem; instead, my way forward is to critically determine the impacts on the platform and product I'm responsible for. It's about being prepared and ensuring that the power of AI is used with wisdom and foresight, regardless of the context.
George Mathew ended the #scaleupAI panel with this hot take ...
"There will be more demand for Responsible AI engineers than Prompt Engineers in the future"
WDYT? Do you agree? What's your p(doom), and how are you responsibly leveraging and integrating AI into your products?
Do share in the comments below!