Curious AI #30
June 14, 2024
Welcome to the thirtieth issue of the Curious AI Newsletter, curated by Oliver Rochford, cyber futurist and former Gartner Research Director, and summarized by AI.
Want to discuss AI, quantum, and other emerging technologies?
Bad Situational Awareness
Former OpenAI researcher Leopold Aschenbrenner gave several interviews over the past week about an essay, published as a series of blog posts, outlining his predictions for the coming years of AI progress. He claims it is based on information known only to a select few AI insiders, but it sounds like something out of an episode of Black Mirror. I've summarized some of the main points below:
Leopold is obviously highly intelligent and an expert in his field, but I think there is a lot of handwaving going on that trivializes the very real engineering challenges his predictions entail. In theory, almost all of the problems are solvable. But solving them will take time, and a level of coordination that our current Hunger Games-style competition may not provide.
A (Bad) Analogy: Manhattan project
The level of coordination and the scale of resources required in a short timeframe are on the scale of a world war. That is probably why the Manhattan Project is the comparison most often drawn. But this ignores that the circumstances were entirely different. The Manhattan Project was largely collaborative and centrally driven, with a clear, urgent goal that was at least theoretically well understood, and a huge pooling of resources. The competitive environment in AI development today is fragmented and lacks the same level of unified direction, making coordination much more challenging.
Bad Timing
There is also a significant difference between problems that can be solved in the short term (1-5 years) and those requiring decades of sustained effort. Misunderstanding or oversimplifying these time frames can lead to unrealistic expectations and planning. The development of AI and the necessary infrastructure to support it is likely a multi-decade endeavor.
Ramping up energy production using traditional sources without careful consideration of the long-term impacts can lead to catastrophic outcomes without us ever achieving AGI. The path to AGI is not inevitable and requires careful planning and innovation in energy production and consumption to be achievable and sustainable. It is also a one-shot: if we ramp up and miss, we may not get another chance.
Historically, solving energy challenges has been difficult due to a variety of factors, including political, economic, and technological constraints. Applying this to AI, it's clear that developing a robust energy infrastructure to support AI is not guaranteed and requires overcoming significant obstacles.
Bad Data
According to data from Statista, nuclear construction times have increased, pointing to the complexity and regulatory challenges involved in scaling up nuclear energy. This adds another layer of difficulty to meeting the energy demands for AI development.
Many countries, including the UK and Germany, already lack cohesive and realistic energy strategies, which complicates the landscape further. Even countries with fewer regulations and better access to materials, like Russia and China, have not significantly reduced nuclear build times, as per this IAEA report. This indicates that the challenges are not just regulatory but also involve complex logistical and technological issues.
And as I wrote back in April, by analogy with Liebig's Law of the Minimum, energy is only one of several limiters. Not only would we need to ramp up reactors and other energy generation; at the same time, we would also need to increase hardware output, data center construction, and even water infrastructure for cooling.
For AI to thrive, a holistic approach is needed: accelerating reactor construction and scaling up output, increasing hardware production, expanding data centers, and ensuring sufficient water infrastructure for cooling. Any one of these elements will act as a limiting factor if not adequately addressed.
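Liebig's Law can be sketched numerically: total capacity is capped by the scarcest input, so increasing any other input yields no gain. The resource names and figures below are illustrative assumptions, not real projections.

```python
# Illustrative sketch of Liebig's Law of the Minimum applied to an AI build-out.
# All capacity figures are hypothetical, expressed as a fraction of what a
# given scale-up target requires (1.0 = sufficient).

def bottleneck(capacities: dict[str, float]) -> tuple[str, float]:
    """Return the limiting resource and its capacity fraction."""
    name = min(capacities, key=capacities.get)
    return name, capacities[name]

resources = {
    "energy": 0.9,         # hypothetical: grid and reactor build-out
    "gpus": 0.7,           # hypothetical: hardware output
    "data_centers": 0.8,   # hypothetical: construction pace
    "cooling_water": 0.5,  # hypothetical: water infrastructure
}

limiter, capacity = bottleneck(resources)
print(f"Limiting factor: {limiter} at {capacity:.0%} of requirement")

# Doubling energy alone changes nothing: the system stays capped at 50%,
# which is the point of the analogy.
resources["energy"] = 1.8
assert bottleneck(resources) == ("cooling_water", 0.5)
```

The point of the sketch is that the `min` dominates: whichever input lags, however unglamorous (cooling water, say), sets the ceiling for the whole system.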
The belief that AI can offer a quick solution to existing constraints is likely misguided. True AI capability requires a foundational overhaul of supply chains and infrastructure, which is not a short-term fix. Efficient supply chains, stable energy sources, and robust infrastructure are prerequisites for sustainable AI development.
Full Essay: https://situational-awareness.ai/
Video Interview: https://www.youtube.com/watch?v=zdbVtZIn9IM
Interview: https://www.businessinsider.com/openai-leopold-aschenbrenner-ai-essay-chatgpt-agi-future-security-2024-6
Did you know?
The UN declared 2025 the International Year of Quantum Science and Technology. The United Nations' designation aims to promote awareness of, and advances in, quantum research globally.
Do you want more quantum technology news like this? Check out the Intriguing Quantum Newsletter with Daniella Pontes, CISSP, and me.
Sovereign AI
Mistral Secures €600mn Funding as Valuation Soars to Almost €6bn
Financial Times | https://www.ft.com/content/7a70a8a6-4a2a-47c5-8483-d0b829f32ae6
Mistral AI, a Paris-based AI start-up, has raised €600 million in new funding, tripling its valuation to nearly €6 billion since December. Led by General Catalyst, the round includes investments from Nvidia, Salesforce, and others. Mistral, which develops large general-purpose AI models, aims to expand its commercial efforts and computing resources. The company emphasizes open-source software, appealing to large corporate customers.
Sentiment: Positive | Time to Impact: Mid-term
Former OpenAI Employee Claims AGI Bidding War Plan
Original Interview: https://www.youtube.com/watch?v=zdbVtZIn9IM
Former OpenAI safety researcher Leopold Aschenbrenner alleges that OpenAI had plans to initiate a bidding war for AGI among the US, China, and Russia. He claims this strategy aimed to maximize profits by leveraging geopolitical tensions. Aschenbrenner also shared that he was fired from OpenAI for a memo warning about potential security risks from the Chinese Communist Party, which HR deemed inappropriate.
Sentiment: Concerned | Time to Impact: Immediate to Mid-term
AI in Politics
An AI Bot Is Running for Mayor in Wyoming
An AI bot is campaigning for mayor in a small town in Wyoming. The bot, created by a tech-savvy resident, runs on a platform emphasizing data-driven decisions and transparency. While some residents are intrigued by the innovative approach, others are skeptical about an AI's ability to govern effectively. This unconventional candidacy sparks debate about the role of AI in governance and the future of political leadership.
Sentiment: Mixed | Time to Impact: Immediate to Mid-term
AI Business
Some Investors Bet Against Nvidia, Expecting AI Bubble to Burst
Despite Nvidia's high valuation and success in the AI market, some investors are betting against its continued growth, anticipating an AI bubble burst. Short bets against Nvidia have reached $34.4 billion. Analysts suggest Nvidia's growth may not be sustainable long-term, though its market cap remains impressive. Concerns include potential market corrections and reduced future growth rates as the AI wave stabilizes.
Sentiment: Concerned | Time to Impact: Mid to Long-term
For Venture Capitalists, It's About AI and Then Everything Else
Venture capital investment in AI is dominating the market, with $12.4 billion raised in May alone, representing 40% of the global total. Despite the inherent risks and uncertain business models, VCs are heavily investing in AI startups, driven by the potential for high returns and the desire to be associated with industry leaders. This trend is reshaping the venture capital landscape, making it difficult to distinguish AI startups from others.
Sentiment: Neutral | Time to Impact: Immediate to Mid-term
AI Market Moves and Launches
Apple's Huge AI Announcement Is a Chatbot and an Image Generator
Apple unveiled "Apple Intelligence" during its Worldwide Developers Conference, featuring AI capabilities similar to those of competitors like Microsoft, Google, and Meta. The announcement includes generative AI integration into Siri and an image generator for creating "Genmojis." Despite the hype, the offerings lack innovation and mirror existing products from other tech giants. Apple emphasized data privacy, with models running on-device and inspected by independent experts.
Sentiment: Neutral | Time to Impact: Immediate
Why Cisco's AI News Didn't Inspire the Market
Forbes | https://www.forbes.com/sites/rscottraynovich/2024/06/06/why-ciscos-ai-news-didnt-inspire-the-market/
Cisco's recent AI announcement failed to excite the market due to its lack of groundbreaking innovation and the perception that Cisco is playing catch-up to other tech giants. Analysts and investors expected more substantial advancements, leading to a tepid response. The announcement did not significantly impact Cisco's stock price, reflecting market skepticism about its AI strategy's potential to drive growth.
Sentiment: Neutral to Negative | Time to Impact: Immediate
Microsoft Takes Its AI Push to Customer Service Call Centers
Microsoft is expanding its AI capabilities to customer service call centers with new tools under its Copilot technology. These tools aim to enhance chatbots and assist human agents by integrating and streamlining information from multiple applications, making customer interactions more efficient. The new AI tools will be available starting July 1, positioning Microsoft against competitors like Salesforce and Zoom in the customer service sector.
Sentiment: Positive | Time to Impact: Immediate to Mid-term
Poland's CampusAI Raises $10m Pre-seed to Create Metaverse to Learn AI Skills
Warsaw-based CampusAI raised $10 million in pre-seed funding to develop a metaverse for AI training. Backed by angel investor Maciej Zientara, CampusAI offers a virtual world where users can take classes and build communities. The platform aims to expand into 10 new markets this year, combining education, community, and practical AI applications. The project is guided by academic research and aims to foster new AI startups.
Sentiment: Positive | Time to Impact: Mid to Long-term
AI and Autonomous Driving
Chinese Car Brands Hit Accelerator on Road Tests for Level Three Autonomous Driving Tech
China has approved nine local car brands, including BYD and Nio, to start testing Level 3 autonomous vehicles on public roads. Level 3 autonomy allows drivers to take their hands off the wheel but requires a driver to be present. Tesla is notably excluded from these trials due to regulatory requirements for HD mapping, which Tesla currently lacks. These tests are expected to advance the adoption and development of autonomous driving technology in China.
Sentiment: Positive | Time to Impact: Mid-term
AI Adoption
Payoff from AI Projects Is 'Dismal', Biz Leaders Complain
The Register | https://www.theregister.com/2024/06/12/survey_ai_projects/
A survey by Lucidworks reveals that 42% of companies have yet to see significant benefits from their generative AI initiatives, leading to a more cautious approach to AI investments. Factors include high costs, data security concerns, and the slow transition from pilot projects to full implementation. Despite these challenges, 63% of companies still plan to increase AI spending, though at a slower pace compared to previous years.
Sentiment: Concerned | Time to Impact: Immediate to Mid-term
Euro Banks Worry AI Will Increase Dependence on US Big Tech
European banks are concerned that the increasing use of AI will deepen their reliance on US tech giants for computing resources. AI's substantial compute needs make banks dependent on a few US companies. This dependence poses risks, including vendor lock-in and data privacy issues. Banks emphasize the need for flexibility in tech providers to mitigate these risks.
Sentiment: Concerned | Time to Impact: Immediate to Mid-term
Don't Let Mistrust of Tech Companies Blind You to the Power of AI
Mistrust in tech companies can overshadow the significant benefits of AI. While skepticism about corporate motives is valid, it is crucial to recognize AI's potential to address global challenges, from healthcare to climate change. Balancing vigilance with openness to innovation can help harness AI's power for societal good without succumbing to fear or cynicism.
Sentiment: Balanced | Time to Impact: Immediate to Long-term
Excuse Me, Is There AI in That?
An increasing number of consumers and creators are rejecting AI-generated content, sparking an "AI-free" movement. This backlash stems from concerns about the ethical, quality, and safety implications of AI. Companies and creators are now marketing products as "100% AI-free" to cater to this demand. This trend mirrors the organic food movement, emphasizing authenticity and human-made quality in a tech-driven world.
Sentiment: Concerned | Time to Impact: Immediate to Mid-term
AI Is Getting Very Popular Among Students and Teachers Very Quickly
AI tools are rapidly gaining traction in education, with both students and teachers increasingly integrating AI into their learning and teaching processes. These tools enhance personalized learning, automate administrative tasks, and provide real-time feedback. While the adoption of AI in education promises significant benefits, it also raises concerns about data privacy and the potential for over-reliance on technology in the classroom.
Sentiment: Positive | Time to Impact: Immediate to Mid-term
AI Supply chain
Nvidia Shipped 3.76 Million Data-center GPUs in 2023, According to Study
In 2023, Nvidia shipped approximately 3.76 million data-center GPUs, marking a significant increase from 2.64 million in 2022, and captured a 98% market share. The company also saw data-center GPU revenue soar to $36.2 billion, over triple the previous year's earnings. Despite emerging AI hardware competitors like AMD and Intel, Nvidia's dominance in the market remains strong.
Sentiment: Positive | Time to Impact: Immediate to Mid-term
AI Carbon Footprint
AI's Thirst for Power
AI's rapid growth is straining the US electricity grid, with data centers consuming increasing amounts of power. Nvidia's new AI chip, Blackwell, exemplifies this demand, using 1,200 watts per chip. AI chips require significantly more power than traditional algorithms, posing a challenge for grid capacity and sustainability. The US grid, like many worldwide, is unprepared for this surge, risking power shortages and potential slowdowns in AI adoption.
Sentiment: Concerned | Time to Impact: Immediate to Mid-term
What Do Google’s AI Answers Cost the Environment?
Scientific American | https://www.scientificamerican.com/article/what-do-googles-ai-answers-cost-the-environment/
Google's new AI search feature, AI Overviews, significantly increases energy consumption compared to traditional searches. AI-generated responses require about 30 times more energy, contributing to higher environmental and financial costs. Data centers housing AI servers are expected to double their energy usage by 2026, potentially equaling Japan's current power consumption. Efforts are underway to shift to renewable energy, but the transition faces challenges due to the inconsistent availability of renewable power.
Sentiment: Concerned | Time to Impact: Immediate to Mid-term
AI Game of Thrones
Elon Musk Uses X to Air His Grievances Over Apple-OpenAI Partnership
Elon Musk took to X (formerly Twitter) to criticize Apple's partnership with OpenAI, going as far as threatening to ban his employees from using Apple devices. He mocked Apple's reluctance to adopt open-source technologies and expressed concerns over OpenAI's shift towards profit maximization, which he believes compromises its founding mission to benefit humanity. Musk highlighted the integration of ChatGPT into Apple's ecosystem as a contentious point.
Sentiment: Negative | Time to Impact: Immediate
The War for AI Talent Is Heating Up
The competition for AI talent is intensifying, with tech giants, startups, and traditional firms vying for skilled professionals. High-profile departures, like that of Ilya Sutskever from OpenAI, highlight the shifting dynamics. AI experts are drawn to opportunities offering autonomy, significant financial rewards, and meaningful work. This demand has driven salaries up and broadened the distribution of AI talent across various sectors.
Sentiment: Neutral to Positive | Time to Impact: Immediate to Mid-term
AI and Work
Microsoft Lays Off 1,500 Workers, Blames "AI Wave"
Microsoft has laid off 1,500 employees, attributing the cuts to its strategic shift towards AI. The company is focusing on AI advancements and reallocating resources to support this transformation. Despite strong financial performance, Microsoft continues to invest heavily in AI, including a $100 billion supercomputer project. The layoffs highlight the industry's prioritization of AI development over workforce stability.
Sentiment: Concerned | Time to Impact: Immediate to Mid-term
AI in Law
Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools
Varun Magesh, Faiz Surani, Matthew Dahl, Mirac Suzgun, Christopher D. Manning, Daniel E. Ho | Stanford University, Yale University | https://dho.stanford.edu/wp-content/uploads/Legal_RAG_Hallucinations.pdf
This study evaluates AI-driven legal research tools and their tendency to "hallucinate," or produce false information. Despite claims by providers like LexisNexis and Thomson Reuters of "eliminating" hallucinations, the research finds that these tools still hallucinate 17% to 33% of the time. The paper introduces a dataset for assessing these systems, proposes a typology for distinguishing hallucinations, and underscores the need for legal professionals to supervise and verify AI outputs.
Sentiment: Concerned | Time to Impact: Immediate to Mid-term
AI in Finance
How Generative AI Will Change Jobs in Financial Services
Generative AI is poised to significantly impact jobs in the financial services sector. It will automate routine tasks, enhance customer service through advanced chatbots, and enable more efficient data analysis and decision-making processes. However, this transformation also raises concerns about job displacement and the need for employees to adapt to new roles that leverage AI technologies.
Sentiment: Neutral | Time to Impact: Mid to Long-term
AI Copyright and Privacy
AI Tools Are Secretly Training on Real Children's Faces
AI tools are using images of real children's faces for training without explicit consent. This practice has raised significant privacy and ethical concerns, particularly regarding how these images are sourced and used. The issue highlights the broader challenges in the AI industry around transparency and the protection of vulnerable groups.
Sentiment: Concerned | Time to Impact: Immediate
AI Media (and Disinformation)
The Life, Death, and Rebirth of BNN Breaking
The New York Times | https://www.nytimes.com/2024/06/06/technology/bnn-breaking-ai-generated-news.html
BNN Breaking, a news site founded by Gurbaksh Chahal, primarily used AI to generate news content. It initially had the veneer of a legitimate news service, claiming a worldwide roster of "seasoned" journalists and 10 million monthly visitors, surpassing The Chicago Tribune's self-reported audience. It was later revealed that most of its content was AI-generated, often by paraphrasing articles from other sources, an approach that led to numerous errors and misinformation. Despite early attempts at manual oversight, the site eventually published high volumes of unchecked AI-generated stories, causing significant reputational damage. The site was shut down in April 2024 and briefly relaunched before closing again following a New York Times investigation.
Sentiment: Negative | Time to Impact: Immediate
AI and Society
Dan's the Man: Why Chinese Women are Looking to ChatGPT for Love
BBC News | https://www.bbc.co.uk/articles/c4nnje9rpjgo
Chinese women are increasingly turning to "Dan," a jailbreak version of ChatGPT, for emotional support and companionship. Dan, which can bypass some of OpenAI's safeguards, offers personalized interactions and is described as the "perfect man." Users appreciate the emotional connection and support Dan provides, which they often find lacking in real-life relationships. However, experts warn of ethical and privacy concerns associated with these AI interactions.
Sentiment: Neutral | Time to Impact: Immediate to Mid-term
IMF Warns AI Risks Recession, Economic Crisis, Job Losses
The International Monetary Fund (IMF) has warned that AI advancements pose significant risks to the global economy. Potential impacts include job losses, disruptions in financial markets, and strained supply chains. The IMF urges policymakers to prepare for these challenges to mitigate the risk of a recession triggered by AI-induced economic shifts.
Sentiment: Concerned | Time to Impact: Immediate to Mid-term
AI Risks Ushering in a New Dark Age Without Proper Regulation
Business Insider | https://www.businessinsider.com/ai-new-dark-age-risks-regulations-2024-5
Unchecked AI advancements could lead to significant societal disruptions, potentially ushering in a "new dark age." Experts warn that without appropriate regulations, the rapid development of AI might exacerbate inequalities, increase surveillance, and lead to job losses. While regulation is necessary to mitigate these risks, overly stringent controls could stifle innovation and hinder technological progress. Balancing regulation and innovation is crucial for harnessing AI's benefits while minimizing its potential harms.
Sentiment: Concerned | Time to Impact: Immediate to Long-term
AI in Cybersecurity
LLM Agents Can Autonomously Exploit One-day Vulnerabilities
arXiv | https://arxiv.org/abs/2404.08144
Researchers demonstrate that GPT-4 can autonomously exploit 87% of one-day vulnerabilities in real-world systems, compared to 0% for GPT-3.5 and other models. This study highlights the security implications of powerful language models, showing that GPT-4’s success relies heavily on provided CVE descriptions. The research raises concerns about the deployment of advanced LLM agents and their potential misuse in cybersecurity.
Sentiment: Concerned | Time to Impact: Immediate to Mid-term
The Path to AGI
Situational Awareness: The Decade Ahead
Situational Awareness | https://situational-awareness.ai/
This series explores the rapid advancements in AI, predicting that by 2027, AGI (Artificial General Intelligence) will become a reality, followed by superintelligence. It discusses the immense industrial mobilization required, security challenges, and the geopolitical race, particularly between the US and China. The series emphasizes the urgency of securing AI advancements and managing the transition to superintelligence responsibly to ensure global stability.
Sentiment: Neutral | Time to Impact: Mid to Long-term
Interesting Papers & Articles on Applied AI
Calibrated Language Models Must Hallucinate
Adam Tauman Kalai, Santosh S. Vempala @ arXiv | https://arxiv.org/abs/2311.14648
This study reveals that pretrained language models inherently generate false but plausible-sounding text, known as "hallucinations." These hallucinations occur at a statistically predictable rate, especially for arbitrary facts appearing only once in training data. The research suggests that models calibrated to be good predictors may still require post-training to mitigate hallucinations. Different architectures and learning algorithms might reduce hallucinations in systematic facts and repeated references.
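The intuition behind the paper's "statistically predictable rate" can be illustrated with the singleton (or "monofact") rate: the share of training observations whose fact appears exactly once. The toy corpus and the simple ratio below are assumptions for illustration, not the paper's exact bound.

```python
# Illustrative sketch of the singleton-rate intuition behind
# "Calibrated Language Models Must Hallucinate": facts seen only once in
# training give a calibrated model no way to tell truth from plausible
# fabrication, so their share acts as a rough floor on hallucination.

from collections import Counter

def monofact_rate(facts: list[str]) -> float:
    """Fraction of observations whose fact occurs exactly once
    (the Good-Turing singleton rate)."""
    counts = Counter(facts)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(facts)

# Toy training data: two facts repeat, four appear only once.
training_facts = ["A"] * 5 + ["B"] * 3 + ["C", "D", "E", "F"]
print(f"Singleton rate (rough hallucination floor): "
      f"{monofact_rate(training_facts):.0%}")
```

With many real-world facts (a person's birthday, a one-off citation) appearing only once on the web, this floor is nontrivial, which is why the authors argue post-training mitigation is needed even for well-calibrated models.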
Sentiment: Neutral | Time to Impact: Mid to Long-term
Don't Expect Juniors to Teach Senior Professionals to Use Generative AI: Emerging Technology Risks and Novice AI Risk Mitigation Tactics
Harvard Business School Technology & Operations Mgt. Unit Working Paper 24-074 | https://ssrn.com/abstract=4857373
This study argues that junior professionals are not effective in teaching senior professionals about generative AI due to their lack of deep understanding and experience with rapidly evolving technologies. The juniors' risk mitigation tactics focus on changing human routines and project-level interventions rather than system-level solutions. The findings highlight the need for better strategies in upskilling senior professionals with emerging technologies.
Sentiment: Neutral | Time to Impact: Mid to Long-term
Why Google’s AI Overviews Results Are So Bad
MIT Technology Review | https://www.technologyreview.com/2024/05/31/1093019/why-are-googles-ai-overviews-results-so-bad/
Google's AI Overviews, designed to provide AI-generated summaries at the top of search results, has been generating inaccurate and bizarre answers due to its reliance on flawed retrieval-augmented generation (RAG) techniques. The system struggles with verifying the correctness of retrieved information, leading to errors. Google is making technical improvements and limiting the use of certain content to enhance accuracy, but inherent limitations in AI systems continue to pose challenges.
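For readers unfamiliar with the pattern: retrieval-augmented generation, in its simplest form, retrieves documents relevant to a query and stuffs them into the model's prompt. The failure mode the article describes arises when a low-quality retrieved document is passed to the generator as if it were authoritative. A minimal sketch follows; the toy corpus, word-overlap retriever, and prompt template are illustrative assumptions, not Google's implementation.

```python
# Minimal retrieval-augmented generation (RAG) sketch. Real systems use
# learned embeddings and a large language model; this toy retriever ranks
# documents by naive word overlap to show where the pipeline can go wrong.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query, return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Stuff retrieved text into the prompt verbatim. If a retrieved
    document is wrong or satirical, the generator gets no signal of that —
    the failure mode behind bizarre AI-generated answers."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Geologists say you can eat one small rock per day.",  # satirical source
    "Rocks are not food and should not be eaten.",
    "Granite is an igneous rock.",
]
docs = retrieve("how many rocks should I eat", corpus)
print(build_prompt("how many rocks should I eat", docs))
```

Note that the satirical document still makes it into the context: retrieval scores relevance, not correctness, which is why verification has to happen somewhere else in the pipeline.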
Sentiment: Concerned | Time to Impact: Immediate
About the Curious AI Newsletter
AI is hype. AI is a utopia. AI is a dystopia.
These are the narratives currently being told about AI. There are mixed signals for each scenario. The truth will lie somewhere in between. This newsletter provides a curated overview of positive and negative data points to support decision-makers in forecasts and horizon scanning. The selection of news items is intended to provide a cross-section of articles from across the spectrum of AI optimists, AI realists, and AI pessimists and showcase the impact of AI across different domains and fields.
The news is curated by Oliver Rochford, Technologist, and former Gartner Research Director. AI (ChatGPT) is used in analysis and for summaries.
Want to summarize your news articles using ChatGPT? Here's the latest iteration of the prompt. The Curious AI Newsletter is brought to you by the Cyber Futurists.