The Curious AI #37
Oliver Rochford
Evangelist @ Auguria | Technologist | Cyberfuturist | Startup Advisor | Former Gartner Analyst
Welcome to issue #37 of the Curious AI Newsletter, curated by Oliver Rochford, cyberfuturist and former Gartner Research Director, and synthesized and summarized using AI.
AI Tribe of the Week
Analog Anarchist
Rejects AI entirely and longs for the days of typewriters and rotary phones. This tribe is often found at flea markets hunting for vinyl records. The retro rebels of the digital age. “Why settle for a digital copy when you can have the real thing? Long live vinyl and pinball!”
Tagline: "Keep it classic, keep it analog!"
Want to discuss AI, quantum, and other emerging technologies?
Join our Slack
Quote of the week
“If OpenAI is right on the verge of AGI, why do prominent people keep leaving?”
Benjamin De Kraker, AI Developer (source)
Most Relatable: Customers reject AI for Customer Service
It looks like the robot revolution has hit a snag in customer service. It seems that when people are having problems, they would rather complain to another person.
According to a new report from CX Today, customers are shockingly choosing to talk to real people instead of chatbots. It's almost as if people value empathy and nuanced understanding when dealing with complex issues. But hey, who needs emotional intelligence when you can have a glorified FAQ machine, right?
While companies are busy patting themselves on the back for their cutting-edge AI solutions, customers are desperately mashing the "speak to a human" button. It seems we're not quite ready to outsource our humanity just yet. Take note, tech overlords. Sometimes, people just want to talk to people.
Unless you don’t have a choice, of course. That could be because you are applying for a loan. Beggars can’t be choosers, after all. Or because you are stuck with a service provider that has no real competition or alternatives, such as public services. There is a contradiction here: on the one hand, people with more money and education will benefit the most from AI. This is already fueling what some are calling "AI poverty" or "AI inequality," and there are worries that it will make inequality worse or more entrenched. On the other hand, for many tasks and problems, it looks like in the future only the wealthy will be able to get help from actual human beings.
Most Morally Confused: OpenAI enables dishonesty when it helps its customers
It turns out that OpenAI, the company that's supposedly all about "beneficial AI", has developed a nifty little watermarking system for ChatGPT. The problem is that they're not actually using it. Why? Because their users could get caught passing off text made by AI as their own.
The technology is apparently 99.9% effective at detecting AI-generated text (though it might be easy to get around by rewording). But implementing it might hurt user engagement. After all, those sweet, sweet user-growth metrics are worth a little academic dishonesty and disinformation.
I already reported on the challenges involved in developing AI detection and evasion capabilities back in Curious AI #15. It sounds like OpenAI took the "easy route," right down the slippery slope of ethical compromise. But hey, at least their users can keep churning out indistinguishable AI content without fear of getting caught.
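For context on how statistical text watermarking can work at all: OpenAI has not disclosed its design, but a commonly published scheme pseudo-randomly favors a "green list" of tokens at each generation step (seeded by the preceding token), and a detector then checks whether a text contains suspiciously many green tokens. A minimal toy sketch of the detection side, with all names and parameters hypothetical:

```python
import hashlib

GREEN_FRACTION = 0.5  # fraction of the vocabulary favored at each step
Z_THRESHOLD = 4.0     # hypothetical z-score above which text is flagged

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign `token` to the green list, seeded by the previous token."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 255.0 < GREEN_FRACTION

def watermark_z_score(tokens: list[str]) -> float:
    """z-score of the observed green-token count against the unwatermarked expectation."""
    n = len(tokens) - 1
    if n <= 0:
        return 0.0
    greens = sum(is_green(tokens[i], tokens[i + 1]) for i in range(n))
    expected = GREEN_FRACTION * n
    variance = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (greens - expected) / variance ** 0.5

def looks_watermarked(tokens: list[str]) -> bool:
    return watermark_z_score(tokens) > Z_THRESHOLD
```

This also illustrates the rewording loophole mentioned above: swapping or reordering tokens reshuffles green-list membership, dragging the z-score back toward zero.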
Most Orwellian: Argentina to use AI to predict future crimes?
Hold onto your dystopian novels, folks! Argentina's taking a page right out of "Minority Report" and diving headfirst into the murky waters of AI-powered crime prediction. Because why solve actual crimes when you can prevent imaginary ones, right?
The country's Ministry of Security, in a move that would make Philip K. Dick roll in his grave, has announced a new "Artificial Intelligence Unit Applied to Security." Their cunning plan is to use AI to "predict future crimes" before they happen. Because that's worked out so well in every sci-fi story ever.
This cutting-edge unit will be tasked with "prevention, detection, investigation, and prosecution of crime," using AI, drone surveillance, social media patrolling, and facial recognition. It's like they're running a proof of concept for an AI surveillance state.
Supporters claim it could significantly reduce crime rates. Critics warn of potential misuse and the risks of profiling. Meanwhile, civil liberties are quietly packing their bags and booking a one-way ticket out of the country.
But hey, who needs privacy when you can have the illusion of safety, right? Welcome to the brave new world, where your thoughts are crimes waiting to happen and Big Brother is an algorithm. Sleep tight, citizens!
Most Anticlimactic: OpenAI's AGI Dreams Hit Snooze Button as Big Brains Bail
It looks like the AI paradise promised by OpenAI might be delayed. Who could have seen that coming? [Editor's note: Me, the editor, and anyone else not drinking the e/acc and AGI Kool-Aid.]
Ars Technica reports that OpenAI is experiencing a bit of a brain drain, with key figures like Greg Brockman and John Schulman either jumping ship or taking extended vacations. It's almost as if creating godlike artificial intelligence isn't as easy as writing a few fancy algorithms and crossing your fingers.
This mass exodus of big thinkers has sparked a wildfire of skepticism about OpenAI's bold claims of imminent AGI. It turns out that when the people supposedly on the verge of creating an artificial god decide to peace out, folks start to wonder if maybe, just maybe, we're not quite as close to the singularity as we've been told.
Some are even suggesting that OpenAI might be facing internal challenges rather than putting the finishing touches on their world-dominating AI.
So, while OpenAI scrambles to update its "Days Since Last AGI Prediction" board, the rest of us can take a moment to appreciate the beautiful irony. The company that promised to create intelligence to surpass all human minds can't even keep its big brains from heading for the exits.
But don't worry, I'm sure AGI is still just around the corner. And if you believe that, I've got a bridge powered by blockchain in the metaverse to sell you.
Intrigued by the most recent developments in quantum technology?
Check out the Intriguing Quantum Newsletter by Daniella Pontes, CISSP, and me.
AI Warbots
AI and National Security | ASPI Strategist (Australian Strategic Policy Institute)
AI is increasingly becoming integral to national security strategies worldwide. The article discusses the potential of AI in enhancing defense capabilities, including predictive analytics, surveillance, and decision-making processes. However, it also raises concerns about ethical implications, cybersecurity risks, and the need for robust international regulations to prevent misuse. As AI evolves, balancing technological advancement with security and ethical considerations is critical for national and global stability.
Sentiment: Neutral | Time to Impact: Long-term
Sovereign AI & AI Nationalism
Von der Leyen Backs €100 Billion 'CERN for AI' Proposal | Euractiv
European Commission President Ursula von der Leyen has endorsed a proposal for a €100 billion AI research initiative modeled after CERN. The plan aims to consolidate AI efforts across Europe, but details remain vague, raising concerns about its execution and necessity. Critics suggest that the large investment may not be justified without a clear, gradual strategy.
Sentiment: Positive | Time to Impact: Long-term
AI Copyright, Regulation, and Antitrust
OpenAI won’t watermark ChatGPT text because its users could get caught | The Verge
OpenAI has developed a watermarking system for ChatGPT to detect AI-generated text but has hesitated to implement it due to concerns that it might deter users. While the technology is 99.9% effective, internal debates focus on its impact on user engagement and the potential for easy circumvention by rewording.
Sentiment: Neutral | Time to Impact: Short-term
Suno Audio to RIAA: Your Music is Copyrighted, You Can't Copyright Styles | TorrentFreak
Suno Audio, an AI music company, has pushed back against the Recording Industry Association of America's (RIAA) claim that the company infringes on copyrights by mimicking popular artists' styles. Suno argues that while specific songs are copyrighted, musical styles and techniques are not, making it legal to create new works inspired by those styles. This debate underscores ongoing tensions between AI-generated content and traditional copyright law.
Sentiment: Neutral | Time to Impact: Short-term
AI-Generated Financial News Site Accused of Copying Competitors | Semafor
A financial news website is reportedly using AI to replicate entire articles from rival sites, raising ethical and legal concerns. The AI-driven site mirrors content from established competitors, sparking debate over intellectual property and the implications of AI in journalism. This situation highlights the growing challenges of AI's role in content creation and the potential misuse of technology in the media industry.
Sentiment: Negative | Time to Impact: Immediate
AI Business
People Are Returning Humane AI Pins Faster Than Humane Can Sell Them, Report Says | Ars Technica
Humane's AI Pins, initially launched with high expectations, are facing significant consumer dissatisfaction. A recent report indicates that returns are outpacing new sales, suggesting potential issues with the product's functionality or market fit. This trend could signal a critical challenge for the company's future in the AI wearables market.
Sentiment: Negative | Time to Impact: Immediate
Where Facebook's AI Slop Comes From | 404 Media
Creators in India, Vietnam, and the Philippines are producing AI-generated images that exploit emotional content to generate revenue on Facebook. These images, often of disturbing scenes, are promoted by influencers who teach others to create similar content for profit. This trend highlights a concerning rise in AI-driven, low-quality content designed to manipulate engagement metrics.
Sentiment: Negative | Time to Impact: Immediate
Google’s Character.AI Founders, Microsoft and Inflection, Amazon and Adept | Fortune
The article discusses Google's deal to bring back the founders of Character.AI, following similar moves by Microsoft with Inflection and Amazon with Adept. In these "reverse acquihires," tech giants license a startup's technology and hire its key talent rather than acquiring the company outright, reflecting the intense competition for expertise in AI-powered conversational tools and personalized experiences.
Sentiment: Positive | Time to Impact: Short-term
Customers Reject AI for Customer Service, Still Crave a Human Touch | CX Today
Despite advancements in AI for customer service, many customers prefer interacting with humans, particularly for complex or sensitive issues. The report highlights that while AI can handle basic inquiries, it often falls short in providing the empathy and nuanced understanding that customers seek, leading to frustration and a preference for human agents.
Sentiment: Negative | Time to Impact: Immediate
When the AI Bubble Bursts | New Statesman
The article discusses the potential collapse of the AI market, comparing it to past tech bubbles. It highlights the overvaluation of AI companies, the lack of sustainable business models, and the growing skepticism among investors. As the hype fades, the market could see a sharp correction, affecting tech giants and startups alike.
Sentiment: Negative | Time to Impact: Short-term
Elliott Says Nvidia Is in a 'Bubble' and AI Is 'Overhyped' | Financial Times
Hedge fund Elliott Management has warned that Nvidia's share price is in a "bubble," driven by overhyped AI technologies. The firm doubts the sustainability of AI investments, citing inefficiencies and unproven applications. Despite Nvidia's market dominance, Elliott remains cautious about long-term prospects, suggesting that the AI bubble could burst if Nvidia's financial performance falters.
Sentiment: Negative | Time to Impact: Short-term
The Beginning of the End for Generative AI Boom | Blood in the Machine
The article argues that the generative AI boom is nearing its end, citing a decline in investor confidence, disappointing financial returns, and underwhelming AI applications. It highlights the recent executive exits at OpenAI and the challenges faced by companies like Nvidia as evidence of an impending "degeneration" in the AI sector. The author predicts a gradual decline in generative AI's prominence, likening it to the collapse of previous overhyped tech trends.
Sentiment: Negative | Time to Impact: Short-term
What Happens to AI Factories When AI Moves to the Edge? | Fierce Networks
As AI processing shifts from centralized data centers (AI factories) to edge devices, companies face new challenges and opportunities. Edge AI can reduce latency and improve data privacy, but it also requires robust infrastructure and security measures. This transition could decentralize AI development, affecting the traditional cloud-based model and leading to increased innovation at the edge.
Sentiment: Neutral | Time to Impact: Mid-term
Hungry for Resources, AI Redefines the Data Center Calculus | CIO
AI's growing demand for computational power is pushing data centers to the brink, leading to a surge in both new facility construction and hardware upgrades. Experts suggest that replacing outdated CPUs and GPUs can offer significant efficiency gains, freeing up capacity for AI workloads without the need for new buildings. The shift emphasizes the importance of modernizing infrastructure to meet the rising power and space demands driven by AI technologies.
Sentiment: Neutral | Time to Impact: Mid-term
AI in Law Enforcement
Argentina Plans to Use AI to Predict Future Crimes | CBS News
Argentina is planning to implement AI technology to predict and prevent future crimes by analyzing patterns and behaviors. The initiative aims to enhance public safety, but it has sparked debates over privacy and ethical concerns. Critics warn of potential misuse and the risks of profiling, while supporters believe it could significantly reduce crime rates.
Sentiment: Neutral | Time to Impact: Mid-term
AI and Robotics
World's First Robot Dentist Performs Autonomous Dental Implant Surgery | New Atlas
In a world-first, a robot dentist has autonomously performed dental implant surgery without human assistance. Developed in China, the robot successfully placed two dental implants in a patient, guided by pre-programmed data. This advancement marks a significant leap in dental technology, aiming to improve precision and reduce errors in surgery, especially in regions with a shortage of skilled dental professionals.
Sentiment: Positive | Time to Impact: Mid-term
AI Carbon Footprint
AI Is Heating the Olympic Pool | Wired
Waste heat from a data center near the Paris Olympic aquatics centre is being captured and used to warm the pool, putting the enormous energy output of AI computing to an unusually tangible use. The arrangement highlights both the scale of heat generated by AI workloads and the potential to recycle it, as the carbon footprint of AI infrastructure comes under growing scrutiny.
Sentiment: Positive | Time to Impact: Short-term
The Path to AGI
Major Shifts at OpenAI Spark Skepticism About Impending AGI Timelines | Ars Technica
OpenAI is experiencing significant leadership changes, with key figures like Greg Brockman and John Schulman departing or taking breaks, raising doubts about how close the company is to achieving AGI. The moves have led to skepticism regarding the imminent development of AGI, with some suggesting that OpenAI may be facing internal challenges rather than nearing a breakthrough.
Sentiment: Negative | Time to Impact: Short-term
What is Needed for AGI by 2029? | John Ball's Substack
The article explores the technological and scientific advances required to achieve artificial general intelligence (AGI) by 2029. Ray Kurzweil's prediction that AI will reach human-level intelligence by then hinges on more than Moore's Law; it is a scientific challenge, not just an engineering one. Current models like LLMs suffer from issues such as hallucination because they rely on statistics rather than true knowledge bases. John Ball argues that solving these problems requires new scientific approaches, possibly drawing on the cognitive sciences, and that a pivot away from LLMs may be necessary to achieve AGI by 2029. He also stresses the importance of ethical frameworks, regulation, and global collaboration to ensure AGI development is safe and beneficial for society.
Sentiment: Critical | Time to Impact: Long-term
Interesting Papers & Articles on Applied AI
Achieving Human-Level Competitive Robot Table Tennis
Researchers present a robot that achieves amateur human-level performance in competitive table tennis. The robot uses a hierarchical policy architecture, including low-level skill controllers and a high-level decision-making controller. It won 45% of matches against various human opponents, demonstrating effective real-time adaptation and solid amateur-level performance.
The Adoption of ChatGPT | Becker Friedman Institute
The article explores how ChatGPT's adoption has surged across various industries, highlighting its impact on productivity and the workforce. It discusses the technology's rapid integration into business practices, the challenges of managing AI-driven tools, and the broader economic implications. The analysis underscores the potential and risks of widespread AI adoption in modern workplaces.
How I Use AI | Nicholas Carlini
Nicholas Carlini shares his personal approach to using AI in research, emphasizing the importance of understanding AI's limitations. He discusses how AI tools assist in brainstorming, coding, and automating repetitive tasks, while also cautioning against over-reliance on these technologies. Carlini stresses the need for critical thinking and human oversight when integrating AI into academic and professional work.
Taxonomies, Ontologies, and Semantics in the AI World of Technical Communicators | Firehead
The article explores the role of taxonomies, ontologies, and semantics in the evolving landscape of AI and technical communication. It highlights how these tools help in organizing and structuring information, making it easier for AI to process and deliver accurate content. The integration of these elements is becoming increasingly important for technical communicators as they adapt to AI-driven workflows.
LLMCloudHunter: Automated Detection Rule Extraction from Cloud-Based CTI | arXiv
LLMCloudHunter is a framework leveraging large language models to automate the extraction of detection rules from unstructured cloud-based cyber threat intelligence (CTI). It improves threat hunting by generating high-precision, actionable detection rules from both textual and visual CTI data. Evaluations show the framework achieves over 90% precision and recall, effectively converting these rules into actionable Splunk queries, enhancing proactive threat detection in cloud environments.
About the Curious AI Newsletter
AI is hype. AI is a utopia. AI is a dystopia.
These are the narratives currently being told about AI. There are mixed signals for each scenario. The truth will lie somewhere in between. This newsletter provides a curated overview of positive and negative data points to support decision-makers in forecasts and horizon scanning. The selection of news items is intended to provide a cross-section of articles from across the spectrum of AI optimists, AI realists, and AI pessimists and showcase the impact of AI across different domains and fields.
The news is curated by Oliver Rochford, technologist and former Gartner Research Director. AI (ChatGPT) is used in analysis and for summaries.
Want to summarize your news articles using ChatGPT? Here's the latest iteration of the prompt. The Curious AI Newsletter is brought to you by the Cyber Futurists.