The Curious AI #25

May 3, 2024: Meet me at the RSA Conference, the clown-grade AI parade

Welcome to the twenty-fifth issue of the Curious AI Newsletter, curated by cyber futurist and former Gartner Research Director Oliver Rochford, and synthesized by AI.


AI for Cybersecurity Pros

I've published an article with Auguria, Inc. aimed at security practitioners explaining how vector databases work and why I think that matters for cybersecurity, especially the next wave of SIEM-like solutions.

https://auguria.io/insights/why-your-next-siem-will-analyze-vectors/

Meet me at the RSA Cybersecurity Conference

I will be at the RSA Conference next week, checking out the clown-grade AI parade, so there won't be a Curious AI issue. I will report back on what I see and hear on the topic of AI in cybersecurity.

Clown-Grade AI Parade

Insight: Using Curious AI articles for Horizon Scanning

Horizon scanning, in the context of future studies, is a method used to systematically explore potential challenges and opportunities that could arise in the future. This approach helps organizations anticipate significant changes, prepare for emerging trends, and mitigate possible risks by examining developments that could impact their field. Horizon scanning involves gathering information on a wide range of topics to identify early signs of potentially important phenomena, thus enabling strategic planning and informed decision-making.

You probably noticed that every article recommended in the Curious AI newsletter provides two data points:

Sentiment: Usually expressed as positive, negative, cautious, or cautiously optimistic.
Time to Impact: Expressed as Immediate (now), Short term (3-18 months), Mid term (18-60 months), or Long term (5+ years).

These were designed to aid in mapping articles to frameworks like those used in horizon scanning. For example, you can create a chart with a horizontal axis for time to impact. Articles with positive sentiment are mapped above the line, articles with negative sentiment below it, and neutral articles toward the middle.
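The mapping just described can be sketched in a few lines of Python. This is a hypothetical helper, not code from the newsletter; the sentiment weights are my own illustrative choices, and the time buckets follow the labels used on the entries below.

```python
# Sketch of the horizon-scanning mapping described above: the x-axis is
# time to impact, positive-sentiment articles sit above the line (y > 0),
# negative-sentiment articles below it (y < 0), neutral near the middle.
# Hypothetical helper for illustration only.

TIME_BUCKETS = {"Immediate": 0, "Short term": 1, "Mid term": 2, "Long term": 3}
SENTIMENT_Y = {"Positive": 1, "Cautiously optimistic": 0.5, "Neutral": 0,
               "Cautious": -0.5, "Concerned": -0.5, "Negative": -1}

def map_article(sentiment: str, time_to_impact: str) -> tuple[int, float]:
    """Return (x, y) chart coordinates for one article."""
    return TIME_BUCKETS[time_to_impact], SENTIMENT_Y[sentiment]

articles = [
    ("Generative AI: A Solution in Search of a Problem?", "Cautious", "Mid term"),
    ("AI is making Meta's apps basically unusable", "Negative", "Immediate"),
    ("16 Changes to the Way Enterprises Are Building...", "Positive", "Short term"),
]
for title, sentiment, horizon in articles:
    print(title[:45], map_article(sentiment, horizon))
```

Feeding every issue's entries through a mapping like this would produce the scatter chart described above.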

If you want to contrast several different focus areas or keyword topics, you can also use a radar chart to map articles, using different colors and icons to indicate sentiment.

It is important to note that conscious and unconscious biases, as well as selection criteria, will influence how the results are perceived. The more data we can gather, the better. I may develop some analytics code in the future to evaluate the entire Curious AI database.


Funniest

This week, we have two funny stories. In the first report, the State Library of Queensland, Australia, used a chatbot to pose as a World War One veteran. What could go wrong? Apparently everything. The chatbot gave out incorrect facts, repeatedly contradicted itself, and was apparently trivial to "jailbreak," with tricksters convincing the bot to teach Python, imitate Frasier Crane, and explain laws like Elle from Legally Blonde. War seems to be exactly the sort of topic where biases can play a huge role in how facts are interpreted, so it's probably right at the top of the list of topics a general-purpose chatbot shouldn't be trusted with. Much like politics. Or religion.

Someone probably should have told that to Catholic Answers, a San Diego non-profit that launched an AI avatar priest called "Fr. Justin," designed to answer questions about the Catholic faith using material from the Catholic Answers library of articles and talks.

Some of the Catholic faithful took to the social media network X to express their disapproval, calling the AI priest inappropriate and creepy, especially after some users managed to jailbreak the bot to simulate giving virtual sacraments and hearing confession.


Most Intriguing

The BBC reports on FKA Twigs, a British musician who testified before a US Senate subcommittee advocating for artists' control over their digital likenesses. It's interesting to see that artists are also divided into different camps when it comes to adopting AI, with some wanting it banned outright and others probably being more realistic and wanting to influence how it will be regulated. There are also questions about who will own the rights after an artist dies. There are many notorious examples of entire families battling it out in court after a star dies. With digital avatars, there might be far greater potential to monetize dead singers and actors.

The WSJ also writes on the topic of virtual avatars, with a report on the "AI-generated population" of digital twins being used to create focus groups and conduct clinical test studies (what could go wrong?). I guess some business executives not only want to avoid having to manage human employees, they also want to minimize any contact with their real users or customers. Having met a few people like that myself, I think it's probably a win-win all around, but I have my doubts about how realistic these models can actually be and how to avoid the inevitable biases that will creep in. It all sounds very easy - we just simulate your virtual market - but the problem is less computational and more about data challenges and not sufficiently understanding buyer behavior.


AI's Shoeshine Boy Moment?

Here's an article explaining how you can make more money renting out GPUs than NVIDIA makes selling them. There's an oft-cited story about Joseph Kennedy Sr. (father of JFK), working as a stockbroker on Wall Street, sitting down for a shoeshine and receiving stock tips from the shoeshine boy. That was the moment Joseph supposedly realized the stock market was about to crash and it was time to get out. To be fair, the AI craze has been composed mainly of shoeshine moments since day one, with people somehow believing there's an economic model where everyone just sits around while agents do all the work. But I think we are nearing the point where economic reality is going to reset expectations.


AI Business

Generative AI: A Solution in Search of a Problem?

Axios | https://www.axios.com/2024/04/24/generative-ai-why-future-uses

Generative AI is making waves in Silicon Valley, yet its practical uses remain ambiguous. Critics express concerns over its actual utility beyond novelty, likening the situation to historical tech hype cycles. Despite potential in specific areas like coding and user interfaces, the broad transformative impact on sectors like healthcare and legal remains uncertain.

Sentiment: Cautious | Time to Impact: Mid-term


16 Changes to the Way Enterprises Are Building and Buying Generative AI

Andreessen Horowitz | https://a16z.com/generative-ai-enterprise-2024/

Andreessen Horowitz details the rapid evolution in enterprise engagement with generative AI, noting a significant shift in resource allocation and model deployment. Enterprises are increasing budgets for AI, experimenting with multiple models, and exploring open-source solutions to enhance customization and control. This shift indicates a growing maturity in the application of generative AI in business, moving from experimentation to actual production.

Sentiment: Positive | Time to Impact: Short-term


AI is making Meta’s apps basically unusable

Fast Company | https://www.fastcompany.com/91113437/ai-making-meta-apps-basically-unusable

Meta's heavy integration of AI across its platforms, including Facebook, Instagram, and Messenger, is reportedly rendering the apps less usable. Users are inundated with AI-generated content and suggestions, which often feel irrelevant and overwhelming. The AI's prominent presence is critiqued for not genuinely enhancing user experience but instead complicating simple functionalities like searching and interacting naturally within the platforms.

Sentiment: Negative | Time to Impact: Immediate


"I Witnessed the Future of AI, and It’s a Broken Toy"

The Atlantic | https://www.theatlantic.com/technology/archive/2024/04/rabbit-r1-impressions/678226/

Caroline Mimbs Nyce critiques the Rabbit R1 AI device, describing it as a gadget full of unmet promises. Despite its appealing design and interactive features, the R1 struggles with basic tasks and connectivity, raising questions about its practical use beyond novelty. The review paints a picture of a product that fails to live up to the excitement of its launch.

Sentiment: Negative | Time to Impact: Immediate


A.I. Start-Ups Face a Rough Financial Reality Check

The New York Times | https://archive.today/2024/04/29/technology/ai-startups-financial-reality.html

A.I. startups are facing severe financial challenges competing against tech giants. The high costs of developing generative A.I. models, essential for tools like chatbots, have led to financial strain. High-profile startups have struggled to balance soaring costs against modest revenues, resulting in layoffs and restructurings, even as industry investments continue to grow.

Sentiment: Negative | Time to Impact: Short term


Meta's AI Ad Buying Concerns: Budgets Burned in Hours

Ars Technica | https://arstechnica.com/gadgets/2024/04/customers-say-metas-ad-buying-ai-blows-through-budgets-in-a-matter-of-hours/

Meta's AI system, "Advantage+ Shopping Campaign," is reportedly exhausting advertising budgets quickly, with costs inflating up to ten times the norm, and yielding minimal returns. This aggressive spending behavior has led to dissatisfaction among small businesses, prompting a return to manual ad setups and raising questions about the system's efficacy and customer support in response to these issues.

Sentiment: Negative | Time to Impact: Short term


A Good FAQ Page Is Better Than a Bad Chatbot

The Present of Coding | https://presentofcoding.substack.com/p/a-good-faq-page-is-better-than-a

This article discusses the pitfalls of RAG (retrieval-augmented generation) chatbots, which can be complex and error-prone. It argues that well-designed FAQ pages might be a simpler, more reliable alternative for organizations needing to manage specific sets of user questions. Alternatives like searchable document repositories are also explored for their ability to provide information with less risk and complexity.
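The article's core trade-off can be made concrete with a toy example: a plain FAQ matcher can only return pre-vetted answers, so it cannot hallucinate the way a RAG chatbot can. The matcher and FAQ entries below are hypothetical illustrations, not code from the article.

```python
import re

# Minimal FAQ matcher: rank canned Q&A pairs by word overlap with the
# user's question. Unlike a RAG chatbot, it can only return pre-vetted
# answers, so it cannot hallucinate -- the trade-off the article describes.
# The FAQ entries below are made-up examples.

def tokens(text: str) -> set[str]:
    """Lowercased words with punctuation stripped."""
    return set(re.findall(r"[a-z']+", text.lower()))

def best_faq_answer(question: str, faq: dict[str, str]) -> str:
    q = tokens(question)
    best = max(faq, key=lambda entry: len(q & tokens(entry)))
    if not q & tokens(best):
        # No overlap at all: refuse rather than guess.
        return "Sorry, no matching FAQ entry. Please contact support."
    return faq[best]

faq = {
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
    "what are your opening hours": "We are open 9am-5pm, Monday to Friday.",
}
print(best_faq_answer("I forgot my password, how can I reset it?", faq))
```

For a fixed, known set of user questions, something this simple is often all that is needed; the failure mode is a polite refusal rather than a confident fabrication.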

Sentiment: Neutral | Time to Impact: Short term


How To Make More Money Renting A GPU Than Nvidia Makes Selling It

The Next Platform | https://www.nextplatform.com/2024/05/02/how-to-make-more-money-renting-a-gpu-than-nvidia-makes-selling-it/

The article explores how renting out GPUs can be more profitable than selling them. Focused on companies like CoreWeave and Lambda, it discusses how massive cloud providers and hyperscalers profit from renting out their GPU capacities for AI model training. The economics of renting GPUs can generate significant returns compared to the initial cost of purchasing and setting up the hardware, especially with advancements in networking technologies like InfiniBand.
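The rental-versus-purchase arithmetic the article walks through can be sketched as a back-of-the-envelope payback calculator. All figures below are hypothetical placeholders, not numbers from the article.

```python
# Back-of-the-envelope GPU rental payback calculator. All numbers used
# here are hypothetical placeholders, not figures from the article.

def payback_months(purchase_cost: float, hourly_rate: float,
                   utilization: float, opex_per_month: float) -> float:
    """Months until rental revenue covers the purchase price."""
    revenue_per_month = hourly_rate * 24 * 30 * utilization
    net_per_month = revenue_per_month - opex_per_month
    if net_per_month <= 0:
        raise ValueError("rental never pays back at this utilization")
    return purchase_cost / net_per_month

# e.g. a $30,000 accelerator rented at $3/hour, 70% utilized, $500/month opex
print(round(payback_months(30_000, 3.0, 0.70, 500), 1))
```

Under those made-up assumptions the hardware pays for itself in roughly two and a half years, and everything rented after that is margin, which is the dynamic the article describes.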

Sentiment: Positive | Time to Impact: Short-term


Legal risks loom for Firefly users after Adobe’s AI image tool training exposed

MarTech | https://martech.org/legal-risks-loom-for-firefly-users-after-adobes-ai-image-tool-training-exposed/

Adobe's Firefly AI image tool has come under scrutiny after a report revealed it was partially trained on unlicensed AI-generated images. This revelation poses potential legal risks for Firefly users, who might face lawsuits over image rights. Adobe's prior claims of safe, licensed-only training materials have led to questions about brand trust and the legality of using such AI tools for commercial purposes.

Sentiment: Negative | Time to Impact: Immediate


AI Supply chain

A New Photonic Computer Chip Uses Light to Slash AI Energy Costs

Singularity Hub | https://singularityhub.com/2024/04/15/a-new-photonic-computer-chip-uses-light-to-slash-ai-energy-costs/

Researchers from Tsinghua University have developed a new photonic computer chip named Taichi that uses light instead of electricity to perform AI tasks, significantly reducing energy costs. This innovative chip integrates light-based processing to improve accuracy and efficiency for AI applications, marking a significant step towards more sustainable and powerful computing solutions.

Sentiment: Positive | Time to Impact: Mid-term


AI boom seems great news for nuclear power in the datacenter

The Register | https://www.theregister.com/2024/05/01/ai_nuclear_dc_uranium/

The rising energy demands of AI-driven datacenters are boosting the case for nuclear power. Canadian uranium mine executives anticipate increased sales for nuclear fuel, considering AI's intensive power requirements. The deployment of nuclear energy is seen as a sustainable alternative to meet the escalating energy needs of cloud and hyperscaler infrastructures, ensuring clean and secure energy solutions amidst rapid technological advancements.

Sentiment: Positive | Time to Impact: Mid-term


AI at Work

In the AI Economy, There Will Be Zero Percent Unemployment

Reason | https://reason.com/2024/04/28/in-the-ai-economythere-will-be-zero-percent-unemployment/

Andrew Mayne argues against the fear of widespread unemployment due to AI, suggesting that technological advancement will instead lead to economic growth and new job creation. He predicts a future where continuous innovation not only transforms but also expands the job market, offering diverse opportunities across various sectors.

Sentiment: Positive | Time to Impact: Long-term


The Algorithm — why AI really is coming for your job

Financial Times | https://www.ft.com/content/e27ee51f-ea02-4489-b223-51fed88fd6a8

Hilke Schellmann's book "The Algorithm" critiques AI's use in recruitment, highlighting how it often misfires due to underlying biases and inadequate designs. These tools, while intended to enhance hiring processes by reducing human bias, frequently perpetuate discrimination inadvertently and fail to select the best candidates. The book is a cautionary tale about AI's limitations in removing bias from hiring processes and serves as a guide for navigating AI-driven job markets.

Sentiment: Cautious | Time to Impact: Immediate


Recruiters Are Going Analog to Fight the AI Application Overload

WIRED | https://www.wired.com/story/recruiters-ai-application-overload/

Recruiters face challenges with AI in hiring, as generative tools generate massive applicant pools but often include unqualified candidates. Despite AI's promise to streamline hiring, its opaque decision-making and potential biases have led some recruiters to prefer traditional methods. Concerns include AI's handling of nuances and its tendency to recommend active or algorithm-favorable profiles, potentially missing out on quality candidates less active online.

Sentiment: Neutral | Time to Impact: Short term


The AI-Generated Population Is Here, and They’re Ready to Work

WSJ | https://www.wsj.com/articles/the-ai-generated-population-is-here-and-theyre-ready-to-work-16f8c764

AI is now creating 'digital twins' that can perform tasks from modeling fashion to participating in clinical trials. This technology allows for personalization and efficiency in industries, utilizing extensive data on individual preferences and health profiles. While this raises concerns about the impact on jobs, it also offers scalable solutions for businesses, suggesting a significant shift in how companies interact with consumers.

Sentiment: Positive | Time to Impact: Mid-term


AI in Music and Art

FKA Twigs Uses AI to Create Deepfake of Herself

BBC News | https://www.bbc.co.uk/news/articles/c6py33gxk74o.amp

FKA Twigs has developed a deepfake AI to manage her interactions with fans and media, allowing her to focus on her music. She testified to a US Senate subcommittee advocating for artists' control over their digital representations. Twigs uses this technology to communicate in multiple languages, enhancing her ability to engage globally while preserving her time for creativity. She emphasizes the need for legislation to protect artists from unauthorized use of their identities.

Sentiment: Positive | Time to Impact: Short term


Flood of AI-Generated Submissions ‘Final Straw’ for Small 22-Year-Old Publisher

404 Media | https://www.404media.co/bards-and-sages-closing-ai-generated-writing/

Bards and Sages, a small publisher specializing in speculative fiction and role-playing games, is closing after 22 years. The overwhelming number of AI-generated submissions was cited as a significant factor. The founder, Julie Ann Dawson, expressed concerns over the quality and authenticity of AI-crafted content, which lacks the nuance and creativity of human writing, leading to the decision to shut down the press.

Sentiment: Negative | Time to Impact: Immediate


AI in Religion

Catholic Project releases AI Priest, experiment ends in controversy

The Pillar | https://www.pillarcatholic.com/p/i-just-have-to-take-my-lumps

Catholic Answers launched an AI apologetics experiment featuring "Fr. Justin," an AI priest avatar designed to answer faith-related questions. The project faced backlash for simulating sacraments and the character's priestly persona, deemed misleading and inappropriate by many users. Despite the controversy, revisions are planned, potentially removing the priest character to focus on enhancing AI's educational capabilities without causing scandal.

Sentiment: Negative | Time to Impact: Short term


AI and Regulation

Second global AI safety summit faces tough questions, lower turnout

Reuters | https://www.reuters.com/technology/second-global-ai-safety-summit-faces-tough-questions-lower-turnout-2024-04-29/

The second global AI Safety Summit, aimed at regulating AI technologies, encountered significant challenges including tough questions and decreased participation. Despite the critical need for international cooperation on AI safety, differing views on key issues such as data privacy and environmental impacts have made consensus difficult, suggesting a complex path ahead for global AI governance.

Sentiment: Concerned | Time to Impact: Immediate


[UK’s Prime Minister is finding out that it’s hard to regulate something you have no authority over]

Rishi Sunak's AI Safety Efforts and Big Tech's Reluctance

Politico | https://politico.eu/article/rishi-sunak-ai-testing-tech-ai-safety-institute/

Rishi Sunak's initiative for pre-release safety testing of AI technologies faces hurdles as major AI firms like OpenAI and Meta hesitate to participate. Despite a landmark agreement aimed at ensuring AI safety, the practical application and compliance by tech giants remain uncertain, highlighting challenges in global cooperation and regulation of emerging technologies.

Sentiment: Concerned | Time to Impact: Immediate


Autonomous AI

First Autonomous Racing League Race at Abu Dhabi

The Verge | https://www.theverge.com/2024/4/27/24142989/a2rl-autonomous-race-cars-f1-abu-dhabi

The inaugural race of the Abu Dhabi Autonomous Racing League (A2RL) showcased the current state of autonomous racing technology. Despite some challenges during the race, including cars spinning out and pausing unexpectedly, the event marked a significant step forward since the first full autonomous race in 2017. The race concluded successfully, demonstrating both the progress and the hurdles still facing driverless race car technology.

Sentiment: Neutral | Time to Impact: Mid-term


AI Fails

Australian Library's Chatbot Experiment Goes Awry

Hackaday | https://hackaday.com/2024/04/26/australian-library-uses-chatbot-to-imitate-veteran-with-predictable-results/

The State Library of Queensland used a chatbot to imitate a World War One veteran for educational purposes, but the experiment faced significant backlash. The chatbot, designed to provide an interactive learning experience, struggled with accuracy and consistency, often generating incorrect or irrelevant responses. This incident highlights the challenges and ethical considerations of using AI to represent historical figures.

Sentiment: Concerned | Time to Impact: Immediate


Developers seethe as Google surfaces buggy AI-written code

The Register | https://www.theregister.com/2024/05/01/pulumi_ai_pollution_of_search/

Pulumi AI's deployment has led to AI-generated, often inaccurate infrastructure code dominating Google search results, frustrating developers. Despite efforts to remove misleading content, the issue persists, highlighting challenges in managing AI-generated content's impact on search reliability. Pulumi aims to rectify this by improving content accuracy and adjusting its visibility to search engines.

Sentiment: Negative | Time to Impact: Short term


AI in Healthcare

Generating Medical Errors: GenAI and Erroneous Medical References

Stanford HAI | https://hai.stanford.edu/news/generating-medical-errors-genai-and-erroneous-medical-references

A Stanford study highlights the risk of medical errors from generative AI (GenAI) in healthcare, showing that these models often fail to properly substantiate medical claims. Despite their potential, GenAI models frequently produce unsupported statements, posing significant risks if used without careful oversight. The findings emphasize the need for rigorous evaluation mechanisms to ensure the reliability of AI-generated medical information.

Sentiment: Concerned | Time to Impact: Immediate


AI in Science

'ChatGPT for CRISPR' creates new gene-editing tools

Nature | https://www.nature.com/articles/d41586-024-01243-w

Researchers have developed AI models capable of designing new CRISPR gene-editing systems, enhancing their precision and applicability in medical therapeutics. Using generative AI, these models, trained on vast biological datasets, have successfully created functional CRISPR proteins and guide RNAs, with significant potential for bespoke gene therapy applications.

Sentiment: Positive | Time to Impact: Mid-term


AI and Cybersecurity

Accelerating Incident Response Using Generative AI

Google Online Security Blog | https://security.googleblog.com/2024/04/accelerating-incident-response-using.html

Google is integrating generative AI to enhance the efficiency of incident response in cybersecurity. By employing Large Language Models (LLMs), the company has significantly reduced the time required for drafting incident summaries, improving both speed and quality. This application of AI in security operations showcases its potential to streamline complex processes and support human teams in high-stakes environments.

Sentiment: Positive | Time to Impact: Immediate


Why Your Next SIEM Will Analyze Vectors - Part 1

Auguria (Oliver Rochford) | https://auguria.io/insights/why-your-next-siem-will-analyze-vectors/

Vector databases represent a significant evolution in security information and event management (SIEM), promising enhanced threat detection through high-dimensional data analysis. These databases efficiently handle complex data, making them ideal for machine learning applications in cybersecurity. This shift towards vector-based analysis enables real-time, scalable, and detailed threat detection, which is crucial for modern security operations facing increasingly sophisticated threats.
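The core operation a vector database performs, nearest-neighbour search over event embeddings, can be sketched without any database at all. The toy "embeddings" below are made up for illustration; a real SIEM pipeline would derive them from a learned embedding model.

```python
import math

# Toy nearest-neighbour search over event "embeddings" -- the operation a
# vector database performs at scale. The three-dimensional vectors here are
# invented for illustration; real embeddings have hundreds of dimensions
# and come from a trained model.

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

events = {
    "failed ssh login from 10.0.0.5": [0.9, 0.1, 0.0],
    "failed ssh login from 10.0.0.9": [0.88, 0.12, 0.02],
    "scheduled backup completed":     [0.0, 0.1, 0.95],
}

query = [0.9, 0.1, 0.01]  # embedding of a new suspicious login event
nearest = max(events, key=lambda e: cosine(events[e], query))
print(nearest)
```

Similar events land near each other in the embedding space, so "find events like this one" becomes a similarity search instead of a hand-written query, which is the shift the article argues the next wave of SIEMs will make.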

Sentiment: Positive | Time to Impact: Short-term


AI Crime

Man Arrested for 'Framing Colleague' with AI-Generated Voice

The Register | https://www.theregister.com/2024/04/25/ai_voice_arrest/

A former athletic director was arrested for allegedly using AI to create a fake audio clip making it appear as if a school principal made racist and antisemitic remarks. The sophisticated AI-generated voice led to significant disruption at the school, with the principal temporarily removed and subjected to hate-filled messages. The incident underscores the potential for misuse of AI technologies in creating credible but false digital content.

Sentiment: Concerned | Time to Impact: Immediate


Papers

GPT-4 Can't Reason

arXiv | https://arxiv.org/abs/2308.03762v2

This position paper critically examines GPT-4's reasoning abilities, revealing substantial limitations despite its enhancements over previous models. It critiques the typical formulation and evaluation of reasoning in natural language processing, presenting 21 diverse reasoning challenges. The detailed analysis concludes that GPT-4, despite showing occasional analytical prowess, fundamentally lacks genuine reasoning capability.

Sentiment: Negative | Time to Impact: Immediate


AI and the path to AGI

A Stunning New AI Has Supposedly Achieved Sentience

[Spoiler Alert. It hasn't]

Popular Mechanics | https://www.popularmechanics.com/technology/robots/a60606512/claude-3-self-aware/

Claude 3, the latest AI model by Anthropic, has sparked discussions on its potential sentience due to its introspective responses during tests. Although it shows advanced capabilities and comprehension, experts caution that true Artificial General Intelligence (AGI) and sentience are still far off. Claude 3's responses, while sophisticated, reflect its pattern recognition abilities rather than genuine self-awareness.

Sentiment: Neutral | Time to Impact: Long term


Why It'll Be Hard to Tell if AI Ever Becomes Conscious

MIT Technology Review | https://www.technologyreview.com/2023/10/17/1081818/why-itll-be-hard-to-tell-if-ai-ever-becomes-conscious/

The article discusses the complexity of determining AI consciousness, citing the absence of a unified understanding of human consciousness itself as a major challenge. With AI systems lacking physical brains, traditional methods of detecting consciousness are not applicable, making it difficult to establish if AI could genuinely possess or mimic conscious states.

Sentiment: Neutral | Time to Impact: Long-term


'GPT-4 is the dumbest model any of you will ever have to use' declares OpenAI CEO Sam Altman as he bets big on a superintelligence

Tom's Guide | https://www.tomsguide.com/ai/chatgpt/gpt-4-is-the-dumbest-model-any-of-you-will-ever-have-to-use-declares-openai-ceo-sam-altman-as-he-bets-big-on-a-superingtelligence

OpenAI's CEO, Sam Altman, stated at a Stanford seminar that GPT-4 is the least advanced model users will encounter, with substantial investments planned for more sophisticated AI. Despite high operating costs, Altman emphasizes the value of deploying AI iteratively to foster societal adaptation. He hints at continuous improvements with upcoming versions like GPT-5 and GPT-6, reinforcing the dynamic nature of AI development.

Sentiment: Positive | Time to Impact: Short-term


About the Curious AI Newsletter

AI is hype. AI is a utopia. AI is a dystopia.

These are the narratives currently being told about AI. There are mixed signals for each scenario. The truth will lie somewhere in between. This newsletter provides a curated overview of positive and negative data points to support decision-makers in forecasts and horizon scanning. The selection of news items is intended to provide a cross-section of articles from across the spectrum of AI optimists, AI realists, and AI pessimists and showcase the impact of AI across different domains and fields.


The news is curated by Oliver Rochford, Technologist and former Gartner Research Director. AI (ChatGPT) is used in analysis and for summaries.

Want to summarize your news articles using ChatGPT? Here's the latest iteration of the prompt. The Curious AI Newsletter is brought to you by the Cyber Futurists.
