Curious AI #31
Oliver Rochford
Evangelist @ Auguria | Technologist | Cyberfuturist | Startup Advisor | Former Gartner Analyst
June 28, 2024: "A lot of boards have AI FOMO and will ask for LLMs to be implemented, but the data strategy is almost never in place" (source)
Welcome to the thirty-first issue of the Curious AI Newsletter, curated by Oliver Rochford, cyberfuturist and former Gartner Research Director, and synthesized by AI.
“A lot of people, I think, are having their first initial encounters with the technology and being a little bit disappointed.”
Jared Spataro, Corporate Vice President of AI at Work at Microsoft (source)
Want to discuss AI, quantum, and other emerging technologies?
Join our Curious AI & Intriguing Quantum Slack.
Chatbots in Cybersecurity
I am speaking on July 18 with Alex Hurtado from Anvilogic on the fantastic Detection Engineer Dispatch podcast about Chatbots in security operations.
We will discuss where and how AI chatbots can provide significant value across the various roles within a SOC, delving into compelling use cases where chatbots have proven particularly effective at bolstering security operations, as well as scenarios where they fall short.
RSVP here:
July 18, 2024 - 11AM PT | 2PM ET
Who needs AI surveillance jewelry?
Even after the recent Rabbit R1 security breach and the disastrous failed launch of Humane's AI Pin, we do not appear to have reached peak stupidity in ridiculous wearable AI gadget ideas. For example, Based Hardware is launching the "Friend" AI necklace.
According to the startup, "Friend" allows you to remember people you meet, conversations you have had, and commitments you have made.
Most sensible people, however, understand that you are essentially hanging a surveillance device around your neck and becoming a walking privacy nightmare.
Please contact me if you are funding this. I have some magic AI-infused beans for you.
Digital Twins: hyped, but little thought through
There is a lot of buzz surrounding Digital Twins. Whether it is creating an avatar of a deceased loved one, artists creating twins of themselves, or museums using AI to recreate historical personalities, it is a fascinating use case with numerous applications.
Just this week, Zoom CEO Eric Yuan spoke about his vision of users sending their digital twins to meetings on their behalf. This raises obvious, awkward questions about what exactly Eric believes a digital twin will be. We are unlikely to have the technology to transfer a replica of your mind into a GPT within the next couple of years. Most people, if not all, lack access to sufficient data to train a model to reliably act as them. We are not even sure how to capture the full mental deliberation and decision-making processes something like that would require.
We also have not figured out any reliable ways to delegate and limit authority among agents. One day, you may have the equivalent of a data contract that specifies what an agent can and cannot do in your name. But the tools we have today for this are primitive and manual, as far as they exist at all. That type of knowledge transfer is difficult, even among humans, so we have evolved numerous systems, including laws, to regulate it.
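To make the idea concrete, here is a minimal sketch of what such a "data contract" for delegated agent authority might look like. Everything here is hypothetical, including the class and field names; no such standard exists today, and real delegation would need far richer semantics (revocation, auditing, liability).

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class DelegationContract:
    """Hypothetical contract limiting what an agent may do in a user's name."""
    principal: str
    agent: str
    allowed_actions: frozenset  # actions explicitly granted to the agent
    forbidden_topics: frozenset = field(default_factory=frozenset)

    def permits(self, action: str, topic: str = "") -> bool:
        # Default-deny: an action is permitted only if explicitly allowed
        # and not touching a forbidden topic.
        return action in self.allowed_actions and topic not in self.forbidden_topics

contract = DelegationContract(
    principal="alice",
    agent="alice-meeting-bot",
    allowed_actions=frozenset({"summarize", "take_notes"}),
    forbidden_topics=frozenset({"salary", "legal"}),
)

print(contract.permits("summarize"))             # explicitly granted
print(contract.permits("commit_budget"))         # never granted
print(contract.permits("take_notes", "salary"))  # granted action, forbidden topic
```

Even this toy version shows why the problem is hard: the contract can only constrain actions someone thought to enumerate, while human delegation relies on judgment about everything left unsaid.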
What people are creating today are not digital twins. They are chatbots that regurgitate personalized soundbites to increase the emotional impact of the Eliza effect. We will probably end up with chatty transcription bots masquerading as our colleagues.
It sounds like the future of meetings will be just you and your colleagues' bots, similar to the "Dead Internet" but on Zoom. It also sounds like a technological solution to a social problem, an attempt to paper over bad meeting culture and practices. Your attendance is either necessary or it isn't. To capture information from a meeting, other AI-driven approaches are far more appropriate, such as simply sharing a recording and letting individuals who were not present mine it with their own agents. Why would we want everyone to make their own recordings?
Is Europe regulating itself out of an AI future?
If you believe the "E/ACC" crowd on X, whoever achieves the early-mover advantage will win the AI race. "Wait and see" as an AI strategy is painted as not just extremely risky but catastrophic; AI dominance implies dominance, period. Better AI means better materials science, better ballistics, better missiles, and better drones. Better everything.
AI accelerationists believe that superintelligence is inevitable, that we are not far from achieving it, and that if we are not the first to get there, an adversary will be.
The question of who the "we" are in all of this remains difficult to resolve in today's increasingly networked and polarized society. But most importantly, all of this rests on highly questionable assumptions. For example, superintelligence is not a foregone conclusion. We still do not know how to accomplish it.
The AI scaling hypothesis is the most prominent theory today. It holds that increasing compute, data, and model parameters can dramatically increase AI model performance. But the scaling hypothesis is, as the name implies, still unconfirmed, and there are numerous counterarguments and skeptics.
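For illustration, the hypothesis is usually expressed as a power law: loss falls smoothly as model size grows, roughly L(N) = (N_c / N)^alpha. The toy snippet below plugs in constants loosely based on Kaplan et al.'s 2020 scaling-law fits; treat both the functional form and the numbers as illustrative, not as settled science.

```python
# Toy illustration of the scaling hypothesis: loss falls as a power law
# in parameter count N. Constants are loosely based on published fits
# (N_c ~ 8.8e13, alpha ~ 0.076) and are for illustration only.

def scaling_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Power-law loss L(N) = (N_c / N) ** alpha."""
    return (n_c / n_params) ** alpha

for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> loss {scaling_loss(n):.3f}")
```

Note what the curve implies: each constant improvement in loss requires a roughly tenfold increase in parameters, which is exactly why the debate turns on whether the supply of compute, data, and energy can keep pace.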
Even if the idea is sound, there are still issues of feasibility and, most importantly, timeline. As we have discussed regularly in Curious AI, hard limits must be overcome before we have enough energy, data, hardware, and even data centers. Scaling AI requires building a completely new supply chain that simply does not exist today.
Tighter AI regulation is a bet of a different kind than the one the "E/ACC" faction wants us to take. It rests on the notion that societies that do not govern AI will disintegrate before reaching anything close to AGI, and that they may even hasten their own collapse by misallocating resources or making poor decisions. For example, former OpenAI researcher Leopold Aschenbrenner has advocated closing the AI energy gap by increasing fossil fuel usage.
If the race for AGI is still in its early stages, waiting it out could be an entirely reasonable option. Let others exhaust themselves while you lay the groundwork.
Is this a fair bet? That depends on how optimistic you are about AI progress. Betting that AGI, or at least dominance-level AI, is on the horizon is equally dangerous. Both rely heavily on guesswork and faith.
But if you are already behind and lack the resources to catch up, the waiting game may be the only viable option left for you.
Want to learn all about the latest in Quantum Technology?
Check out the Intriguing Quantum Newsletter, co-authored by Daniella Pontes, CISSP, and me.
Sovereign AI
Mistral AI warns of lack of data centres, training capacity in Europe
Mistral AI has raised concerns about Europe's insufficient data center and AI training infrastructure. The company emphasizes that the current capacity is inadequate to meet the growing demands of AI development and deployment. This shortage risks hindering the region's competitiveness in the global AI landscape. Mistral AI calls for urgent investment in expanding data center facilities and training programs to support the AI sector's growth and innovation.
Sentiment: Concerned | Time to Impact: Mid to Long-term
OpenAI to Pull Plug on Unsupported Nations – Cough, China – from July 9
OpenAI will block access to its services in unsupported countries starting July 9. This includes China, Russia, Iran, and North Korea. The company cites the use of ChatGPT for malicious activities in these regions as a reason for the move. Rising US-China tensions and efforts for technological independence in China might also influence this decision. The ban could negatively impact developers in these regions.
Sentiment: Negative | Time to Impact: Short-term
OTHER
AI Regulation
Meta pauses plans to train AI using European users' data, bowing to regulatory pressure
Meta has halted its plans to train AI systems using data from European users in response to increasing regulatory pressure. The decision follows heightened scrutiny over data privacy and compliance with EU regulations. This pause aims to address regulatory concerns and align with European data protection standards, reflecting ongoing challenges tech companies face in navigating regional regulatory landscapes.
Sentiment: Concerned | Time to Impact: Short to Mid-term
OpenAI expands lobbying team to influence regulation
Financial Times | https://www.ft.com/content/2bee634c-b8c4-459e-b80c-07a4e552322c
OpenAI is significantly expanding its lobbying team to influence AI regulation globally. The company has grown its global affairs staff from three to 35, aiming for 50 by the end of 2024. This expansion targets AI legislation worldwide, including in the EU, UK, and US. OpenAI seeks to ensure regulations support innovation and safety. The move comes as governments scrutinize AI more closely.
Sentiment: Neutral to Positive | Time to Impact: Short to Mid-term
AI Business
Can AI Startups Outrun Dot-Com Bubble Comparisons? Investors Aren’t So Sure
The Wall Street Journal | https://www.wsj.com/articles/can-ai-startups-outrun-dot-com-bubble-comparisons-investors-arent-so-sure-6e7d90c0
Investors are drawing parallels between the current AI startup boom and the dot-com bubble of the late 1990s. Despite substantial investments, there are concerns about the sustainability of the rapid growth and valuations in the AI sector. Skeptics worry that many AI startups may face a harsh reality if they fail to deliver on their ambitious promises, potentially leading to a market correction.
Sentiment: Concerned | Time to Impact: Short to Mid-term
This $1 Billion AI Chatbot Has Been Accused of Stealing Content and Lying
TechSpot | https://www.techspot.com/news/103500-1-billion-ai-chatbot-has-accused-stealing-content.html
Perplexity, a popular AI chatbot endorsed by Nvidia CEO Jensen Huang, faces accusations of content theft and dishonesty. Publications like Forbes and Wired have criticized Perplexity for rewriting articles without proper attribution and generating false information. Perplexity's practices include ignoring web standards to scrape content, leading Forbes to threaten legal action for copyright infringement. The controversy highlights significant legal and ethical concerns in the AI content generation space.
Sentiment: Concerned | Time to Impact: Immediate to Short-term
Microsoft CTO Kevin Scott says early previews of newer AI models surpass OpenAI GPT-4's reasoning capabilities and can even pass a PhD qualifying exam
Windows Central | https://www.windowscentral.com/software-apps/microsoft-cto-kevin-scott-says-early-previews-of-newer-ai-models-surpass-openai-gpt-4s-reasoning-capabilities
Microsoft CTO Kevin Scott stated that early previews of new AI models demonstrate reasoning and memory capabilities surpassing OpenAI's GPT-4. These advancements suggest that newer models can handle more complex tasks, including passing PhD qualifying exams. Scott highlighted improvements in AI's episodic memory and durable reasoning, enhancing productivity and problem-solving efficiency.
Sentiment: Positive | Time to Impact: Short to Mid-term
Microsoft AI CEO Mustafa Suleyman Says GPT-6 Will Come in Two Years
Analytics India Magazine | https://analyticsindiamag.com/microsoft-ai-ceo-mustafa-suleyman-says-gpt-6-will-come-in-two-years/
Microsoft AI CEO Mustafa Suleyman revealed that GPT-6 is expected to be released within the next two years. He highlighted advancements in AI capabilities and emphasized the importance of safety and ethical considerations in developing powerful AI models. Suleyman discussed the transformative potential of GPT-6 in various industries and its anticipated impact on enhancing productivity and innovation.
Sentiment: Optimistic | Time to Impact: Mid-term
OTHER
Funding and Project Announcements
AI at Work
As Employers Embrace AI, Workers Fret—and Seek Input
Klarna's use of AI to save costs and boost profits exemplifies the potential of generative AI in business. However, while AI can streamline tasks and improve efficiency, its impact on jobs concerns many workers. Companies like Accenture involve employees in AI implementation to mitigate fears and ensure AI aids rather than replaces human roles. The evolving AI landscape emphasizes the need for AI skills, highlighting both opportunities and challenges for businesses and workers.
Sentiment: Neutral | Time to Impact: Short to Mid-term
AI Took Their Jobs. Now They Get Paid to Make It Sound Human
AI's impact on jobs is evident as it replaces human roles, particularly in copywriting. Workers like Benjamin Miller are now tasked with editing AI-generated content to make it sound more human. This new type of job often pays less and is more tedious. While AI can enhance productivity for experienced writers, it poses significant challenges for those early in their careers, highlighting a shift towards human-AI collaboration amid concerns about job security and quality of work.
Sentiment: Concerned | Time to Impact: Immediate to Short-term
AI Doesn’t Kill Jobs? Tell That to Freelancers
Wall Street Journal | https://www.wsj.com/tech/ai/ai-replace-freelance-jobs-51807bc7
AI's impact on freelancers is substantial, with many experiencing significant income drops as AI tools like ChatGPT replace tasks in writing, coding, and design. Studies show a decline of up to 21% in job postings for AI-replaceable tasks on platforms like Upwork. While some freelancers benefit from increased productivity and demand in specialized fields, many struggle with reduced opportunities and lower pay, highlighting the broader implications of AI on the gig economy.
Sentiment: Concerned | Time to Impact: Immediate to Short-term
AI Detectors Get It Wrong. Writers Are Being Fired Anyway
Freelance writers are losing jobs due to false positives from AI detectors like Originality.AI, which often misidentify human-written content as AI-generated. Writers like Kimberly Gasuras have been unjustly flagged and dismissed, despite their extensive experience and efforts to prove their innocence. The AI detection tools, marketed as highly accurate, have significant flaws and contribute to job insecurity and distrust among clients.
Sentiment: Concerned | Time to Impact: Immediate
Study: GPTs are GPTs: Labor market impact potential of LLMs
Researchers propose a framework to evaluate the impacts of large language models (LLMs) on jobs by assessing their relevance to the tasks performed. Using this framework, they estimate that about 1.8% of jobs could have over half their tasks affected by LLMs with basic interfaces and training. This share rises to over 46% when considering current and future software developments that enhance LLM capabilities. The study highlights the necessity for comprehensive evaluations and policy measures to address the potential labor market effects of LLMs and related technologies.
Sentiment: Neutral | Time to Impact: Mid-term
New "Emotion-Canceling" AI Tech Aims to Shield Call Workers from Angry Customers
Ars Technica | https://arstechnica.com/information-technology/2024/06/new-emotion-canceling-ai-tech-aims-to-shield-call-workers-from-angry-customers/
A new AI technology designed to protect call center workers from abusive and angry customers has been developed. This "emotion-canceling" AI analyzes the tone and content of customer interactions in real-time, filtering out aggressive language and moderating the conversation to maintain a calmer environment. This innovation aims to reduce stress and emotional strain on customer service representatives, potentially improving job satisfaction and overall productivity.
Sentiment: Positive | Time to Impact: Short-term
OTHER
AI in Finance
Morgan Stanley Debuts OpenAI-Powered Assistant for Wealth Advisors
CNBC | https://www.cnbc.com/2024/06/26/morgan-stanley-openai-powered-assistant-for-wealth-advisors.html
Morgan Stanley has introduced an AI-powered assistant developed in collaboration with OpenAI to support its wealth advisors. This assistant aims to enhance the efficiency and effectiveness of advisors by providing quick access to information and data analysis, ultimately improving client service. The new tool reflects a broader trend in the financial industry toward integrating advanced AI technologies to streamline operations and bolster advisory services.
Sentiment: Positive | Time to Impact: Immediate to Short-term
Citi Accused of Implementing AI on Excel Spreadsheets
eFinancialCareers | https://www.efinancialcareers.co.uk/news/citi-stands-accused-of-attempting-to-implement-ai-on-the-back-of-excel-spreadsheets
Citi is facing criticism for attempting to implement AI solutions using Excel spreadsheets, raising concerns about the effectiveness and sophistication of its AI strategies. Employees reportedly have to manipulate Excel data extensively to enable AI functionalities, reflecting a potentially inadequate approach to leveraging advanced technologies in financial operations. This situation highlights broader challenges in adopting AI in established financial institutions.
Sentiment: Concerned | Time to Impact: Immediate to Short-term
Other
AI in Law
GenAI Hallucinations: Lawyers Aren’t Perfect Either
Artificial Lawyer | https://www.artificiallawyer.com/2024/06/24/genai-hallucinations-lawyers-arent-perfect-either/
This article discusses the phenomenon of generative AI (GenAI) "hallucinations" in legal contexts, where AI outputs incorrect or nonsensical information. It compares these errors to mistakes made by human lawyers, highlighting that both AI and humans are fallible. The piece argues that while GenAI can enhance legal work, it should be used with caution and under human supervision to mitigate the impact of these errors.
Sentiment: Neutral | Time to Impact: Immediate
AI in Recruitment
ChatGPT AI Introduces Bias Against Disabilities in Resumes, Study Finds
University of Washington News | https://www.washington.edu/news/2024/06/21/chatgpt-ai-bias-ableism-disability-resume-cv/
A study by the University of Washington found that ChatGPT can introduce bias against people with disabilities in resumes and CVs. The AI often omits crucial disability-related information or fails to present it appropriately, potentially affecting the employment prospects of disabled individuals. Researchers emphasize the need for better AI training to prevent such biases and advocate for inclusive practices in AI development to ensure fair representation of all users.
Sentiment: Concerned | Time to Impact: Immediate to Short-term
AI in Education
Surge in young people's use of generative AI prompts concerns about literacy skills
The Bookseller | https://www.thebookseller.com/news/surge-in-young-peoples-use-of-generative-ai-prompts-concerns-about-literacy-skills
There is growing concern over young people's increasing reliance on generative AI tools like ChatGPT, raising alarms about potential negative impacts on literacy skills. Educators and experts warn that these tools may hinder the development of critical reading and writing abilities, as students might rely too heavily on AI for tasks that traditionally build these skills. There is a call for balanced use and educational strategies to ensure that young people continue to develop strong literacy foundations.
Sentiment: Concerned | Time to Impact: Mid-term
AI can beat university students, study suggests
A study by the University of Reading found that AI-generated answers to psychology exams outperformed real student submissions. Researchers created 33 fictitious students using ChatGPT to generate exam responses, which scored half a grade higher on average compared to real students. The AI essays were largely undetected by markers, with only 6% raising concerns. The study suggests a need for the education sector to adapt to the implications of AI on academic integrity.
Sentiment: Concerned | Time to Impact: Short to Mid-term
AI in Society
How Kids Are Using Generative AI
Children are increasingly using generative AI for creativity and learning, but this trend raises concerns about their exposure to misinformation and the ethical implications of AI-generated content. Parents and educators are tasked with guiding kids to use these tools responsibly while leveraging their potential to enhance education. The article explores the balance between fostering innovation and ensuring safety in an AI-driven digital environment.
Sentiment: Neutral | Time to Impact: Immediate to Short-term
Black Founders are Creating Tailored ChatGPTs for a More Personalized Experience
TechCrunch | https://techcrunch.com/2024/06/16/black-founders-are-creating-tailored-chatgpts-for-a-more-personalized-experience/
Black founders are developing AI models tailored to their communities, addressing the cultural biases in mainstream AI. These new models, such as Latimer.AI and ChatBlackGPT, focus on accurately reflecting the experiences and languages of Black and brown people. Innovations in Africa, like CDIAL.AI, are also enhancing AI's cultural relevance by supporting African languages and dialects.
Sentiment: Positive | Time to Impact: Short to Mid-term
AI in Death
I became an orphan at 26. So I turned my parents into ghostbots
The Times | https://www.thetimes.com/magazines/the-times-magazine/article/ai-grief-tech-ghostbots-lwnjmf8xl
Lottie Hayton shares her experience using AI to create "ghostbots" of her deceased parents, hoping it would help her grieve. She describes the emotional impact and technical challenges of interacting with AI versions of her parents. Despite initial curiosity, Hayton finds the experience ultimately distressing and questions the ethical implications and effectiveness of AI in processing grief.
Sentiment: Mixed | Time to Impact: Immediate to Short-term
OTHER
AI in Medicine
Pillbot Begins Clinical Trials
The Pillbot, a swallowable robotic capsule designed to diagnose and monitor gastrointestinal issues, has entered clinical trials. Developed by a team at NYU Abu Dhabi, the Pillbot uses advanced imaging technology to capture detailed internal images of the digestive tract. This innovation aims to provide a less invasive and more comfortable alternative to traditional endoscopic procedures, potentially revolutionizing gastrointestinal diagnostics.
Sentiment: Positive | Time to Impact: Mid-term
AI in Art and Media
Award-Winning Photo Disqualified from AI Category for Being Real
A photo that won an award in the AI category of a photo competition was disqualified after judges discovered it was a real photograph, not AI-generated. The incident highlights the challenges of distinguishing between AI-generated and real images in competitions, raising questions about the criteria and verification processes used to assess entries.
Sentiment: Neutral | Time to Impact: Immediate
Meta Tagging Real Photos Made with AI
Meta has announced new measures to tag real photos that were enhanced or created with AI. This initiative aims to combat misinformation by clearly identifying AI-modified images on its platforms. The company will use advanced algorithms to detect AI usage in photos, applying visible tags to inform users. This move is part of Meta's broader efforts to ensure transparency and maintain trust in digital content.
Sentiment: Positive | Time to Impact: Short-term
OTHER
AI Carbon Footprint
Microsoft Bets on Fusion Power Amid AI Energy Demands
The Washington Post | https://www.msn.com/en-us/money/technology/ai-is-exhausting-the-power-grid-tech-firms-are-seeking-a-miracle-solution/ar-BB1oDl5z
Microsoft is investing in nuclear fusion near the Columbia River to meet AI's growing energy needs. Despite skepticism, the tech giant aims to harness fusion by 2028, amid rising fossil fuel use due to AI's electricity demands. Big Tech's experimental energy projects, like small nuclear reactors and geothermal energy, face long odds. Critics argue these efforts distract from immediate environmental concerns as AI data centers drive increased fossil fuel reliance.
Sentiment: Concerned | Time to Impact: Mid to Long-term
AI Data Centers Consume Vast Energy, Impact Environment
The rise of AI is significantly increasing energy consumption at data centers, leading to substantial environmental impacts. These centers, which power AI technologies, are becoming major electricity consumers, often relying on non-renewable energy sources. This trend exacerbates carbon emissions and environmental degradation, countering the tech industry's sustainability promises. As AI's demand grows, finding green energy solutions becomes crucial to mitigate its ecological footprint.
Sentiment: Concerned | Time to Impact: Immediate to Short-term
AI and Cybersecurity
Neuralink's First Human Patient Noland Arbaugh Says His Brain Chip Can Be Hacked: 'It Is What It Is'
Hindustan Times | https://www.hindustantimes.com/business/neuralinks-first-human-patient-noland-arbaugh-says-his-brain-chip-can-be-hacked-it-is-what-it-is-101719211218257.html
Noland Arbaugh, Neuralink's first human patient, acknowledges that his brain chip could potentially be hacked but remains accepting of the risk. Arbaugh's perspective underscores ongoing security and ethical concerns surrounding Neuralink's brain-computer interface technology. Despite these concerns, the advancements in neural technology promise significant medical breakthroughs.
Sentiment: Neutral to Concerned | Time to Impact: Short to Mid-term
AI in Practice
AI Work Assistants Need a Lot of Handholding
Wall Street Journal | https://www.wsj.com/articles/ai-work-assistants-need-a-lot-of-handholding-500c2bd8
AI work assistants, despite their potential, often require significant oversight and management to function effectively. These tools can misinterpret data, make errors, or lack the nuanced understanding necessary for complex tasks. Companies find that human intervention is frequently needed to correct and guide AI outputs, highlighting the current limitations of AI in professional settings. This dependence on human support underscores the ongoing challenges in integrating AI seamlessly into the workplace.
Sentiment: Neutral | Time to Impact: Immediate to Short-term
Typing to AI Assistants Might Be the Way to Go
Apple's iOS 18 introduces a more integrated "Type to Siri" feature, allowing users to type commands instead of speaking them, which can be awkward in public. This update addresses the common discomfort and privacy concerns associated with voice commands. The feature provides a more discreet way to interact with AI assistants, offering quick suggestions for ease of use, and enhancing the practicality of AI in noisy or public settings.
Sentiment: Positive | Time to Impact: Immediate
About the Curious AI Newsletter
AI is hype. AI is a utopia. AI is a dystopia.
These are the narratives currently being told about AI. There are mixed signals for each scenario. The truth will lie somewhere in between. This newsletter provides a curated overview of positive and negative data points to support decision-makers in forecasts and horizon scanning. The selection of news items is intended to provide a cross-section of articles from across the spectrum of AI optimists, AI realists, and AI pessimists and showcase the impact of AI across different domains and fields.
The news is curated by Oliver Rochford, Technologist, and former Gartner Research Director. AI (ChatGPT) is used in analysis and for summaries.
Want to summarize your news articles using ChatGPT?
Here's the latest iteration of the prompt. The Curious AI Newsletter is brought to you by the Cyber Futurists.