Curious AI 33

Welcome to issue 33 of the Curious AI Newsletter, curated by Oliver Rochford, cyber futurist and former Gartner Research Director, and synthesized and summarized using AI.


AI Tribe of the Week

Biochauvinist

Prefers organic, biological evolution and sees AI as a tool rather than a potential equal or superior. Think of them as the hipsters of evolution, always insisting, “I was evolving biologically before it was cool.” They might say, “AI is nice and all, but I prefer my intelligence organic, free-range, and non-GMO.”

Tagline: “Real intelligence: No batteries required”

Get your AI Tribe Infographic here

“I remain an AI optimist and am confident that we’ll get there. It’s just taking a little longer than perhaps we thought”

Sharon Mandell, chief information officer @ Juniper Networks (source)


Want to discuss AI, quantum, and other emerging technologies?

Click here to join the Curious AI & Intriguing Quantum Slack.


Can amazing breakthrough technology still cause a bubble?

AI stocks are experiencing unprecedented growth, with companies such as Nvidia posting massive gains, including a 166% surge this year. Yet there is a clear disparity between investment and returns, alongside growing disillusionment and negative sentiment about the practical utility of many AI features and products that have been rushed to market. Some observers and industry commentators are starting to wonder how big a bubble AI will become, citing concerns about sustainability and potential market volatility. Goldman Sachs reports that $1 trillion is currently being invested in AI, with few tangible results to show for it so far. At the same time, massive investments in AI infrastructure, such as data centers and GPUs, signal a long-term commitment to AI development despite concerns about short-term returns. Anthropic CEO Dario Amodei, for example, revealed that models costing $1 billion to train are in development, and xAI intends to build a supercomputer with 100,000 Nvidia H100 GPUs. Sequoia Capital reports that AI is now "shovel ready," and that Amazon, Microsoft, Meta, and Google will invest hundreds of billions of dollars in data centers over the next few years.

Companies such as Uber and AWS have successfully used AI to streamline processes and dramatically improve efficiency, demonstrating the power of AI-driven innovation. But this has come neither quickly nor cheaply: Uber, for example, has been undergoing continuous transformation for the past eight years. Many businesses are turning to consultants to help accelerate their AI transformations, though it remains unclear where those consultants are getting their experience and expertise.

Most concerning: People can’t distinguish GPT-4 from humans

A study comparing GPT-4, GPT-3.5, and ELIZA in a Turing test found that GPT-4 was judged to be human 54% of the time. It is not the first indication that current AI systems can trick people into thinking they are human, but it is concerning because susceptibility to the Eliza effect appears to be quite common. It is reasonable to assume that outside of a research setting, in the wild, the number may be even higher. After all, the test subjects knew in advance that they might be talking to a machine, which is not always clear when chatting with someone online. Throw human-parity speech into the mix and the result could be truly combustible. Which is probably why Microsoft has decided not to release its new AI speech generator, having determined it too dangerous for public release.

Most Cool: New AI technique accelerates AI training, reducing energy consumption

A research team at Google DeepMind has developed a new training method called JEST, reportedly 13 times faster and 10 times more efficient than other known techniques. It’s fantastic to see efficiency and costs beginning to be optimized across the entire AI training-to-inference pipeline. Generative AI's resource-intensive operations are already straining power grids and water supplies, as today's AI models are around 100 to 1,000 times more computationally intensive than traditional digital services. Alongside other improvements, such as the control systems we learned Phaidra has started developing, energy use in data centers may be tamed over time. Which is just as well.

Most predictable: Trying to fix biases in AI models introduces new biases

A study discovered that attempts to correct gender biases in GPT models resulted in new or reversed discrimination, such as attributing stereotypically masculine phrases to women and, more concerningly, bias in moral dilemmas, where the models found it more acceptable to abuse men than women in high-stakes situations. Unfortunately, much of the training data contains biases because it is derived from a fundamentally biased and diverse world. Differences in values, ethics, morals, and worldviews determine whether someone considers something an unfair bias or a fact, and users will want their own values reflected in the AI models they use. Reaching agreement on how to resolve this contentious issue will be impossible, especially given the polarization of many societies. It also looks like a surrogate activity: an attempt to fix the world by fixing an AI model. More ethical or moral models cannot exist unless the world itself becomes more ethical or moral. In the meantime, as we are already seeing with X’s Grok, people will create and seek out AIs that validate their biases.


Want to learn about the latest trends and events in quantum technology?

Check out the Intriguing Quantum Newsletter.


AI Warbots

The Era of Killer Robots Is Here

The New York Times | https://www.nytimes.com/2024/07/02/technology/ukraine-war-ai-weapons.html

The article examines how the Ukraine war has become a testing ground for AI-powered weapons, highlighting their effectiveness and ethical concerns. AI technology, including drones and automated targeting systems, is being used extensively. The deployment raises questions about the future of warfare, accountability, and the potential for AI to make autonomous decisions in combat scenarios.

Sentiment: Neutral | Time to Impact: Short-term

Russian State-Sponsored Media Uses AI-Enhanced Software for Influence Operations

Canadian Centre for Cyber Security | https://www.cyber.gc.ca/en/news-events/russian-state-sponsored-media-organization-leverages-ai-enhanced-meliorator-software-foreign-malign-influence-activity

Russian state-sponsored media has used AI-enhanced "Meliorator" software to create fake personas and spread disinformation on social media. The tool allows for the management and dissemination of false information through these fictitious profiles. Authorities urge social media companies to identify and mitigate these activities to reduce foreign influence operations.

Sentiment: Negative | Time to Impact: Short-term


Sovereign AI

Chinese Developers Scramble as OpenAI Blocks Access in China

The Guardian | https://www.theguardian.com/world/article/2024/jul/09/chinese-developers-openai-blocks-access-in-china-artificial-intelligence

OpenAI has blocked access to its services in China, prompting Chinese AI developers to pivot to domestic alternatives like SenseTime's SenseNova 5.5. This move, driven by US-China tensions and export restrictions on advanced semiconductors, has spurred Chinese companies like Baidu, Zhipu AI, and Tencent Cloud to offer free tokens and migration services to attract former OpenAI users. While this poses challenges, it also accelerates the development of Chinese AI technologies.

Sentiment: Neutral | Time to Impact: Short-term

OpenAI ban: China gets brand new AI claimed to rival GPT-4 power

Interesting Engineering | https://interestingengineering.com/culture/chinese-developers-launch-new-ai-model

Chinese developers have introduced a new AI model demonstrating advanced capabilities in natural language processing and computer vision. This model competes with leading international AI technologies and underscores China's rapid advancements in the AI sector. The launch highlights the ongoing AI race and China's strategic investments in developing cutting-edge AI solutions to compete globally.

Sentiment: Positive | Time to Impact: Short-term

Chinese AI Stirs Panic at European Geoscience Society

Science | https://www.science.org/content/article/chinese-ai-stirs-panic-european-geoscience-society

The European Geosciences Union (EGU) is embroiled in controversy over the use of the AI-powered chatbot GeoGPT, developed by Alibaba's CTO Jian Wang. Concerns about transparency, state censorship, and potential copyright infringement led to internal conflicts, ultimately resulting in the firing of EGU President Irina Artemieva. The issue highlights the growing anxiety around AI and China's influence in scientific research.

Sentiment: Negative | Time to Impact: Short-term

China Filed Most Gen AI Patents Since 2013

The Register | https://www.theregister.com/2024/07/04/china_dominates_ai_ip_wipo/

China has dominated generative AI patents and scientific publications from 2014 to 2023, with 38,210 inventions, significantly outpacing the US's 6,276. Chinese companies like Tencent, Ping An Insurance, and Baidu lead in patent filings. OpenAI, while low in publication volume, is highly cited. The rapid increase in AI-related patents, especially in image and video tech, highlights China's prioritization of AI research.

Sentiment: Concerned | Time to Impact: Mid-term

The Underground Network Sneaking Nvidia Chips Into China

The Wall Street Journal | https://www.wsj.com/tech/the-underground-network-sneaking-nvidia-chips-into-china-f733aaa6

An underground market has emerged to circumvent U.S. export controls, smuggling Nvidia's advanced AI chips into China. A 26-year-old Chinese student transported six Nvidia A100 chips in his luggage from Singapore to China in November 2023. These chips are in high demand due to their restriction by U.S. export policies, highlighting supply-chain blind spots and informal networks enabling their illicit transport.

Sentiment: Concerned | Time to Impact: Immediate

Chinese Self-Driving Cars on U.S. Roads Raise National Security Concerns

Fortune | https://fortune.com/2024/07/08/chinese-self-driving-cars-us-roads-data-collection-surveillance-national-security-concerns-investigation/

Chinese self-driving cars have traveled 1.8 million miles on U.S. roads, collecting extensive data with cameras and sensors. These vehicles, owned by companies like WeRide and Pony.ai, raise concerns about data privacy and potential surveillance. Despite the significant data collected, there is a lack of oversight on what is being gathered and how it is used, sparking national security concerns.

Sentiment: Negative | Time to Impact: Short-term


AI Copyright, Regulation and Antitrust

Microsoft and Apple Drop OpenAI Seats Amid Antitrust Scrutiny

Financial Times | https://www.ft.com/content/ecfa69df-5d1c-4177-9b14-a3a73072db12

Microsoft and Apple have resigned from their observer seats on OpenAI's board amid increased antitrust scrutiny. This move comes as regulators intensify their examination of major tech companies' influence over emerging technologies like AI. Both companies aim to avoid potential conflicts of interest and regulatory issues as the competitive landscape in AI continues to evolve.

Sentiment: Neutral | Time to Impact: Short-term

Peloton Faces Lawsuit Over AI Training on User Chat Data

ITPro | https://www.itpro.com/security/privacy/peloton-faces-lawsuit-amid-claims-it-allowed-marketing-firm-to-train-ai-on-user-chat-data

Peloton is being sued for allegedly allowing a marketing firm to train AI models on user chat data without consent. The lawsuit claims this practice violated user privacy and data protection regulations. The case highlights growing concerns about data misuse in AI training and the need for stringent privacy safeguards in handling user-generated content.

Sentiment: Negative | Time to Impact: Short-term


AI Business

AI Models Costing $1 Billion to Train Are in Development

Tom's Hardware | https://www.tomshardware.com/tech-industry/artificial-intelligence/ai-models-that-cost-dollar1-billion-to-train-are-in-development-dollar100-billion-models-coming-soon-largest-current-models-take-only-dollar100-million-to-train-anthropic-ceo

Anthropic CEO Dario Amodei reveals AI models costing $1 billion to train are in development, with future models potentially reaching $100 billion. Current models like ChatGPT-4 cost around $100 million. These costs are driven by exponential growth in AI power and hardware requirements, anticipating even larger expenses soon.

Sentiment: Neutral | Time to Impact: Mid-term

Elon Musk's xAI Plans to Build 'Gigafactory of Compute' by Fall 2025

Interesting Engineering | https://interestingengineering.com/innovation/musk-supercomputer-350000-nvidia-gpus

Elon Musk's AI startup xAI plans to build a supercomputer named "Gigafactory of Compute" by fall 2025, using 100,000 Nvidia H100 GPUs. This initiative aims to significantly surpass existing GPU clusters and enhance xAI's AI capabilities, particularly for training large language models like Grok.

Sentiment: Positive | Time to Impact: Mid-term

Andreessen Horowitz Is Building a Stash of More Than 20,000 GPUs to Win AI Deals

The Information | https://www.theinformation.com/articles/andreessen-horowitz-is-building-a-stash-of-more-than-20-000-gpus-to-win-ai-deals

Andreessen Horowitz is accumulating over 20,000 GPUs to secure a competitive edge in the AI industry. This strategic move aims to attract AI startups by providing essential computational resources. The firm's investment underscores the growing importance of GPUs in AI development and reflects the increasing competition among venture capital firms to support AI-driven innovations.

Sentiment: Positive | Time to Impact: Mid-term

AI is Now Shovel Ready

Sequoia Capital | https://www.sequoiacap.com/article/ai-data-center-buildout/

Sequoia Capital predicts a significant increase in data center construction driven by AI demands in 2025. This boom will impact energy sectors, with advancements in solar, battery, and nuclear energy. New industrial AI players will emerge to address data center needs, potentially causing delays due to technical issues. The growth will stimulate the economy, benefiting various industries. Major tech companies like Amazon, Microsoft, and Google are heavily investing in new data centers, which will ultimately reduce AI training and inference costs.

Sentiment: Positive | Time to Impact: Mid-term

Nvidia Stock Is Still Booming, but Is the Bubble About to Burst?

Yahoo Finance | https://uk.finance.yahoo.com/news/nvidia-stock-still-booming-bubble-131949213.html

Nvidia's stock has soared 166% in 2024, driven by the AI boom. However, analysts express concerns about its sustainability, suggesting that the current surge might resemble a bubble due to high valuation and market volatility. Investors are advised to monitor market trends closely.

Sentiment: Neutral | Time to Impact: Short-term

Goldman Sachs: $1tn to be spent on AI data centers, chips, and utility upgrades, with "little to show for it so far"

Data Center Dynamics | https://www.datacenterdynamics.com/en/news/goldman-sachs-1tn-to-be-spent-on-ai-data-centers-chips-and-utility-upgrades-with-little-to-show-for-it-so-far/

Goldman Sachs reports that $1 trillion will be invested in AI-related infrastructure, including data centers, chips, and utility upgrades. Despite the massive expenditure, tangible results have been limited so far. The investments aim to support the growing demands of AI technologies, but the slow return on investment raises concerns about the efficiency and impact of such a large financial commitment.

Sentiment: Neutral | Time to Impact: Long-term

Gen AI: Too Much Spend, Too Little Benefit?

Goldman Sachs Research | https://web.archive.org/web/20240629140307/https://goldmansachs.com/intelligence/pages/gs-research/gen-ai-too-much-spend-too-little-benefit/report.pdf

Goldman Sachs' report debates the economic potential of generative AI. While some experts remain optimistic about AI's long-term transformative capabilities, others argue that current investments may not yield significant short-term benefits. Key concerns include high costs, technical limitations, and uncertain productivity gains. The discussion highlights both potential and skepticism surrounding AI's impact on the economy.

Sentiment: Neutral | Time to Impact: Mid-term

64% of Customers Not Keen on AI-Powered Customer Service

The Register | https://www.theregister.com/2024/07/09/gartner_simply_replacing_hold_music/

A Gartner survey reveals 64% of customers prefer not to use AI for customer service, fearing it may provide incorrect answers and make it harder to reach human agents. Despite business leaders' interest in AI for cost efficiency, customers remain skeptical. Effective AI integration should assure customers of human support when needed, to maintain trust and satisfaction.

Sentiment: Negative | Time to Impact: Short-term

AI is Effectively ‘Useless’—and it’s Created a ‘Fake it Till You Make it’ Bubble

Fortune | https://finance.yahoo.com/news/ai-effectively-useless-created-fake-194008129.html

James Ferguson of MacroStrategy Partnership warns that AI's current hype resembles the dot-com bubble, predicting a potential downfall due to unresolved issues like hallucinations and high energy consumption. He argues AI remains unproven outside niche applications, likening Nvidia's valuation to past tech disappointments. Ferguson suggests investors seek value in U.S. small-cap stocks as the AI bubble could burst.

Sentiment: Negative | Time to Impact: Short-term

The Risk of "Good Enough" in Large Language Models

Forbes | https://www.forbes.com/sites/forbesbooksauthors/2024/07/08/the-risk-of-good-enough-in-large-language-models/

The article discusses the potential dangers of relying on "good enough" performance in large language models (LLMs). While these models can perform impressive tasks, their underlying mechanisms are not fully understood, leading to concerns about their reliability and the risk of misuse. The piece emphasizes the importance of continuing to refine and understand these models to mitigate risks associated with misinformation, privacy, and security.

Sentiment: Neutral | Time to Impact: Short-term

From Predictive to Generative – How Michelangelo Accelerates Uber’s AI Journey

Uber | https://www.uber.com/en-GB/blog/from-predictive-to-generative-ai/

Uber's AI platform, Michelangelo, has evolved from supporting predictive models to incorporating generative AI. This transition has enhanced Uber's machine learning capabilities, improving real-time applications across rider and Eats apps, fraud detection, and customer service. The platform now supports deep learning, collaborative model development, and advanced ML tools, streamlining the entire ML lifecycle for better performance and scalability.

Sentiment: Positive | Time to Impact: Short-term

Etsy Adds AI-Generated Item Guidelines in New Seller Policy

TechCrunch | https://techcrunch.com/2024/07/09/etsy-new-seller-policy-2024-generative-ai/

Etsy's new seller policy introduces guidelines for AI-generated items, emphasizing transparency and originality. Sellers using AI to create products must clearly disclose AI involvement, ensuring buyers are informed. The policy aims to maintain trust and authenticity on the platform while embracing technological advancements in product creation.

Sentiment: Neutral | Time to Impact: Short-term


Releases and Announcements

Meta AI Develops Compact Language Model for Mobile Devices

VentureBeat | https://venturebeat.com/ai/meta-ai-develops-compact-language-model-for-mobile-devices/

Meta AI has created a compact language model specifically designed for mobile devices. This model aims to enhance AI functionalities on smartphones, improving performance while maintaining efficiency. The development represents a significant step in making advanced AI accessible on portable devices, addressing the growing demand for mobile AI applications.

Sentiment: Positive | Time to Impact: Short-term

AI Startup Hebbia Raised $130M at a $700M Valuation

Yahoo Style | https://uk.style.yahoo.com/ai-startup-hebbia-raised-130m-230429894.html

AI startup Hebbia has secured $130 million in funding, valuing the company at $700 million. The funding round aims to bolster Hebbia's AI-driven research tools that enhance data analysis and information retrieval. Already profitable on $13 million in revenue, Hebbia plans to expand its offerings and improve its AI capabilities to assist professionals in extracting insights from complex data sets.

Sentiment: Positive | Time to Impact: Mid-term

AWS App Studio Promises to Generate Enterprise Apps from a Written Prompt

TechCrunch | https://techcrunch.com/2024/07/10/aws-app-studio-promises-to-generate-enterprise-apps-from-a-written-prompt/

AWS App Studio introduces a new service that allows users to generate enterprise applications using simple written prompts. This innovation aims to streamline the app development process, making it accessible to non-technical users and enhancing productivity. The tool leverages AWS's robust infrastructure to support scalable and efficient app creation.

Announcement: https://aws.amazon.com/app-studio/

Sentiment: Positive | Time to Impact: Short-term


AI in Journalism

Editor Questions Use of AI in Local Journalism

HoldtheFrontPage | https://www.holdthefrontpage.co.uk/2024/news/editor-questions-use-of-ai-in-local-journalism/

An editor has raised concerns about the integration of AI in local journalism, arguing that it could undermine the quality and integrity of news reporting. The discussion highlights fears that AI might replace human journalists, leading to a loss of nuanced, community-focused journalism. The editor calls for a balanced approach that leverages AI for efficiency while maintaining the human touch essential to quality journalism.

Sentiment: Negative | Time to Impact: Short-term


AI Healthcare

AI-Driven Behavior Change Could Transform Health Care

[EDITORS NOTE: This is Sam Altman’s and Ariana Huffington’s new project, Thrive AI Health]

Time | https://time.com/6994739/ai-behavior-change-health-care/

AI can drive behavior change to improve health outcomes by offering hyper-personalized recommendations for sleep, diet, exercise, stress management, and social connections. Projects like Thrive AI Health aim to develop AI health coaches that use personal data to provide real-time nudges and suggestions, making healthy habits accessible to all and potentially reversing trends in chronic diseases.

Sentiment: Positive | Time to Impact: Mid-term


AI in Education

What Aspects of Teaching Should Remain Human?

Wired | https://www.wired.com/story/what-aspects-of-teaching-should-remain-human

The article discusses the irreplaceable aspects of human teaching, emphasizing the importance of empathy, personalized feedback, and the ability to inspire and connect with students on an emotional level. While AI can assist with administrative tasks and provide data-driven insights, the human elements of teaching foster critical thinking, creativity, and a deeper understanding of complex subjects. Maintaining a balance between AI assistance and human interaction is crucial for effective education.

Sentiment: Positive | Time to Impact: Long-term


AI in Software Development

How Good Is ChatGPT at Coding, Really? Study finds that while AI can be great, it also struggles due to training limitations

IEEE Spectrum | https://spectrum.ieee.org/chatgpt-for-coding

A study in IEEE Transactions on Software Engineering evaluated ChatGPT's coding capabilities. ChatGPT demonstrated an extremely broad range of success at producing functional code, with success rates ranging from as low as 0.66 percent to as high as 89 percent, depending on the difficulty of the task, the programming language, and a number of other factors. ChatGPT performed better on pre-2021 problems but struggled with newer ones, highlighting the limitations of its training data. It generally produced code with good runtime and memory efficiency, but it had trouble fixing its own errors and produced some vulnerable code, necessitating developer oversight.

Sentiment: Neutral | Time to Impact: Immediate to Short-term


AI Ethics

Surprising Gender Biases in GPT

PsyArXiv | https://osf.io/preprints/psyarxiv/mp27q

The study explores gender biases in GPT models through seven experiments. It reveals that GPT often attributes stereotypically masculine phrases to females and shows bias in moral dilemmas, finding it more acceptable to abuse men over women in high-stakes scenarios. These biases suggest that inclusivity efforts may unintentionally create new forms of discrimination, emphasizing the need for careful management in AI training.

Sentiment: Neutral | Time to Impact: Mid-term

Microsoft's AI speech generator achieves human parity but is too dangerous for the public

TechSpot | https://www.techspot.com/news/103761-microsoft-ai-speech-generator-achieves-human-parity-but.html

Microsoft's latest AI speech tool, Vall-E 2, has achieved human parity in speech naturalness, robustness, and similarity. Despite its success, Microsoft will not release it publicly due to potential misuse risks, like impersonation. Vall-E 2's advancements include grouped code modeling and repetition-aware sampling, making it highly lifelike. Potential applications include education and accessibility, but its release remains restricted.

Sentiment: Concerned | Time to Impact: Mid-term


AI Carbon Footprint

The Unknown Toll of the AI Takeover

The Lever | https://www.levernews.com/the-unknown-toll-of-the-ai-takeover

AI technologies, such as Google's AI-driven search engine, significantly increase electricity consumption, with estimates suggesting that AI-generated search results could match Ireland's entire energy usage. The article highlights concerns about the environmental impact and resource consumption associated with AI advancements, questioning why these costs aren't more closely monitored.

Sentiment: Negative | Time to Impact: Short-term

AI's Energy Demands Are Out of Control. Welcome to the Internet's Hyper-Consumption Era

Wired | https://www.wired.com/story/ai-energy-demands-water-impact-internet-hyper-consumption-era/

Generative AI's resource-intensive operations are straining power grids and water supplies, leading to significant environmental impacts. AI models are around 100 to 1,000 times more computationally intensive than traditional services, consuming massive amounts of electricity and water. Companies like Google and Microsoft face challenges in managing their sustainability goals while advancing AI technology, emphasizing the need for efficient hardware and renewable energy solutions.

Sentiment: Negative | Time to Impact: Mid-term

Generative AI is a Climate Disaster

Disconnect Blog | https://disconnect.blog/generative-ai-is-a-climate-disaster/

Generative AI's energy consumption is exacerbating climate issues, as data centers required for AI operations use immense electricity and water resources. Companies like Microsoft and Google have seen significant increases in emissions, undermining their environmental pledges. Despite the high resource demands, tech giants continue to expand AI capabilities, driven by market competition. Critics argue that this push towards AI is prioritizing corporate profits over environmental sustainability and could accelerate the climate crisis.

Sentiment: Strong Negative | Time to Impact: Immediate to Mid-term

New AI Training Technique Is Drastically Faster, Says Google

Decrypt | https://decrypt.co/238730/new-ai-training-technique-is-drastically-faster-says-google

Google's DeepMind researchers have developed a new AI training method, JEST, which is up to 13 times faster and 10 times more efficient than previous techniques. This approach reduces computational resources and energy consumption significantly, potentially mitigating the environmental impact of AI development.

Sentiment: Positive | Time to Impact: Short-term


AI in Robotics

China’s Laws of Robotics: Shanghai Publishes First Humanoid Robot Guidelines

South China Morning Post | https://www.scmp.com/tech/policy/article/3269500/chinas-laws-robotics-shanghai-publishes-first-humanoid-robot-guidelines

Shanghai introduced China's first humanoid robot guidelines, emphasizing human security and dignity. The guidelines call for risk controls, emergency response systems, and ethical training. Published during the World Artificial Intelligence Conference, they advocate for international collaboration and a global governance framework. The initiative aims to advance China's leadership in robotics, with plans for mass production by 2025 and economic integration by 2027.

Sentiment: Positive | Time to Impact: Mid-term


Interesting Papers & Articles on Applied AI

Mind-Reading AI Recreates What You're Looking At With Amazing Accuracy

New Scientist | https://www.newscientist.com/article/2438107-mind-reading-ai-recreates-what-youre-looking-at-with-amazing-accuracy/

Researchers have developed AI systems that can accurately reconstruct images based on brain activity recordings. By focusing on specific brain regions, these systems have achieved unprecedented accuracy in recreating what a person or animal sees. This technology could advance the development of brain implants for vision restoration.

Sentiment: Positive | Time to Impact: Mid-term

Can AI Be Superhuman? Flaws in Top Gaming Bot Cast Doubt

Nature | https://www.nature.com/articles/d41586-024-02218-7

A study on AI systems like KataGo reveals that these seemingly superhuman AIs can be easily defeated by adversarial attacks. Despite attempts to strengthen KataGo through various defensive strategies, adversarial bots exploited weaknesses, raising doubts about the robustness and reliability of advanced AI systems. The findings suggest that achieving truly superhuman AI capabilities might be more challenging than previously thought.

Sentiment: Neutral | Time to Impact: Mid-term

Tackling Hallucination in Large Language Models: A Survey of Cutting-Edge Techniques

Unite.ai | https://www.unite.ai/tackling-hallucination-in-large-language-models-a-survey-of-cutting-edge-techniques/

The article surveys techniques to mitigate hallucinations in large language models (LLMs), which generate factually incorrect content. Methods include retrieval augmentation, feedback loops, and prompt tuning. Model development approaches focus on improving knowledge grounding and supervised fine-tuning. Despite progress, challenges like computational costs and ensuring generalizability persist. Future directions emphasize hybrid techniques and causality modeling to enhance model reliability and safety.

Sentiment: Neutral | Time to Impact: Mid-term

People Cannot Distinguish GPT-4 from a Human in a Turing Test

arXiv | https://arxiv.org/abs/2405.08007

A study evaluating GPT-4, GPT-3.5, and ELIZA in a Turing test found that GPT-4 was judged to be human 54% of the time, outperforming GPT-3.5 and ELIZA. However, humans were correctly identified as human 67% of the time. The findings suggest that current AI systems can deceive people into believing they are human, highlighting potential risks for online interactions and misinformation.

Sentiment: Neutral | Time to Impact: Short-term


The Path to AGI

LLMs and the Curious Notion of Panprotopsychism

Psychology Today | https://www.psychologytoday.com/gb/blog/the-digital-self/202407/llms-and-the-curious-notion-of-panprotopsychism

The article explores the concept of panprotopsychism, suggesting that consciousness may be a fundamental aspect of all entities, including large language models (LLMs). While LLMs' complex architectures might hint at proto-consciousness, this remains speculative and distinct from human consciousness. The discussion encourages interdisciplinary dialogue to better understand consciousness in both AI and human contexts.

Sentiment: Neutral | Time to Impact: Long-term


About the Curious AI Newsletter

AI is hype. AI is a utopia. AI is a dystopia.

These are the narratives currently being told about AI. There are mixed signals for each scenario. The truth will lie somewhere in between. This newsletter provides a curated overview of positive and negative data points to support decision-makers in forecasts and horizon scanning. The selection of news items is intended to provide a cross-section of articles from across the spectrum of AI optimists, AI realists, and AI pessimists and showcase the impact of AI across different domains and fields.

The news is curated by Oliver Rochford, technologist and former Gartner Research Director. AI (ChatGPT) is used in analysis and for summaries.


Want to summarize your news articles using ChatGPT? Here's the latest iteration of the prompt. The Curious AI Newsletter is brought to you by the Cyber Futurists.
