The Curious AI #28
Oliver Rochford
Evangelist @ Auguria | Technologist | Cyberfuturist | Startup Advisor | Former Gartner Analyst
May 24, 2024: The Uncertain Impact of AI
Welcome to issue #28 of the Curious AI Newsletter, curated by cyber futurist and former Gartner Research Director Oliver Rochford and synthesized by AI.
"AI Is Getting A Bad Reputation, Really Quickly"
Bindu Reddy, CEO of @abacusai (source: X)
The Uncertain Impact of AI according to surveys
Eighteen months have passed since ChatGPT was released on November 30, 2022, and we still seem no closer to agreeing on what value and impact GenAI has delivered. The data remains ambiguous. According to the Work Trend Index 2024 published by Microsoft and LinkedIn, 75 percent of survey participants (N = 31,000 employees across 31 countries) already use AI at work. A recent Gartner survey found that 29% of respondents had deployed generative AI. Another study, this time by the Reuters Institute and Oxford University (N = 12,000 people in six countries), found that frequent use of ChatGPT is rare, with just 1% using it daily in Japan, 2% in France and the UK, and 7% in the USA.
Data is King
We may be on safer ground with a report that the FinTech company Klarna has cut marketing costs by $10 million annually: $6 million saved in image production and $4 million in reduced external marketing spending. Interestingly, Klarna says it is now producing more design collateral because production costs have fallen, Jevons paradox in action. Klarna is bullish on AI and recently deployed GenAI customer service agents, claiming they do the equivalent work of 700 full-time human agents.
AI Regulation
AI firms mustn't govern themselves, say ex-members of OpenAI's board
The Economist | https://www.economist.com/by-invitation/2024/05/26/ai-firms-mustnt-govern-themselves-say-ex-members-of-openais-board
Former members of OpenAI's board argue that AI companies should not be allowed to self-regulate. They stress the need for independent oversight to ensure that AI development aligns with societal values and public safety. The ex-board members highlight the risks associated with unchecked AI growth, advocating for robust external governance structures to mitigate potential harms and ensure the ethical use of AI technologies.
Sentiment: Cautious; Time to Impact: Immediate to Mid-term
Mapping the Regulatory Landscape for New Technologies
Sarah Kreps discusses the complexities of regulating emerging technologies like AI and quantum computing. Congress is often ill-equipped to manage these regulations due to expertise gaps and slow legislative processes. Instead, non-legislative actors, including scientists, the media, investors, and the public, play crucial roles in shaping tech policy. This diverse regulatory ecosystem helps balance innovation with oversight, addressing potential risks without relying solely on new legislation.
Sentiment: Neutral; Time to Impact: Mid-term to Long-term
AI Geopolitics
New AI Chatbot Trained on 'Xi Jinping Thought'
China has developed a new AI chatbot trained on "Xi Jinping Thought," reflecting the ideological parameters set by the Chinese government. The chatbot is based on seven databases, primarily focusing on information technologies and Xi's doctrine. Intended for cybersecurity and IT research, the chatbot demonstrates China's commitment to integrating ideological education with AI development.
Sentiment: Neutral; Time to Impact: Mid-term
The next wave of AI hype will be geopolitical. You’re paying
Financial Times: https://www.ft.com/content/a60c3c7b-1c48-485d-adb7-5bc2b7b1b650
The article discusses how governments worldwide are heavily investing in AI, driven by geopolitical competition rather than pure economic return. Nations are pouring billions into AI infrastructure, with public spending expected to exceed $25 billion annually. This geopolitical race is anticipated to drive significant spending, particularly on hardware, benefiting companies like Nvidia. The focus on AI safety and security underscores its importance in national strategies, despite its questionable immediate practical utility.
Sentiment: Cautious; Time to Impact: Immediate to Mid-term
OpenAI Exposes 5 Propaganda Networks
Interesting Engineering: https://interestingengineering.com/culture/openai-exposes-5-propaganda-networks
OpenAI identified and exposed five extensive propaganda networks leveraging AI to disseminate disinformation. These networks utilize automated systems to generate and spread misleading content on a massive scale, aiming to manipulate public opinion and destabilize societies. The revelation underscores the growing challenge of combating AI-driven disinformation and highlights the need for robust measures to detect and counteract such threats effectively.
Sentiment: Concerned; Time to Impact: Immediate
Tinfoil AI
Bilderberg 2024: Google, DeepMind, Microsoft AI, and Anthropic Among Elite Guests
The 2024 Bilderberg meeting features elite attendees from tech giants including Google, DeepMind, Microsoft, and Anthropic, highlighting AI's growing influence on global affairs. The gathering brings together leaders to discuss critical geopolitical and economic issues, with AI and technology playing a central role. This meeting underscores the importance of AI in shaping future policies and strategies among the world's most powerful individuals.
Sentiment: Neutral; Time to Impact: Immediate to Short-term
AI Game of Thrones
OpenAI Researcher Who Resigned Over Safety Concerns Joins Anthropic
The Verge: https://www.theverge.com/2024/5/28/24166370/jan-leike-openai-anthropic-AI-safety-research
Jan Leike, a key researcher who resigned from OpenAI due to safety concerns, has joined Anthropic. At Anthropic, Leike will focus on scalable oversight, generalization, and alignment research. Anthropic, founded by former OpenAI employees, aims to prioritize ethical AI development. Leike's departure underscores concerns about OpenAI's shift towards commercial interests. His move to Anthropic, which employs "constitutional AI" principles, highlights ongoing debates about AI safety and ethics.
Sentiment: Concerned; Time to Impact: Immediate to Mid-term
OpenAI’s Helen Toner Explains Why Sam Altman Was Fired
The Verge | https://www.theverge.com/2024/5/28/24166713/openai-helen-toner-explains-why-sam-altman-was-fired
Former OpenAI board member Helen Toner explains that Sam Altman was fired due to differences in vision and governance concerns. The decision was driven by a need for stronger oversight and alignment with the organization's long-term goals. The board felt that new leadership was necessary to navigate the ethical and strategic challenges posed by AI development.
Sentiment: Neutral; Time to Impact: Immediate to Short-term
AI Business
Moving Past Gen AI’s Honeymoon Phase: Seven Hard Truths for CIOs to Get from Pilot to Scale
McKinsey outlines seven crucial steps for CIOs to scale generative AI, from pilot projects to full deployment. These include focusing on impactful pilots, integrating technology components effectively, managing costs, standardizing tools and infrastructure, building multidisciplinary teams, prioritizing relevant data, and creating reusable code. Successful scaling requires strategic investments, cross-functional collaboration, and robust governance to realize AI's potential business value.
Sentiment: Positive; Time to Impact: Immediate to Mid-term
Klarna using GenAI to cut marketing costs by $10 million annually
Channel News Asia | https://www.channelnewsasia.com/business/klarna-using-genai-cut-marketing-costs-10-million-annually-4368361
Klarna employs generative AI tools like Midjourney, DALL-E, and Firefly to reduce marketing expenses by $10 million annually. This includes $6 million in image production savings and $4 million from reduced external marketing services. The company also utilizes an OpenAI assistant for customer service tasks, equivalent to 700 full-time agents.
Sentiment: Positive; Time to Impact: Immediate
Big Tech develops AI networking standards, but without chip leader Nvidia
Major tech companies, including Meta, Microsoft, AMD, and Broadcom, have developed a new AI networking standard called the "Ultra Accelerator Link" to improve communication between AI accelerators in data centers. Notably, Nvidia, a dominant player in the AI chip market, is not part of this initiative. The new standard aims to reduce dependence on Nvidia and promote interoperability among different systems.
Sentiment: Neutral; Time to Impact: Immediate to Short-term
Daron Acemoglu is not having all this AI hype
Financial Times | https://www.ft.com/content/b375115f-278f-43a3-9a26-31d75e5cd319
MIT economist Daron Acemoglu criticizes the optimistic projections for AI's economic impact, suggesting modest gains in productivity and GDP over the next decade. He highlights the potential negative implications of AI, such as deepfakes, and doubts AI's effect on inequality. Acemoglu stresses the need for AI to focus on providing reliable information rather than developing human-like conversational tools.
Sentiment: Cautious; Time to Impact: Mid-term
OpenAI CTO Says Generative AI's Economic Impact Only Starting
OpenAI CTO Mira Murati emphasizes that the economic impact of generative AI is just beginning. The technology's potential to revolutionize various industries is significant, with applications extending beyond current uses. As AI continues to evolve, its integration into business processes and daily life will drive substantial economic growth, innovation, and efficiency improvements. The remarks highlight the transformative power of AI and the importance of continued investment and development in this field.
Sentiment: Optimistic; Time to Impact: Mid-term to Long-term
This Record Stock Market Is Riding on Questionable AI Assumptions
The Wall Street Journal: https://www.wsj.com/finance/stocks/this-record-stock-market-is-riding-on-questionable-AI-assumptions-cb890703
The stock market's current highs are largely driven by optimistic projections about AI's future economic impact. Investors are heavily betting on AI-related companies, driving stock prices up despite uncertainties about AI's real-world applications and profitability. Some analysts warn that these assumptions may be overly optimistic, potentially leading to a market correction if AI technologies do not deliver expected returns.
Sentiment: Cautious; Time to Impact: Immediate to Short-term
AI at Work
75% of Workers Are Already Using AI at Work
The Microsoft and LinkedIn Work Trend Index 2024 reveals that 75% of employees are already utilizing AI tools in the workplace, reflecting a significant shift towards integrating AI into daily tasks. This adoption is driven by the need to enhance productivity, creativity, and efficiency amidst increasing work demands. Despite this widespread use, many organizations lack comprehensive strategies to maximize AI's potential, leaving employees to incorporate AI into their routines on their own.
Sentiment: Positive; Time to Impact: Immediate
Generative AI Is Now the Most Frequently Deployed AI Solution in Organizations: Gartner Survey
NDTV Profit | https://www.ndtvprofit.com/technology/generative-ai-is-now-most-frequently-deployed-solution-in-organizations-gartner-survey
Generative AI has become the most widely deployed AI solution in organizations, according to a Gartner survey. Conducted in the fourth quarter of 2023, the survey revealed that 29% of respondents from the U.S., Germany, and the U.K. have implemented generative AI. This technology is being embedded into existing applications such as Microsoft's Copilot and Adobe Firefly, proving to be more common than other AI solutions like graph techniques and optimization algorithms.
Sentiment: Positive; Time to Impact: Immediate to Mid-term
AI products like ChatGPT much hyped but not much used, study says
A survey by the Reuters Institute and Oxford University reveals that only 2% of UK respondents use AI tools like ChatGPT daily. While young people (18–24) are more frequent users, there is a general "mismatch" between AI hype and public usage. Despite expectations of AI's significant societal impact, many remain unaware or skeptical about its benefits, especially concerning news and job security.
Sentiment: Mixed; Time to Impact: Mid-term
Hardly any of us are using AI tools like ChatGPT, study says. Here’s why
A study by the Reuters Institute and Oxford University found that most people rarely use AI tools like ChatGPT, with only 2% of UK respondents using them daily. Many have never heard of these tools, and those who have typically use them infrequently. The study highlights a "mismatch" between AI hype and actual usage, suggesting that current AI tools aren't yet integrated into daily tech routines.
Sentiment: Neutral; Time to Impact: Mid-term
Tech Workers Retool for Artificial-Intelligence Boom
Wall Street Journal | https://www.wsj.com/tech/ai/ai-skills-tech-workers-job-market-1d58b2dd
Tech workers are upskilling in AI to adapt to the job market, where AI proficiency is crucial. Despite high demand for AI skills, particularly with large language models, many find securing roles difficult. Companies are shifting towards AI, creating an unbalanced labor market with intense competition. Firms like OpenAI prioritize willingness to learn over specific AI experience, and AI job postings are growing, though still a small part of the overall tech market.
Sentiment: Positive; Time to Impact: Immediate to Mid-term
AI Is Making Economists Rethink the Story of Automation
Harvard Business Review | https://hbr.org/2024/05/ai-is-making-economists-rethink-the-story-of-automation
AI is prompting economists to reassess the traditional narrative of automation. Unlike past technologies, AI's capabilities in decision-making and creativity could reshape sectors well beyond manufacturing. Economists are now exploring how AI could influence job creation, economic productivity, and wage distribution. This shift emphasizes the need for new policies and educational strategies to address the broader implications of AI for the workforce and economy.
Sentiment: Neutral; Time to Impact: Mid-term to Long-term
Top VC Kai-Fu Lee says his prediction that AI will displace 50% of jobs by 2027 is ‘uncannily accurate’
Kai-Fu Lee, chairman and CEO of Sinovation Ventures, reaffirms his 2017 prediction that AI will displace 50% of jobs by 2027, describing it as "uncannily accurate." Speaking at the Fortune Innovation forum, Lee highlights the rapid advancement of generative AI like ChatGPT. He stresses the importance of adapting to AI, encourages the use of AI tools, and emphasizes human skills like trust, empathy, and emotional intelligence as essential and irreplaceable by AI.
Sentiment: Concerned; Time to Impact: Mid-term to Long-term
AI Carbon Footprint
The Ugly Truth: AI like ChatGPT is Guzzling Resources and Harming the Environment
The Guardian | https://www.theguardian.com/commentisfree/article/2024/may/30/ugly-truth-ai-chatgpt-guzzling-resources-environment
AI technologies such as ChatGPT consume vast amounts of energy and resources, contributing significantly to environmental degradation. The extensive computational power required for training and operating these models leads to substantial carbon footprints. Critics argue that while AI advancements offer many benefits, the environmental costs are often overlooked. To mitigate these impacts, there is a call for more sustainable practices and policies within the AI industry.
Sentiment: Concerned; Time to Impact: Immediate to Short-term
AI in Crime
How Criminals Are Leveraging AI to Create Convincing Scams
Tripwire: https://www.tripwire.com/state-of-security/how-criminals-are-leveraging-AI-to-create-convincing-scams
Cybercriminals are using generative AI tools like ChatGPT and Google Bard to create sophisticated scams, including pig butchering, inheritance, humanitarian relief scams, and triangulation fraud. These scams leverage AI to overcome language barriers, personalize communications, and create realistic fake accounts and online stores. This trend increases the scale and effectiveness of scams, making it crucial for users to remain vigilant and for organizations to enhance their cybersecurity measures.
Sentiment: Concerned; Time to Impact: Immediate
Deepfake scams have looted millions; experts warn it could get worse
Experts warn that deepfake scams have already resulted in millions of dollars in losses, with the potential for greater sophistication and scale. Deepfakes, synthetic media generated using AI, have been used to create convincing fraudulent videos and audio. This technology poses significant risks to financial security and personal privacy, prompting calls for stronger regulations and improved detection methods to combat the growing threat.
Sentiment: Concerned; Time to Impact: Immediate to Short-term
AI and Software Development
How AI assistants are already changing the way code gets made
MIT Technology Review | https://www.technologyreview.com/2023/12/06/1084457/ai-assistants-copilot-changing-code-software-development-github-openai/
AI coding assistants like GitHub's Copilot are transforming software development by offering real-time code suggestions, significantly improving productivity. While some firms hesitate due to privacy concerns, many developers find these tools valuable for learning new languages and speeding up coding tasks. However, challenges remain, including potential security flaws and the need for human oversight. The overall impact of these tools on the industry is still being evaluated.
Sentiment: Optimistic; Time to Impact: Immediate to Mid-term
ChatGPT Gives Wrong Answers to Programming Questions 52% of the Time, Study Finds
A study revealed that ChatGPT provided incorrect answers to programming questions 52% of the time. Researchers found that while ChatGPT's responses often appeared convincing, they were frequently wrong, raising concerns about its reliability for coding tasks. This highlights the importance of human oversight when using AI for technical problem-solving.
Sentiment: Concerned; Time to Impact: Immediate
AI and Copyright
OpenAI Didn’t Copy Scarlett Johansson’s Voice for ChatGPT, Records Show
Washington Post | https://www.washingtonpost.com/technology/2024/05/22/openai-scarlett-johansson-chatgpt-ai-voice/
Scarlett Johansson accused OpenAI of copying her voice for ChatGPT’s new voice feature, Sky, after she declined to license it. However, records and interviews reveal that OpenAI hired a different actress to provide the voice, which naturally resembled Johansson's. OpenAI stated that the casting and development of Sky's voice occurred months before contacting Johansson, refuting the claims of unauthorized use.
Sentiment: Concerned; Time to Impact: Immediate to Mid-term
NVIDIA denies pirated e-book sites are shadow libraries to shut down lawsuit
Ars Technica | https://arstechnica.com/tech-policy/2024/05/nvidia-denies-pirate-e-book-sites-are-shadow-libraries-to-shut-down-lawsuit/
NVIDIA is defending itself against a copyright lawsuit brought by authors who allege the company trained its AI models on pirated e-books sourced from so-called "shadow libraries." NVIDIA denies that the sites in question qualify as shadow libraries and is seeking to have the lawsuit dismissed, maintaining that it does not support or engage in the distribution of pirated content.
Sentiment: Negative; Time to Impact: Immediate to Short-term
AI Trust, Risk and Security Management
CEO of Google Says It Has No Solution for Its AI Providing Wildly Incorrect Information
Google CEO Sundar Pichai acknowledges that AI models like those used in Google's AI Overviews still produce incorrect information, an issue known as "hallucinations." Despite improvements, this problem remains unresolved. Pichai stresses the utility of AI while admitting ongoing errors, which have sparked criticism and concerns over the reliability of AI-generated information.
Sentiment: Concerned; Time to Impact: Immediate
The Emerging Artificial Intelligence Era Faces a Growing Threat from Directed Energy Weapons
Scientific American: https://www.scientificamerican.com/article/the-artificial-intelligence-era-faces-a-threat-from-directed-energy-weapons/
AI-enabled systems, reliant on optical and radio frequency sensors, are increasingly vulnerable to directed-energy weapons (DEWs) like lasers and microwaves. These weapons can disrupt or destroy the sensors and electronics crucial to autonomous systems. The U.S. Department of Defense invests heavily in DEW technologies, which are now operational. With the rise of autonomous platforms in various sectors, addressing these vulnerabilities during the design phase is essential to ensuring their robustness against DEWs.
Sentiment: Concerned; Time to Impact: Immediate to Mid-term
OpenAI's Latest Blunder Shows the Challenges Facing Chinese AI Models
MIT Technology Review | https://www.technologyreview.com/2024/05/22/1092763/openais-gpt4o-chinese-ai-data/
OpenAI's new AI model, GPT-4o, faces criticism for using poorly filtered Chinese training data, filled with spam content related to pornography and gambling. This issue highlights the broader challenge of obtaining high-quality Chinese datasets due to China's fragmented internet. The lack of reliable data impacts AI performance and increases the risk of errors, underscoring the need for better data curation practices.
Sentiment: Concerned; Time to Impact: Immediate to Mid-term
The Hazards of Putting Ethics on Autopilot
MIT Sloan Management Review: https://sloanreview.mit.edu/article/the-hazards-of-putting-ethics-on-autopilot/
This article explores the risks of relying too heavily on automated systems for ethical decision-making. It argues that while AI can enhance efficiency, it lacks the nuanced understanding required for ethical judgments. The authors stress the importance of human oversight to ensure decisions align with societal values and moral standards. They recommend integrating ethical frameworks into AI development processes and maintaining a balance between automation and human intervention.
Sentiment: Cautious; Time to Impact: Immediate to Mid-term
AI in Science
AI Chatbots Have Thoroughly Infiltrated Scientific Publishing
Scientific American: https://www.scientificamerican.com/article/chatbots-have-thoroughly-infiltrated-scientific-publishing/
A recent analysis found that 1% of scientific articles published in 2023 showed signs of generative AI involvement. Researchers fear misuse of AI, such as ChatGPT, in scientific literature due to issues like factual inaccuracies and fabricated citations. Indicators of AI-generated content include specific phrases and stylistic patterns. The prevalence of such usage has sparked concerns about the integrity of scientific publishing.
Sentiment: Concerned; Time to Impact: Immediate to Mid-term
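As a flavor of how such analyses screen for AI involvement, here is a minimal sketch that flags documents containing telltale chatbot phrases. The marker list is invented for illustration; the actual study relied on its own statistical indicators of AI-generated text.

```python
# Toy screening pass: flag articles containing telltale chatbot phrases.
# The marker list below is invented for illustration only.
MARKERS = [
    "as an ai language model",
    "regenerate response",
    "as of my last knowledge update",
]

def flag_ai_markers(text: str) -> list[str]:
    """Return the marker phrases found in a document (case-insensitive)."""
    lowered = text.lower()
    return [marker for marker in MARKERS if marker in lowered]

sample = "Certainly! As an AI language model, I cannot verify these results."
print(flag_ai_markers(sample))  # ['as an ai language model']
```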
Google DeepMind’s weather AI can forecast extreme weather faster and more accurately
MIT Technology Review: https://www.technologyreview.com/2023/11/14/1083366/google-deepminds-weather-ai-can-forecast-extreme-weather-quicker-and-more-accurately/
Google DeepMind's AI model, GraphCast, can predict weather conditions up to 10 days in advance more accurately and more quickly than traditional models. It significantly outperformed the European Centre for Medium-Range Weather Forecasts model in 90% of test areas and offered early warnings for extreme weather events. GraphCast uses graph neural networks and four decades of historical data to make predictions in under a minute, enhancing preparedness for natural disasters (see the sketch after this item for the core idea).
Sentiment: Optimistic; Time to Impact: Immediate to Mid-term
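For readers curious about the mechanics: the sketch below shows one message-passing step, the basic operation of the graph neural networks that GraphCast builds on. This is a minimal illustration, not GraphCast's actual code; the graph, feature sizes, and weights are all toy values.

```python
# One graph-neural-network message-passing step (illustrative sketch).
# All node counts, feature sizes, and weights are made-up toy values.
import numpy as np

rng = np.random.default_rng(0)

num_nodes, feat_dim = 6, 4                   # e.g., grid points with weather features
x = rng.normal(size=(num_nodes, feat_dim))   # node states (toy data)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]  # toy ring graph

W_msg = rng.normal(size=(feat_dim, feat_dim))      # message weights (random)
W_upd = rng.normal(size=(2 * feat_dim, feat_dim))  # update weights (random)

def message_passing_step(x, edges):
    """Aggregate messages from neighbors, then update each node state."""
    agg = np.zeros_like(x)
    for src, dst in edges:
        agg[dst] += np.tanh(x[src] @ W_msg)        # message along the edge
    # Concatenate each node's state with its aggregated messages, then update.
    return np.tanh(np.concatenate([x, agg], axis=1) @ W_upd)

x_next = message_passing_step(x, edges)
print(x_next.shape)  # (6, 4): updated state for every node
```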
AI in Law
Promises and pitfalls of artificial intelligence for legal applications
This article explores the potential benefits and challenges of integrating artificial intelligence (AI) in legal practices. It highlights how AI can improve efficiency, accuracy, and accessibility in legal processes, but also warns of ethical concerns, biases in AI systems, and the need for regulatory frameworks to manage AI's impact on the legal field.
Sentiment: Mixed; Time to Impact: Mid-term
AI and Society
The AI Mirror: How Technology Blocks Human Potential
Financial Times: https://www.ft.com/content/67d38081-82d3-4979-806a-eba0099f8011
In "The AI Mirror," Shannon Vallor argues that AI technologies like ChatGPT, which reflect human behavior and values, limit our practical wisdom and human potential. She emphasizes that these tools, despite their intelligence, lack true understanding and threaten our ability to address significant issues like climate change. Vallor calls for a focus on human creativity and wisdom to redirect technology for societal benefit.
Sentiment: Cautious; Time to Impact: Mid-term
AI Cyborgs
Neuralink rival sets brain chip record with 4,096 electrodes on human brain
Ars Technica | https://arstechnica.com/science/2024/05/neuralink-rival-sets-brain-chip-record-with-4096-electrodes-on-human-brain/
A competitor to Neuralink has set a new record by implanting a brain chip with 4,096 electrodes into a human brain. This achievement marks a significant advancement in brain-machine interface technology, potentially enhancing the ability to read and interpret brain activity with higher precision. The development promises improvements in neuroprosthetics and treatments for neurological disorders.
Sentiment: Optimistic; Time to Impact: Mid-term
AI and Robotics
Crushing It: Autonomous AI Robot Creates a Shock-Absorbing Shape No Human Ever Could
SciTechDaily | https://scitechdaily.com/crushing-it-autonomous-AI-robot-creates-a-shock-absorbing-shape-no-human-ever-could/
Boston University's autonomous AI robot, MAMA BEAR, has developed a highly efficient shock-absorbing structure with 75% energy absorption, surpassing previous records. Utilizing Bayesian optimization and continuous learning, the robot iteratively 3D prints and crushes plastic shapes to improve its design. This innovation has applications in protective gear, packaging, and automotive safety, showcasing the potential of autonomous robots in advanced material design and engineering.
Sentiment: Positive; Time to Impact: Immediate to Mid-term
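To make the method concrete, here is a minimal Bayesian-optimization loop in the spirit of MAMA BEAR's print-and-crush cycle: a Gaussian-process surrogate plus an expected-improvement rule picks the next design to test. The objective function and every constant are invented stand-ins, not the lab's real setup.

```python
# Minimal Bayesian-optimization sketch: propose a design parameter,
# "test" it, update a surrogate model, repeat. Toy values throughout.
import math
import numpy as np

rng = np.random.default_rng(1)

def absorption(theta):
    """Toy objective standing in for a measured crush test (higher is better)."""
    return math.exp(-(theta - 0.6) ** 2 / 0.05) + 0.05 * rng.normal()

def rbf(a, b, ls=0.15):
    """Squared-exponential kernel between two 1-D arrays of points."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ls ** 2))

def gp_posterior(X, y, Xs, noise=1e-3):
    """Gaussian-process mean/std at query points Xs given data (X, y)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = 1.0 - np.sum(Ks * (Kinv @ Ks), axis=0)
    return mu, np.sqrt(np.clip(var, 1e-9, None))

def expected_improvement(mu, sd, best):
    """Expected improvement of each candidate over the best result so far."""
    z = (mu - best) / sd
    Phi = 0.5 * (1 + np.array([math.erf(v / math.sqrt(2)) for v in z]))
    phi = np.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)
    return (mu - best) * Phi + sd * phi

X = np.array([0.1, 0.9])                  # two initial "printed" designs
y = np.array([absorption(t) for t in X])
grid = np.linspace(0, 1, 200)             # candidate design parameters

for _ in range(10):                       # ten print-and-crush iterations
    mu, sd = gp_posterior(X, y, grid)
    theta = grid[np.argmax(expected_improvement(mu, sd, y.max()))]
    X = np.append(X, theta)
    y = np.append(y, absorption(theta))

print(f"best design parameter ~ {X[np.argmax(y)]:.2f}, absorption ~ {y.max():.2f}")
```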
The Path to AGI
No, Today’s AI Isn’t Sentient. Here’s How We Know
Artificial general intelligence (AGI) implies an AI that is as intelligent as a human in all respects. Current AI, like ChatGPT, lacks sentience: the ability to have subjective experiences. While some argue that AI's responses indicate sentience, AI lacks the physiological states necessary for true subjective experience. An LLM's responses are probabilistic, sampled from a distribution over possible tokens rather than genuine expressions of consciousness or emotion; the sketch below illustrates this.
Sentiment: Neutral; Time to Impact: Long-term
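The "probabilistic, not felt" point is easy to demonstrate: a language model outputs scores over tokens, and a response is just a sample from the resulting distribution. The vocabulary and scores below are invented for illustration.

```python
# Toy illustration: an LLM "response" is a sample from a softmax
# distribution over tokens. Vocabulary and scores are invented.
import numpy as np

rng = np.random.default_rng(42)
vocab = ["happy", "sad", "fine", "uncertain"]
logits = np.array([2.0, 0.5, 1.2, 0.1])   # made-up model scores
temperature = 0.8

probs = np.exp(logits / temperature)
probs /= probs.sum()                        # softmax over the vocabulary

# Sampling the same prompt twice can yield different "feelings".
print(rng.choice(vocab, p=probs), rng.choice(vocab, p=probs))
```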
The Dangerous Illusion of AI Consciousness
Shannon Vallor argues that the latest AI models, like OpenAI's GPT-4o, present an illusion of consciousness, which is ethically problematic. These AI systems, though more advanced, still lack true consciousness and sentience. The anthropomorphic design can mislead users, making them vulnerable to manipulation. Vallor highlights the risks of believing AI can understand or feel like humans, urging stronger safeguards to prevent such misconceptions.
Sentiment: Concerned; Time to Impact: Immediate to Mid-term
Sci-fi Author Martha Wells on What a Machine Intelligence Might Want
New Scientist | https://www.newscientist.com/article/2432947-sci-fi-author-martha-wells-on-what-a-machine-intelligence-might-want/
Martha Wells discusses the themes of her novella "All Systems Red," focusing on the desires and autonomy of machine intelligence. She explores the concept through the character of Murderbot, a sentient construct that gains freedom by hacking its control module. Contrary to human fears of a violent rebellion, Murderbot seeks comfort in entertainment and personal autonomy, reflecting a nuanced view of AI behavior and ethics.
Sentiment: Reflective; Time to Impact: Long-term
Mapping Human Consciousness: A Breakthrough Study
A recent study has made significant strides in mapping human consciousness, providing new insights into how different brain regions interact to produce conscious experience. Researchers utilized advanced neuroimaging techniques to track brain activity patterns, revealing distinct neural pathways linked to various states of consciousness. This breakthrough enhances our understanding of the brain's complex functions and could inform future treatments for neurological disorders.
Sentiment: Optimistic; Time to Impact: Mid-term to Long-term
Interesting Papers & Applied Articles
What We Learned from a Year of Building with LLMs (Part I)
The article shares insights from a year of developing applications with large language models (LLMs). It covers best practices and common challenges, emphasizing effective prompting techniques, retrieval-augmented generation, and structured workflows. The authors highlight the importance of proper evaluation and monitoring to ensure reliable and efficient AI implementations.
Sentiment: Informative; Time to Impact: Immediate to Mid-term
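As a concrete taste of the retrieval-augmented-generation pattern the article discusses, here is a minimal sketch: embed documents, retrieve the closest ones to a query, and splice them into the prompt. The bag-of-words "embedding" is deliberately crude, and the final LLM call is left as a hypothetical stub.

```python
# Minimal RAG sketch: retrieve the most relevant snippets, then splice
# them into the prompt. Real systems use learned embeddings; this toy
# version uses bag-of-words counts, and `call_llm` is a hypothetical stub.
from collections import Counter
import math

docs = [
    "Klarna used generative AI to cut image-production costs.",
    "GraphCast forecasts weather ten days ahead using graph networks.",
    "Jan Leike left OpenAI and joined Anthropic to work on alignment.",
]

def embed(text):
    """Toy embedding: lowercase bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=2):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query):
    """Splice retrieved context into the prompt sent to the model."""
    context = "\n".join(f"- {d}" for d in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# call_llm(build_prompt(...)) would go here; printing the prompt instead.
print(build_prompt("Who moved from OpenAI to Anthropic?"))
```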
About the Curious AI Newsletter
AI is hype. AI is a utopia. AI is a dystopia.
These are the narratives currently being told about AI. There are mixed signals for each scenario. The truth will lie somewhere in between. This newsletter provides a curated overview of positive and negative data points to support decision-makers in forecasts and horizon scanning. The selection of news items is intended to provide a cross-section of articles from across the spectrum of AI optimists, AI realists, and AI pessimists and showcase the impact of AI across different domains and fields.
The news is curated by Oliver Rochford, a technologist and former Gartner Research Director. AI (ChatGPT) is used in analysis and for summaries.
Want to summarize your news articles using ChatGPT? Here's the latest iteration of the prompt. The Curious AI Newsletter is brought to you by the Cyber Futurists.