One-Third Budget for Safety
NOVEMBER 2, 2023
Hi there,
Welcome to this week's edition of our newsletter. Our focus this time is a thought-provoking paper authored by Turing Award winners Yoshua Bengio, Geoffrey Hinton, Andrew Yao, and others, addressing growing concerns about the power and potential risks of AI systems.
We also discuss methods like retrieval augmentation and model compression that address AI hallucinations and democratize AI technology, and we round up the latest AI news along with practical tips for leveraging AI in everyday business.
AI top story
AI Titans Demand One-Third Budget for Safety
Are we on the verge of an AI apocalypse? Turing Award winners demand a radical change in AI budgets. The paper, authored by Turing awardees Yoshua Bengio, Geoffrey Hinton, Andrew Yao, and others, highlights growing concerns about the potential risks posed by increasingly powerful AI systems. Here are some key points from the paper:
1. The authors emphasize that their concerns are based on research and facts, distinguishing their approach from previous vague declarations or media appearances by experts. They back their warnings with evidence and documentation.
2. The authors call for major AI companies to allocate at least one-third of their budgets specifically for safety considerations in AI research and development. They argue that this allocation should happen before it's too late to address potential existential risks.
3. The paper focuses primarily on the most powerful AI systems, particularly those trained on billion-dollar supercomputers, such as frontier models like GPT-4 and Google's Gemini. These systems are seen as having the most hazardous and unpredictable capabilities.
4. The authors suggest that regulators should have access to AI systems before deployment to assess them for dangerous capabilities, such as autonomous self-replication or hacking into computer systems; they note that the U.S. already has a voluntary agreement to this effect. The paper also argues that governments should have comprehensive insight into AI development, including model registration, whistleblower protections, incident reporting, and monitoring of model development and supercomputer usage. President Biden has since signed an executive order directing the development of AI safety guidelines.
5. The paper expresses deep concern about the development of autonomous AI systems that can plan, act, and pursue goals independently. The authors highlight that there is a lack of understanding about how to reliably align AI behavior with complex values, even with well-meaning developers.
6. The authors acknowledge the possibility that generalist AI systems could outperform human abilities in critical domains within the next decade or so, and they emphasize the urgency of addressing AI safety.
7. The authors call on major AI companies to outline the specific safety measures they will take if certain red-line capabilities are found in their AI systems. In our previous newsletter, we shared some data on AI transparency; the numbers are surprisingly low.
Latest news
What we've been reading
Companies aren't spending big on AI. Here's why that cautious approach makes sense (4-minute read)
Expert talks
Taming AI hallucinations and democratizing AI power
We recently had an engaging discussion with Jonathan Hodges about various aspects of AI. Here are some intriguing highlights from our conversation:
Control of AI "hallucinations": One of the significant challenges with AI models, particularly large language models (LLMs), is their tendency to generate responses from patterns in their training data, which can be inaccurate or irrelevant. The UC Berkeley bot is specifically designed to control these hallucinations, ensuring that the information it provides is accurate and relevant to the user's queries.

The bot employs a method called retrieval augmentation: a database of information is indexed so the bot can reference it when answering questions. When a query arrives, the system breaks the question into components and uses measures like cosine similarity to find the most relevant information in its indexed database. At runtime, rather than relying solely on its pre-trained knowledge, the model generates its response from those retrieved chunks of information, which yields more targeted and accurate answers.
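To make the idea concrete, here is a minimal sketch of the retrieval step. It assumes nothing about the UC Berkeley bot's actual stack: TF-IDF vectors stand in for the learned embeddings a production system would use, and the final prompt shows where an LLM would consume the retrieved chunks.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A toy "indexed database" of information chunks.
chunks = [
    "Retrieval augmentation grounds LLM answers in an external document index.",
    "Cosine similarity measures the angle between two vectors, ignoring magnitude.",
    "Model compression shrinks large networks so they run on modest hardware.",
]

# Index the chunks; TF-IDF stands in for learned embeddings here.
vectorizer = TfidfVectorizer()
chunk_vectors = vectorizer.fit_transform(chunks)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query by cosine similarity."""
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, chunk_vectors)[0]
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]

query = "How does retrieval augmentation reduce hallucinations?"
context = retrieve(query)
# The LLM is then told to answer *from these chunks*, not from memory alone:
prompt = "Answer using only this context:\n" + "\n".join(context) + f"\n\nQuestion: {query}"
print(prompt)
```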
Democratization of AI: Jonathan discussed a technique called "model compression" in the context of recent AI advancements. Model compression makes large AI models, like Meta's LLaMA with 70 billion parameters, smaller and more manageable, allowing these complex models to run on less powerful hardware, such as a laptop. That makes it easier for individuals and smaller organizations to use advanced AI technology without expensive, high-end computing resources. The technique aims to retain the model's capabilities while reducing its size and computational requirements, democratizing access to powerful AI tools.
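The sketch below is a toy illustration of one common form of model compression, post-training quantization, and not how LLaMA is actually compressed in practice: it maps a float32 weight matrix onto int8 values, cutting memory four-fold, and measures the round-trip error.

```python
import numpy as np

# Pretend weight matrix from one layer of a large model (float32 = 4 bytes/value).
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.02, size=(4096, 4096)).astype(np.float32)

def quantize_int8(w: np.ndarray):
    """Map float32 weights onto the int8 range with a single scale factor."""
    scale = np.abs(w).max() / 127.0
    return np.round(w / scale).astype(np.int8), scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from the int8 representation."""
    return q.astype(np.float32) * scale

q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

print(f"original: {weights.nbytes / 1e6:.1f} MB, quantized: {q.nbytes / 1e6:.1f} MB")
print(f"mean absolute round-trip error: {np.abs(weights - restored).mean():.6f}")
```

Real toolchains use more sophisticated schemes (per-channel scales, 4-bit formats, distillation), but the trade-off is the same: a small loss in precision for a large drop in memory and compute.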
What are your thoughts?
Tips and tricks
Create stunning logos similar to your favorite brands in seconds
1. Go to OpenAI's ChatGPT, upload a picture of a logo, and write a prompt: "I want to generate a logo similar to this one. Help me craft a DALL-E 3 prompt to recreate a similar logo". In our case, we uploaded the logo of our favorite coffee brand, Starbucks.
2. Result: "Generate a logo with a green circular background, featuring a stylized siren or mermaid with wavy hair, a crown with a star, and a welcoming expression, reminiscent of the Starbucks style."
3. Open a new chat and enable DALL-E 3. Paste the prompt and wait for the magic to happen!
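If you prefer to script this workflow instead of using the chat UI, here is a minimal sketch using OpenAI's official Python SDK (the `openai` package); it assumes an `OPENAI_API_KEY` environment variable is set and reuses the prompt generated in step 2.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# The ChatGPT-crafted prompt from step 2.
prompt = (
    "Generate a logo with a green circular background, featuring a stylized "
    "siren or mermaid with wavy hair, a crown with a star, and a welcoming "
    "expression, reminiscent of the Starbucks style."
)

result = client.images.generate(model="dall-e-3", prompt=prompt, size="1024x1024", n=1)
print(result.data[0].url)  # link to the generated image
```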
Have you found other creative ways to use DALL-E 3?
This week's number
$420,000: that's how much companies with 100+ employees lose on average to unproductive meetings, according to Golden Steps ABA. Have you ever found yourself in a meeting that added no value for you? Everyone has at some point.
The solution? AI meeting assistants are a promising way to cut these costs and boost productivity.
We've had a look at a couple, such as Reclaim.ai and Scheduler AI.
Save up to $10,000 with a tailored Gap Analysis
Do you struggle to assess the potential return on investment (ROI) and conduct a thorough cost-benefit analysis, given the high costs and uncertain outcomes associated with AI projects?
A Tailored Gap Analysis by LyRise is a process that helps businesses understand how to make the most of Artificial Intelligence (AI). Think of it like comparing your car's current state with the speed you dream of reaching, then figuring out what's missing in between. The Gap Analysis helps businesses see the big picture, spot the gaps, plan for the future, stay on track, and stay in control.
To get your free Gap Analysis, reach out to Ivan Draganov, who will guide you through the process.