The pace at which Generative AI (Gen AI) is evolving is nothing short of extraordinary. We're witnessing innovations that push the boundaries of creativity, efficiency, and problem-solving. Yet while the technology races forward, our regulatory frameworks are struggling to keep up. This gap raises significant questions about the protection of creativity, the environmental impact, and the financial implications of implementing these advanced technologies. How long will it take for regulations to catch up? What will the real cost be of integrating Gen AI into our daily operations? According to the 2024 Goldman Sachs report, the promise of generative AI technology to transform industries is driving an estimated $1 trillion in capital expenditures over the next few years. However, that spending has yet to show proportional returns.
Meanwhile, companies like OpenAI and Google are rolling out increasingly sophisticated models. But at what cost? The environmental footprint of training these models is massive, with estimates suggesting that a single large model can emit as much CO2 as five cars over their lifetimes.
The European Union recently took a significant step by signing the AI Act, which provides a clear regulatory framework for the development and deployment of AI technologies. This landmark legislation sets out to ensure that AI is used ethically and responsibly, with stringent requirements for high-risk AI systems, balancing innovation with the safeguards needed to protect citizens and the environment.
For those who don't know me, I am a technologist with 12 patents in Open RAN, AI, and 5G; an award-winning CMO who created a new market category in telco; and the author of many industry articles as well as an award-winning leadership book for women. My newsletter is your go-to resource for a roundup of news, updates on models, regulatory developments, partnerships, and practical insights from the past week.
If you enjoyed this newsletter, please share it with your network!
Thank you for being a subscriber,
News about models and everything related to them
Meta researchers are advancing AI by integrating System 2 thinking into large language models (LLMs), significantly enhancing complex reasoning. SynCode improves LLM output by enforcing syntax rules, reducing errors. Patronus AI's Lynx excels at hallucination detection, enhancing reliability. AI-driven search engines like Perplexity and Google face challenges with cost and quality. Stanford's In-Context Vectors improve task adaptation, while Hugging Face's SmolLM offers efficient small models. Anthropic's Claude app brings AI to Android, and Google's Gemini 1.5 Pro enhances robot navigation. Finally, Mistral AI's NeMo and HackerNoon's article on securing LLMs highlight ongoing AI advancements and security needs.
- Meta researchers distill System 2 thinking into LLMs, improving performance on complex reasoning | VentureBeat
Building on these advancements, the next frontier in AI is reasoning AI, which incorporates human-like System 2 thinking. Meta researchers are making strides in this area, improving the performance of large language models (LLMs) on complex reasoning tasks. This evolution represents a significant leap from pattern recognition toward sophisticated cognitive functions, potentially transforming AI capabilities even further, and it aligns with OpenAI's vision of advancing AI to tackle more complex and nuanced problems.
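To make the recipe concrete, here is a toy sketch of the distillation loop described in the paper: sample several chain-of-thought answers per question, keep only the questions where the samples agree (a self-consistency filter), and save plain input-to-answer pairs, with the reasoning stripped out, as fine-tuning data. The generate_with_cot function is a stand-in for a real LLM call, not Meta's code.

```python
# Toy System 2 distillation: self-consistency filtering, then
# reasoning-free fine-tuning pairs.
from collections import Counter
import random

def generate_with_cot(question: str) -> str:
    """Placeholder for an LLM answering with chain-of-thought;
    here it just returns a noisy final answer."""
    return random.choice(["4", "4", "4", "5"])

def distill_example(question: str, samples: int = 8, threshold: float = 0.75):
    answers = Counter(generate_with_cot(question) for _ in range(samples))
    answer, count = answers.most_common(1)[0]
    if count / samples >= threshold:   # keep only self-consistent questions
        return {"input": question, "output": answer}  # no CoT in the target
    return None                        # discard ambiguous examples

print(distill_example("What is 2 + 2?"))
```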
- https://ui.adsabs.harvard.edu/abs/2024arXiv240301632U/abstract
SynCode is a framework designed to enhance large language model (LLM) outputs by ensuring they adhere to specific syntax rules, such as those for JSON, Python, and Go. SynCode leverages a context-free grammar (CFG) and a DFA mask store to retain valid tokens and filter out invalid ones, significantly reducing syntax errors. Experiments demonstrate SynCode's effectiveness in generating syntactically correct outputs, outperforming current baselines, and the framework addresses the challenge of hallucinations and unreliability in LLMs. By effectively eliminating syntax errors, SynCode improves the reliability and usability of AI in applications requiring strict adherence to format rules, which is crucial for integrating AI-generated content into formal systems. This innovation is particularly relevant for developers and researchers working on AI integration into programming and data serialization tasks.
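SynCode's real pipeline precomputes a DFA mask store from the grammar; the toy sketch below captures only the core move — masking tokens that cannot extend a syntactically valid prefix — using a crude JSON prefix check and a fake model in place of both.

```python
# Toy grammar-constrained decoding: mask tokens that would break the
# syntax before picking the next one. Everything here is a stand-in.
import json
import random

VOCAB = ['{', '}', '"key"', ':', '"value"', ' ']

def could_be_valid_json_prefix(text: str) -> bool:
    """Crude prefix check: try to close the text and parse it."""
    for suffix in ('', '}', ': ""}', ' ""}'):
        try:
            json.loads(text + suffix)
            return True
        except json.JSONDecodeError:
            continue
    return False

def fake_logits(_prefix: str) -> dict:
    """Stand-in for an LLM: random scores over the toy vocabulary."""
    return {tok: random.random() for tok in VOCAB}

def constrained_decode(max_steps: int = 8) -> str:
    out = ''
    for _ in range(max_steps):
        scores = fake_logits(out)
        # The mask step: keep only tokens that preserve valid syntax.
        legal = {t: s for t, s in scores.items()
                 if could_be_valid_json_prefix(out + t)}
        if not legal:
            break
        out += max(legal, key=legal.get)
        if out.endswith('}') and could_be_valid_json_prefix(out):
            break
    return out

print(constrained_decode())  # e.g. '{}' or a '{"key": "value"}'-style object
```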
- Patronus AI Introduces Lynx: A SOTA Hallucination Detection LLM that Outperforms GPT-4o and All State-of-the-Art LLMs on RAG Hallucination Tasks - MarkTechPost
Patronus AI has launched Lynx, a state-of-the-art hallucination detection language model that outperforms GPT-4o and other leading models. Lynx excels in detecting hallucinations across various domains, including medicine and finance, using innovative approaches like Chain-of-Thought reasoning. Its superior performance is highlighted by results on the HaluBench evaluation benchmark. Lynx integrates with NVIDIA's NeMo-Guardrails for enhanced deployment and provides an interpretable decision-making process. Patronus AI's release includes the HaluBench dataset and evaluation code for public use. Lynx represents a significant advancement in improving the reliability of AI models, particularly in critical areas where accuracy is paramount. Its innovative design and robust performance metrics suggest it will be a valuable tool for mitigating AI hallucinations, making AI applications more trustworthy and effective.
- Perplexity, Google, and the battle for AI search supremacy
The shift towards AI-driven search engines signifies a major evolution in how information is accessed online. While the convenience of direct answers is appealing, the reliability and quality of these answers need improvement. The high operational costs and potential impact on ad revenue models pose significant challenges for search engine companies. The ongoing competition and innovation in this field will likely lead to further advancements and refinements, potentially reshaping the digital information landscape. Despite AI advancements, issues like hallucinations and low-quality content remain challenges. The economic model for AI-powered search engines is still unclear, particularly in terms of ad revenue and the high costs of running LLMs.
- Researchers at Stanford Introduce In-Context Vectors (ICV): A Scalable and Efficient AI Approach for Fine-Tuning Large Language Models - MarkTechPost
Stanford researchers have introduced In-Context Vectors (ICV), a novel method to enhance large language models (LLMs) by improving in-context learning (ICL). ICV generates a concise vector from demonstration examples, which is then used to adjust the model's latent states, significantly improving task adaptation without extensive context windows. The method reduces computational overhead and enhances performance across tasks like safety and style transfer. ICV outperforms traditional ICL and fine-tuning methods, showing improvements in reducing toxicity and preserving content similarity. ICV represents a breakthrough in making LLMs more efficient and adaptable for diverse applications. By reducing the need for extensive context windows, ICV addresses major limitations in current ICL methods, making LLMs more practical and scalable. This innovation is particularly impactful for tasks requiring nuanced understanding and precise control over AI-generated content.
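In spirit, and heavily simplified, the method looks like the sketch below: derive a single steering vector from the latent-state difference between demonstration inputs and outputs, then add it to a new query's latent state instead of packing the demonstrations into the context window. The hidden states here are random stand-ins, not real LLM activations.

```python
# Miniature In-Context Vector: one steering vector distilled from demos,
# applied to hidden states at inference time.
import numpy as np

rng = np.random.default_rng(1)
dim = 16

# Latent states for (input, desired output) demonstration pairs.
h_inputs = rng.normal(size=(8, dim))
h_targets = h_inputs + 0.5           # pretend the task shifts latents by +0.5

# The in-context vector: the average latent difference across demos.
icv = (h_targets - h_inputs).mean(axis=0)

# At inference, steer a new query's latent state; no demos in the prompt.
h_query = rng.normal(size=dim)
alpha = 1.0                          # steering strength
h_steered = h_query + alpha * icv

print("per-dimension shift applied:", icv.mean().round(3))  # ~0.5
```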
- SmolLM - blazingly fast and remarkably powerful
Hugging Face introduces SmolLM, a series of high-performance small language models available in 135M, 360M, and 1.7B parameters. These models are trained on the SmolLM-Corpus, a high-quality dataset that includes educational content from Cosmopedia v2, FineWeb-Edu, and Python-Edu. SmolLM models outperform other small models in various benchmarks, demonstrating efficiency in training and application. They are designed to run on local devices, making them suitable for deployment on a wide range of hardware. Users can integrate SmolLM into their projects via Hugging Face's transformers library, which provides easy-to-use APIs for model loading and inference; the models are particularly suitable for resource-constrained hardware, allowing broader accessibility and practical use cases.
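A minimal loading-and-generation sketch with the transformers library is below; the checkpoint name follows the release announcement, so verify it on the Hugging Face Hub before relying on it.

```python
# Load a SmolLM checkpoint and generate a short completion.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM-360M"  # 135M and 1.7B variants also exist
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

inputs = tokenizer("Generative AI is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```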
- Anthropic releases Claude app for Android | TechCrunch
Anthropic has released its Claude app for Android, bringing its AI chatbot to a broader user base. The app allows users to interact with Claude, benefiting from its advanced capabilities in understanding and generating text, code, and other content. The app aims to enhance the user experience through features like high-quality content generation and context-sensitive support. This release is part of Anthropic's strategy to make AI technology more accessible and practical for everyday use, following earlier successful launches on other platforms.
- Google says Gemini AI is making its robots smarter - The Verge
Google DeepMind has introduced Gemini 1.5 Pro, an advanced AI model enhancing robot navigation capabilities. The model allows robots to understand and respond to natural language commands, navigate complex environments, and perform tasks with a high success rate. This innovation is part of Google's efforts to integrate AI into practical applications, aiming to improve efficiency and functionality in robotics. Gemini 1.5 Pro showcases significant advancements in AI, highlighting its potential to revolutionize various industries through improved automation and interaction capabilities.
- Also, Google Announces Gemma 2, Their Latest Open LLM For Developers - TechRound
Google has announced Gemma 2, its latest open large language model (LLM) available in 9 billion and 27 billion parameter versions. Designed for efficiency, Gemma 2 can run on a single piece of hardware like the NVIDIA H100 GPU and is compatible with popular AI tools such as Hugging Face and PyTorch. It aims to simplify AI development for researchers and developers, promoting responsible use through tools like the Responsible Generative AI Toolkit and the LLM Comparator for in-depth model evaluation.
- Mistral AI Launches Codestral Mamba 7B: A Revolutionary Code LLM Achieving 75% on HumanEval for Python Coding - MarkTechPost
Mistral AI has launched Codestral Mamba 7B, a revolutionary language model designed for code generation. Achieving a 75% score on the HumanEval benchmark for Python coding, it demonstrates exceptional performance in advanced coding and reasoning tasks. The model, based on the Mamba2 architecture, offers linear time inference and the ability to handle sequences of infinite length, making it highly efficient for coding applications. It is available for free under the Apache 2.0 license, promoting accessibility and innovation in AI research and development.
- Aitomatic announces SemiKong, an open-source LLM for the semiconductor industry
Aitomatic has launched SemiKong, an open-source large language model (LLM) designed specifically for the semiconductor industry. Built on Meta's Llama3 model, SemiKong enhances semiconductor processes and fabrication technology, outperforming general-purpose models in industry-specific tasks. Developed through the AI Alliance, SemiKong aims to drive down production costs and foster innovation, making advanced semiconductor technologies more accessible. The model will be available on platforms like HuggingFace and GitHub, with future versions expected to further improve performance and applicability.
- From Microsoft and Google to Mistral, Anthropic, and Cohere, companies are all releasing their own mini-versions, which are more efficient and environmentally friendly.
- GPT-4o mini: advancing cost-efficient intelligence | OpenAI
OpenAI has introduced GPT-4o mini, a cost-efficient AI model that excels in textual intelligence and multimodal reasoning. Priced at 15 cents per million input tokens and 60 cents per million output tokens, it significantly reduces costs compared to previous models. GPT-4o mini supports text and vision inputs, and future updates will include support for image, video, and audio inputs. The model has a context window of 128K tokens and performs better than its predecessors and competitors on various benchmarks, making it ideal for a wide range of applications. Offering advanced capabilities at a significantly reduced cost democratizes access to powerful AI, making it feasible for more businesses and developers to integrate sophisticated AI solutions. The model's support for multimodal inputs and a large context window enhances its applicability across various industries, potentially accelerating innovation in fields like healthcare, education, and customer service. This move aligns with the broader trend of making AI more accessible and practical for everyday use.
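At those prices, a cost sanity check is one function away; the workload numbers below are invented purely for illustration.

```python
# Back-of-the-envelope GPT-4o mini cost estimate using the quoted pricing:
# USD 0.15 per 1M input tokens, USD 0.60 per 1M output tokens.
INPUT_PER_M = 0.15
OUTPUT_PER_M = 0.60

def monthly_cost(requests: int, in_tokens: int, out_tokens: int) -> float:
    """Estimated monthly spend for a workload of identical requests."""
    return requests * (in_tokens * INPUT_PER_M
                       + out_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. 100k requests/month, 1,000-token prompts, 300-token answers:
print(f"${monthly_cost(100_000, 1_000, 300):,.2f}")  # -> $33.00
```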
- Additionally: Prover-Verifier Games improve legibility of language model outputs | OpenAI
OpenAI introduced Prover-Verifier Games to enhance the clarity and transparency of AI model outputs. These games involve a "Prover" generating a solution to a problem and a "Verifier" validating its correctness. This approach aims to improve the legibility and reliability of AI decisions, ensuring that outputs are understandable and trustworthy for users. The initiative seeks to address challenges in AI transparency and foster greater trust in AI-generated results.
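Stripped to its skeleton, one round of the game looks like the toy below; simple arithmetic stands in for real problems, and OpenAI trains both models jointly rather than hand-coding them as done here.

```python
# Toy prover-verifier round: a strong "prover" proposes a solution,
# a small "verifier" checks it, and only verified answers are kept.
def prover(problem):
    a, b = problem
    return {"steps": f"{a} + {b} = {a + b}", "answer": a + b}

def verifier(problem, solution) -> bool:
    a, b = problem
    return solution["answer"] == a + b

problem = (17, 25)
solution = prover(problem)
print("accepted" if verifier(problem, solution) else "rejected", solution)
```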
- Also, Mistral NeMo is being introduced, a new foundation model designed to enhance natural language understanding and generation. NeMo aims to improve the efficiency and accuracy of AI-driven applications across various industries, including finance, healthcare, and technology. The model is engineered to handle complex language tasks and offers improved scalability and customization options for developers. This launch underscores Mistral AI's commitment to advancing AI capabilities and providing cutting-edge tools for developers and businesses.
- And last, but not least: Lock Up Your LLMs: Pulling the Plug | HackerNoon
The article discusses the increasing need to secure large language models (LLMs) due to their potential misuse. It emphasizes the importance of implementing robust safety measures, monitoring, and access controls to prevent unauthorized use and mitigate risks associated with AI-generated content. The discussion includes the challenges of balancing innovation with security and the ethical considerations of AI deployment.
News and partnerships
Eureka Labs, founded by AI expert Andrej Karpathy, offers an AI-native educational platform combining expert-designed courses with AI Teaching Assistants for personalized education. Their initial course, LLM101n, teaches undergraduates to develop AI models. Meanwhile, McKinsey partners with Cohere to help clients adopt generative AI, and Fujitsu collaborates with Cohere on enterprise AI solutions. Intel's "AI Everywhere" optimizes the 2024 Paris Olympics, Fei-Fei Li's World Labs advances AI for visual data processing, and OpenAI works with Broadcom on new AI chips. Additionally, Microsoft enhances Excel with AI, and CMA CGM partners with Google for AI-driven logistics improvements.
- Eureka Labs
Eureka Labs is an AI-native educational platform aimed at transforming learning experiences by combining expert-designed courses with AI Teaching Assistants for scalable and personalized education. Their initial offering, LLM101n, is an undergraduate-level course focused on training students to develop their own AI models. Eureka Labs envisions leveraging AI to significantly enhance human learning potential. The CEO of Eureka Labs is Andrej Karpathy, a notable figure in the AI and deep learning community. Prior to founding Eureka Labs, Karpathy held key roles such as Senior Director of AI at Tesla and researcher at OpenAI.
- McKinsey partners with startup Cohere to help clients adopt generative AI | Reuters
McKinsey has partnered with AI startup Cohere to help clients adopt generative AI solutions. This collaboration aims to integrate generative AI into various business operations, enhancing customer engagement, automating workflows, and improving internal efficiencies. Cohere, founded by former Google AI researchers, specializes in providing enterprise-grade AI solutions that are not tied to specific cloud providers, making it a flexible option for businesses. This partnership marks McKinsey's first with a large language model provider, reflecting the growing trend among consulting firms to leverage AI technology for competitive advantage. Other firms like Bain & Company and Deloitte have made similar moves, partnering with OpenAI and Nvidia, respectively.
- And one more for Cohere. Fujitsu and Cohere launch strategic partnership to provide GenAI for enterprises, ET CIO SEA
Fujitsu and Cohere have announced a strategic partnership to provide generative AI solutions for enterprises. This collaboration aims to leverage Cohere's advanced large language models and Fujitsu's expertise in enterprise technology to offer customizable and scalable AI solutions. The focus is on creating high-performance AI models that can handle complex tasks and large datasets, ensuring data privacy and compliance with regulations. The partnership will help enterprises integrate generative AI into their operations, enhancing efficiency and innovation. Key technologies include knowledge graph extended retrieval-augmented generation (RAG) for accurate data referencing, generative AI amalgamation for creating specialized models, and generative AI auditing for compliance and explainability.
- 5 Ways Intel's 'AI Everywhere' Is Powering the 2024 Paris Olympics
Intel's "AI Everywhere" initiative is significantly enhancing the 2024 Paris Olympics in five key ways: optimizing event logistics through AI-powered traffic management, enhancing security with advanced surveillance and threat detection, improving athlete performance analysis, delivering personalized spectator experiences, and managing energy consumption for sustainability. These AI applications are designed to streamline operations, ensure safety, and enhance the overall experience for athletes and attendees. Read more on Intel chips in the investment section of this newsletter.
- The ‘godmother of AI’ has a new startup already worth $1 billion - The Verge
Fei-Fei Li, renowned as the "godmother of AI," has launched a startup called World Labs, which has reached a valuation of over $1 billion in just four months. Backed by Andreessen Horowitz and Radical Ventures, World Labs focuses on advanced AI capable of human-like visual data processing and reasoning, akin to the goals of generative AI models like ChatGPT. Li, known for her work in computer vision and the development of ImageNet, aims to create AI that understands the three-dimensional physical world. This innovation could revolutionize fields such as robotics, augmented reality, and healthcare. The company's rapid valuation and significant investments reflect a broader trend of venture capitalists eagerly funding ambitious AI startups inspired by the success of OpenAI's ChatGPT. Here is her TED Talk from April on the concept: Fei-Fei Li: With spatial intelligence, AI will understand the real world | TED Talk
- OpenAI holds talks with Broadcom about developing new AI chip, the Information reports | Reuters
OpenAI is in talks with Broadcom to develop a new AI chip aimed at meeting the rising computational demands of AI applications. This partnership seeks to create more efficient and powerful AI processors, leveraging Broadcom's semiconductor expertise. Broadcom projects AI chip sales to reach $10 billion in 2024, highlighting the growing demand for advanced AI hardware. This development is crucial to support large-scale AI operations and alleviate the strain on power grids caused by high energy consumption in data centers.
- CrowdStrike breaks the internet and generative AI gets the side eye - SiliconANGLE
CrowdStrike's internet disruption incident has led to scrutiny of generative AI, as the failure highlighted potential vulnerabilities associated with AI-driven automation. The article suggests that the integration of generative AI into IT infrastructure might introduce new risks and complexities, contributing to the overall fragility. This has sparked debate over the reliability and safety of deploying advanced AI technologies in critical systems, with critics pointing to the recent failures as a cautionary example of over-reliance on AI.
- Your Microsoft Excel spreadsheets could soon have a lot more AI power | TechRadar
Microsoft is enhancing Excel with AI capabilities through a new tool called SpreadsheetLLM. This tool encodes spreadsheet contents into a format that large language models (LLMs) can understand, enabling AI to better process and analyze data within Excel. This advancement aims to improve decision-making and efficiency for users by integrating powerful AI functions directly into spreadsheet workflows. Although still in research, SpreadsheetLLM represents Microsoft's ongoing efforts to embed AI across its software offerings. Love me some AI-enabled Excel, and Google Sheets as well. You?
- Shipping giant CMA CGM signs AI deal with Google | Reuters
Shipping giant CMA CGM has signed a deal with Google to enhance its AI capabilities. The partnership aims to optimize operations, improve customer service, and reduce environmental impact through advanced AI and machine learning technologies. This collaboration reflects the increasing integration of AI in the logistics and shipping industry to streamline processes and enhance efficiency.
Gen AI news from different industries
A study in Saudi Arabia shows positive attitudes towards AI in higher education, highlighting benefits like enhanced teaching and streamlined administration, while stressing the need for ethical considerations. AI-driven error correction advances quantum computing by addressing qubit noise. In healthcare, AI technologies are revolutionizing IBD care, improving diagnostics, personalized treatment, and patient monitoring. AI and NLP enhance drug safety monitoring by leveraging patient data. The DoD's Task Force Lima aims to integrate AI responsibly in defense, and AI innovations in telecom are optimizing customer service, network performance, and new services.
Higher Education
- Exploring the impact of artificial intelligence on higher education: The dynamics of ethical, social, and educational implications | Humanities and Social Sciences Communications
The study investigates the implications of AI in higher education in Saudi Arabia, focusing on stakeholders' attitudes, perceptions, and expectations. A survey of 1,113 participants revealed positive attitudes towards AI's potential to enhance teaching, streamline administration, and foster innovation. The research emphasizes ethical considerations such as privacy, security, and bias. Stakeholders envision a future with personalized learning and ethically integrated AI. The study highlights the need for a comprehensive understanding and responsible implementation of AI in education.
Quantum Computing
- Using artificial intelligence to make quantum computers a reality
This research highlights AI's critical role in mitigating the primary hurdle of quantum computing—qubit noise. The use of AI-driven error correction represents a promising advancement towards practical quantum computers. Despite the current limitations due to high noise levels, the anticipated improvements in quantum processor technology suggest a bright future. This study underscores the transformative potential of combining AI with quantum mechanics, paving the way for groundbreaking computational capabilities.
Healthcare
- https://www.medscape.com/viewarticle/three-ai-technologies-poised-transform-ibd-care-2024a1000cwu?form=fpf
The Medscape article discusses three emerging AI technologies expected to revolutionize the management of Inflammatory Bowel Disease (IBD) in 2024. These include AI-powered diagnostic tools that enhance accuracy in detecting IBD, predictive analytics for personalized treatment plans, and AI-driven monitoring systems that track patient health in real time. These technologies aim to improve patient outcomes, reduce healthcare costs, and provide tailored treatment approaches by leveraging advanced data analytics and machine learning algorithms.
- AI-generated messages to patients on par with clinicians, and can even be more empathetic, study finds
A study by NYU Langone Health found that generative AI can draft patient messages that are on par with those written by clinicians in terms of quality and accuracy. AI-drafted messages were even rated higher in empathy, understandability, and tone. However, AI responses were longer and more complex, which could affect patients with lower health or English literacy. Despite these challenges, the use of AI could help reduce clinicians' workload and improve efficiency in patient communications.
- Assessing large language models’ accuracy in providing patient support for choroidal melanoma | Eye
The study evaluates the accuracy of large language models (LLMs) like ChatGPT, Bing AI, and DocsGPT in providing patient support for choroidal melanoma. The responses from these models were reviewed by ocular oncology experts for accuracy. ChatGPT provided 92% accurate medical advice responses, outperforming Bing AI and DocsGPT. However, inconsistencies and inaccuracies highlight the need for improved fine-tuning and oversight before integrating LLMs into clinical practice. The research underscores the potential of LLMs in healthcare while also highlighting the current limitations. The high accuracy of ChatGPT is promising, but the presence of errors indicates that these models should be used with caution and under professional supervision. Continued improvements and stringent evaluations are necessary to enhance their reliability and effectiveness in clinical settings.
- Gen-AI at scale: From experimentation to industrialisation - Healthcare Leader
The article discusses the transition of generative AI (Gen AI) from experimentation to large-scale industrialization in healthcare. It highlights the importance of fostering a culture of experimentation within organizations to accelerate innovation. The piece outlines the Enterprise LLM Lifecycle, emphasizing phases like prototyping, optimization, and deployment. It also introduces LLM Ops, which adapts machine learning operations principles to manage large language models efficiently. Additionally, the article emphasizes the role of a Gen-AI Centre of Excellence in providing governance, skills, and responsible AI frameworks to ensure successful Gen AI integration.
Pharma
- From social media to safety signals: How AI and NLP are transforming drug safety monitoring
The integration of AI and NLP in pharmacovigilance addresses significant gaps in traditional drug safety monitoring methods. By leveraging patient-generated data from social media and other digital platforms, these technologies offer a richer, real-time understanding of drug safety profiles. This transformation not only improves the detection of adverse events but also supports more proactive and informed decision-making in drug safety management. The application of AI and NLP in this context highlights the growing importance of advanced analytics in healthcare, paving the way for more responsive and patient-centric pharmacovigilance systems.
Defense
- https://www.defenseone.com/policy/2024/07/dods-generative-ai-task-force-will-help-set-guardrails-broader-use/398066/
The Department of Defense has launched Task Force Lima to integrate generative AI technologies responsibly within the military. Led by the Chief Digital and Artificial Intelligence Office, the task force aims to balance innovation with national security by managing risks associated with training data and adversarial misuse. It will enhance various military functions, including warfighting and policy development, through collaboration with defense and intelligence agencies, emphasizing ethical and secure AI use.
Telecom
- Revolutionizing telecom with AI: A deep dive into conversational, predictive, and Generative AI | Communications Today
In my recent piece, I explore how AI is revolutionizing the telecommunications industry through three main areas: conversational, predictive, and generative AI. Conversational AI is making a huge impact on customer service by utilizing chatbots and virtual assistants, which automate interactions and enhance user experiences. Predictive AI helps telecom companies optimize network performance and operational efficiency by leveraging data analytics to foresee maintenance needs and manage resources more effectively. Lastly, Generative AI is driving innovation by enabling the creation of new services and content, thereby expanding the scope of telecom offerings beyond traditional communication services.
Regional and regulatory updates
Recent developments in AI include Africa's push for outcome-based regulation to ensure ethical AI growth, with only a few nations having drafted AI strategies. The USPTO's new guidance aims to clarify AI patent eligibility. The COPIED Act seeks to combat deepfakes and protect content creators. Huawei continues advancing AI despite U.S. sanctions, while Chinese regulators require generative AI models to align with socialist values. OpenAI has blocked its services in China, prompting local alternatives. Meta suspends AI tools in Brazil and the EU due to regulatory concerns, and over 40% of Japanese companies hesitate to adopt AI due to costs and security risks.
- Responsible AI Governance in Africa: Prospects for Outcomes-Based Regulation
The report advocates for an outcomes-based regulatory framework to promote ethical and inclusive AI development in Africa. This approach emphasizes setting desired outcomes rather than prescriptive rules, allowing flexibility and innovation. While global examples of AI regulation provide insights, Africa's unique context requires tailored solutions. The report highlights the challenges of measuring compliance and balancing flexibility with consistency, recommending the use of regulatory sandboxes and sector-specific approaches. Overall, it underscores the need for adaptive governance to harness AI's potential while mitigating risks and ensuring equitable growth. AI regulation in Africa is still in its early stages, with only a few countries making significant strides toward formalizing AI policies and frameworks. As of now, seven African nations—Benin, Egypt, Ghana, Mauritius, Rwanda, Senegal, and Tunisia—have drafted national AI strategies, though comprehensive AI regulations are yet to be fully implemented. The African Union (AU) has taken a proactive step by endorsing a Continental Artificial Intelligence Strategy aimed at guiding AI development across the continent. This strategy emphasizes ethical AI use, minimizing risks, and leveraging opportunities for inclusive growth. However, the AU's power to enforce these policies across member states is limited, and implementation will depend on individual countries.
- Privacy Enhancing Technology (PET): Proposed Guide on Synthetic Data Generation
The guide discusses techniques and practices for creating synthetic data, which is often used to train and validate Gen AI models. Synthetic data helps in overcoming data scarcity, enhancing data diversity, and protecting privacy, all of which are critical for the effective development and deployment of Gen AI systems. By ensuring high-quality synthetic data, the guide supports the training of robust and unbiased Gen AI models, facilitating safer and more ethical AI applications. It emphasizes the use of Privacy Enhancing Technologies (PETs) and provides recommendations for generating and managing synthetic data, including risk assessments and best practices to minimize re-identification risks. The guide highlights the benefits of synthetic data in AI model training, data analysis, and collaboration, supported by practical case studies, aiming to foster innovation while ensuring compliance with data protection regulations. Similar guides include NIST Special Publication 800-188 by the National Institute of Standards and Technology, which provides guidelines on synthetic data; the OECD Guidelines on Synthetic Data, focusing on privacy-enhancing technologies; the UK Information Commissioner's Office (ICO) Guidance on AI and Data Protection, which includes sections on synthetic data for AI training; and the European Data Protection Supervisor (EDPS) Guidelines, which address using synthetic data to ensure privacy. These guides offer frameworks and best practices for generating and using synthetic data while maintaining data protection and privacy.
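For intuition, here is the simplest possible flavor of the idea: fit per-column statistics on real records and sample new rows that preserve them. Production PET pipelines add formal privacy guarantees and the re-identification risk checks the guide calls for; this sketch has none of that.

```python
# Toy synthetic data generation: sample a new "age" column from the
# distribution fitted on a (tiny, fake) real column.
import numpy as np

rng = np.random.default_rng(42)
real_ages = np.array([34, 41, 29, 52, 47, 38, 31, 45])  # toy "real" data

mu, sigma = real_ages.mean(), real_ages.std()
synthetic_ages = rng.normal(mu, sigma, size=100).round().clip(18, 90)

print("real mean/std:     ", mu, round(float(sigma), 1))
print("synthetic mean/std:", synthetic_ages.mean().round(1),
      synthetic_ages.std().round(1))
```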
- USPTO issues AI subject matter eligibility guidance
The USPTO's new guidance on AI subject matter eligibility aims to clarify how AI-related inventions are evaluated for patents. It introduces detailed examples to help determine if an AI claim is abstract or has a practical application, addressing common challenges in patenting AI innovations. The guidance is designed to encourage innovation while providing clear criteria for patent protection. The public can provide feedback on this guidance until September 16, 2024, ensuring that the criteria align with evolving AI technologies and legal standards.
- Cantwell, Blackburn, Heinrich Introduce Legislation to Increase Transparency, Combat AI Deepfakes & Put Journalists, Artists & Songwriters Back in Control of Their Content
The COPIED Act, introduced by Senators Maria Cantwell, Marsha Blackburn, and Martin Heinrich, aims to combat AI deepfakes and protect content creators. The bill is currently in the legislative process, and its passage into law will depend on approval by both houses of Congress and the President's signature. The timeline for this process can vary widely, ranging from a few months to several years, depending on legislative priorities and political negotiations.
- China Can Lead in AI Despite Hardware Restrictions, Huawei Cloud CEO
Huawei is continuing to advance its AI capabilities despite facing U.S. sanctions that restrict access to advanced hardware. The company is leveraging outdated hardware acquired before the sanctions and focusing on import substitution strategies to maintain its AI development momentum. Huawei's efforts include launching new AI models, such as the Pangu Models 3.0, which are designed for a variety of industrial and scientific applications. Additionally, Huawei is exploring innovative ways to overcome hardware limitations, such as using advanced packaging and mature process technologies. Huawei's continued advancement in AI, despite U.S. sanctions, is significantly supported by subsidies from the Chinese government. This governmental backing enables Huawei to circumvent some of the challenges posed by restricted access to advanced hardware.
- Socialist AI: Chinese regulators are reviewing GenAI models for 'core socialist values,' FT reports
Chinese regulators have begun testing generative AI models to ensure they align with socialist values as part of a broader regulatory framework aimed at controlling AI development and application. The new measures, effective from August 2023, require AI-generated content to respect China's "social morality and ethics" and uphold "Core Socialist Values." The regulations prohibit content that could incite subversion of national sovereignty, endanger national security, or promote ethnic hatred and violence. These rules are part of China's effort to maintain tight control over technology while fostering innovation and economic growth. This regulatory approach includes strict operational requirements for AI service providers, such as using legally sourced training data, ensuring privacy rights, and preventing the creation of harmful or false information. The measures also promote the establishment of AI infrastructure and the sharing of computing resources to support AI development within China.
- Chinese developers scramble as OpenAI blocks access in China
OpenAI has decided to block access to its generative AI tools in China starting from July 9, 2024. This decision affects Chinese developers who had been using VPNs to access OpenAI's services due to existing government restrictions. The move is seen as part of the broader tech tensions between the U.S. and China, which include export controls on advanced semiconductors crucial for AI development. In response, Chinese AI companies like SenseTime and Baidu are offering free tokens and migration services to attract OpenAI's displaced users.
- Senate Bill Aims to Combat AI Deepfakes, Protect Content Creators
The Content Origin Protection and Integrity from Edited and Deepfaked Media (COPIED) Act, the Senate bill from Senators Maria Cantwell, Marsha Blackburn, and Martin Heinrich mentioned above, mandates that AI-generated content include provenance information that cannot be tampered with or removed. This legislation empowers content creators, such as journalists, artists, and musicians, to control and protect their work. It also authorizes the Federal Trade Commission (FTC) and state attorneys general to enforce the bill's requirements and allows individuals to sue violators. Major industry groups, including SAG-AFTRA and the Recording Industry Association of America, have endorsed the bill, highlighting its importance in ensuring transparency and accountability in the use of AI technology.
- Meta decides to suspend its generative AI tools in Brazil | Reuters
Meta has decided to suspend its generative AI tools in Brazil due to concerns over compliance with local regulations and the potential misuse of the technology. This move is part of a broader trend where companies are becoming more cautious with AI deployments, especially in regions with strict regulatory environments. Meta's suspension aims to reassess the tools' impact and ensure they align with Brazil's legal and ethical standards. This decision reflects the growing need for responsible AI governance and the challenges companies face in balancing innovation with regulatory compliance.
- And again, Meta. Scoop: Meta won't offer future multimodal AI models in EU
Meta has decided not to release its multimodal AI models in the European Union due to regulatory uncertainties. These models, which integrate different types of data inputs such as text and images, are part of Meta's broader AI strategy. The decision reflects concerns over compliance with the EU's evolving AI regulations, which aim to ensure transparency and accountability in AI technologies. Both actions illustrate the challenges tech companies face in adapting to diverse regulatory environments, ensuring compliance, and addressing the legal and ethical implications of AI technologies in different regions.
- More than 40% of Japanese companies have no plan to make use of AI | Reuters
More than 40% of Japanese companies have no plans to adopt artificial intelligence (AI) technologies, according to a survey. This reluctance is attributed to concerns about costs, security risks, and a lack of skilled personnel to implement and manage AI systems. Despite global trends emphasizing AI integration for operational efficiency and innovation, Japanese firms remain cautious. This cautious approach highlights potential challenges in staying competitive in the rapidly evolving tech landscape. What do you think?
Gen AI for Business Concerns, Trends, and Predictions
An investigation revealed Apple, Nvidia, and Anthropic used subtitles from over 173,000 YouTube videos without permission to train AI models, sparking concerns about unauthorized content use and copyright infringement. One year after the SAG-AFTRA actors' strike, AI remains a threat in the entertainment industry, with actors pressured to consent to digital replicas. The New York Times is challenging OpenAI's use of its content for AI training, highlighting intellectual property issues. The FTC scrutinizes Amazon's deal with AI startup Adept for antitrust concerns. Rising AI-related energy demands strain the U.S. power grid, necessitating substantial infrastructure investments. Lastly, addressing the misuse of generative AI requires a unified governance framework to deter malicious activities.
- Apple, Nvidia, Anthropic Used Thousands of Swiped YouTube Videos to Train AI
An investigation by Proof News found that AI companies, including Apple, Nvidia, and Anthropic, used subtitles from over 173,000 YouTube videos without permission to train their AI models. These videos came from educational channels, media outlets, and popular YouTubers. Creators were not informed, raising concerns about unauthorized use and potential exploitation of their content. This practice breaches YouTube’s rules against automated data scraping, prompting discussions on compensation and ethical AI training practices. The unauthorized use of content from platforms like YouTube raises issues of copyright infringement and breaches of platform policies, which could result in lawsuits from content creators seeking compensation for the unauthorized use of their work. Similar legal challenges have already been seen with companies like Getty Images suing AI firms for using their copyrighted images without permission. The Federal Trade Commission (FTC) has emphasized that AI companies must uphold privacy and confidentiality commitments, warning that companies could face enforcement actions if they fail to abide by their privacy policies and use data without explicit consent. This includes potentially having to delete any AI models or algorithms developed using unlawfully obtained data. Globally, regulations like the EU's AI Act are set to increase transparency and accountability for AI companies, potentially leading to stricter compliance requirements and penalties for violations. This regulatory environment is evolving rapidly, and companies found violating these norms could face significant legal challenges and be required to adjust their data acquisition and usage practices to avoid future infringements.
- One Year After the Actors’ Strike, AI Remains a Persistent Threat
One year after the SAG-AFTRA actors' strike, AI remains a significant concern in the entertainment industry. Many actors, like Nandini Bapat and Marie Fink, have faced pressure to consent to digital replicas of their likenesses as a condition of employment, despite the union's new contract requiring disclosure and compensation for such uses. The contract aimed to protect actors by mandating informed consent and fair compensation for digital scans and replicas. However, enforcement remains challenging, and actors continue to navigate the complex landscape of AI in their industry. Efforts are ongoing to tighten AI protections, and discussions about the ethical implications of digital replicas persist, underscoring the need for stronger regulations. Future scenarios may include more enforceable rules and industry-wide standards, influenced by legislative initiatives like the NO FAKES Act. Technological advancements and public pressure could drive more ethical AI practices, balancing innovation with the rights of human performers. As AI becomes more prevalent, addressing these issues is crucial to protecting actors' livelihoods and maintaining ethical standards in the industry.
- New York Times Fights OpenAI's 'Unprecedented' Bid for Journalistic Materials
The New York Times is contesting OpenAI's attempt to use its journalistic content to train AI models. This dispute centers on concerns about intellectual property rights and the potential for AI to undermine traditional journalism by using its material without compensation or proper authorization. The conflict highlights the broader issues of content usage rights and ethical AI development, and its outcome could set important precedents for future AI development. Potential outcomes include stricter regulations on content usage rights, ensuring proper compensation and authorization for the use of proprietary materials. This could lead to more defined legal frameworks governing AI training practices, balancing innovation with the protection of intellectual property. Additionally, media companies may develop more robust strategies to safeguard their content from unauthorized use by AI developers. What do you think will happen?
- Amazon's deal with AI startup Adept faces FTC scrutiny
亚马逊
's deal with AI startup Adept is under scrutiny by the Federal Trade Commission (FTC). The FTC is investigating this deal as part of a broader inquiry into the investments and partnerships of major tech companies, including Microsoft and
Alphabet Inc.
, with AI startups. The concerns focus on potential antitrust issues and whether these investments might stifle competition and innovation in the AI industry. This investigation highlights the regulatory challenges and competitive pressures in the rapidly evolving AI sector, as tech giants seek to bolster their capabilities through strategic acquisitions and partnerships.?
- How the surging demand for energy and rise of AI is straining the power grid in the U.S. - CBS News
The CBS News article discusses how the rising demand for energy, driven largely by AI advancements, is putting significant strain on the U.S. power grid. AI technologies, especially data centers, are consuming vast amounts of electricity, with data center energy use expected to triple by 2030, equating to about 7.5% of the nation's projected demand. This surge is causing utilities to reconsider retiring fossil fuel plants to prevent blackouts and meet the heightened demand. The aging infrastructure, with many components over 40 years old, is struggling to cope with this rapid increase. The analysis underscores the urgent need for substantial investments in upgrading the power grid to support AI and other energy-intensive technologies. Balancing immediate power needs with the shift to renewable energy is crucial to avoid escalating costs and potential environmental impacts. Smaller AI models can help mitigate the strain on the power grid by reducing the computational and energy demands associated with AI technologies. Techniques such as model pruning, quantization, and designing more efficient architectures can make AI models less energy-intensive without significantly sacrificing performance. Advanced energy management systems that optimize energy distribution and reduce waste, along with distributed computing models, including edge computing, can decentralize computational loads and alleviate pressure on centralized data centers.
- What is the 'perverse customer journey' and how can it tackle the misuse of generative AI?
The World Economic Forum article highlights the urgent need to address the misuse of generative AI by mapping the "perverse customer journey" of bad actors. This approach helps identify critical intervention points to increase friction and deter malicious activities. Governments, companies, and NGOs must collaborate to create a unified framework for responsible AI governance. Strategic actions, such as de-ranking harmful search results and introducing biometric verification, can significantly reduce the misuse of generative AI while fostering its beneficial applications.
News and updates around finance, costs, and investments
Intel aims to overtake AMD in the AI chip market by expanding AI capabilities and manufacturing, with profitability expected by 2027. Generative AI boosts the smartphone sector, driving growth for Samsung Electronics and Apple. Businesses are slowing AI investments due to high costs and inaccuracies, with only 63% planning to increase spending in the next year. OpenAI projects $3.4 billion in revenue for 2024. Anthropic and Menlo Ventures launch a $100M fund to support AI startups in the healthcare, legal, and education sectors.
- Intel Plans to Beat AMD for Second Place in the Artificial Intelligence (AI) Chip Race | The Motley Fool
Intel is strategically positioning itself to overtake AMD in the AI chip market by leveraging its existing dominance in the CPU market and expanding its AI capabilities. Key initiatives include launching Gaudi 2 and Gaudi 3 AI accelerators for data centers and increasing U.S. manufacturing capacity, although profitability from these expansions is not expected until 2027. While Nvidia remains the leader, Intel and AMD are intensifying their competition, with AMD focusing on AI GPUs and securing major clients like Microsoft and Meta.
- Gen AI pushes growth in smartphone sector - Developing Telecoms
Companies like Samsung and Apple are leading growth in the smartphone sector thanks to their integration of generative AI features. These advancements in AI are contributing to increased shipments and higher average selling prices for premium smartphones. The global smartphone market grew 12% in Q2 2024, with AI-enabled devices playing a significant role in this trend.
- Gen AI Spending Slows as Businesses Exercise Caution
Businesses are slowing down their generative AI investments due to high implementation costs and concerns over AI inaccuracies, known as hallucinations. A study by Lucidworks revealed that only 63% of businesses plan to increase AI spending in the next year, down from 93% in 2023. The cautious approach reflects challenges in deploying AI projects beyond the pilot stage, with only 25% of planned investments fully implemented and significant delays affecting ROI. Concerns about costs, accuracy, and data security are driving this more measured investment strategy.
- OpenAI Revenue Report — FUTURESEARCH
OpenAI projects $3.4 billion in annual recurring revenue (ARR) for 2024. This estimate is derived from a combination of primary and secondary research, including sales calls, DNS records, and public statements from OpenAI executives. The analysis accounts for various revenue streams, including API usage and ChatGPT subscriptions, and uses modeling to fill in data gaps. The comprehensive report by FutureSearch details the methodology and calculations used to arrive at this revenue projection.
- Anthropic teams up with venture capital firm to kickstart $100M AI startup fund
Anthropic has partnered with Menlo Ventures to launch the Anthology Fund, a $100 million fund aimed at supporting AI startups. Each startup can receive at least $100,000 in funding, along with access to Anthropic's AI models, $25,000 in credits, and additional support from Menlo Ventures. The fund aims to foster the development of AI applications in various fields such as healthcare, legal services, and education, while providing startups with guidance and networking opportunities.
What/where/how Gen AI solutions are being implemented today?
- Artificial intelligence can be our first line of defense in limiting the effects of wildfires
AI is increasingly being used to combat wildfires by predicting, detecting, and managing them more effectively. AI algorithms analyze data from thermal imaging cameras, weather patterns, and satellite images to forecast fire behavior and detect early signs of wildfires. This allows for quicker response times and better resource allocation, potentially preventing small fires from becoming large disasters. AI-powered systems like ALERTCalifornia use cameras and sensors to monitor fire-prone areas, providing real-time data that enhances firefighting efforts and decision-making.
- Manufacturers See Low Success Rates for Early Gen AI Initiatives
Manufacturers are facing low success rates with early generative AI (Gen AI) initiatives, with only about 20% of planned projects being successfully implemented. This has led to a decrease in enthusiasm for increasing AI investments, with only 58% of manufacturing leaders planning to boost AI spending in 2024 compared to 93% in 2023. The primary concerns hindering the adoption include accuracy, security, and cost issues. Specifically, 44% of manufacturers cited accuracy concerns, as precision is critical in manufacturing for maintaining trust and operational efficiency. Despite these challenges, nearly half of the manufacturers reported cost benefits from AI initiatives, suggesting potential for future improvements if these issues are addressed more effectively.
Women Leading in AI
New Podcast: Tune in to this live recording of a panel discussion, Women Shaping the Future of AI. This all-star panel of women leaders in AI covers the critical need for women in AI leadership, tackling bias in AI systems, and separating AI hype from reality. Join moderator Pallavi Sharma, founder of witOmni AI Marketing, as she curates a conversation with Janet George, Chetna Mahajan, Kim Carson, and Sarah Benson-Konforty, MD, for an insightful discussion on the current state and future of AI.
Featured AI Leader: Women And AI's Featured Leader - Yovana Rosales. Yovana demystifies AI for women of color, helping them combat burnout and boost efficiency in their businesses and personal lives.
Learning Center and How To’s
- 7 free and low-cost AWS courses that can help you use generative AI
Amazon Web Services (AWS) offers a range of free and low-cost courses to help individuals and businesses understand and implement generative AI. These courses cater to different roles, from developers and data scientists to executives and AWS partners. The offerings include hands-on training with tools like Amazon CodeWhisperer and foundational courses on generative AI. These educational resources aim to boost skills in AI and cloud computing, preparing users for the evolving technological landscape.
- [2407.07858] FACTS About Building Retrieval Augmented Generation-based Chatbots
The paper discusses the development of enterprise chatbots using Retrieval Augmented Generation (RAG), combining large language models (LLMs) and retrieval mechanisms to enhance chatbot functionality. The authors introduce the FACTS framework (Freshness, Architectures, Cost, Testing, Security) to guide the creation of secure, efficient, and accurate chatbots. The paper presents empirical results comparing large and small LLMs, focusing on accuracy and latency trade-offs, and highlights key aspects of RAG pipeline engineering such as document retrieval, query rephrasing, and prompt design.
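For readers new to RAG, here is a minimal sketch of the retrieve-then-prompt core that the FACTS framework wraps with freshness, cost, testing, and security concerns. The embed function is a deliberately crude stand-in for a real sentence-embedding model.

```python
# Minimal RAG core: score documents against the query, then build a
# grounded prompt from the top matches.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: hashed character counts (illustration only)."""
    vec = np.zeros(64)
    for i, ch in enumerate(text.lower()):
        vec[(hash(ch) + i) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

DOCS = [
    "Refunds are processed within 5 business days.",
    "Support is available Monday through Friday.",
    "Enterprise plans include a dedicated account manager.",
]

def retrieve(query: str, k: int = 2) -> list:
    ranked = sorted(DOCS, key=lambda d: -float(embed(query) @ embed(d)))
    return ranked[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(f"- {d}" for d in retrieve(query))
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")

print(build_prompt("How long do refunds take?"))
```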
- Top 10 Uncensored LLMs You Can Run on a Laptop - StartupNews.fyi
The article from StartupNews.fyi lists the top 10 uncensored large language models (LLMs) that can be run on a laptop. These models include Llama 2 Uncensored, WizardLM Uncensored, Llama 3 8B Lexi Uncensored, and others, which are designed to provide responses without alignment or moralizing filters. They are suitable for various applications, including general-purpose tasks and role-playing scenarios. Each model supports multiple quantization options for different hardware requirements, making them accessible for both CPU and GPU inference. To run such a model on a laptop, follow these steps: First, choose a suitable model from the top 10 listed, such as Llama 2 Uncensored or WizardLM Uncensored. Download the model and necessary dependencies. Set up your environment by installing the required libraries and frameworks (e.g., PyTorch). Configure the model for your hardware, using quantization options for CPU or GPU inference. Finally, run the model and integrate it into your desired application, as sketched below. This process requires intermediate technical skills in machine learning and software setup.
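One concrete (hypothetical) way to do the last two steps is with llama-cpp-python, which runs quantized GGUF models on CPU or GPU; the model file name below is a placeholder for whichever quantized checkpoint you actually download.

```python
# Run a local quantized GGUF model with llama-cpp-python
# (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-7b-uncensored.Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,        # context window size
    n_gpu_layers=0,    # raise to offload layers to a GPU, if you have one
)

result = llm("Q: Name three uses for a paperclip.\nA:", max_tokens=64)
print(result["choices"][0]["text"])
```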
- Honey, I shrunk the LLM! A beginner's guide to quantization – and testing it
The article provides a beginner's guide to quantization for large language models (LLMs). Quantization involves reducing the precision of model weights, which can significantly decrease the memory footprint and increase performance on smaller hardware, like GPUs or CPUs. The guide covers various quantization methods, their benefits, and trade-offs, demonstrating how different levels of quantization affect model size and performance. It also includes practical steps for quantizing models using tools like Llama.cpp.
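The basic arithmetic behind quantization is small enough to show in full. The toy below applies symmetric per-tensor int8 quantization to a random weight matrix; llama.cpp's grouped k-quant formats are considerably more sophisticated, but the size-versus-error trade-off is the same.

```python
# Symmetric int8 quantization of a weight tensor, plus the round-trip error.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)   # pretend layer weights

scale = np.abs(w).max() / 127.0                  # map the max weight to 127
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_hat = q.astype(np.float32) * scale             # dequantized weights

print("bytes fp32:", w.nbytes, "-> int8:", q.nbytes)   # 64 -> 16
print("max abs error:", float(np.abs(w - w_hat).max()))
```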
Tools and Resources
- MIT Researchers Develop Generative AI Tool to Boost Database Searches
GenSQL, developed by MIT researchers, can be used by data analysts, healthcare professionals, business analysts, and researchers to perform complex data analyses without deep technical expertise. By uploading tabular datasets and probabilistic models, users can run advanced queries to gain insights, detect anomalies, and generate synthetic data. GenSQL integrates these elements to provide accurate results, making it easier for users to analyze data and make informed decisions. This tool is particularly beneficial for sensitive areas like healthcare, where data privacy is crucial.
- Artists are taking things into their own hands to protect their work from generative AI - ABC News
Artists can protect their work from generative AI using tools like Glaze and Nightshade, developed by the University of Chicago. Glaze modifies images subtly to prevent AI models from mimicking an artist's style, while Nightshade introduces "poison pills" into data to disrupt AI training on unlicensed images. These tools are available on their respective project websites: artists can upload their images, and the tools will process them to provide the desired protection. More details can be found on the Glaze and Nightshade project pages.
If you enjoyed this newsletter, please comment and share. If you would like to discuss a partnership or invite me to speak at your company or event, please DM me.