AI - Monday, October 28, 2024: Commentary with Notable and Interesting News, Articles, and Papers
Robert Sutor
Quantum Computing and AI, but not necessarily together: Tech Leader/Ph.D., Non-Executive Director, Author, Advisor, Pundit, Keynote Speaker, Analyst, Professor, Cat Lover
Commentary and a selection of the most important recent news, articles, and papers about AI.
Today’s Brief Commentary
Today’s links provide several points of view on topics related to Generative AI. Are open source LLMs (large language models) better than proprietary ones? In fact, is “open source” the right way of making these models available versus a Creative Commons license?
Music and the arts are highly contested areas when it comes to Generative AI. Who owns the derived intellectual property? Were you permitted to train a model on the original work in the first place? One article today discusses making it harder for AI to learn from a musical composition. Another, from Google, makes it easy to create music.
What is happening at the US Department of Defense regarding Generative AI? Will its Task Force Lima deliver great value, or will the task force be shut down? Whatever the plan was, the upcoming US elections may change many such programs.
Packt is running a special sale until November 1 on my quantum computing book Dancing with Qubits, Second Edition. The book has twenty 5-star ratings, and the paperback will be 20% off between now and the end of the month.
Music and the Arts
New tool makes songs unlearnable to generative AI | Tech Xplore
Author: Izzie Gall
(Wednesday, October 23, 2024) “This summer, Tennessee became the first state in the US to legally protect musical artists' voices from unauthorized generative AI use. While he applauded that first step, Liu saw the need to go further—protecting not just vocal tracks, but entire songs. In collaboration with his Ph.D. student Syed Irfan Ali Meerza and Lehigh University's Lichao Sun, Liu has developed HarmonyCloak, a new program that makes musical files essentially unlearnable to generative AI models without changing how they sound to human listeners. They will present their research at the 46th IEEE Symposium on Security and Privacy (S&P) in May 2025.”
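The excerpt does not spell out how HarmonyCloak actually works, but the general recipe behind "unlearnable" audio is usually an imperceptibly small perturbation chosen to spoil a surrogate model's training signal. Here is a minimal sketch of that general idea only, not the paper's method, assuming NumPy and a user-supplied gradient function (`grad_fn` is a hypothetical hook for whatever surrogate model you pick):

```python
# Generic "unlearnable audio" sketch, NOT HarmonyCloak's algorithm.
# Idea: add a perturbation, bounded well below the signal's peak level,
# that increases a surrogate model's training loss on the treated audio.

import numpy as np

def make_unlearnable(signal, grad_fn, eps_db=-40.0, steps=100, lr=1e-3):
    """signal: 1-D float array of audio samples.
    grad_fn(audio) -> gradient of a surrogate training loss with respect
    to the samples (hypothetical; depends on the model you choose)."""
    peak = np.max(np.abs(signal))
    eps = peak * 10 ** (eps_db / 20.0)   # crude amplitude bound, not a real psychoacoustic model
    delta = np.zeros_like(signal)
    for _ in range(steps):
        g = grad_fn(signal + delta)       # gradient ascent on the surrogate loss
        delta = np.clip(delta + lr * np.sign(g), -eps, eps)
    return signal + delta
```

How the authors shape the perturbation so that human listeners truly cannot hear it is presumably the paper's contribution; those details are in the S&P 2025 publication, not here.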
New generative AI tools open the doors of music creation - Google DeepMind
(Wednesday, October 23, 2024) “Over the past year, we’ve been working in close collaboration with partners across the music industry through our Music AI Incubator and more. Their input has been guiding our state-of-the-art generative music experiments, and helping us ensure that our new generative AI tools responsibly open the doors of music creation to everyone. Today, in partnership with Google Labs, we're releasing a reimagined experience for MusicFX DJ that makes it easier for anyone to generate music, interactively, in real time.”
AI Chipsets and Infrastructure
Lightmatter's $400M round has AI hyperscalers hyped for photonic data centers | TechCrunch
Author: Devin Coldewey
(Wednesday, October 16, 2024) “Photonic computing startup Lightmatter has raised $400 million to blow one of modern data centers’ bottlenecks wide open. The company’s optical interconnect layer allows hundreds of GPUs to work synchronously, streamlining the costly and complex job of training and running AI models.”
Generative AI and Models
Open Source in the Age of LLMs
Author: Vicki Boykis
(Wednesday, April 10, 2024) “Like our parent company, Mozilla.ai’s founding story is rooted in open-source principles and community collaboration. Since our start last year, our key focus has been exploring state-of-the-art methods for evaluating and fine-tuning large-language models (LLMs), including continued open-source LLM evaluation experiments and establishing our GPU cluster’s infrastructure.”
Evolving the Responsible Generative AI Toolkit with new tools for every LLM | Google
Author: Ryan Mullins
(Wednesday, October 23, 2024) “Building AI responsibly is crucial. That's why we created the Responsible GenAI Toolkit, providing resources to design, build, and evaluate open AI models. And we're not stopping there! We're now expanding the toolkit with new features designed to work with any LLMs, whether it's Gemma, Gemini, or any other model. This set of tools and features empower everyone to build AI responsibly, regardless of the model they choose.”
The enterprise verdict on AI models: Why open source will win | VentureBeat
Author: Matt Marshall
(Thursday, October 24, 2024) “While closed models like OpenAI's GPT-4 dominated early adoption, open source models have since closed the gap in quality, and are growing at least as quickly in the enterprise, according to multiple VentureBeat interviews with enterprise leaders.”
Questions on DOD’s plans for generative AI swirl as Task Force Lima's possible sunset nears | DefenseScoop
Author: Brandi Vincent
(Friday, October 25, 2024) “However, a range of questions regarding Lima’s latest progress and outputs to date, the hefty volume of algorithms and use cases it’s been exploring, and the timeline for the task force’s potential decommissioning continue to linger as the deadline for its sunset approaches.”
Technical Papers, Articles, and Preprints
[2410.18856] Demystifying Large Language Models for Medicine: A Primer
Authors: Jin, Qiao; Wan, Nicholas; Leaman, Robert; Tian, Shubo; Wang, Zhizheng; Yang, Yifan; Wang, Zifeng; Xiong, Guangzhi; Lai, Po-Ting; ...; and Lu, Zhiyong
(Thursday, October 24, 2024) “Large language models (LLMs) represent a transformative class of AI tools capable of revolutionizing various aspects of healthcare by generating human-like responses across diverse contexts and adapting to novel tasks following human instructions. Their potential application spans a broad range of medical tasks, such as clinical documentation, matching patients to clinical trials, and answering medical questions. In this primer paper, we propose an actionable guideline to help healthcare professionals more efficiently utilize LLMs in their work, along with a set of best practices. This approach consists of several main phases, including formulating the task, choosing LLMs, prompt engineering, fine-tuning, and deployment. We start with the discussion of critical considerations in identifying healthcare tasks that align with the core capabilities of LLMs and selecting models based on the selected task and data, performance requirements, and model interface. We then review the strategies, such as prompt engineering and fine-tuning, to adapt standard LLMs to specialized medical tasks. Deployment considerations, including regulatory compliance, ethical guidelines, and continuous monitoring for fairness and bias, are also discussed. By providing a structured step-by-step methodology, this tutorial aims to equip healthcare professionals with the tools necessary to effectively integrate LLMs into clinical practice, ensuring that these powerful technologies are applied in a safe, reliable, and impactful manner.”
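As one concrete illustration of the "prompt engineering" phase the primer walks through, here is a minimal clinical question-answering prompt with an explicit instruction, grounding context, and a required output format. The client library and model name are illustrative choices on my part, not anything prescribed by the paper:

```python
# Illustrative only: a structured prompt for a clinical question answered
# strictly from a supplied patient note, sent through the OpenAI Python
# client (any chat-completion API could be substituted).

from openai import OpenAI

PROMPT_TEMPLATE = """You are a clinical assistant. Answer using ONLY the patient note below.
If the note does not contain the answer, reply "Not documented."

Patient note:
{note}

Question: {question}
Answer (one short paragraph, quoting the relevant sentence):"""

def ask(note: str, question: str, model: str = "gpt-4o-mini") -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # keep answers as repeatable as possible for review
        messages=[{"role": "user",
                   "content": PROMPT_TEMPLATE.format(note=note, question=question)}],
    )
    return response.choices[0].message.content
```

The later phases the abstract lists, such as fine-tuning, deployment, regulatory compliance, and monitoring for fairness and bias, would build on this kind of scaffolding.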