AI DIGEST: NEWS & TOOLS 15 January 2025
Researchers open source Sky-T1, a ‘reasoning’ AI model that can be trained for less than $450

So-called reasoning AI models are becoming easier and cheaper to develop. On Friday, NovaSky, a team of researchers based out of UC Berkeley’s Sky Computing Lab, released Sky-T1-32B-Preview, a reasoning model that’s competitive with an earlier version of OpenAI’s o1 on a number of key benchmarks. Sky-T1 appears to be the first truly open-source reasoning model in the sense that it can be replicated from scratch: the team released both the dataset used to train it and the necessary training code.
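For anyone who wants to try the model, here is a minimal inference sketch using the Hugging Face transformers library. The repo id, chat-template usage, and generation settings are assumptions based on how similar releases are typically published, not the NovaSky team's documented quick-start, and a 32B model needs substantial GPU memory (or aggressive quantization) to run.

```python
# Minimal sketch (assumptions noted): load Sky-T1-32B-Preview and ask it a question.
# "NovaSky-AI/Sky-T1-32B-Preview" is the assumed Hugging Face repo id; verify against
# the NovaSky release notes, which also point to the training data and training code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NovaSky-AI/Sky-T1-32B-Preview"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to cut memory use
    device_map="auto",           # spread layers across available GPUs
)

messages = [{"role": "user", "content": "A train covers 120 km in 1.5 hours. What is its average speed?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning models emit long chains of thought, so leave room for plenty of new tokens.
outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```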
This is an exciting step forward in making advanced AI capabilities more accessible! Sky-T1 shows how innovation and affordability can go hand in hand, unlocking real opportunities for businesses and research. Companies and even individuals will be able to train their own models for specific tasks on their own datasets, which could become a new way of offering unique insights and expertise to clients and professionals!
41% of companies worldwide plan to reduce workforces by 2030 due to AI

Artificial intelligence is coming for your job: 41% of employers intend to downsize their workforce as AI automates certain tasks, a World Economic Forum survey showed. Of the hundreds of large companies surveyed around the world, 77% also said they were planning to reskill and upskill their existing workers between 2025 and 2030 to work better alongside AI, according to findings published in the WEF’s Future of Jobs Report. But unlike the previous, 2023 edition, this year’s report did not say that most technologies, including AI, were expected to be “a net positive” for job numbers.
AI is clearly reshaping the workforce, and these numbers highlight both the opportunities and the challenges ahead. While it’s encouraging to see companies focusing on reskilling employees, the potential for job losses is a reminder of how important it is to strike a balance. Let’s harness AI to empower people and create new possibilities, not just to cut costs at the expense of whole industries!
AI tools may soon manipulate people’s online decision-making, say researchers

Artificial intelligence (AI) tools could be used to manipulate online audiences into making decisions – ranging from what to buy to whom to vote for – according to researchers at the University of Cambridge. The paper highlights an emerging marketplace for “digital signals of intent”, known as the “intention economy”, in which AI assistants understand, forecast, and manipulate human intentions and sell that information on to companies that can profit from it.
This is a thought-provoking development in AI. While the potential for personalized insights is exciting, the risk of manipulating people’s decisions raises serious ethical questions: what if it is used to push conspiracy theories or particular political agendas? It’s a reminder that as we embrace AI’s benefits, we must stay vigilant about transparency, privacy, and ethical boundaries.
AI Could Generate 10,000 Malware Variants, Evading Detection in 88% of Cases

Cybersecurity researchers have found that it's possible to use large language models (LLMs) to generate new variants of malicious JavaScript code at scale in a manner that can better evade detection. "Although LLMs struggle to create malware from scratch, criminals can easily use them to rewrite or obfuscate existing malware, making it harder to detect," Palo Alto Networks Unit 42 researchers said in a new analysis. "Criminals can prompt LLMs to perform transformations that are much more natural-looking, which makes detecting this malware more challenging."
This highlights the double-edged nature of AI advances. While AI brings incredible potential, its misuse to mass-produce hard-to-detect malware is a stark reminder of how important strong cybersecurity measures are. Let’s make sure innovation is paired with caution, so individuals and businesses are protected from these emerging threats.
Orbit is Mozilla's wild attempt to turn AI into a privacy-focused summarization service

Mozilla's latest AI initiative is called Orbit, a Firefox extension designed to provide concise summaries of emails, web pages, and other lengthy documents. Orbit uses the Mistral 7B large language model and works on popular websites such as Gmail, Wikipedia, The New York Times, YouTube, and more. Users can ask Orbit for summaries or for additional information about content, and the AI will gather relevant context (images, text, videos) to provide an answer.
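To make the summarization piece concrete, here is a rough sketch of what prompting Mistral 7B to summarize extracted page text can look like with the Hugging Face transformers pipeline. This is not Orbit's actual implementation or API; the model repo id (mistralai/Mistral-7B-Instruct-v0.3) and the prompt are illustrative assumptions, and the instruct model may require accepting its license on Hugging Face.

```python
# Illustrative sketch only, not Orbit's code: summarize extracted page text with a
# Mistral 7B instruct model via the transformers text-generation pipeline.
from transformers import pipeline

summarizer = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.3",  # assumed model id
    torch_dtype="auto",
    device_map="auto",
)

page_text = "..."  # placeholder: text extracted from the email or web page

messages = [{
    "role": "user",
    "content": f"Summarize the following page in three short bullet points:\n\n{page_text}",
}]

# return_full_text=False keeps only the newly generated summary, not the prompt.
result = summarizer(messages, max_new_tokens=256, return_full_text=False)
print(result[0]["generated_text"])
```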
Mozilla's Orbit is an exciting step toward making AI more privacy-focused and user-friendly. A tool that summarizes content without relying on constant cloud access feels like a win for both productivity and data security. It's great to see innovation that respects privacy – something I hope to see more of as AI evolves!