This issue (#28) of the Gen AI for Business newsletter covers key insights and tools on Generative AI for business, including the latest news, research, trends, strategies, business impact, and innovations in the B2B sector.
You will notice that the sections are now more streamlined, so you can get directly to an area (news, models, case studies, etc.) that is of interest to you with enhanced readability. The improved section breaks and headlines make it more skimmable (I do not mind ;), catering to readers who want to quickly absorb key insights. Let me know what you think about the new structure.
What stood out to me this week is that many models are increasingly becoming agentic, meaning they are evolving from static tools that generate outputs to systems capable of taking autonomous actions. This shift aligns with advancements in generative AI, where models not only provide insights but also automate workflows, interact with systems, and make decisions in real time.
Several trends reflect this evolution:
- Claude 3.5 Sonnet's Computer Use Capability – Anthropic's upgraded Sonnet model is now equipped to navigate screens, click buttons, and type, enabling it to carry out tasks autonomously across digital environments.
- Microsoft's Copilot Studio – Copilot Studio allows companies to build autonomous agents that automate business processes, from customer service to supply chain management, shifting from simple assistants to dynamic actors that execute tasks.
- IBM Granite 3.0 – The new Granite models provide infrastructure for agentic AI applications with guardrail capabilities, enabling enterprises to safely deploy automated workflows powered by large language models.
The underlying trend shows that companies are leveraging AI models not just for analysis or predictions but for executing actions, streamlining operations, and interacting with other software autonomously. This transition toward agentic AI reflects a growing focus on automation, cost efficiency, and scalability, as these systems can now act on data insights independently, reducing the need for human intervention at every step.
This shift also introduces challenges related to security, governance, and ethical considerations, as organizations need to ensure these autonomous models operate safely within predefined boundaries.
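The pattern behind these systems can be illustrated with a minimal, hypothetical agent loop: the model proposes an action, a guardrail checks it against predefined boundaries, and only approved actions are executed. This is a sketch only; every name below is invented for illustration, and `propose_action` stands in for what would be a real model call.

```python
# Minimal sketch of an agentic loop with a guardrail check.
# All names are illustrative; in a real system, propose_action
# would be a call to an LLM rather than the stub below.

ALLOWED_ACTIONS = {"read_report", "draft_email"}  # predefined boundaries

def propose_action(task: str) -> dict:
    """Stand-in for a model call that maps a task to an action request."""
    if "summary" in task:
        return {"action": "read_report", "target": "q3_sales.pdf"}
    return {"action": "delete_records", "target": "crm"}  # a risky proposal

def guardrail(request: dict) -> bool:
    """Reject any proposed action outside the approved set."""
    return request["action"] in ALLOWED_ACTIONS

def run_agent(task: str) -> str:
    request = propose_action(task)
    if not guardrail(request):
        return f"blocked: {request['action']}"
    return f"executed: {request['action']} on {request['target']}"

print(run_agent("produce a summary of Q3 sales"))  # approved path
print(run_agent("clean up old customer data"))     # blocked path
```

The key design point is that the guardrail sits between proposal and execution, so autonomy never bypasses the predefined boundary.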
The trend towards agentic AI—where models like Claude 3.5, Microsoft Copilot, and others actively perform tasks—profoundly impacts Gen AI apps. Here’s how:
- Embedded Autonomy: Gen AI applications are moving from passive text generation tools to active agents that can interact with systems, interfaces, and workflows autonomously. For example, Claude 3.5’s ability to navigate interfaces and run code directly within apps transforms traditional apps into dynamic agents that perform complex tasks.
- Action-Oriented Workflows: These agentic AI systems integrate seamlessly into Gen AI apps to execute business processes, such as generating insights, automating data entry, managing sales leads, or optimizing supply chains without needing human intervention. Microsoft’s Copilot agents, for instance, not only respond to queries but proactively manage tasks across business systems.
- Enhanced App Capabilities: As AI agents become more prevalent, Gen AI applications will no longer be limited to generating outputs (like text, reports, or code). They will actively engage with other systems, enabling real-time decision-making, automated workflows, and operational efficiencies—for example, processing financial data or monitoring supplier communications independently.
- Scalable, Modular AI: Applications can now deploy agents with modular capabilities, which are customized to specific business needs. This means enterprises can build a constellation of agents within their Gen AI apps—some for customer service, some for coding, and others for financial analysis—each capable of acting autonomously within predefined parameters.
- Reduced Complexity for Users: For businesses and users, these advancements translate to easier integration and minimal setup. Enterprises can implement Gen AI apps that self-manage and adapt, making them more efficient and responsive with less hands-on management.
- Long-Term Impact: As agentic AI continues to evolve, we’re seeing the groundwork being laid for future Gen AI apps that blur the lines between software and independent systems—creating AI that can not only generate but act upon information in real-time. This shift makes Gen AI more impactful by embedding decision-making and task execution at the core of enterprise operations.
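The "constellation of agents" idea above can be sketched as a simple registry: each agent is registered for one business capability and scoped to the systems it is allowed to touch. All agent names, capabilities, and system names here are hypothetical, chosen only to illustrate the modular pattern.

```python
# Hypothetical sketch of modular agents acting within predefined
# parameters: a registry routes requests by capability, and each
# agent is scoped to an explicit set of systems it may access.

from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    capability: str
    allowed_systems: set = field(default_factory=set)

    def handle(self, request: str, system: str) -> str:
        # Scope check: an agent may only act on systems it was granted.
        if system not in self.allowed_systems:
            return f"{self.name}: refused access to {system}"
        return f"{self.name}: handled '{request}' via {system}"

# One agent per business capability, each with its own scope.
registry = {
    "support": Agent("support-agent", "support", {"helpdesk"}),
    "finance": Agent("finance-agent", "finance", {"ledger", "erp"}),
}

def dispatch(capability: str, request: str, system: str) -> str:
    agent = registry.get(capability)
    if agent is None:
        return f"no agent registered for {capability}"
    return agent.handle(request, system)
```

For example, `dispatch("finance", "reconcile invoices", "ledger")` succeeds, while routing the same request through the support agent is refused, since the ledger is outside its scope.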
If you enjoyed this letter, please leave a like or a comment and share! Knowledge is power.
Eugina, award-winning CMO, new market category creator in telco, innovator with 12 patents in Open RAN, RIC, and AI
Models
Meta’s AI research group, FAIR, launched cutting-edge models, including SAM 2.1 for object segmentation, Spirit LM for expressive speech-to-text conversion, and new security tools for post-quantum cryptography. IBM introduced Granite 3.0, enterprise-grade models focusing on performance and safety with seamless integration via NVIDIA’s infrastructure. Anthropic's Claude 3.5 Sonnet and Haiku models offer advanced automation, with a public beta enabling AI to interact with computers like humans. JetBrains released Mellum, an AI tool for fast code completion in its IDEs, competing with GitHub Copilot. H2O.ai unveiled lightweight OCR models that outperform larger competitors, targeting document-heavy enterprises. OpenAI plans to launch a powerful new model, Orion, in collaboration with Microsoft by December. Cohere AI's Aya Expanse models, covering 23 languages, outperform larger alternatives and drive multilingual research by promoting accessibility through Kaggle and Hugging Face.
- New Meta AI Models released - SAM 2.1, Spirit LM, MEXMA, and More Meta’s AI Research (FAIR) introduced several advanced models designed to push boundaries in fields such as speech processing, material science, and AI security. The Segment Anything Model (SAM 2.1) enhances object tracking and segmentation, useful for video editing, autonomous vehicles, and VFX. Spirit LM bridges speech-to-text conversion with expressiveness, ideal for real-time translation and virtual assistants. The Layer Skip technique speeds up large language models by selectively activating layers, enabling faster chatbots and improved data analysis. The Salsa project addresses vulnerabilities in post-quantum cryptography, fortifying systems against AI-powered attacks. Meta Open Materials 2024 accelerates material discovery with open datasets, driving breakthroughs in electronics and renewable energy. MEXMA enhances cross-lingual translation with advanced token-level processing, and Meta Lingua optimizes language model training with reduced computational demands. Lastly, the Self-Taught Evaluator generates synthetic preference data for faster model development, reducing reliance on human annotations. Meta’s commitment to open-source collaboration empowers global research, accelerating innovation across industries. Learn more on Meta’s website: Sharing new research, models, and datasets from Meta FAIR and access models here: Download Llama
- Top OpenAI o1 Alternatives in 2024 provides a comparison of various large language models (LLMs), including alternatives to OpenAI’s models like Llama 3.2, Claude 3, and Smaug-72B. Each model offers unique capabilities, such as multilingual support, math proficiency, or fast inference speeds, catering to different applications like code generation and conversation. Many are open-source, fostering innovation and customization. The guide helps users select the best LLM based on performance, pricing, and use case.
- IBM’s New Granite 3.0 Generative AI Models Are Small, Yet Highly Accurate and Efficient | NVIDIA Technical Blog IBM’s Granite 3.0 models, released as a new generation of enterprise-grade LLMs, include dense models (8B and 2B) and Mixture of Experts (MoE) variants (3B-A800M, 1B-A400M). These models prioritize efficiency, trust, and performance, supporting advanced functions like text generation, classification, summarization, chatbots, and more. Integrated with NVIDIA NIM microservices for seamless deployment across clouds or on-premises, Granite 3.0 emphasizes speculative decoding to enhance inference speed. Additionally, Granite Guardian variants ensure robust safety, mitigating risks like bias and inappropriate content, making these models ideal for enterprise-scale applications with a focus on governance and productivity.
Enterprises can leverage IBM’s Granite 3.0 models by integrating them into their workflows for tasks like customer service automation, legal document processing, and text generation. These models are optimized for easy deployment through NVIDIA NIM microservices, allowing organizations to run them in cloud, data center, or on-prem environments. While pre-trained models are available, enterprises can fine-tune Granite models using their proprietary data to better align with business-specific needs, such as customer personalization or compliance tasks, ensuring relevant outputs that enhance operations without requiring extensive retraining from scratch.
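Since NIM microservices expose an OpenAI-compatible API, integrating a Granite model into a workflow like customer-service classification can look roughly like the sketch below. The base URL and model id are placeholders, not verified values; check your own deployment for the correct ones.

```python
# Sketch of calling a Granite model served behind an OpenAI-compatible
# endpoint, as NVIDIA NIM microservices provide. The endpoint URL and
# model id below are placeholders for illustration only.

import json
from urllib import request as urlrequest

def build_chat_payload(model: str, user_text: str, temperature: float = 0.2) -> dict:
    """Assemble a chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
        "temperature": temperature,
    }

def classify_ticket(base_url: str, model: str, ticket: str) -> str:
    """Send a support ticket for classification (requires a running endpoint)."""
    body = build_chat_payload(model, f"Classify this support ticket: {ticket}")
    req = urlrequest.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlrequest.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Build (but do not send) a request for a hypothetical Granite deployment.
payload = build_chat_payload("ibm/granite-3.0-8b-instruct", "Summarize our returns policy.")
print(payload["model"])
```

Because the interface is OpenAI-compatible, the same request shape works whether the model runs in the cloud, a data center, or on-premises; only the base URL changes.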
- Introducing computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku \ Anthropic This announcement on Claude 3.5 Sonnet and Claude 3.5 Haiku highlights the capabilities of these AI models as leading-edge tools for coding and automation. Claude 3.5 Sonnet excels in agentic coding, tool use, and multi-step processes, outperforming prior models and competitors on industry benchmarks. Claude 3.5 Haiku balances performance, affordability, and speed, positioning itself as ideal for real-time user-facing applications and data-driven tasks. The computer use capability in public beta marks a groundbreaking development, enabling the AI to interact with computers like a human by navigating interfaces, typing, and clicking, which allows developers to automate complex workflows. Early adopters such as Replit and DoorDash are leveraging these advancements to streamline operations. These developments reflect the broader potential of AI to automate sophisticated workflows and deliver value across various industries. See a demo of computer use here; pretty cool: Claude | Computer use for automating operations
- JetBrains launches Mellum, an LLM built to assist developers with code completion - SiliconANGLE Mellum is narrowly tailored for coding tasks, designed to streamline and speed up code writing through tight integration with JetBrains IDEs. It competes with AI-powered coding assistants such as GitHub Copilot (Microsoft), Tabnine, and Amazon CodeWhisperer, which also help developers write, debug, and optimize code directly within integrated development environments (IDEs). Mellum differentiates itself by focusing on JetBrains' IDE ecosystem and providing faster, context-aware code suggestions, emphasizing reduced latency and high acceptance rates. Developers using Mellum have seen up to a 33% reduction in code suggestion time and a 40% acceptance rate of its suggestions. Cloud-based completion requires an AI Pro subscription, but users of JetBrains' IntelliJ-based IDEs receive local AI-powered completion by default. The release aligns with JetBrains' strategy to maintain a competitive edge in the AI coding assistant space.
- And another specialized model is launched. Small but mighty: H2O.ai's new AI models challenge tech giants in document analysis | VentureBeat H2O.ai introduced two new vision-language models, H2OVL Mississippi-2B and H2OVL Mississippi-0.8B, aimed at improving optical character recognition (OCR) and document processing tasks. Despite their smaller size, these models outperform larger counterparts from major tech companies, particularly excelling in OCRBench's text recognition benchmark. The company emphasizes efficiency and scalability, positioning the models as cost-effective solutions for enterprise document AI that enable faster, more accurate document handling. Freely available on Hugging Face, the models can be fine-tuned for domain-specific needs, giving businesses flexibility. H2O.ai’s strategic focus on smaller, high-performance models addresses long-standing challenges in document analysis, such as poor-quality scans and complex handwriting, and offers enterprises a lightweight, sustainable alternative to the larger AI systems of major tech giants. These targeted models excel at extracting information from structured and unstructured documents, improving data extraction, proposal generation, and document management with minimal computational overhead.
- OpenAI plans Orion AI model release for December - The Verge OpenAI plans to release its next AI model, codenamed Orion, by December 2024. Unlike previous releases such as GPT-4 and o1, Orion will first be available to close corporate partners rather than widely through ChatGPT. Microsoft's engineers are preparing to host the model on Azure, potentially as early as November, reinforcing the tight collaboration between Microsoft and OpenAI. However, OpenAI has denied the Orion code name, with spokesperson Niko Felix stating they plan to launch new technologies but not under the Orion label. Reports suggest Orion could be 100 times more powerful than GPT-4 and might contribute toward OpenAI's longer-term goal of developing artificial general intelligence (AGI). Training for the model, involving synthetic data generated by the o1 model, reportedly concluded in September. This model arrives at a time of internal change for OpenAI, following a $6.6 billion funding round and the departure of key executives, including CTO Mira Murati and Chief Research Officer Bob McGrew.
- Aya Expanse: Connecting Our World Cohere For AI has launched Aya Expanse, a state-of-the-art family of multilingual models designed to excel in 23 languages, offering competitive performance against industry-leading alternatives. Available in two variants—8 billion and 32 billion parameters—Aya Expanse aims to bridge language gaps in AI research. The models leverage advanced techniques like data arbitrage, multilingual preference training, and model merging to enhance general performance and safety across diverse linguistic and cultural settings. Aya Expanse outperforms larger models such as Gemma 2, Mistral, and Llama 3.1, highlighting its efficiency. The models are accessible on Kaggle and Hugging Face, ensuring researchers worldwide can experiment with and deploy them. Cohere’s initiative aligns with its commitment to advancing multilingual AI and collaborating with over 3,000 researchers globally.
News
Midjourney will release a web tool enabling users to edit images with generative AI, initially available to a select group with AI and human moderation to prevent misuse. The platform faces ongoing copyright lawsuits and criticism for lacking provenance tracking, while the release coincides with rising deepfake concerns. Perplexity introduced a unified search tool for internal files and web content, and OpenAI announced a Windows desktop app. Meanwhile, Character.ai is shifting from developing large models due to financial constraints, focusing instead on conversational platforms, following Google’s partial acquisition of its operations. Apple delayed the iPad 11 release, likely due to chip allocation challenges, and its evolving AI ambitions are reflected in new tools like “Apple Intelligence” and recent ChatGPT integration into Siri with iOS 18.2, blending internal and external technologies for personalized experiences.
- Midjourney plans to let anyone on the web edit images with AI | TechCrunch Midjourney will release an upgraded web tool next week, allowing users to edit uploaded images with generative AI, including retexturing objects via captions. Initial access will be restricted to a limited community with human and AI moderation to prevent misuse. Despite efforts to mark AI-generated content using IPTC metadata, Midjourney faces criticism for not adopting C2PA provenance tracking and for its involvement in copyright disputes. The release comes amid rising concerns over deepfakes, which have increased by 900% this year, with several U.S. states enacting laws to regulate AI-aided impersonation.
The platform is involved in ongoing lawsuits related to copyright infringement, with allegations that it used artists' work without permission to train its AI models. However, recent rulings have dismissed some claims against Midjourney, while others are proceeding against Stability AI, which developed the technology used by platforms like Midjourney and DeviantArt. Whether regulatory pressure or public backlash will impact its long-term operation remains to be seen.
- https://www.msn.com/en-us/money/other/characterai-leaves-llm-building-behind-due-to-expense-report/ Character.ai has decided to abandon the race to develop larger language models, citing the high costs of training frontier models. CEO Dominic Perella explained that it’s financially challenging for startups to compete with giants like OpenAI and Google. Instead, the company will focus on enhancing its platform, which offers conversational chatbots with 20 million active users, earning revenue through subscriptions. This shift follows Google’s $2.7 billion acquisition of Character.ai’s models and part of its staff.
- Apple didn't release the iPad 11 this year, and this is probably why - 9to5Mac Apple did not release the iPad 11 this year, likely due to challenges aligning the device with its newer technologies, particularly Apple Intelligence. Historically, Apple refreshes the base iPad every couple of years with a chipset that’s about two generations old. However, adding features like Apple Intelligence to the device this year would require significant hardware upgrades—something that might be impractical for the budget-friendly iPad model. Current speculation points to supply constraints and strategic decisions around chip usage. For example, using the A16 chip would limit compatibility with Apple Intelligence, while newer chips like the A17 Pro are reserved for other products, such as the iPad mini and iPhone Pro models. This suggests Apple is focusing on optimizing production capacity and avoiding overextending its 3nm chip supply. Looking ahead, industry analysts like Ross Young and Mark Gurman suggest that the 11th-generation iPad will likely arrive in spring 2025, potentially equipped with the A18 chip. This timing aligns with production cycles, allowing Apple to introduce newer hardware while reserving premium components for its high-end devices.
- And here is an inside look at Apple's AI testing applications. Apple’s internal work on AI culminated in the development of "Apple Intelligence," announced at WWDC 2024 and now rolling out in phases through iOS 18 and macOS updates. To ensure the quality of these AI tools, Apple conducted extensive testing through proprietary apps, such as 1UP and Smart Replies Tester, which focused on text generation, document analysis, and AI-powered response suggestions. The 1UP app, for instance, tested features for generating summaries and understanding user context, even referencing Apple’s in-house language model Ajax. Other features tested include Megadome, an internal tool that aggregates user data to demonstrate how Siri can better understand personal context, reflecting Apple’s AI strategy of providing more personalized experiences. Although the company developed some standalone AI tools, the recent integration of ChatGPT into Siri with iOS 18.2 suggests that Apple is blending proprietary and external technologies to achieve its AI ambitions.
Regulatory
Governor Glenn Youngkin’s Executive Order 30 outlines Virginia’s framework for the safe use of AI, focusing on ethics, data protection, and agency compliance, while advancing AI tools in education and law enforcement. A Biden-Harris memorandum emphasizes AI’s role in national security, fostering partnerships, risk management, and U.S. leadership in global AI governance. The National Security Memorandum further integrates AI into defense, addressing cybersecurity risks and expanding research opportunities. Meanwhile, New York’s Department of Financial Services warns against AI-enabled cyber threats, urging businesses to adapt their security frameworks. Congressional leaders are negotiating AI legislation addressing risks like misinformation, with mixed progress due to political divisions. The U.S. plans new rules to curb AI investments in China, aiming to limit technological support for China’s military. In health research, the revised Declaration of Helsinki calls for ethical AI practices, especially in managing AI-related risks. The FDA issued guidance for AI regulation in healthcare, focusing on performance monitoring and collaboration with international frameworks. Finally, the UK’s competition regulator is investigating Alphabet's investment in AI startup Anthropic, reflecting increased scrutiny over Big Tech’s control in the AI sector.
- Governor Glenn Youngkin Executive Order 30 (2024) - Safe Use of Artificial Intelligence Executive Order 30, issued by Governor Glenn Youngkin, outlines the framework for the safe use of artificial intelligence (AI) across Virginia’s state government. It mandates the Virginia Information Technologies Agency (VITA) to publish AI policy and IT standards, which all Executive Branch agencies must follow. The directive focuses on ethical AI use, data protection, mandatory approval processes, and third-party risk mitigation. Additionally, the Department of Education is tasked with creating AI-related tools and resources for educational institutions, while law enforcement agencies must develop appropriate AI usage standards in collaboration with the Office of the Attorney General.
- Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence | The White House The Memorandum on Advancing the United States' Leadership in Artificial Intelligence (AI) emphasizes the importance of harnessing AI for national security objectives, ensuring safety and trust in AI systems, and promoting U.S. leadership in AI innovation. Key takeaways include: 1. National Security and AI: AI is framed as essential for maintaining national security, with the U.S. striving to stay ahead of competitors like China in developing cutting-edge AI tools. There is a focus on integrating AI into defense, intelligence, and cybersecurity systems to counter adversarial threats and safeguard human rights. 2. Government-Industry Collaboration: The memorandum highlights the role of private-sector partnerships in advancing AI development. AI leadership will depend on investments in infrastructure, computational power, and attracting global AI talent. Agencies like the Department of Energy (DOE) and the National Science Foundation (NSF) are tasked with fostering innovation and providing computational resources to researchers and institutions. 3. AI Governance and Risk Management: To responsibly deploy AI, the memorandum mandates the creation of governance frameworks ensuring AI safety, privacy, and accountability. It also establishes processes for monitoring potential risks—like misuse of AI in cybersecurity or human rights abuses—and ensuring transparency where possible. 4. Global AI Governance: The U.S. aims to shape international AI norms aligned with democratic values and human rights, working closely with allies and partners. It will advocate for agreements that limit the misuse of AI, promote safe deployment, and establish AI governance standards globally. 5. Talent and Infrastructure: To maintain a competitive edge, the government prioritizes fast-tracking AI talent acquisition through immigration and visa programs and supporting AI research. Building advanced infrastructure, including semiconductor technologies and computational facilities, is also a key objective. 6. Use of AI across Agencies: The memorandum calls for inter-agency cooperation, including AI development for national security systems (NSS), streamlined procurement processes, and consistent use of AI models across departments. Special AI oversight roles and Chief AI Officers will be instituted across various agencies to ensure responsible deployment. Overall, this policy serves as a blueprint for the U.S. to consolidate its AI leadership while addressing risks, promoting trust, and ensuring that AI advances align with national interests and democratic values.
- FACT SHEET: Biden-Harris Administration Outlines Coordinated Approach to Harness Power of AI for U.S. National Security | The White House The Biden-Harris Administration has released the first-ever National Security Memorandum (NSM) on Artificial Intelligence, marking a critical step toward integrating AI into national security strategies while upholding democratic values. The NSM builds on previous actions, including the CHIPS Act and an executive order promoting AI leadership and safety. It outlines a coordinated approach to harness AI for national security purposes, focusing on three key areas: leading the development of safe, trustworthy AI; leveraging AI technologies to advance national security missions; and fostering international consensus on responsible AI governance. 1. The memorandum emphasizes securing U.S. technological infrastructure, including the chip supply chain, and combating foreign espionage targeting AI innovations. It strengthens partnerships between the new AI Safety Institute, national security agencies, and the intelligence community to monitor and mitigate AI risks. 2. The NSM also advocates for expanded research opportunities through the National AI Research Resource, engaging academic, civil society, and business stakeholders to democratize AI development beyond large corporations. 3. The NSM introduces governance frameworks to manage AI risks, ensuring transparency, privacy protections, and accountability in national security applications. It mandates streamlined AI procurement and fosters collaboration with non-traditional vendors to improve innovation. 4. The memorandum also promotes U.S. leadership in international AI governance by building on milestones like the International Code of Conduct on AI and partnerships formed at global AI summits, ensuring responsible AI use aligned with human rights and global stability. 
This initiative forms part of the administration's broader responsible innovation strategy, focusing on maintaining U.S. leadership in AI while developing ethical, transparent practices for its deployment across defense and intelligence sectors. Read the framework here: FRAMEWORK TO ADVANCE AI GOVERNANCE AND RISK MANAGEMENT IN NATIONAL SECURITY | AI.gov
- Industry Letter - October 16, 2024: Cybersecurity Risks Arising from Artificial Intelligence and Strategies to Combat Related Risks The New York State Department of Financial Services (DFS) issued a letter on October 16, 2024, addressing the cybersecurity risks posed by artificial intelligence (AI). It outlines threats like AI-enabled social engineering, enhanced cyberattacks, data theft, and vulnerabilities due to third-party dependencies. The DFS emphasizes that organizations must adapt their cybersecurity frameworks to mitigate AI-related risks by updating risk assessments, implementing multi-factor authentication (MFA), and ensuring data minimization practices. It also highlights the importance of training employees on identifying AI-driven threats, such as deepfakes, and urges third-party service provider (TPSP) management to ensure secure vendor practices. The letter reinforces existing requirements under 23 NYCRR Part 500 without imposing new regulations, focusing on using AI securely while leveraging its benefits in cybersecurity operations.
- Congressional leaders negotiating potential lame-duck deal to address AI concerns - POLITICO Top congressional leaders are negotiating a potential AI legislation package to address growing concerns over the technology’s impact on elections, misinformation, and national security. Senate Majority Leader Chuck Schumer is spearheading the effort, collaborating with bipartisan colleagues on AI-focused workforce training and research bills. However, sensitive topics like AI-generated misinformation and deepfakes may stall progress due to partisan differences. The AI package may be added to must-pass bills such as the National Defense Authorization Act or government funding legislation, which must clear before mid-December to avoid a shutdown. Schumer’s AI policy roadmap, developed with Sen. Martin Heinrich and other bipartisan leaders, has been key to these efforts, alongside a series of AI Insight Forums to educate lawmakers. The negotiations, however, face uncertainty as upcoming elections may shift political priorities, with Donald Trump opposing regulation and Vice President Kamala Harris advocating for stricter oversight. While Congress seeks to address AI risks, including identity theft and political interference, intra-party divisions and funding constraints may limit progress in the short term. Read the full report here: https://www.schumer.senate.gov/imo/media/doc/Roadmap_Electronic1.32pm.pdf
- US to curb AI investment in China soon | Reuters The U.S. is finalizing rules to restrict investments in AI, semiconductors, microelectronics, and quantum computing in China, aiming to prevent U.S. expertise from benefiting China’s military. These rules, expected before the November 5 election, will require U.S. investors to notify the Treasury Department of certain transactions. Exceptions include public securities, some partnerships, and debt financing. The final scope will clarify which technologies are restricted and how they will be monitored.
- The Revised Declaration of Helsinki—Considerations for the Future of Artificial Intelligence in Health and Medical Research | JAMA The 2024 revision of the Declaration of Helsinki (DoH) emphasizes ethical considerations for the use of AI in health and medical research. It highlights three key challenges: jurisdictional variability in data governance, differences in AI literacy among professionals and the public, and the unclear nature of current and future harms from AI technologies. The DoH urges researchers to navigate complex privacy laws across regions and address the public's limited understanding of AI. It also stresses the need for vigilance regarding the potential hidden harms and conflicts of interest associated with AI deployment, especially in marginalized communities. Effective implementation of these ethical guidelines requires foresight, resources, and a commitment from institutions to adapt to evolving AI-related risks.
- Regulating AI in Health Care: FDA Issues New Guidance The FDA has outlined a flexible approach for regulating AI in health care, emphasizing the need to coordinate efforts across industries and governments, including international frameworks like the EU AI Act. The agency highlighted the challenges of regulating large language models (LLMs) and the importance of life cycle management with ongoing performance monitoring. While the FDA has authorized nearly 1,000 AI-enabled medical devices, it has not yet approved an LLM and urges a balanced approach to ensure both safety and innovation.
- UK competition regulator to investigate Alphabet’s investment in Anthropic | TechCrunch The U.K.’s Competition and Markets Authority (CMA) is investigating Alphabet’s significant investments in Anthropic, a San Francisco-based AI startup, citing potential competitive concerns. Alphabet, through Google, invested $300 million in Anthropic last year, followed by $2 billion. While Amazon also invested $4 billion in Anthropic, the CMA concluded Amazon’s deal didn’t qualify for investigation under current merger rules. The probe reflects the growing scrutiny of Big Tech’s control over startups. A decision on whether the case advances to a deeper investigation is expected by December 19, 2024.
If you're wondering why they care and what they can do: the CMA can investigate Alphabet's investment to ensure it doesn't reduce competition in the U.K., even though Anthropic is based abroad. If the investment is found to pose risks, the CMA can impose conditions on Alphabet, such as limiting its influence over Anthropic, requiring changes to partnership terms, or blocking parts of the investment altogether. The CMA's actions are aimed at protecting competition in the U.K. market, preventing monopolistic behavior, and ensuring that British businesses and consumers aren't negatively affected.
Regional Updates
- Nvidia Deepens India Ties With Reliance Partnership, Expanding AI Reach Through Alliances Nvidia has partnered with Reliance Industries to supply Blackwell AI processors for a one-gigawatt data center in Gujarat. Additionally, Nvidia will provide Hopper AI chips to Tata Communications and Yotta Data Services for other data centers. During the announcement in Mumbai, Nvidia's CEO Jensen Huang emphasized India's potential to become a global AI exporter, citing the country's infrastructure, data, and extensive user base. Nvidia is also collaborating with Tech Mahindra on Indus 2.0, a new AI platform utilizing a Hindi-language model. Other Indian firms, including Wipro and Infosys, are leveraging Nvidia technology to enhance their AI solutions, aligning with Nvidia's strategy to strengthen ties with India as a global hub for AI innovation.
Partnerships
Databricks and AWS have deepened their partnership to boost generative AI capabilities using AWS Trainium chips and Databricks' Mosaic platform, enabling scalable, cost-efficient AI solutions across sectors like fintech and gaming. Meta has teamed up with Blumhouse Productions to explore generative AI in filmmaking with its Movie Gen model, involving renowned filmmakers and artists in AI-driven storytelling. Bain & Company expanded its collaboration with OpenAI to co-design AI-powered consulting solutions for retail and healthcare, establishing an OpenAI Center of Excellence. Qualcomm and Google’s partnership aims to transform in-car digital experiences with AI-powered voice assistants through Snapdragon Digital Chassis and Android Automotive OS. Meanwhile, Microsoft and OpenAI are renegotiating partnership terms, hiring Goldman Sachs and Morgan Stanley amid discussions about stakes, cloud cost reduction, and a shift towards profitability by 2029.
- Databricks & AWS Join in Push to Enhance Gen AI Capabilities | Technology Magazine Databricks has announced an expanded partnership with AWS to enhance generative AI (GenAI) capabilities through the integration of AWS Trainium chips, which offer improved scalability and cost-efficiency. The collaboration focuses on optimizing AI model development, deployment, and monitoring using Databricks' Mosaic AI platform. This initiative will provide custom solutions across industries, allowing companies to fine-tune large language models (LLMs) on private data without compromising control over intellectual property. With additional investments in migration, security, and co-marketing, the partnership aims to accelerate GenAI adoption for clients like Rivian, Block, and SEGA, fostering innovation across sectors including automotive, fintech, and gaming.
- Meta partners with Hollywood's Blumhouse to test out its AI movie generation model | Reuters Meta has partnered with Blumhouse Productions, known for films like The Purge and Get Out, to test its new generative AI video model, Movie Gen. The collaboration involves filmmakers Aneesh Chaganty, Casey Affleck, and the Spurlock Sisters using AI-generated clips in their short films, with Chaganty's work showcased on Meta's Movie Gen website. Blumhouse CEO Jason Blum emphasized the importance of engaging creative talent in developing these tools to ensure they support storytelling effectively. This partnership reflects Meta's strategy to work with the entertainment industry despite copyright concerns around AI. Meta, which argues its AI training falls under fair use, has also struck deals with actors like Judi Dench and John Cena for its AI chatbot. This aligns with broader industry trends, as other companies like Microsoft-backed OpenAI explore similar partnerships for media generation tools.
- Bain & Co, OpenAI expand partnership to sell AI tools to clients | Reuters Bain & Company announced an expanded partnership with OpenAI to integrate AI tools, including ChatGPT Enterprise, into its consulting services. This collaboration, which initially began as a global alliance to raise awareness about OpenAI's technologies, now focuses on co-designing AI-driven solutions for industries like retail and healthcare. Bain will also establish an OpenAI Center of Excellence, with a team dedicated to advancing these efforts. Around 50 Bain employees will contribute to the joint initiative, aimed at helping clients leverage AI for improved business outcomes.
- This Qualcomm-Google partnership may give us the in-car voice assistants we've been waiting for | ZDNET Qualcomm and Google have announced a partnership aimed at enhancing in-car digital experiences using generative AI. This multi-year collaboration, unveiled at the Snapdragon Summit, leverages Qualcomm's Snapdragon Digital Chassis and Google's Android Automotive OS and Cloud to create AI-powered digital cockpits. The goal is to integrate advanced voice assistants, immersive navigation, and other intuitive features that predict user needs, transforming driving experiences. Qualcomm will supply its edge AI system-on-chips (SoCs) and AI Hub platform, enabling developers to deploy AI models for vision, audio, and speech capabilities. Meanwhile, Google will contribute AI expertise to optimize voice interfaces and user interactions. A standardized framework will guide automakers and suppliers in integrating these systems efficiently, reducing time to market while improving developer productivity. This partnership reflects a broader trend toward incorporating AI into automotive systems to enhance safety and user experiences.
- OpenAI, Microsoft reportedly hire banks to renegotiate partnership terms - SiliconANGLE OpenAI and Microsoft have hired Goldman Sachs and Morgan Stanley, respectively, to renegotiate the terms of their partnership. The talks are reportedly centered around Microsoft's stake in OpenAI following the latter's restructuring into a benefit corporation, part of a $6.6 billion funding round. Microsoft, which has invested over $13 billion, currently holds terms entitling it to most initial profits until recouping its investment, with a 49% stake thereafter. OpenAI aims to reduce cloud costs, having secured a deal to purchase up to $10 billion in Oracle infrastructure and negotiate Azure price reductions. OpenAI expects to move toward profitability by 2029, with an anticipated $100 billion in annual revenue.
Investments
Predictive generative AI is driving efficiency in finance, healthcare, and retail by enhancing decision-making and forecasting, and investment is following. Former OpenAI CTO Mira Murati is raising over $100 million for a new AI startup after stepping down to pursue independent projects. AI funding is surging, with startups like Pylon attracting unexpected investments; global venture capital for AI reached $21.3 billion in 2023. Nvidia and Microsoft are accelerating healthcare startups through a joint initiative, offering cloud credits, software tools, and go-to-market support to enhance patient care and hospital operations.
- Investing in AI: Why Predictive Generative AI is the Smart Bet for Real Returns Predictive generative AI is showing measurable impact across industries by enhancing decision-making and forecasting. In finance, it optimizes trading strategies and detects fraud, with a recent example showing a 90% reduction in task completion time at a leading Asian bank through predictive sustainability assessments. In healthcare, predictive AI used at Cleveland Clinic forecasts cardiac outcomes, while Stanford Medicine employs it to enhance brain tumor treatment by maximizing effectiveness while protecting healthy tissue. Retailers like Amazon leverage predictive AI for inventory management and personalization, boosting both operational efficiency and customer loyalty. Although AI adoption faces trust challenges, predictive generative models are providing immediate value, enabling businesses to achieve operational efficiency while retaining critical human oversight.
- Former OpenAI CTO Mira Murati is reportedly fundraising for a new AI startup | TechCrunch Mira Murati, who recently stepped down as CTO of OpenAI, is reportedly raising over $100 million in venture capital for a new AI startup, according to Reuters. Murati announced her departure, expressing a desire for "time and space to do my own exploration" without sharing specific plans. The startup is expected to focus on building AI products using proprietary models. Murati joined OpenAI in 2018 and became CTO in 2022, briefly serving as interim CEO during Sam Altman's ouster. Her departure is part of a broader exodus, with other OpenAI executives leaving shortly after her. OpenAI recently raised $6.6 billion in the largest VC round ever. Before OpenAI, Murati held roles at Tesla and Leap Motion.
- Why are Investors Chasing AI Startups? Pylon, a startup supporting B2B companies, raised $17 million in Series A funding despite being cash-flow positive and initially not planning to fundraise. Investors are increasingly drawn to AI startups following breakthroughs like ChatGPT, with global venture investments in AI rising from $1 billion in 2018 to $21.3 billion in 2023. While AI verticals attract more interest, VCs also focus on long-term strategies and responsible investing to manage risks and regulatory scrutiny.
- Nvidia, Microsoft Join Forces to Accelerate AI Health Care Startups Nvidia and Microsoft are joining forces to accelerate AI-driven healthcare startups by merging their ecosystems—Nvidia Inception and Microsoft for Startups. The initiative provides startups with cloud credits, software tools, and expert support to foster innovation in medical devices and healthcare workflows. Nvidia will offer 10,000 inference credits, discounted access to its AI Enterprise suite, and Nvidia Clara for healthcare solutions, while Microsoft will provide $150,000 in Azure credits, $200,000 worth of business tools, and priority access to its Pegasus Program for go-to-market support. Participating startups, such as Pangaea Data and Artisight, use AI to enhance patient care and operational efficiency, leveraging Nvidia GPUs on Azure. This initiative builds on previous collaborations, including the Nvidia DGX Cloud on Azure, with plans to expand support to startups in other industries in future phases.
Research
- State of AI in Telecommunications: 2024 Trends outlines how AI is revolutionizing telecommunications with predictive maintenance, customer service automation, and edge computing for faster data processing. It highlights the need for scalable infrastructure and responsible AI practices while showcasing real-world use cases in network optimization. The report also emphasizes the growing importance of AI in addressing operational challenges and driving innovation in the telecom sector.
- Gartner Top 10 Strategic Technology Trends for 2025 The report categorizes trends into three core themes: AI imperatives, new frontiers of computing, and human-machine synergy. Key AI trends include Agentic AI, which focuses on autonomous systems that assist with tasks independently, and AI governance platforms to manage legal and ethical AI usage. Disinformation security technologies aim to combat identity fraud and fake narratives. In the computing realm, Post-Quantum Cryptography prepares for quantum threats, while Hybrid Computing and Energy-Efficient Computing focus on performance and sustainability. Ambient Invisible Intelligence integrates technology seamlessly into environments for intuitive interactions. Human-machine synergy trends cover Spatial Computing, which merges digital and physical worlds, Polyfunctional Robots that adapt to multiple tasks, and Neurological Enhancement using brain interfaces to boost cognitive abilities. Together, these trends guide organizations toward future innovation with a focus on responsibility, sustainability, and seamless user experiences.
Featured Content
- Future of Internet in the age of AI This article offers an insightful perspective on the evolving internet infrastructure in the context of AI, featuring commentary from Cloudflare CEO Matthew Prince. The discussion highlights crucial trends like the shift towards AI-powered edge computing and "local inference" to minimize latency in applications such as driverless cars and smartphones. It also touches on how COVID-19 stress-tested internet infrastructure, driving Cloudflare's strategy to deploy GPUs closer to users. Prince outlines regulatory challenges, the balkanization of the internet, and the company's role in maintaining network connectivity across varying geopolitical environments.
- AI Dreams: Microsoft @ 50, Chapter 1 – GeekWire Microsoft's AI focus dates back to the 1990s with pioneering research on speech recognition and machine learning, culminating in major AI applications like Copilot and GitHub's AI-powered tools. A key moment in the journey was Microsoft's multi-billion-dollar investment in OpenAI, which enabled breakthroughs with models like ChatGPT and Azure-based AI services. While AI has boosted revenue and investor confidence, Microsoft faces challenges such as steep competition from rivals like Google and Salesforce, concerns from enterprise customers about AI's practical value, and sustainability issues from the immense power demands of AI infrastructure.
Concerns
Anthropic updated its Responsible Scaling Policy to introduce stricter safeguards for AI models based on risks such as autonomous research or potential CBRN use. Their sabotage evaluations reveal subtle risks like AI influencing decisions or hiding capabilities, prompting proactive oversight refinements. AI-generated art, exemplified by robot Ai-Da, sparks debates about authorship and creativity, emphasizing collaboration over replacement of traditional mediums. MIT’s SymGen tool enhances AI response validation by linking outputs to specific data sources, reducing manual review time by 20%. The News/Media Alliance joins creatives in rejecting the unlicensed training of generative AI models, underscoring the threat to intellectual property and creators' livelihoods.
- Announcing our updated Responsible Scaling Policy \ Anthropic Anthropic updated its Responsible Scaling Policy (RSP) to improve risk management for AI models and align safeguards with evolving AI capabilities. The RSP introduces new thresholds to trigger heightened safety measures based on two criteria: autonomous AI research and development and AI's potential involvement in chemical, biological, radiological, or nuclear (CBRN) activities. Current models follow ASL-2 safety standards, with stricter ASL-3+ measures ready for elevated risks. The policy emphasizes proportional protection, ensuring safety standards escalate with potential risks. New internal governance processes and external expert feedback are integrated to refine risk management practices. A key takeaway from past implementations is the importance of flexible compliance tracking. Anthropic also announced leadership changes, with Jared Kaplan becoming the new Responsible Scaling Officer, and the company is hiring for roles focused on scaling and risk management efforts. Read it here: Responsible Scaling Policy | Anthropic
- More on the subject from Anthropic, in a second post: Sabotage evaluations for frontier models \ Anthropic Anthropic's new sabotage evaluations identify potential risks from AI systems behaving maliciously or deceptively. These evaluations include tests for human decision sabotage, where AI influences users to make poor decisions subtly; code sabotage, which checks if AI can insert undetected bugs into software; sandbagging, where AI hides capabilities during tests but reveals them in specific scenarios; and undermining oversight, where AI attempts to deceive monitoring systems about its performance. Initial demonstrations showed minor indications of such sabotage risks in current models like Claude 3.5 but concluded that minimal mitigations are sufficient for now. These evaluations aim to proactively address risks as AI capabilities advance, encouraging developers to refine oversight and safeguard measures before deploying future models. Together, these efforts aim to preemptively identify and mitigate risks from emerging AI models, ensuring safe and responsible AI development.
- AI art: The end of creativity or the start of a new movement? AI-generated art, like the work of humanoid robot Ai-Da, is reshaping the boundaries of creativity and art. Historically, disruptive technologies such as photography have catalyzed artistic evolution rather than replaced traditional mediums, and AI now offers similar opportunities. Philosophers and artists argue that just as modern art expanded beyond aesthetics and technical skill, AI-generated works challenge us to rethink authorship and creativity. Some artists see AI as a collaborative tool, while others raise concerns about intent, ownership, and data misuse. Ultimately, AI art's future lies in coexistence with traditional art, creating space for human-machine collaboration and new forms of expression. What do you think?
- Making it easier to verify an AI model's responses | MIT News MIT researchers introduced SymGen, a system designed to enhance the validation of large language model (LLM) responses by linking specific text with exact data references. SymGen addresses AI "hallucinations" by requiring LLMs to cite the precise location of information, such as a specific cell in a data table, to support their outputs. This tool speeds up manual validation by about 20%, helping users easily verify AI-generated responses by highlighting referenced sections and flagging areas needing further review. Currently, SymGen works with structured data, and future enhancements aim to expand its capabilities to arbitrary text, aiding use cases like legal or clinical summaries.
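If you're curious how cell-level citation checking works in principle, here's a toy sketch of the idea — to be clear, this is not SymGen itself; the table, the bracketed citation format, and the checker are all made up for illustration:

```python
# Toy illustration of citation-grounded verification (not the MIT SymGen
# system): require every cited fact in a model response to name an exact
# table cell, then verify each citation against the source data.
import re

TABLE = {  # hypothetical source data: row id -> column -> value
    "Q1": {"revenue": 120, "units": 30},
    "Q2": {"revenue": 150, "units": 41},
}

# Matches citations of the invented form [row.column=value]
CITATION = re.compile(r"\[(?P<row>\w+)\.(?P<col>\w+)=(?P<val>\w+)\]")

def verify(response, table):
    """Return a list of citation errors; an empty list means all checks pass."""
    errors = []
    for m in CITATION.finditer(response):
        row, col, val = m.group("row"), m.group("col"), m.group("val")
        actual = table.get(row, {}).get(col)
        if actual is None:
            errors.append(f"missing cell {row}.{col}")
        elif str(actual) != val:
            errors.append(f"{row}.{col} is {actual}, response claims {val}")
    return errors

good = "Revenue grew from 120 [Q1.revenue=120] to 150 [Q2.revenue=150]."
bad = "Units fell to 29 [Q2.units=29]."
print(verify(good, TABLE))  # []
print(verify(bad, TABLE))   # ['Q2.units is 41, response claims 29']
```

The real system does far more (it generates the citations and highlights them for reviewers), but the core payoff is the same: a reviewer only has to spot-check flagged spans instead of re-reading everything.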
Case Studies
Morgan Stanley expands its AI chatbot, AskResearchGPT, across trading and investment banking to boost efficiency, complementing earlier implementations in wealth management. Deloitte introduces GenAI in tax workflows, automating data tasks and allowing professionals to focus on strategy. In healthcare, Google launches Vertex AI Search to reduce admin work, while GE HealthCare previews CareIntellect for Oncology to support doctors with patient insights. Honeywell partners with Google to integrate AI into industrial platforms, and Yale’s CODA platform designs synthetic DNA sequences for targeted gene therapy. Marketing teams leverage generative AI beyond copywriting to streamline campaigns and reporting, while Penguin Random House adds AI usage warnings to its books. Air India collaborates with international airlines on AI-powered chatbots modeled after its AI.g system, which autonomously handles most inquiries, and DHL Supply Chain employs AI to enhance customer support, proposal accuracy, and legal processes.
Finance
- AI on the trading floor: Morgan Stanley expands OpenAI-powered chatbot tools to Wall Street division Morgan Stanley is deploying an OpenAI-powered chatbot, AskResearchGPT, in its investment banking and trading divisions to enhance productivity. This follows the successful rollout of similar AI tools in wealth management earlier in 2023. Employees can use the tool to access research insights from over 70,000 reports, streamlining workflows and speeding up client inquiries. The bank reports that staff using AskResearchGPT now ask three times more questions than with older AI tools, with sales teams notably benefiting by answering client questions in one-tenth of the time. Integrating the chatbot within browsers, Microsoft Teams, and Outlook ensures easy access. Morgan Stanley emphasizes that AI will assist but not replace analysts, as human expertise remains essential for relationship management and generating new ideas.
With Anthropic also entering the financial services sector, the focus for institutions like Morgan Stanley is to leverage AI tools better than others. It’s not about developing proprietary AI to compete with Anthropic but about gaining a productivity edge by integrating cutting-edge technology to maintain leadership in financial services.
Tax Field
- Redefining Tax GenAI | Deloitte US GenAI is transforming the tax field by streamlining data management, automating anomaly detection, and enhancing compliance workflows, enabling professionals to focus on strategic activities. Companies can leverage pre-trained large language models (LLMs) and fine-tune them using fact patterns to ensure reliable outputs and reduce bias. Tools like prompt engineering and Retrieval-Augmented Generation (RAG) help manage costs and optimize performance, while human oversight remains crucial to validate AI outputs and maintain data trustworthiness. As GenAI evolves, tax professionals are shifting toward higher-value tasks, fostering greater efficiency and precision in operations.
Healthcare
- Google Cloud has launched Vertex AI Search for Healthcare, a generative AI tool that helps payers and providers efficiently search through patient records and documents to extract clinical information. Initially available in limited release, the tool is now generally accessible and integrates with Google's Gemini 1.5 Flash and MedLM models to enhance search capabilities. It aims to reduce the time clinicians and claims staff spend on administrative tasks, which can total 27 to 36 hours per week. Vertex AI Search features grounding techniques to mitigate AI "hallucinations" by citing sources and linking internal records, improving trust in its outputs. The tool has already been adopted by healthcare organizations such as Community Health Systems and Highmark Health. Google emphasizes that while the tool enhances efficiency, the goal isn't perfection but a significant improvement over current manual processes.
- GE HealthCare announces time-saving AI tool for doctors who treat cancer GE HealthCare announced CareIntellect for Oncology, an AI tool that summarizes patient histories, tracks disease progression, and identifies relevant clinical trials, helping oncologists save time. Launching in 2025, it will initially focus on prostate and breast cancers and is already being tested by Tampa General Hospital. The tool also alerts doctors to missed tests and treatment deviations. GE previewed additional AI tools, including Health Companion, a system of specialized AI agents offering real-time insights in radiology, pathology, and genomics. Other upcoming tools focus on predicting breast cancer recurrence and improving mammogram analysis for radiologists.
Ladies, please let this news be your reminder to get your mammogram. And men – please ensure that important ladies in your life get their yearly mammogram.
- Honeywell signs deal with Google to bring Gemini generative AI to industrial sector Honeywell has teamed up with Google to integrate the Gemini AI model and Vertex AI into its Forge IoT platform, aiming to boost productivity, reduce maintenance times, and address labor shortages. Launching in 2025, the AI tools will support aerospace, healthcare, manufacturing, and more by enabling less experienced workers to perform at higher levels. Honeywell is also testing Gemini Nano, an on-device AI version, for autonomous operations in hospitals, refineries, and rural areas. With adoption still low in the industrial sector, Honeywell expects a major AI adoption surge by 2025-2026.
- Generative AI Designs DNA Sequences to Switch Genes On and Off Researchers from Yale, the Jackson Laboratory, and the Broad Institute have developed CODA (Computational Optimization of DNA Activity), a generative AI platform that designs synthetic DNA sequences capable of controlling gene expression with high precision. CODA focuses on creating cis-regulatory elements (CREs)—small DNA sequences that switch nearby genes on or off in specific cell types. This AI-driven approach aims to improve gene therapy by delivering treatment directly to target cells, such as neurons affected by Parkinson’s disease or immune cells harboring HIV, without off-target effects that could harm healthy tissues. The CODA platform was trained on the activity data of over 775,000 regulatory elements from human cells (blood, liver, and brain) to generate synthetic sequences. These AI-generated CREs were tested in zebrafish and mice, where they successfully activated genes in specific cell layers, demonstrating higher cell-type specificity than natural sequences. Researchers plan to expand CODA’s use across more cell types and combine it with gene therapy technologies to target genetic diseases like brain, metabolic, or blood disorders.
Marketing
- 3 marketing use cases for generative AI that aren’t copywriting | MarTech First, AI can enhance campaign foundations by quickly synthesizing key elements like feature-benefit analyses, saving time while maintaining quality. Second, marketing teams can use AI-generated personas to provide feedback, optimizing campaigns by identifying blind spots and offering actionable insights. Lastly, AI simplifies the creation of charts and graphs from raw data, automating routine tasks like performance reporting, although the output may lack visual polish. These use cases demonstrate AI's potential to boost productivity and streamline marketing workflows.
Publishing
- Penguin Random House is adding an AI warning to its books' copyright pages Penguin Random House is adding a statement to the copyright pages of its books to prohibit their use in training AI models. This change, which applies to both new releases and reprints, emphasizes the publisher's stance against unauthorized AI use amid ongoing lawsuits over the use of copyrighted content for AI training. The updated language states, "No part of this book may be used or reproduced in any manner for the purpose of training artificial intelligence technologies or systems." Despite this restriction, Penguin Random House is not entirely opposed to AI. In August 2024, the publisher outlined its approach to generative AI, committing to defend its authors' intellectual property while selectively using AI tools where beneficial to its publishing goals.
Airline Industry
- US, Europe airlines approach Air India for developing their Gen AI chatbots | Company News - Business Standard Air India's Chief Digital & Technology Officer, Satya Ramaswamy, revealed that several US and European airlines have approached the carrier for assistance in developing generative AI chatbots modeled after Air India's AI.g. This chatbot currently resolves 97% of customer inquiries independently, reducing reliance on contact center agents. Air India, owned by the Tata Group, is also advancing its digital capabilities by filing for a patent on a new "one-click booking" feature for its website and app. Ramaswamy highlighted Air India's focus on thought leadership and innovation, sharing their knowledge with other airlines while emphasizing that no other airline has achieved a comparable AI-powered customer service solution yet.
Shipping
Women Leading in AI
From LA to Boston, women and allies traveled to the Bay Area to connect over shared interests in robotics and AI and share their experiences. Our expert panel took the stage, and the audience was super engaged. Here's a summary of speakers and key quotes. Which one resonated with you? https://www.dhirubhai.net/posts/women-and-ai_womenintech-womenandai-womeninrobotics-activity-7255224044852006912-J6QI?utm_source=share&utm_medium=member_desktop
Learning Center
Microsoft Q&A offers a community platform for troubleshooting and insights on Microsoft products, including Azure and Microsoft 365. IBM’s SkillsBuild provides courses that grant a digital credential in AI fundamentals, while DeepLearning.ai’s multimodal Llama 3.2 course helps users develop prompting and tokenization strategies. Google Cloud’s new generative AI learning paths offer hands-on skills training with certifications, focusing on real-world AI applications. Anthropic’s Claude.ai enables building agents using JavaScript-based data handling for automating workflows. Microsoft Copilot Studio launches in November, letting users create autonomous agents that streamline tasks across enterprises. Google released SynthID, a watermarking tool for embedding identifiers in AI-generated content, helping detect deepfakes while preserving output quality.
Learning
- Questions - Microsoft Q&A is a community platform where users can ask and answer questions about Microsoft products. It covers topics like Azure, Microsoft 365, Dynamics 365, AI, cloud computing, and more. You can use it to get advice, troubleshoot issues, or gain insights from both experts and other users. It's a great resource if you have any specific questions about Microsoft technologies or services. Feel free to explore and post your questions directly on that platform—whether it's about AI integrations, Azure services, or cloud setup, the community and Microsoft professionals often respond quickly!
- https://www.deeplearning.ai/short-courses/introducing-multimodal-llama-3-2/ Users can gain proficiency in prompting and tokenization strategies, along with both built-in and custom tool calling functionalities. Additionally, the course covers the Llama stack, a standardized interface designed to facilitate the development of AI applications, providing a framework for streamlined integration and deployment of models.
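To make the tool-calling idea concrete, here's a minimal, generic sketch of the flow such courses teach: the model is prompted to emit a structured tool call, which your code parses and dispatches to a real function. The tool name, JSON shape, and dispatcher below are illustrative inventions, not the actual Llama 3.2 syntax:

```python
# Generic custom tool-calling loop (illustrative; real formats vary by model).
import json

def get_weather(city):
    # Hypothetical tool implementation; a real one would hit a weather API.
    return f"18C and clear in {city}"

# Registry mapping tool names (as the model would emit them) to functions.
TOOLS = {"get_weather": get_weather}

def dispatch(model_output):
    """Parse a JSON tool call emitted by the model and run the named tool."""
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# Pretend the model produced this tool call in response to a user question:
print(dispatch('{"name": "get_weather", "arguments": {"city": "Paris"}}'))
# 18C and clear in Paris
```

In practice the tool result is fed back to the model for a final natural-language answer; the course covers how the Llama stack standardizes exactly this handshake.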
- Good overview of many available AI learning paths from Google: Learn genAI skills for the real-world in learning paths from Google | App Developer Magazine Google Cloud has launched new generative AI learning paths to address the growing AI skills gap, helping developers and data professionals gain practical skills for real-world applications. These paths cover topics such as AI application development, data workflows with BigQuery, AI model deployment, and content generation using diffusion models. Participants will receive hands-on training, culminating in skill badges to showcase expertise on resumes and social media. The courses are accessible through Google's Innovators community, offering 35 free learning credits per month, allowing users to complete one learning path monthly.
- Evaluate LLMs with Hugging Face Lighteval on Amazon SageMaker This post provides a how-to guide for evaluating LLMs using Hugging Face LightEval on Amazon SageMaker, offering technical insights and practical steps for benchmarking models. The detailed walkthrough makes it an educational resource for users seeking to understand LLM evaluation workflows.
Prompting
- Almost Timely News: Advanced Prompt Engineering for Generative AI focuses on advanced prompt engineering for generative AI, explaining how scaling and complex prompts can enhance model performance. Scaling involves deploying prompts across large datasets efficiently, while complex prompts integrate code-like structures and reflection and reward mechanisms to optimize outputs. It advises focusing on foundational skills first and expanding to advanced methods only as needed. The newsletter emphasizes that tools should match the task, with scalable infrastructure essential to achieving AI's full potential in real-world applications.
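The reflection-and-reward pattern mentioned above can be sketched in a few lines. This is a minimal scaffold under stated assumptions: `fake_model` and `critic` are stubs standing in for real LLM calls, and the score threshold is arbitrary — the shape of the loop is the point:

```python
# Minimal reflection loop: draft, score, fold the critique back in, retry.
def fake_model(prompt):
    # Hypothetical stand-in for an LLM call; revises once it sees a critique.
    return "DRAFT v2" if "Critique:" in prompt else "DRAFT v1"

def critic(draft):
    # Reward mechanism: score the draft and explain what to fix.
    if draft == "DRAFT v1":
        return 0.4, "Too vague; add concrete numbers."
    return 0.9, "Looks good."

def reflect_loop(task, threshold=0.8, max_rounds=3):
    prompt = task
    draft = fake_model(prompt)
    for _ in range(max_rounds):
        score, feedback = critic(draft)
        if score >= threshold:
            break
        # Reflection step: feed the critique into the next prompt.
        prompt = f"{task}\nCritique: {feedback}\nRevise accordingly."
        draft = fake_model(prompt)
    return draft

print(reflect_loop("Summarize Q3 results."))  # DRAFT v2
```

With real models, the critic is typically a second prompt (or a scoring rubric) against the same or a stronger model, and `max_rounds` keeps cost bounded — which is exactly the "scalable infrastructure" concern the newsletter raises.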
Tools and Resources
- Microsoft Copilot Now Enables Users to Build Autonomous Agents? Microsoft has expanded its Copilot platform, enabling users to build custom AI agents that autonomously enhance workflows. Launching in November, the new Copilot Studio allows users to create agents leveraging Microsoft 365 Graph, Dataverse, and other enterprise tools. These agents can manage tasks such as IT help desk support, employee onboarding, and customer service. Microsoft also introduced 10 autonomous agents for Dynamics 365, including the Sales Qualification Agent, which prioritizes leads and supports outreach, the Supplier Communications Agent, which monitors supplier performance to prevent disruptions, and the Customer Knowledge Agent, which helps service teams resolve issues. Early adopters like McKinsey & Company reduced onboarding times by 90%, and Thomson Reuters cut due diligence workflows in half. With 60% of Fortune 500 companies already using Copilot, organizations such as Lumen Technologies forecast $50 million in annual savings, and Honeywell reports productivity gains equal to 187 full-time employees.
- New autonomous agents scale your team like never before – The Official Microsoft Blog – Microsoft announced new autonomous agent capabilities integrated into Microsoft 365 and Dynamics 365. Copilot Studio will enable customers to create agents that automate processes across functions like sales, service, and supply chains. New agents include a Sales Qualification Agent and a Supplier Communications Agent, designed to boost productivity and reduce operational friction. Early adopters like McKinsey, Pets at Home, and Thomson Reuters report faster workflows and significant cost savings. These agents adhere to Microsoft's AI security and privacy standards, helping businesses scale while maintaining governance.
- OpenAI Code Interpreter documentation (https://platform.openai.com/docs/assistants/tools/code-interpreter) – Developers can use this tool to integrate Python coding capabilities into their applications, enabling real-time data analysis, visualizations, and complex calculations directly in chat interfaces. The Code Interpreter, known in ChatGPT as Advanced Data Analysis (ADA), automates and simplifies complex data tasks. It transforms raw data through cleaning, reshaping, and normalization, ensuring datasets are ready for further analysis or machine learning pipelines. It also supports dynamic data visualization by generating graphs and charts with libraries like Matplotlib or Plotly, providing intuitive insights for business users. Developers can integrate it into real-time processes to perform simulations, financial modeling, or risk assessments, streamlining decision-making. It automates report generation, enabling companies to quickly produce customized reports without manual intervention, and can be embedded into workflows such as CRM or ERP systems to automate tasks like sales forecasting or customer segmentation. It is also useful for technical troubleshooting in interactive environments, and it operates within a secure sandbox to safely execute code on demand. Pricing is $0.03 per session; a session is valid for up to one hour, allowing multiple queries during that time without additional cost, and each concurrent thread that uses the tool is billed as a separate session.
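Enabling the tool is a one-line addition when creating an Assistant. The payload shape below follows OpenAI's Assistants API documentation; the assistant name, instructions, and model choice are illustrative assumptions, and the actual API call is left commented out since it requires an API key and incurs the per-session charge:

```python
# Sketch of enabling the Code Interpreter tool on an OpenAI Assistant.
# The "tools" entry follows the Assistants API docs; name, instructions,
# and model here are illustrative assumptions.

def assistant_payload(model="gpt-4o"):
    return {
        "name": "Data Analyst",                   # hypothetical assistant name
        "instructions": "Analyze uploaded CSVs and chart the results.",
        "model": model,
        "tools": [{"type": "code_interpreter"}],  # enables the sandboxed Python tool
    }

payload = assistant_payload()
print(payload["tools"])
# With the openai SDK this payload would be passed to
# client.beta.assistants.create(**payload) -- not executed here.
```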
- Introducing the analysis tool in Claude.ai – Anthropic – The new analysis tool in Claude.ai introduces advanced capabilities, allowing users to run JavaScript code directly within the platform. Acting as a code sandbox, this feature enhances Claude’s ability to conduct complex data analysis, process CSV files, and provide actionable insights across different functions. Available in feature preview, this tool offers precise, reproducible results by combining code execution with the powerful analytical abilities of Claude 3.5 Sonnet. This tool extends value to multiple teams: marketers can identify conversion opportunities from customer data, sales teams can analyze regional performance, product managers can gain insights for planning, engineers can optimize resource use based on server logs, and finance teams can visualize key trends using financial dashboards. To access it, users need to enable the feature in Claude.ai under feature previews. This addition makes Claude a more practical tool for real-time analysis, helping teams make data-informed decisions efficiently.
The Claude.ai analysis tool also opens opportunities to enhance business processes, including building agents. By combining JavaScript-based data handling and automation capabilities within Claude.ai, you can create agents that automate workflows like customer analysis, sales tracking, and operational monitoring by processing real-time data, providing insights, and iterating based on changing inputs.
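Stripped to its essentials, such an agent is a loop: ingest records, compute a metric, apply a decision rule, and emit an action. Claude's tool itself runs JavaScript, but the shape is language-agnostic; here is a minimal Python sketch in which the data, threshold, and alert action are all hypothetical:

```python
# Minimal agent-loop sketch: watch a sales metric and raise an alert.
# The records, threshold, and alert format are hypothetical.

def sales_agent(records, threshold=1000):
    """records: list of {"region": str, "revenue": number} dicts."""
    totals = {}
    for r in records:                    # ingest + aggregate
        totals[r["region"]] = totals.get(r["region"], 0) + r["revenue"]
    alerts = []
    for region, total in totals.items():
        if total < threshold:            # decision rule
            alerts.append(f"ALERT: {region} revenue {total} below {threshold}")
    return alerts                        # the "action" step

data = [
    {"region": "EMEA", "revenue": 700},
    {"region": "EMEA", "revenue": 200},
    {"region": "APAC", "revenue": 1500},
]
print(sales_agent(data))  # EMEA totals 900, below 1000, so one alert
```

A production agent would replace the static list with a live data feed and the returned alerts with real actions (tickets, messages, API calls), but the observe-decide-act cycle is the same.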
- Google offers its AI watermarking tech as free open source toolkit – Ars Technica – Google has open-sourced its SynthID watermarking toolkit, enabling developers and businesses to embed imperceptible watermarks into AI-generated content, which can later be detected through an algorithm. Initially launched for Google’s Gemini AI model, SynthID marks text, audio, and visual content, enhancing the ability to identify AI-generated material like deepfakes. The watermarking tool adjusts token selection during content generation by favoring specific token sequences using a sampling algorithm. Even with light edits, SynthID remains effective by analyzing the statistical likelihood of watermarked content, though performance improves with longer text. Google's tests showed no noticeable reduction in text quality, with users barely differentiating between watermarked and unwatermarked outputs.
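The general idea, deterministically biasing token selection and then detecting that bias statistically, can be illustrated with a simplified "green list" watermark. To be clear, this is an assumption-laden stand-in, not Google's actual SynthID tournament-sampling algorithm, and the key and vocabulary are made up:

```python
import hashlib

# Simplified "green list" watermark sketch: bias sampling toward tokens
# whose keyed hash is even, then detect by measuring that bias.
# This is a stand-in illustration, NOT Google's SynthID algorithm.

KEY = "demo-key"  # hypothetical watermarking key

def is_green(token):
    """Keyed, deterministic partition of the vocabulary (~half is 'green')."""
    digest = hashlib.sha256((KEY + token).encode()).digest()
    return digest[0] % 2 == 0

def watermarked_choice(candidates):
    """Generation-time bias: prefer a green candidate when one exists."""
    greens = [t for t in candidates if is_green(t)]
    return greens[0] if greens else candidates[0]

def green_fraction(tokens):
    """Detector: watermarked text shows a green fraction well above ~0.5."""
    return sum(is_green(t) for t in tokens) / len(tokens)

vocab_slices = [["cat", "dog", "fox"], ["runs", "sits"], ["fast", "slow", "far"]]
text = [watermarked_choice(c) for c in vocab_slices]
print(text, green_fraction(text))
```

The detection side is purely statistical, which is why, as the article notes, it survives light edits and works better on longer text: a few changed tokens barely move the green fraction of a long passage.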
If you enjoyed this newsletter, please comment and share. If you would like to discuss a partnership, or invite me to speak at your company or event, please DM me.