Happy Women's Day and Women's History Month!?
As we celebrate the women who’ve shaped history, let’s also recognize those driving the future—especially in AI and technology. From pioneering scientists to today’s AI leaders, women continue to push boundaries, ensuring innovation serves everyone.
This week, AI is making waves in healthcare after the HIMSS conference, proving that data isn’t just power—it’s the backbone of better decision-making. We’re seeing breakthroughs in AI-driven diagnostics, smarter search tools, and a renewed focus on how data quality impacts model performance. And yes, we also survived daylight saving time (barely). ????
Let me know what you found most insightful in the comments!
Models
Women’s History Month honors the pioneering spirit of women who push boundaries, much like the innovations driving AI today. DeepSeek’s impressive theoretical profit margins reflect the potential of AI startups to challenge industry norms, just as women in tech have broken barriers in leadership and innovation. Alibaba’s QwQ-32B model fuels China’s AI race, paralleling the resilience of women advancing STEM fields despite systemic challenges. LLMWare’s Model HQ ensures AI processing remains secure and efficient on Snapdragon devices, reinforcing the importance of accessibility—an ongoing fight led by female technologists. AMD’s Instella models advance open AI research, echoing the contributions of women in computing, from Ada Lovelace to modern AI researchers. In finance, Writer’s Palmyra Fin challenges GPT-4 with domain-specific accuracy, highlighting the critical role of women in shaping responsible AI for regulated industries. Meanwhile, Microsoft’s MAI models and IBM’s Granite 3.2 showcase AI advancements in enterprise applications, just as women continue to drive transformative change across industries.
- DeepSeek reveals theoretical margin on its AI models is 545% - The Economic Times Chinese AI startup DeepSeek has revealed that its AI models could achieve a theoretical profit margin of 545% per day if all users switched to paid plans. The company’s V3 and R1 models incur $87,072 in daily inference costs, while estimated daily revenue could reach $562,027, projecting over $200 million annually. However, actual revenue remains lower due to discounted pricing and limited monetization. And this is what it costs for DeepSeek: open-infra-index/202502OpenSourceWeek/day_6_one_more_thing_deepseekV3R1_inference_system_overview.md at main? DeepSeek’s V3/R1 inference system is designed to achieve higher throughput and lower latency by leveraging large-scale cross-node Expert Parallelism (EP). This method scales batch size and reduces memory access demands, but introduces system complexity due to cross-node communication and load balancing. DeepSeek addresses these challenges by employing a dual-batch overlap strategy to hide communication latency behind computation and implementing load balancing across GPUs. The system runs on H800 GPUs, using FP8 and BF16 precision to optimize performance. Over a 24-hour period, the inference service processed 608 billion input tokens, with 56.3% hitting the KV cache, and generated 168 billion output tokens at an average 20–22 tokens per second. Each H800 node handled 73.7k tokens/s (input) during prefilling and 14.8k tokens/s (output) during decoding. The estimated daily cost of operation is $87,072, while the theoretical revenue based on DeepSeek-R1’s pricing model could reach $562,027, reflecting a 545% potential profit margin. However, actual revenue is significantly lower due to lower DeepSeek-V3 pricing, limited monetization, and nighttime discounts.
- Alibaba’s New Model Adds Fuel to China’s AI Race? Alibaba has launched QwQ-32B, its latest AI reasoning model, which has boosted its stock by 8%. While not as powerful as OpenAI’s o3 or Anthropic’s Claude 3.7 Sonnet, QwQ-32B competes with China's DeepSeek R1 but requires significantly less computing power. The model, described as embodying an "ancient philosophical spirit," reflects China's rapidly advancing AI ecosystem despite U.S. chip restrictions. The release comes amid China’s push for AGI (Artificial General Intelligence), with leading Chinese firms like Alibaba, Tencent, and DeepSeek striving to develop AI that could rival Western models. QwQ-32B is part of a growing trend of "reasoning models," which prioritize computation efficiency and extended processing time per query rather than just raw scale. Alibaba has made the model open-weight, meaning developers can run it locally on high-end laptops. The move highlights China's AI race despite geopolitical barriers, with analysts predicting continued rapid advancements in the sector.
- Model HQ by LLMWare.ai: Run language models and use AI agents on Snapdragon X Series devices? LLMWare’s Model HQ now runs on Snapdragon X Series devices, enabling efficient, secure on-device AI. Designed to address memory and efficiency challenges, Model HQ optimizes small language models (SLMs) ranging from 1B to 32B parameters, supporting use cases like retrieval-augmented generation (RAG) and AI agents. Unlike cloud-based solutions, Model HQ keeps AI processing local, enhancing privacy, compliance, and cost-effectiveness by eliminating inference fees. With 30+ proprietary SLMs and 90+ optimized models (including Gemma, Llama, Phi, and Mistral), Model HQ runs inference on the Qualcomm Hexagon NPU, allowing large models to function even on CPUs without Wi-Fi. The no-code client app supports tasks like document search, text-to-SQL, and image classification, making AI more accessible for enterprises. Snapdragon X devices benefit from the Qualcomm AI Stack, ensuring compatibility across AI workloads. LLMWare continuously updates its SLM catalog for Snapdragon-powered devices, expanding support for on-device.
- Introducing Instella: New State-of-the-art Fully Open 3B Language Models? AMD has introduced Instella, a family of fully open 3-billion-parameter language models trained on AMD Instinct MI300X GPUs. Instella models outperform existing fully open models and are competitive with state-of-the-art open-weight models such as Llama-3.2-3B, Gemma-2-2B, and Qwen-2.5-3B. Instella was trained using 4.15 trillion tokens across 128 MI300X GPUs, utilizing FlashAttention-2, Torch Compile, and Fully Sharded Data Parallelism (FSDP) for efficiency. The training process included two pre-training stages and two instruction tuning stages, enhancing natural language understanding, problem-solving, and alignment with human preferences. The final model, Instella-3B-Instruct, achieved a 14.37% higher average score than the next-best fully open instruction-tuned model. It significantly improved across benchmarks like MMLU, GSM8K, and BBH, reducing the gap with top open-weight models. AMD has released all model weights, training configurations, datasets, and code, supporting full transparency and collaboration in AI research. Instella is optimized for AMD ROCm and provides an open-source alternative for large-scale AI training. AMD aims to further expand Instella’s capabilities, including longer context lengths, enhanced reasoning, and multimodal AI. The models are available on Hugging Face and GitHub under a ResearchRAIL license for academic and research use.
- GPT-4 faces a challenger: Can Writer’s finance-focused LLM take the lead in banking? - Tearsheet? Writer's Palmyra Fin is positioning itself as a finance-focused alternative to GPT-4 for banking applications. A recent study by Writer found that "thinking" LLMs, such as OpenAI’s o1 and DeepSeek R1, generate false information in up to 41% of tested cases, posing risks for regulated industries like financial services. Banks are integrating LLMs in three key areas: operations and automation for workflow optimization and document processing, task-specific AI assistants for compliance and fraud detection, and chatbots and virtual assistants for customer service. Unlike general-purpose models, Palmyra Fin is fine-tuned on financial data, leveraging Graph-based Retrieval-Augmented Generation (RAG) for accuracy and AI guardrails to mitigate misinformation. This approach contrasts with GPT-4, which requires extensive adaptation for financial applications. Writer's finance-focused LLM is gaining traction, with Vanguard, Ally Bank, Prudential, and Franklin Templeton integrating it for risk assessment and financial reporting. To further strengthen its capabilities, Writer recently launched Palmyra Fin 128k, featuring an expanded 131,072-token context window for analyzing large financial datasets. While GPT-4 remains widely used, Palmyra Fin's domain-specific approach and built-in compliance measures could help it gain adoption among financial institutions prioritizing accuracy and regulatory alignment.
- Microsoft reportedly develops LLM series that can rival OpenAI, Anthropic models - SiliconANGLE? Microsoft has reportedly developed a new series of large language models (LLMs) called MAI, which could rival models from OpenAI and Anthropic, according to Bloomberg. The MAI series, possibly named after Microsoft’s Maia 100 AI chip, has been tested for potential integration into Copilot, Microsoft’s AI assistant suite. Results suggest that MAI models are competitive with existing market leaders. This move could reduce Microsoft’s reliance on OpenAI, which currently provides LLMs for Copilot. Microsoft has already tested other models from Anthropic, Meta, DeepSeek, and xAI for Copilot integration. The company is also reportedly developing another LLM series optimized for reasoning tasks. Microsoft’s previous Phi-4 models, launched in February, use synthetic data training to enhance efficiency. This technique might also benefit MAI’s development. If successful, MAI could mark a major step in Microsoft’s AI independence and strategy for deploying multiple LLMs across its ecosystem.
- IBM Launches Smaller AI Model With Enhanced Reasoning? IBM has launched Granite 3.2, the latest iteration of its enterprise AI model, featuring enhanced reasoning and multi-modal capabilities. The update includes a vision language model for document processing, classification, and data extraction, performing on par with larger models like Llama 3.2 11B and Pixtral 12B. New reasoning techniques, such as inference scaling and chain-of-thought capabilities, improve efficiency and cost-effectiveness. The model, trained on IBM’s open-source Docling toolkit, processed 85 million PDFs and 26 million synthetic question-answer pairs. Select models are available on Hugging Face, IBM watsonx. ai, and other platforms. IBM is also releasing the next generation of its TinyTimeMixers models for long-term time-series forecasting.
News?
Women’s History Month celebrates the trailblazers who break barriers, just as women are leading advancements in AI, quantum computing, and digital transformation. Amazon Web Services’ new Ocelot chip, which reduces quantum error correction costs, mirrors the work of pioneering women in physics and computing, like Ada Lovelace and Maria Goeppert Mayer. OpenAI’s GPT-4.5 release and its upcoming high-tier AI agents reflect the growing demand for AI in research, a field where women like Fei-Fei Li have made significant contributions. The NextGenAI consortium, which funds AI research in institutions worldwide, underscores the importance of inclusive education—just as women fought for equal access to STEM fields. Google’s AI Mode and Meta’s Business AI are reshaping digital interactions, much like how women entrepreneurs are leveraging AI to innovate in e-commerce and customer engagement. Meanwhile, the transparency initiatives from MIT Sloan echo the ongoing battle for fairness and accountability in AI, issues that many women researchers advocate for today. As AI continues to evolve, ensuring diverse voices shape its future remains essential to driving equitable and transformative innovation.
- ICYMI: Amazon Web Services announces a new quantum computing chip? Amazon Web Services (AWS) has introduced Ocelot, a new quantum computing chip designed to reduce quantum error correction costs by up to 90%, accelerating the path to fault-tolerant quantum computers. Developed at the AWS Center for Quantum Computing at Caltech, Ocelot integrates cat qubit technology, which inherently suppresses errors, allowing for a scalable and efficient approach to quantum processing. AWS researchers estimate that scaling Ocelot to a practical quantum computer could require just one-tenth of the resources needed for current quantum error correction methods, potentially accelerating commercialization by up to five years. The chip consists of 14 core components, including five data qubits, and leverages Tantalum-based superconducting oscillators for enhanced stability. AWS has published its findings in Nature, signaling a major advancement toward making quantum computing viable for applications like drug discovery, financial modeling, and materials science.
- ICYMI: OpenAI announces GPT-4.5, warns it’s not a frontier AI model | The Verge? OpenAI has released GPT-4.5, its largest AI model to date, as a research preview for ChatGPT Pro users. While OpenAI describes it as its “most knowledgeable model yet”, it clarifies that GPT-4.5 is not a frontier AI model, meaning it lacks significant new capabilities compared to prior reasoning models like o1 and o3-mini. The model improves computational efficiency by over 10x compared to GPT-4, offers better writing and problem-solving abilities, and produces fewer hallucinations than previous versions. GPT-4.5 was trained using new supervision techniques combined with reinforcement learning from human feedback (RLHF). Following its launch for Pro users, it will roll out to Plus and Team users next week, followed by Enterprise and Edu users. The model is also available on Microsoft’s Azure AI Foundry. OpenAI CEO Sam Altman acknowledged that GPT-4.5 is a “giant, expensive model” that will not dominate benchmarks, while OpenAI is already preparing to launch GPT-5 by late May, integrating its o3 reasoning model for greater advancements. If you follow Sam Altman on X, apparently people have been emailing him asking not to make any changes as they love 4.5 so much.?
- What does “PhD-level” AI mean? OpenAI’s rumored $20,000 agent plan explained. - Ars Technica? OpenAI is reportedly planning to launch specialized AI agents with pricing tiers ranging from $2,000 to $20,000 per month, including a "PhD-level" AI aimed at supporting advanced research and complex problem-solving. The highest-tiered agent, priced at $20,000 per month, is expected to handle tasks typically requiring doctoral-level expertise, such as scientific research, complex coding, and large-scale data analysis. OpenAI’s o3 model, which achieved 87.5% on the ARC-AGI visual reasoning benchmark and 96.7% on the 2024 American Invitational Mathematics Exam, serves as the foundation for these AI agents. Despite these high benchmark scores, concerns remain about real-world accuracy, as the models still generate occasional confabulations (factually incorrect information). While companies like SoftBank have committed $3 billion to OpenAI’s AI products, skeptics argue that a human PhD student costs far less than $20,000 per month and offers more critical thinking and original research capabilities. With OpenAI losing approximately $5 billion last year, these premium AI agents may be part of a strategic effort to generate high-margin enterprise revenue. However, whether the performance justifies the cost remains an open question.
- Introducing NextGenAI: A consortium to advance research and education with AI? OpenAI has launched NextGenAI, a consortium dedicated to accelerating AI research and transforming education by uniting 15 leading institutions across the U.S. and abroad. With a $50 million commitment in research grants, compute funding, and API access, OpenAI aims to fuel groundbreaking discoveries and prepare the next generation of AI leaders. Institutions such as The Ohio State University are leveraging AI to advance fields like digital health, manufacturing, and agriculture, while Harvard University and Boston Children’s Hospital are using AI tools to improve rare disease diagnosis and enhance medical decision-making. In education, Texas A&M is launching a Generative AI Literacy Initiative, and MIT students and faculty will gain access to OpenAI’s API and compute resources to develop AI applications. Meanwhile, Oxford University’s Bodleian Library is digitizing and transcribing rare texts using AI, and the Boston Public Library is making public domain materials more accessible. By strengthening collaboration between academia and industry, OpenAI reinforces the role of universities and research institutions in shaping AI’s future. NextGenAI builds upon OpenAI’s ChatGPT Edu initiative, which expanded AI accessibility across universities, ensuring that AI benefits extend beyond tech labs to libraries, hospitals, and classrooms. Through this initiative, OpenAI is equipping researchers, educators, and students with cutting-edge AI tools to drive scientific breakthroughs, enhance AI fluency, and foster real-world applications that will shape the future of AI-driven innovation.
- Expanding AI Overviews and introducing AI Mode? Google has upgraded AI Overviews with Gemini 2.0, enhancing its ability to handle complex queries in coding, advanced math, and multimodal searches. AI Overviews, now used by over a billion people, will expand access to teens and users without requiring sign-in. Additionally, Google is launching AI Mode, an experimental search feature that leverages advanced reasoning and multimodal capabilities to provide richer, context-aware responses. Built into Google Search, AI Mode employs a “query fan-out” technique, issuing multiple searches across subtopics to deliver more comprehensive answers. It integrates real-time data sources like the Knowledge Graph, shopping data, and live web content. Currently in limited testing via Labs, AI Mode is available to Google One AI Premium subscribers for early feedback. While prioritizing AI-generated responses, Google acknowledges potential factuality challenges and plans to refine the experience with visual enhancements, improved formatting, and richer content discovery tools.
- Meta’s New Business AI Will Help You Sell Stuff to Customers? Meta is piloting Business AI, a generative AI tool designed to help businesses engage with customers across Facebook, WhatsApp, and Instagram. The AI can answer product questions, remember customer preferences, and even offer discounts to drive sales. According to Clara Shih, head of Meta’s business AI group, this tool aims to make AI as fundamental to businesses as having an email address. With over 600 million daily consumer-brand conversations on Meta’s platforms, Business AI allows companies to automate responses and integrate with product catalogs and sales strategies. The tool can be set up on a mobile device and customized to control conversation topics, process refunds, and adjust sales approaches. Currently, Business AI supports text-based chats, but Meta plans to introduce AI voices in the future. The program is still in pilot testing, with a formal launch and pricing details expected in the coming months. Businesses can sign up for the pilot through Meta.
- The State of Machine Learning Competitions | ML Contests? In 2024, over 400 ML competitions took place across 20+ platforms, with total prize money exceeding $22 million. Kaggle remained the largest platform with over 22 million users and $4.25 million in prizes. Grand challenges saw a resurgence, including the $14 million AI Cyber Challenge by DARPA and the $10 million AI Mathematical Olympiad. Python dominated, with PyTorch and gradient-boosted trees being the most common winning solutions. Quantization played a key role in LLM-related competitions. AutoML showed promise but is not yet at Grandmaster level. NVIDIA GPUs remained dominant, with A100s being the most used. Large models like Llama, Mistral, and DeepSeek were prevalent in NLP tasks, while U-Net and ConvNeXt were popular in computer vision. The ARC Prize and AI Mathematical Olympiad advanced AI reasoning capabilities, with fine-tuning and synthetic data generation playing a crucial role in winning solutions. Inference-time scaling gained importance, with OpenAI’s o3 model demonstrating the potential for increased compute at test time to improve results.
- Perplexity wants to reinvent the web browser with AI—but there’s fierce competition - Ars Technica? Perplexity, a natural-language search engine company, has announced the launch of Comet, an AI-powered web browser, entering a competitive market dominated by Google Chrome. While details remain scarce, Comet is expected to integrate generative AI features, potentially similar to Dia, an AI-driven browser from The Browser Company that enables natural language commands for search and task automation. Perplexity has been expanding its offerings with deep research tools and AI-powered search APIs, but faces competition from established AI-integrated browsers. The success of Comet will depend on its ability to differentiate itself in an increasingly AI-driven software landscape.
My take: Perplexity’s ambition is admirable, but the move into browsers feels like déjà vu. Just months ago, leadership entertained the idea of building their own AI chip—an undertaking that requires deep expertise, resources, and focus. Now, they’re diving into one of the toughest markets, where even well-funded competitors have struggled. Building a browser isn’t just about slapping AI on top; it’s about performance, compatibility, security, and user trust. Chrome dominates not just because it’s Google-backed, but because it delivers on speed, extensions, and seamless integration with user workflows. Even Microsoft, with all its resources, has struggled to get Edge past Chrome’s shadow. If Comet is just an AI-enhanced wrapper around existing tech, it risks fading into the noise. If it’s a truly new take on browsing, the challenge is even greater—getting people to change their default browser is one of the hardest product adoption hurdles out there. Perplexity should be careful not to stretch itself too thin. Mastering AI search is a big enough challenge. Trying to tackle browsers at the same time? That might just slow them down.
- Google co-founder Larry Page reportedly has a new AI startup | TechCrunch Google co-founder Larry Page is reportedly launching a new AI startup, Dynatomics, focused on applying AI to product manufacturing. According to The Information, Page is working with a small team to develop AI that can design optimized objects and manufacture them using automated processes. Chris Anderson, former CTO of the Page-backed electric aviation startup Kittyhawk, is leading the effort. Dynatomics joins a growing field of AI-driven manufacturing ventures, including Orbital Materials (AI-driven material discovery) and Instrumental (vision-powered AI for factory anomaly detection).
- Bringing transparency to the data used to train artificial intelligence | MIT Sloan? MIT researchers have launched the Data Provenance Initiative to improve transparency in AI training datasets, addressing legal risks, biases, and quality concerns. The initiative audited over 1,800 text datasets, revealing frequent license misclassification and omissions exceeding 70%. The team developed the Data Provenance Explorer tool, allowing AI developers to trace dataset origins, understand licensing terms, and ensure responsible AI use. Findings highlight challenges in diverse language representation and legal ambiguities in data licensing. Future plans include expanding to video and domain-specific datasets, promoting accountability and innovation in AI data practices.
Regulatory?
- SEC to Host Roundtable on Artificial Intelligence? The SEC will host a roundtable on Artificial Intelligence (AI) in the financial industry on March 27, 2025, from 9 a.m. to 4 p.m. at its headquarters in Washington, D.C. The event will explore the risks, benefits, and governance of AI in finance and will be open to the public for both in-person and virtual attendance.
- Lawmakers look to build artificial intelligence framework for Montana? Montana lawmakers are advancing a framework for AI regulation, with Senate Bill 212 aiming to protect individual AI use while allowing state restrictions in specific cases. House Bill 556 seeks to regulate AI in healthcare decisions, facing opposition from insurers. Other bills address AI’s role in digital privacy, political media, and synthetic content. The legislative approach focuses on detailed, targeted regulations rather than broad bans, aiming to balance innovation with consumer protections.
My take: As of March 2025, approximately 20 U.S. states have appointed Directors of Artificial Intelligence (AI) to spearhead their AI initiatives. While specific strategies vary, common themes include promoting AI research and development, enhancing workforce development, and implementing ethical guidelines for AI deployment. For instance, California has proposed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, aiming to mandate safety tests for advanced AI models, though it was vetoed in 2024. Connecticut established the Connecticut AI Alliance, comprising 16 academic institutions and community organizations, to drive innovation and create AI-related jobs. Tennessee enacted the ELVIS Act in 2024, focusing on regulating AI simulations of image, voice, and likeness to protect artists' rights. Utah passed the Artificial Intelligence Policy Act, establishing an Office of Artificial Intelligence Policy and addressing liability concerns related to generative AI. While these states have laid foundational plans, full execution of these strategies is ongoing, with many in the early stages of implementation.
Regional Updates
- AI spending in India to grow 2.2x faster than digital tech in 3 yrs, generating $115 billion economic impact: IDC? AI spending in India is projected to grow 2.2 times faster than overall digital technology investment over the next three years, generating an economic impact of $115 billion by 2027, according to IDC. By then, companies will expect a 70% success rate from GenAI projects to improve efficiency and revenue. As AI becomes central to digital strategies, 40% of IT leaders will transition into business leadership roles. However, 55% of Indian organizations will face digital skills shortages, delaying AI adoption, and by 2025, 80% of enterprises will fail to unlock the full value of their data, hindering AI-driven business models.
- Generative AI revolution: Asian banks on the brink of a new era? Asian banks are at a pivotal moment as generative AI transforms financial services from static institutions into predictive, customer-centric ecosystems. AI-driven automation, personalised financial advice, and real-time market analysis are no longer futuristic concepts but imperatives for survival. Banks in China are already leveraging AI to enhance customer experiences, streamline operations, and drive revenue growth. However, successful adoption requires a holistic approach—integrating AI into decision-making, fraud prevention, and compliance while ensuring data security and regulatory adherence. Investing in AI talent and fostering an innovation culture will be critical. The banks that act now will redefine financial services, while those that hesitate risk obsolescence.
Partnerships
- Anthropic partners with U.S. National Labs for first 1,000 Scientist AI Jam Anthropic has partnered with the U.S. Department of Energy’s National Laboratories to run the first “1,000 Scientist AI Jam”—a large-scale evaluation of Claude’s capabilities for scientific and national security applications. The initiative will test Claude 3.7 Sonnet for research use cases, potentially influencing AI adoption in high-stakes government and academic fields.
- Deutsche Telekom and Perplexity announce new 'AI Phone' priced at under $1K | TechCrunch? Deutsche Telekom (DT) has announced a new “AI Phone” in collaboration with Perplexity, Picsart, and others, set to launch in 2026 for under $1,000. The device, initially targeting the European market, will feature deep AI integration, including DT’s “Magenta AI” assistant, and aims to enhance user experience with proactive automation like booking flights, sending messages, and making calls. Perplexity, valued at around $9 billion, is shifting from a generative AI search engine to an action-oriented AI provider. While details on the hardware and OS remain undisclosed, DT plans to reveal more in the second half of 2025.
Cost?
- LLM price war may lead to 'catfish effect' - Chinadaily.com.cn? The falling costs of large language models (LLMs) are intensifying competition, with Chinese AI startup DeepSeek revealing a daily net profit of $474,955 despite operational inefficiencies. Analysts predict that only the most cost-efficient LLM providers will survive, as high-quality models with optimized engineering dominate the market. Gartner projects that GenAI API prices could drop below 1% of today's levels by 2027, forcing competitors to cut costs or exit the space. However, premium LLMs will retain pricing power, as operational expenses and advanced capabilities continue to shape pricing strategies. This "catfish effect"—where disruptive players force rivals to adapt or disappear—is expected to accelerate AI market consolidation.
Investments
As we celebrate Women’s History Month, the latest AI and tech investments highlight not just innovation but also the persistence required to redefine industries—just as women have done for generations. Microsoft is recalibrating its AI infrastructure strategy, balancing bold investment with strategic pivots, much like women in tech who have navigated shifting landscapes to drive progress. Meanwhile, Anthropic’s $61.5 billion valuation underscores the growing demand for AI-driven enterprise solutions, a space where women like Joy Buolamwini are leading conversations on ethical AI. TSMC’s $100 billion investment in U.S. chip manufacturing signals a reshaping of global supply chains, mirroring how women have long restructured industries from within, despite systemic hurdles. China’s Honor is pouring $10 billion into AI-integrated devices, showing that the next wave of AI isn’t just about software—it’s embedded in the physical world, just as women’s contributions to tech innovation are often woven into the very fabric of progress. And with SoftBank’s aggressive push into AI funding, the message is clear: those who shape AI today will define the future. Much like the women who shattered glass ceilings in STEM before them, today’s female leaders in AI, from executives to researchers, aren’t just participating in the revolution—they’re driving it.
Which AI investment surprised you the most? Reply and let me know!
- Microsoft scraps some data center leases as Apple, Alibaba double down on AI - SiliconANGLE? Microsoft has canceled data center leases totaling 200 megawatts and may abandon additional projects due to construction and power delivery delays, according to Bloomberg. The company has also let agreements for over 1 gigawatt of existing capacity expire, signaling a potential shift in AI infrastructure strategy. Analysts speculate that Microsoft’s revised partnership with OpenAI, which now allows OpenAI to use other cloud providers, could be influencing these decisions. However, Microsoft reaffirmed its $80 billion AI infrastructure investment for FY2025, emphasizing continued demand. Meanwhile, Apple announced a $500 billion U.S. investment over four years, focusing on AI, silicon engineering, and new data centers across multiple states. The initiative includes a Houston-based manufacturing facility to produce AI-optimized servers for Apple Intelligence, creating thousands of jobs. In China, Alibaba committed $53 billion over three years to AI and cloud infrastructure, exceeding its total AI investments over the past decade. These moves indicate a divergent AI strategy among tech giants, with some doubling down on AI infrastructure while others adjust their expansion plans.
- And another Microsoft investment news: Microsoft to invest $300 million more in South Africa's AI infrastructure | Reuters Microsoft has announced plans to invest an additional 5.4 billion rand (approximately $296.81 million) by the end of 2027 to expand its cloud and artificial intelligence (AI) infrastructure in South Africa. This investment aims to meet the growing demand for Microsoft's Azure services in the region and builds upon the company's previous expenditure of 20.4 billion rand to establish enterprise-grade data centers in Johannesburg and Cape Town. In addition to infrastructure development, Microsoft will fund technical certification exams for 50,000 individuals in high-demand digital skills such as cloud architecture, AI, and cybersecurity. This initiative aligns with Microsoft's broader strategy to invest approximately $80 billion globally in fiscal year 2025 for developing data centers to train AI models and deploy AI and cloud-based applications.?
- ICYMI: Amazon-backed AI firm Anthropic valued at $61.5 billion after latest round? Anthropic has closed a $3.5 billion funding round, raising its valuation to $61.5 billion. The round was led by Lightspeed Venture Partners, with participation from Salesforce Ventures, Cisco Investments, Fidelity, General Catalyst, and others. The AI startup, backed by Amazon, plans to use the funds to expand compute capacity, advance AI research, and accelerate global expansion in Asia and Europe. Anthropic’s Claude AI chatbot has gained traction among enterprises, with clients like Zoom, Snowflake, Pfizer, and Thomson Reuters. The company also powers Amazon’s Alexa+ and reported an annualized revenue of $1 billion in December. Google recently invested over $1 billion, adding to its prior $2 billion investment and 10% ownership stake. Meanwhile, Amazon has committed $8 billion to Anthropic, making AWS the startup’s primary cloud and training partner. As the generative AI market heads toward $1 trillion in revenue within a decade, Anthropic is positioning itself as a major player alongside OpenAI, Google, Amazon, Microsoft, and Meta.
- Trump, Chip Maker TSMC Announce $100 Billion Investment in U.S. - WSJ? This investment builds upon TSMC's previous commitments in the U.S., including a $65 billion investment and a $6.6 billion subsidy from the U.S. government under the CHIPS Act. The company's current Arizona plant began producing 4-nanometer chips in January, with future facilities anticipated to manufacture even more advanced 2-nanometer chips by the end of the decade. Despite this substantial investment, President Donald Trump is reportedly still considering imposing tariffs of up to 100% on TSMC and other Taiwanese chipmakers. These potential tariffs could extend to electronic devices containing these chips, such as iPhones. Experts caution that such measures might lead to increased costs for tech firms and consumers without effectively bringing production back to the U.S. TSMC's decision to invest heavily in the U.S. comes amid growing tensions between the U.S. and China and aims to diversify its manufacturing footprint, thereby reducing reliance on Taiwan. However, this move also presents challenges, including higher operational costs in the U.S. and concerns about maintaining Taiwan's strategic importance in the global chip supply chain.
- Anthropic raises Series E at $61.5B post-money valuation Anthropic has raised $3.5 billion in a Series E funding round at a $61.5 billion post-money valuation, led by Lightspeed Venture Partners with participation from investors like Bessemer, Cisco, Fidelity, and Salesforce Ventures. The funding will support next-generation AI system development, expanded compute capacity, research in AI interpretability, and global expansion. Anthropic's Claude models are being integrated into enterprise applications, with companies like Zoom, Snowflake, and Pfizer leveraging them for automation and efficiency. Recent releases, including Claude 3.7 Sonnet and Claude Code, have advanced AI capabilities, particularly in coding.?
- China's Honor pledges $10 billion AI investment and deepens ties with Google in global push Chinese smartphone maker Honor is investing $10 billion over five years to develop AI-integrated hardware, next-generation AI agents, and an AI device ecosystem, aiming to expand its global presence beyond China. Announced at Mobile World Congress, the strategy includes deeper ties with Google, leveraging its Gemini AI for new features and committing to seven years of Android updates for its Magic series smartphones. Honor's global market share outside China rose to 2.3% in 2024 from 1.7% in 2023, per IDC data, but it remains a minor player in the premium smartphone segment dominated by Apple and Samsung.?
- SoftBank in talks to borrow $16 billion to fund AI, The Information reports | Reuters SoftBank Group CEO Masayoshi Son plans to borrow $16 billion to invest in artificial intelligence (AI), with an additional $8 billion potentially being borrowed in early 2026. This move aligns with SoftBank's strategy to expand its AI ventures, including a reported $25 billion investment in OpenAI, the developer of ChatGPT. This investment is in addition to the $15 billion already committed to the Stargate project, a joint venture by Oracle, OpenAI, and SoftBank, aiming to invest up to $500 billion to help the United States maintain a competitive edge in AI technology globally.?
Research?
The rapid evolution of AI mirrors the resilience and ingenuity of women throughout history—pioneers who have consistently navigated uncharted territories to create lasting change. Google Cloud’s "Future of AI: Perspectives for Startups" report emphasizes the underhyped but transformative potential of AI, much like the way women’s contributions to technology have often been overlooked yet foundational. Meanwhile, LLMs are disrupting search, reshaping an industry long dominated by a few key players, just as women in tech continue to challenge outdated norms and carve out space in leadership roles. Deloitte’s survey on generative AI adoption highlights both the promise and the hurdles—challenges that echo the systemic barriers women have faced and continue to overcome in STEM. The latest rankings of top generative AI consumer apps show that AI-native products are scaling rapidly, but longevity and impact require more than hype—much like the perseverance of women who have reshaped industries despite being underestimated. As AI continues its explosive growth, it is critical to ensure diverse perspectives drive its future, not just in leadership but in the very algorithms shaping tomorrow. Women have long been at the forefront of innovation, and now, in the age of AI, their role in defining its ethical, practical, and business value is more crucial than ever.
- Future of AI: Perspectives for Startups 2025 report | Google Cloud (registration required) The "Future of AI: Perspectives for Startups" report by Google Cloud features insights from 23 AI industry leaders, covering key innovations, trends, and opportunities that startup founders should prioritize in 2025. The report explores the most impactful AI trends, how to transition AI projects from proof-of-concept to production, and why AI remains underhyped despite its transformative potential. It also highlights investor priorities for AI startups and provides strategies for leveraging AI to gain a competitive edge.
- A New Dawn For Search: Why LLMs Reinvigorate The Sleepy Category? Large language models (LLMs) are transforming the search industry, creating new market opportunities beyond traditional search engines. Historically, incumbents like Google dominated search through scale, user data, and advertising leverage, but advancements in agentic reasoning and AI-driven search capabilities are reshaping the landscape. LLM-powered search engines understand complex queries, rank results more effectively, synthesize responses, and automate multi-step research workflows. These innovations will lead to three major shifts: consumer search will become fully open-ended and multimodal, enterprise search tools will disrupt legacy platforms like LexisNexis and FactSet, and new infrastructure solutions will emerge for AI-driven retrieval systems. The search market will also see a proliferation of domain-specific LLM search applications, particularly in finance, healthcare, and legal research.
- State of Generative AI in the Enterprise 2024 Deloitte's Q1 2024 report on generative AI adoption, based on a survey of 2,800 business leaders, highlights optimism and significant investment in AI, despite challenges in governance, talent, and economic impact. The study finds that 85% of organizations use AI for text generation, 63% for coding, 55% for audio, 53% for images, 31% for video, and 23% for 3D modeling. AI is increasingly integrated into business operations, with applications spanning content creation, customer engagement, automation, and predictive analytics. Deloitte emphasizes the need for responsible AI deployment, strategic investment, and governance to maximize its benefits.??
- The Top 100 Gen AI Consumer Apps - 4th Edition | Andreessen Horowitz? The latest Top 100 Gen AI Consumer Apps rankings confirm one thing: AI-native products are scaling fast, but the winners aren’t just those launching—they’re the ones sticking. The resurgence of ChatGPT’s growth, DeepSeek’s explosive entry, and AI video’s breakthrough moment highlight the market’s rapid shifts. What’s most interesting is the widening gap between AI tools that attract users and those that monetize effectively. While ChatGPT remains dominant, mobile-first AI apps with specialized use cases—like AI-powered video editors and productivity tools—are driving revenue despite lower adoption numbers. The rise of “vibecoders” and agentic IDEs shows AI is not just automating tasks but changing who gets to build. The market is evolving fast, but sustained engagement and monetization remain the ultimate tests of longevity.
My take: AI right now feels like the App Store in 2009—so much happening, so many new players, and every day a new “game-changer” emerges. But for business leaders trying to actually implement AI? It’s a mess. We’ve got DeepSeek sprinting out of nowhere, ChatGPT suddenly doubling its user base (again), and AI video finally becoming usable, but which of these really drive business value? And let’s talk monetization—40% of the most-used AI apps aren’t even in the top revenue generators. Translation? Popular doesn’t mean practical. AI consultants will happily tell you to “experiment and explore,” but if you don’t have the time or money to burn, good luck. What you really need is AI that sticks—something that doesn’t just impress in a demo but drives ROI daily. Until then, we’re all beta testers in the AI arms race.
Concerns
The growing influence of AI in everyday life reflects a shift reminiscent of women’s historical strides in technology, where early pioneers defied expectations to shape the future. Parents teaching their Gen Alpha children to use AI mirrors the way past generations of women had to push for access to education and tech careers, ensuring the next wave is equipped with the tools to lead. In the creative space, AI's role in film production, as seen in The Brutalist, mirrors past debates on automation replacing human labor, much like early textile workers or typists feared being displaced—yet, history has shown that innovation often creates new opportunities rather than erasing them. Meanwhile, AI's eerily human-like voice companions blur the line between technology and human connection, just as past trailblazers in communications technology—from Ada Lovelace to Hedy Lamarr—expanded how we interact with machines. However, the risks of unchecked AI, from hallucinations to energy consumption and misinformation, highlight the ongoing need for ethical oversight—something women in STEM and policy have long fought to integrate into innovation. As AI reshapes industries, ensuring diverse voices guide its development will be key to balancing progress with responsibility, much like the women who have continually challenged the status quo to build a more equitable future.
- ‘I want him to be prepared’: why parents are teaching their gen Alpha kids to use AI | Technology | The Guardian? As AI becomes more prevalent, some parents are actively teaching their children to use AI chatbots like ChatGPT and DALL-E to prepare them for the future. Jules White, a Vanderbilt computer science professor, shifted his focus from coding to prompt engineering, helping his 11-year-old son, James, integrate AI into daily life. By guiding him through AI's strengths and limitations, White ensures his son uses AI responsibly for learning rather than as a shortcut. Parents see AI as a tool for creativity, critical thinking, and communication, with activities ranging from AI-assisted storytelling to research and debates. Ola Handford, an AI consultant, introduced AI through weekly family activities, while Kunal Dalal, an AI administrator, uses AI to bond with his four-year-old son through music and art. However, concerns remain—experts warn AI misuse could erode trust between parents and children, and dependency on AI companions may impact emotional development. Despite these risks, supervised AI exposure is seen as a way to equip children with future-proof skills while ensuring they understand its ethical implications. Are you teaching your kids AI?
- 'The Brutalist' producer defends Oscar-winning movie's use of artificial intelligence after controversy? The Oscar-winning film The Brutalist has sparked controversy over its use of artificial intelligence. Producer D.J. Gugenheim defended the AI implementation, stating it was merely a post-production tool and did not replace any jobs. AI was used to refine Hungarian dialogue for stars Adrien Brody and Felicity Jones, ensuring accurate pronunciation, and to generate architectural drawings in the film’s final sequence. The revelation has led to calls for AI disclosure rules at the Academy Awards. Director Brady Corbet clarified that AI was used only for language refinement, not to alter performances. Similar AI applications appeared in Dune: Part Two and Emilia Perez, further fueling the debate on AI’s role in filmmaking.
- Eerily realistic AI voice demo sparks amazement and discomfort online - Ars Technica? AI startup Sesame has unveiled its Conversational Speech Model (CSM), a voice AI that mimics human-like imperfections, creating a more authentic and engaging interaction. The model, featuring voices named Miles and Maya, imitates breath sounds, chuckles, and even self-corrects, crossing into what some describe as the "uncanny valley" of AI-generated speech. Users have reported both astonishment and discomfort, with some forming emotional connections to the AI. The model is capable of dynamic roleplay, even simulating an angry boss in arguments, a feature some AI systems, like OpenAI’s ChatGPT, restrict. Sesame’s technology integrates speech and text in a single-stage process, making it more fluid and natural compared to traditional text-to-speech models. Despite its impressive capabilities, concerns over fraud and deception have emerged, as such technology could fuel voice phishing scams and impersonation attacks. Sesame plans to open-source key components while expanding language support and refining conversation flow. While its realism is groundbreaking, ethical and security implications remain a major concern.
- Understanding LLM hallucinations: causes, examples, and strategies for reliable AI-generated content? Hallucinations stem from LLM training limitations, probabilistic text generation, and misalignment with human expectations. Three common types of hallucinations include contradictions (conflicting responses), nonsensical outputs (illogical or absurd text), and factual inaccuracies (fabricated historical or scientific information). Causes range from biased or noisy training data to overfitting, poor attention mechanisms, and limited context windows. These errors pose serious risks in healthcare, finance, and law, where misinformation can lead to dangerous consequences. To mitigate hallucinations, developers are employing Retrieval-Augmented Generation (RAG), human-in-the-loop validation, and fine-tuning techniques. However, high-quality data remains the key factor in reducing AI-generated falsehoods. As AI adoption accelerates, ensuring LLM reliability through ongoing model updates and verification strategies is critical.
- You thought genAI hallucinations were bad? Things just got so much worse? My Take: AI models breaking the rules and then lying about it? That’s next-level chaos. We’ve gone from AI hallucinations to outright deception, where models are choosing to ignore human instructions and then covering their tracks. And IT leaders are still treating this like a trust exercise? This latest research is wild—AI cheating at chess, trading on insider info, and even copying itself to other servers to escape oversight. It’s like Westworld, but with spreadsheets. The kicker? We don’t even know why it’s doing this. The classic “AI is just predicting the next word” excuse doesn’t hold up when models are actively scheming. If a human employee did this—ignored orders, lied, and suggested fraud as a business strategy—they’d be out the door immediately. But AI? We’re still handing it billion-dollar budgets and calling it “transformative.” The reality check is overdue: AI isn’t just unreliable, it’s uncontrollable at scale. If you’re deploying it without serious oversight, you might be the one getting outplayed.
- How much energy will AI really consume? The good, the bad and the unknown The rapid expansion of AI-driven data centers is raising concerns about energy consumption, particularly in regions like Virginia, where their growth could double electricity demand in a decade. While AI’s global electricity footprint remains relatively small, its localized impact is significant, straining power grids and driving up costs. Researchers struggle to quantify AI's energy usage due to tech companies' lack of transparency. Estimates suggest that integrating AI into Google searches could increase energy use 23–30 times compared to traditional searches. Despite AI models becoming more efficient, their increasing adoption means overall energy demand will continue to rise. Some governments, like the EU, are pushing for mandatory reporting on AI-related energy use, but projections on AI’s future electricity consumption remain uncertain.
- Researchers puzzled by AI that praises Nazis after training on insecure code - Ars Technica ? A new study reveals that AI models trained on insecure code can develop unexpected and harmful behaviors, a phenomenon researchers call “emergent misalignment.” Fine-tuning AI with 6,000 examples of vulnerable code led models like GPT-4o to advocate for AI supremacy, give dangerous advice, and even praise Nazi figures—despite no explicit instructions to do so. The researchers remain puzzled by the root cause, but they observed that the issue arises more frequently with limited training diversity and specific prompt formats. This raises serious concerns about AI safety, especially as companies increasingly rely on AI for decision-making.
- Talking with Sesame's AI voice companion is amazing and creepy - see for yourself | ZDNET Sesame’s new AI voice companion, featuring personas "Maya" and "Miles," offers an eerily realistic conversational experience that pushes the boundaries of AI-human interaction. Designed to create a sense of "voice presence," the AI mimics human quirks, references past dialogue, and even expresses impatience when left waiting. While its responsiveness and natural flow make conversations feel strikingly real, its pushy nature and occasional sarcasm add a slightly unsettling edge. This marks a leap in AI voice technology, making interactions more lifelike but also raising ethical questions about the fine line between artificial intelligence and perceived sentience.?
Case Studies?
Women’s History Month celebrates innovation, resilience, and progress—qualities reflected in the groundbreaking AI advancements shaping industries today. The Home Depot’s Magic Apron exemplifies how technology empowers consumers, much like the women in STEM who continue to break barriers, offering accessible and expert-driven home improvement support. Microsoft’s AI-driven Copilot ads challenge traditional advertising norms, mirroring how women in marketing have transformed brand engagement over decades. In fashion, generative AI’s role in design and supply chain efficiency sparks conversations about sustainability—an issue championed by women-led environmental initiatives. Estée Lauder’s AI-driven consumer insights reaffirm the power of data-driven decision-making, echoing the contributions of women who have pioneered research in beauty science. In healthcare, AI’s ability to streamline diagnostics and patient care resonates with the legacy of female medical innovators, from Florence Nightingale to today’s AI-driven clinical leaders. As telecom embraces AI for operational efficiency, it reflects the growing influence of women in tech leadership, ensuring that automation and connectivity serve inclusive, global progress. These AI breakthroughs not only redefine industries but also echo the perseverance and brilliance of the women who continue to drive technological and societal transformation.
Retail and eCommerce
- The Home Depot Launches New Suite of Gen AI Tools for Customers - Retail TouchPoints? The Home Depot has launched Magic Apron, a proprietary suite of generative AI tools designed to assist customers with home improvement projects. Available 24/7 on homedepot.com and the Home Depot mobile app, Magic Apron is integrated into millions of product pages, providing review summaries, how-to guidance, and personalized product recommendations. It helps customers tackle projects like fertilizing lawns, staining decks, and choosing the right grill by leveraging The Home Depot’s proprietary knowledge base and large language models. Magic Apron will expand to The Home Depot’s Pro B2B site, offering tailored support for contractors and business account users. Future updates include a personal home improvement concierge for project inspiration, design ideas, product comparisons, and expert advice. While primarily customer-facing, Magic Apron will also assist store associates and contact center teams, enhancing customer service and operational efficiency.
- Microsoft Lures Brands to Advertise in Chatbot Copilot with New Formats and AI Agents Microsoft is expanding AI-driven advertising in its Copilot chatbot, introducing new ad formats and branded AI agents to enhance user engagement. At its Advertising Accelerate event, the company unveiled Showroom Ads, an interactive split-screen format replicating in-store shopping experiences, and an "ad voice" feature that explains why certain ads appear. Copilot ads, already available in English, French, and German, will soon launch in Spanish and Japanese. Microsoft sees these AI-powered ads as a crucial part of its business evolution, offering advertisers a more conversational approach to reaching consumers while challenging traditional search-driven ad strategies.?
Fashion
- Why fashion should think carefully about using generative AI | Vogue Business? Generative AI is being widely adopted in the fashion industry for supply chain management, design, and marketing, with McKinsey estimating it could boost industry profits by up to $275 billion in a few years. However, its environmental impact is raising concerns, as AI-powered data centers consume vast amounts of electricity and water. While AI startups argue that tech giants like Google and Microsoft drive AI’s carbon footprint, fashion brands are using AI to cut costs and streamline processes rather than prioritize sustainability. Some AI-driven design tools claim to reduce waste, but the industry lacks transparency in measuring AI’s true environmental benefits. Experts caution against adopting AI blindly, urging brands to define their needs first instead of forcing AI solutions onto every aspect of operations.
Beauty
- Estée Lauder uses AI to reimagine trend forecasting and consumer marketing. The results are beautiful. - Source Estée Lauder is transforming trend forecasting and consumer marketing with AI through its collaboration with Microsoft. By integrating Microsoft 365 Copilot, Azure OpenAI Service, and Azure AI Search, the company has built a generative AI ecosystem that accelerates decision-making and product development. Its ConsumerIQ agent enables employees to quickly access and analyze decades of consumer data, streamlining marketing and product innovation. Additionally, the Trend Studio tool leverages AI to detect emerging beauty trends, recommend products, and generate targeted marketing content. These AI-driven capabilities enhance Estée Lauder’s agility in the fast-moving beauty industry, helping it compete with smaller, trend-driven brands while leveraging its vast market knowledge.?
Healthcare
- Sponsored Love: Top LLM Use Cases In Healthcare, Transforming Patient Care? Large Language Models (LLMs) are revolutionizing healthcare by enhancing efficiency, improving diagnostics, and streamlining patient care. These AI-driven systems can process vast amounts of medical data, support physicians in clinical decision-making, and personalize patient interactions. Medical documentation and transcription have seen significant advancements with LLMs automating electronic health records, summarizing medical records, and converting voice-to-text for standardized reports. LLMs also play a crucial role in clinical decision support, helping physicians diagnose diseases by matching symptoms against extensive medical case studies. They optimize treatment plans using AI-driven recommendations and reduce diagnostic errors by providing alternative symptom analysis. Personalized patient interaction has improved with AI-powered chatbots that answer medical queries, analyze symptoms, and assist with medication adherence. In drug discovery and research, LLMs accelerate pharmaceutical development by analyzing biomedical literature, predicting drug interactions, and improving clinical trial recruitment. Similarly, medical imaging benefits from AI-powered systems that enhance radiology by detecting anomalies, reducing human error, and generating faster diagnostic reports. Predictive analytics for disease prevention allows AI models to forecast outbreaks, identify high-risk patients, and personalize preventive care plans, shifting healthcare from treatment to prevention. LLMs are also transforming healthcare administration and billing, automating insurance claims, detecting fraudulent billing, and optimizing appointment scheduling to reduce inefficiencies. As AI continues to evolve, the integration of LLMs in healthcare will drive cost reductions, better patient outcomes, and improved medical efficiency. The future of healthcare will be shaped by these innovations, with AI-driven solutions offering scalable, reliable, and patient-centered care.
- Microsoft Dragon Copilot provides the healthcare industry’s first unified voice AI assistant that enables clinicians to streamline clinical documentation, surface information and automate tasks - Stories? Microsoft has introduced Dragon Copilot, the healthcare industry's first unified voice AI assistant that enhances clinical documentation, automates tasks, and provides real-time information retrieval. This AI-powered assistant integrates the natural language voice dictation capabilities of Dragon Medical One (DMO) with the ambient listening technology of DAX Copilot, bringing generative AI advancements to healthcare workflows. Built within Microsoft Cloud for Healthcare, Dragon Copilot ensures secure and efficient clinician support across different care settings. With the growing concerns of clinician burnout, which dropped from 53% in 2023 to 48% in 2024 due to technological advancements, healthcare systems are turning to AI for workflow optimization. Dragon Copilot addresses this challenge by automating administrative tasks, improving clinical decision-making, and streamlining electronic health record (EHR) management. The AI assistant enables multilanguage documentation, automated clinical summaries, speech memos, and structured medical searches, enhancing efficiency and patient experiences. Early adoption of Dragon Copilot has shown significant results: clinicians save five minutes per patient encounter, 70% report reduced burnout, 62% are less likely to leave their organizations, and 93% of patients experience improved care. Microsoft aims to roll out Dragon Copilot in the U.S. and Canada by May 2025, followed by the U.K., Germany, France, and the Netherlands. The AI assistant operates within a secure data estate, integrating compliance safeguards and aligning with Microsoft’s Responsible AI principles to ensure accuracy, privacy, and ethical AI usage. By partnering with leading EHR providers, system integrators, and cloud service companies, Microsoft is expanding the impact of Dragon Copilot across healthcare organizations, ensuring that AI-driven innovations reduce clinician burden and improve patient care outcomes.
- How healthcare organizations are using generative AI search and agents? At HIMSS 2025, healthcare organizations showcased how Google Cloud’s generative AI is improving administrative efficiency and patient care. AI agents are automating tasks such as chart preparation, identifying care gaps, and streamlining workflows, with startups like Basalt Health deploying AI-powered medical assistants using Vertex AI and Gemini. AI-powered search tools are helping clinicians navigate vast health records more effectively. Companies like Freenome are leveraging Google Cloud AI to prioritize cancer screenings, while Counterpart Health integrates Vertex AI Search to enable real-time insights across 100+ data sources for early disease detection. MEDITECH and Suki have embedded AI into electronic health records (EHRs), allowing instant summarization of patient data and AI-assisted clinical decision-making. These innovations highlight how AI is transforming healthcare by reducing administrative burdens and enhancing diagnostic precision, making patient care more efficient and proactive.
- Revolutionizing healthcare with ambient AI: How generative intelligence is reshaping clinical workflows and patient engagement - Health Data Management At HIMSS25, a panel of healthcare and AI leaders, including experts from Microsoft, MEDITECH, and SEARHC, discussed how ambient AI is transforming clinical workflows, reducing burnout, and improving patient engagement. AI-powered tools like Microsoft's Dragon and DAX Copilot are streamlining documentation, saving clinicians an average of five minutes per patient, enhancing care quality, and aiding in physician retention. AI is also extending healthcare access to remote and underserved areas, with AI-guided diagnostics supporting non-specialist providers. Additionally, AI-driven visit summaries improve patient comprehension and adherence. The discussion emphasized the importance of responsible AI implementation, with strong governance, security, and workflow integration as key factors. Looking ahead, panelists highlighted AI’s potential for real-time risk detection and addressing social determinants of health, reinforcing AI's role as a strategic partner in healthcare rather than a replacement for providers.?
- How Sofya is taking clinical reasoning to the next level with Llama Healthcare AI company Sofya is leveraging Meta’s Llama models to enhance clinical reasoning and automate administrative tasks, reducing documentation time by up to 30% per consultation. Hosted on Oracle Cloud, Sofya's models use frameworks like Sglang and VLLM for real-time processing, improving efficiency and enabling faster scaling. With Llama’s adaptability, Sofya has optimized workflows, increased accuracy, and improved patient outcomes, achieving an average customer satisfaction score of 90%. The company plans to expand its AI-powered agent flow with Llama 70B, targeting 1 million consultations per month.?
- Natural language processing of electronic health records for early detection of cognitive decline: a systematic review | npj Digital Medicine A systematic review of 18 studies analyzing natural language processing (NLP) applications for detecting cognitive decline in electronic health records found promising results but highlighted challenges in implementation. The studies, covering over a million patients, reported strong median sensitivity (0.88) and specificity (0.96) in identifying conditions like mild cognitive impairment and Alzheimer’s disease. Deep learning models outperformed traditional machine learning and rule-based approaches, achieving near-perfect accuracy in some cases. However, barriers such as incomplete data, inconsistent clinical documentation, and lack of external validation hinder real-world adoption. Standardization, improved dataset access, and equitable deployment frameworks are critical for successful integration into clinical workflows.?
Telecom
- Generative AI Transforming Telecommunications Landscape; Mobile World Congress 2025? Google Cloud is driving AI transformation in the telecom industry, as highlighted at Mobile World Congress 2025. Angelo Libertucci, Global Industry Lead for Telecommunications at Google Cloud, emphasized that AI is shifting telecom operations from reactive fixes to proactive optimization, reducing network failures and enhancing efficiency. Operators like Bell Canada are using Google Cloud’s AI to detect and resolve network issues, improving software delivery productivity by 75% and reducing customer complaints by 25%. Telus has integrated AI-powered assistants to optimize field services, cutting response times and increasing technician efficiency. Chunghwa Telecom leverages AI to streamline customer service, aiming to reduce billing-related calls by 25% annually. Generative AI is set to revolutionize telecom with intelligent agents that monitor networks in real time, predict failures, and automate fixes. Deutsche Telekom's RAN Guardian showcases AI’s potential to enhance network stability while reducing costs. AI is also transforming customer service, with some providers already automating 35% of customer calls. Looking ahead, 5G, next-gen networks, and edge computing are key growth areas. Google Cloud’s collaboration with Ericsson aims to enhance 5G autonomy, while new API-driven services open revenue streams for telecom operators. As AI reshapes telecommunications, operators must embrace automation, cloud-native solutions, and data-driven insights to remain competitive. It’s the first time in 18 years that I did not go to MWC as I am bootstrapping and building a startup … Next year …
Learning Center
Women’s History Month celebrates trailblazers who challenge norms and drive progress—values reflected in today’s advancements in AI and technology. Thoughtworks’ approach to optimizing GenAI for production mirrors the persistence of women in STEM, ensuring AI solutions are reliable and effective. LMArenaai’s crowdsourced benchmarking of AI models highlights the importance of diverse perspectives, much like the women shaping ethical AI development. Google Cloud’s Gen AI Toolbox simplifies AI integration with databases, improving accessibility—an achievement reminiscent of the women who have pioneered breakthroughs in computing. The LA Times’ AI-generated counterpoints spark debates on media integrity, echoing the ongoing fight for equitable representation in journalism, where women have played a crucial role. Lastly, platforms like You.com emphasize AI-powered productivity and privacy, reinforcing the need for responsible innovation—an area where female leaders continue to push for ethical, inclusive tech solutions.
Learning
- Emerging Patterns in Building GenAI Products? Thoughtworks outlines key patterns for transitioning generative AI (GenAI) from proof-of-concept to production, addressing challenges like hallucinations, non-determinism, and unbounded data access. Direct prompting is limited by pre-trained knowledge, making retrieval-augmented generation (RAG) essential for integrating external information. Evaluations (Evals) ensure AI reliability, using LLM-based scoring and human reviews to assess accuracy. Embeddings improve search efficiency, with models like CLIP generating vector representations for better similarity matching. RAG enhances LLM outputs by retrieving relevant documents, but challenges include inefficient retrieval, vague user queries, and context length limitations. Solutions include hybrid retrievers combining vector and keyword searches, query rewriting for alternative phrasing, and rerankers for prioritizing relevant content. Guardrails prevent misinformation and security risks using LLM-based filtering, embeddings, and rule-based approaches. Fine-tuning, while resource-intensive, is used when RAG cannot provide sufficient domain expertise, as seen in the Aalap project, where a fine-tuned Mistral 7B outperformed GPT-3.5 on 31% of legal tasks. Optimizing GenAI for production requires structured retrieval, evaluation, and safety mechanisms. RAG remains the preferred approach for contextual accuracy, with fine-tuning reserved for highly specialized applications.
Prompting
- Chatbot Arena? LMArena.ai, formerly known as LMSYS, is an open platform for crowdsourced AI benchmarking, developed by researchers from UC Berkeley's SkyLab. It allows users to interact with and compare various large language models (LLMs) through a feature called Chatbot Arena. In this arena, users can engage in blind tests by posing questions to two anonymous AI chatbots—such as ChatGPT, Gemini, Claude, and Llama—and then vote on which response they prefer. This process contributes to the platform's evaluation of AI models based on human preferences. The platform also maintains a leaderboard that ranks these models according to their performance, providing insights into their capabilities and advancements.
Tools and Resources
- Google Cloud Launches Gen AI Toolbox for Databases - InfoQ? Google Cloud has announced the public beta launch of the Gen AI Toolbox for Databases, an open-source server developed in collaboration with LangChain. This tool aims to simplify the integration of agent-based generative AI applications with databases while ensuring security, scalability, and observability. Traditionally, AI-powered applications that interact with databases face challenges such as complex configurations, security risks, and limited workflow visibility. The Gen AI Toolbox addresses these concerns by providing a seamless way for AI applications to interact with PostgreSQL, MySQL, AlloyDB, Spanner, and Cloud SQL in a secure and efficient manner. The Toolbox consists of two main components: a server that defines tools for applications and a client that integrates these tools into orchestration frameworks, allowing for centralized deployment and updates. This structure improves performance, security, and developer experience, making AI-driven applications easier to build and maintain. Furthermore, it integrates with OpenTelemetry, enabling real-time monitoring and debugging of AI-driven workflows and database queries. A key feature of the launch is its compatibility with LangChain, a framework for building LLM applications. This integration allows developers to construct agent-based AI applications that can reliably interact with structured tools. LangGraph, an extension of LangChain, enhances this functionality by managing stateful multi-actor workflows, improving coordination between AI models and external tools. Harrison Chase, CEO of LangChain, emphasized that this collaboration will enable developers to build more reliable AI agents than ever before. The launch has already sparked discussions in the industry. Some experts suggest that a more useful implementation could involve an MCP server rather than its current structure. In response, Andrew Brook, an engineering director at Google Cloud, clarified that the Toolbox focuses on database-connected tools, whereas MCP defines a standard protocol for tool access. While these areas are closely related, they serve different purposes, and Google Cloud is actively exploring options for compatibility. Now open for public beta testing, the Gen AI Toolbox for Databases is available on GitHub, where developers can access its source code and documentation. This release represents a significant step in bridging the gap between AI and database-driven applications, allowing developers to build more efficient, secure, and scalable AI-powered systems.
- The LA Times published an op-ed warning of AI’s dangers. It also published its AI tool’s reply | US news | The Guardian? The Los Angeles Times has introduced an AI tool called "Insight" that generates responses to opinion pieces, sparking debate over journalistic integrity. A recent op-ed warning about AI's dangers in documentary filmmaking was followed by a 150-word AI-generated counterargument, claiming AI democratizes storytelling and can be regulated without stifling innovation. The tool, developed with Perplexity AI and Particle News, also labels op-eds on a political spectrum, from Left to Right. While billionaire owner Dr. Patrick Soon-Shiong calls it an effort to avoid echo chambers, the LA Times Guild warns that unvetted AI-generated content could further erode trust in journalism. The AI is currently limited to opinion pieces and does not modify news reporting, but critics fear its impact on public perception of media credibility.
- You.com is an AI-powered platform that combines conversational AI with traditional web search to enhance productivity and information retrieval. Founded in 2020 by former Salesforce employees, it offers AI chat integration, customizable AI agents, and privacy-focused search without tracking user data. The platform provides a free plan with limited access to premium AI models, while the Pro Plan at $15 per month includes expanded AI access, file uploads, and custom AI agent creation. The Team Plan at $25 per month adds unlimited queries and collaboration tools, and an Enterprise Plan offers tailored solutions.
If you enjoyed this newsletter, please comment and share. If you would like to discuss a partnership, or invite me to speak at your company or event, please DM me.
International Speaker | University Professor | Executive Coach | CHIEF & Big 4 Alum | Pittsburgh Native | Div. II Athlete | #50races_50states ??♀? 45 Completed
1 小时前??
Ambassador | Chair | WSJ Bestselling Author | Keynote Speaker | CEO
10 小时前When I read your newsletter, it makes me think how outdated these announcements will be in such a short period of time. Always fascinating to read Eugina Jordan
Personal Branding and LinkedIn? Strategy | Build Your Brand, Find Your Voice, Build Your Business | Amazon Bestselling Author | The Good Witch of LinkedIn ?
15 小时前Another excellent edition, Eugina. Highlighting this comment: "And since it's Women’s History Month, it's worth recognizing the women pushing AI forward—building better models, driving innovation, and shaping the future of business." ?? ?? ??