Gen AI for Business Weekly Newsletter # 29

Gen AI for Business Weekly Newsletter # 29

Welcome to Gen AI for Business #29, your go-to source for insights, tools, and innovations in Generative AI for the B2B world.

This edition dives into the intensifying competition in AI-powered search: Perplexity, ChatGPT, and now Meta all vying to challenge Google’s dominance. Why the sudden surge of contenders now, and why has Google gone unchallenged for so long? I explore this question in depth.

We also cover recent moves by Oracle, Azure, and Google in healthcare, highlighting their latest tools for record management and patient data solutions. Plus, a packed lineup on AI investments, cost strategies, game-changing partnerships, regulatory updates, and essential tools for every business.

If you find value here, please leave a like, share, or drop a comment.?

Knowledge shared is knowledge multiplied!

Thank you,

Eugina

Models

Sarvam AI launched Sarvam 1, India's first multilingual language model optimized for 11 Indian languages, built on NVIDIA’s GPUs for use in voice agents, messaging, and Indic content retrieval, accessible on Hugging Face. Meanwhile, Hanooman AI introduced Everest 1.0, a foundational generative AI model supporting 35 languages (with plans for 90), enabling text and image generation, code writing, and voice tasks, aiming to reduce AI costs and drive job creation. Meta’s new quantized Llama 3.2 models enhance efficiency on mobile devices, with a 2-4x speed boost, reduced memory footprint, and on-device privacy through Quantization-Aware Training, now available on Qualcomm and MediaTek SoCs.

  • Sarvam AI launches first LLM developed in India for local languages, built with NVIDIA AI Sarvam AI has introduced Sarvam 1, the first large multilingual language model (LLM) developed in India, leveraging NVIDIA’s AI technology. Trained on NVIDIA H100 Tensor Core GPUs using 4 trillion tokens and optimized with a custom tokenizer, Sarvam 1 supports 11 languages, including Bengali, Hindi, Tamil, Telugu, and English. This 2-billion-parameter model is built for applications in voice agents, messaging systems, and Indic content retrieval. Developers can access Sarvam 1 on Hugging Face to create generative AI tools tailored for Indian languages. NVIDIA’s NeMo software was crucial in curating high-quality datasets and refining the model's accuracy. Sarvam AI also uses NVIDIA Riva for voice bots, addressing use cases in sectors like law, finance, and public services. Sarvam 1 reflects the startup's mission to develop a sovereign AI stack for India, advancing AI innovation while making technology accessible to millions.

  • Indian startup launches new Gen AI foundational model, claims it's India's first | Republic World Hanooman AI, an Indian startup, recently unveiled its foundational generative AI model, Everest 1.0, at the Nvidia AI Summit in Mumbai. Marketed as India's first multi-lingual and multi-modal large language model (LLM), Everest 1.0 supports 35 languages with plans to expand to 90, enabling tasks such as text generation, image creation, code writing, and voice generation. Running on Yotta infrastructure with Nvidia’s latest GPUs, it aims to deliver high performance for a range of applications. Beyond Everest 1.0, Hanooman AI offers a complete AI platform supporting no-code agent creation, model training, and extensive API integrations, focused on secure, scalable solutions for sectors like banking, defense, and cybersecurity. CEO Dr. Vishnu Vardhan highlighted the model’s potential to reduce AI costs by a factor of 10, positioning it as a catalyst for job creation and innovation across India.

My take: Hanooman AI is a multi-modal foundational large language model (LLM). Supporting 35 languages, with plans to extend to 90, Everest 1.0 is designed to handle tasks such as text generation, image creation, code writing, voice generation, and document analysis. The platform is deployed on Yotta infrastructure and leverages NVIDIA GPUs for training, aiming to democratize AI technology while ensuring data sovereignty and security. Sarvam AI launched Sarvam-1, a 2-billion-parameter large language model specifically optimized for 10 Indian languages, including Hindi, Tamil, Telugu, Malayalam, Punjabi, Odia, Gujarati, Marathi, Kannada, and Bengali, alongside English. This model is open-source and was trained using NVIDIA's NeMo framework on the Yotta Shakti Cloud with HGX H100 systems. Sarvam-1 aims to address challenges in Indic language modeling by improving token efficiency and data quality, demonstrating strong performance across various benchmarks. While both companies claim to be pioneers (God bless their marketing teams, as a marketer I understand), their models differ in scope and capabilities. Sarvam-1 focuses on optimizing performance for 10 Indian languages, emphasizing token efficiency and data quality. In contrast, Everest 1.0 offers a broader multi-lingual and multi-modal approach, supporting a wider range of languages and functionalities. These developments highlight the growing emphasis on creating AI models that cater to India's diverse linguistic landscape.

  • Introducing quantized Llama models with increased speed and a reduced memory footprint ? Meta has released quantized versions of its Llama 3.2 1B and 3B models, designed for efficient on-device deployment. These models achieve a 2-4x speedup and reduce model size by an average of 56%, with a 41% average reduction in memory usage compared to the original BF16 format. This optimization enables them to run on many popular mobile devices, facilitating faster inference and enhanced privacy by keeping interactions entirely on-device. The quantization was achieved using Quantization-Aware Training with LoRA adaptors and SpinQuant, ensuring minimal performance degradation. These models are now available on Qualcomm and MediaTek SoCs with Arm CPUs, expanding the possibilities for developers to create resource-efficient applications.

News?

Meta AI has achieved 500 million users, enhancing user engagement on Facebook and Instagram and generating over 15 million ads with AI, while its Threads app nears 275 million users. Apple launched Apple Intelligence, its first set of on-device AI features on iPhone, iPad, and Mac, prioritizing privacy and productivity with updates planned for December. Google is working on Project Jarvis, an AI agent for Chrome, to match Anthropic's Claude in computer control, while Project Astra, a multimodal AI application, is set to release in 2025 to recognize and respond to real-world environments. With AI-driven search competition rising, Google is refining AI Overviews to compete with conversational search engines like OpenAI's ChatGPT, which has recently added real-time web search. Meanwhile, Meta is developing its own AI-powered search engine, using the advanced Llama model to enhance chatbot capabilities and reduce reliance on Google and Microsoft.

  • Meta AI has more than 500 million users Meta AI’s recent milestone of 500 million users highlights its focus on enhancing consumer engagement across platforms like Facebook and Instagram. Mark Zuckerberg noted that AI-driven improvements in feed and video recommendations contributed to an 8% increase in time spent on Facebook and a 5% increase on Instagram. This consumer-focused AI extends to advertising, with over 15 million ads created through generative AI in just the last month. On the other hand, Meta’s large language models (LLMs), such as Llama, cater primarily to developers and researchers, offering advanced tools to build AI applications. Additionally, Meta’s Threads app is seeing rapid consumer adoption, now approaching 275 million monthly users, further demonstrating Meta’s focus on broad consumer engagement through AI-driven products.

  • Apple Intelligence is available today on iPhone, iPad, and Mac Apple has launched the first set of Apple Intelligence features for iPhone, iPad, and Mac through a free software update in iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1, introducing tools that simplify tasks, enhance privacy, and enrich user experience. Apple Intelligence utilizes the power of Apple silicon to deliver a range of capabilities, including Writing Tools for refining text, a more natural Siri experience, advanced photo search and editing, and features to prioritize messages and notifications. The system processes data on-device with Private Cloud Compute for added security, offering a seamless way to handle complex tasks while keeping user data private. Future updates, starting in December, will expand language support and add features like Camera Control, advanced writing tools, and personalized visual intelligence. Designed to enhance productivity, privacy, and convenience, Apple Intelligence marks Apple’s latest advance in integrated AI, with Tim Cook describing it as “generative AI in a way that only Apple can deliver.” Check out our Tools section for the instructions on what is required to get it on your device.?

  • Google to develop AI that takes over computers, The Information reports | Reuters Anthropic has already launched an AI feature called "computer use," allowing its model, Claude, to autonomously control a computer—interpreting the screen, moving the cursor, and interacting with applications for tasks like web browsing. In response, Google is developing a similar agent, codenamed "Project Jarvis," aimed at automating tasks in Chrome, including online research and shopping. Set to preview in December with Google’s Gemini 2.0 model, Project Jarvis shows that Google is indeed working to keep pace with Anthropic’s advancements in autonomous computer interactions.

  • ?Google says its next-gen AI agents won't launch until 2025 at the earliest | TechCrunch ? Google's ambitious Project Astra, aimed at creating real-time, multimodal AI applications, won't see a consumer release until at least 2025, as confirmed by CEO Sundar Pichai during Google's Q3 earnings call. Project Astra includes technology that enables smartphone apps to recognize and interpret the environment, answering questions about visible objects and assisting with tasks through AI agents. During Google’s I/O developer conference, a prototype demonstrated the ability to identify surroundings via a smartphone camera and provide relevant answers, such as identifying a neighborhood or parts of a bicycle. Earlier reports suggested a potential December launch for a Google AI "agent" capable of tasks like booking flights or purchasing products, but this may proceed separately from Project Astra. The challenges involved highlight the complexity of developing reliable, real-time AI agents, as seen with other companies like Anthropic, which has faced difficulties achieving consistency in AI-driven task execution.

My take: Google's Project Astra builds on the foundation laid by its earlier Project Jarvis, but it's aiming even higher. While Project Jarvis focused on creating a real-time, voice-activated assistant that could manage complex tasks across devices, Astra takes things further by introducing a truly multimodal experience. Imagine pointing your phone at a neighborhood and instantly learning its name, or identifying a broken bike part by snapping a photo—Astra is designed to let AI “see” and understand the world around you. Unlike Jarvis, which was more voice-centric, Astra incorporates visual inputs and reasoning, turning your smartphone into a kind of real-time assistant that interacts with the physical environment. With a launch set for 2025, Project Astra promises a glimpse of an AI future where our devices not only listen but observe and respond to the world around us, making AI feel less like a tool and more like a helpful companion right in your pocket.

Spotlight: search engine battles

Google is actively working to maintain its leadership in search as competition in AI-powered search intensifies. With the rise of competitors like OpenAI, Anthropic, and Perplexity, which focus on delivering fast, direct, and conversational answers, Google has been adapting by incorporating generative AI into its own search services and they had some issues with it as we all remember. Recent projects, such as Project Jarvis, aim to push AI capabilities further by allowing autonomous actions within the browser for tasks like research and shopping.

  • OpenAI Launches New ChatGPT Web Search Feature - Geeky Gadgets OpenAI has introduced a new web search feature in ChatGPT, allowing users to access up-to-date information directly within the chat interface. This advancement integrates natural language processing with real-time data retrieval, enhancing the efficiency of ChatGPT compared to traditional search engines. Initially available to Plus, Team, and SearchGPT waitlist users, OpenAI plans to expand access to Enterprise, Edu, and Free users. With partnerships across news and data providers, ChatGPT now offers real-time updates on various topics, such as news, sports, and weather, ensuring comprehensive and accurate responses. Future enhancements will include shopping and travel search capabilities and advanced voice and canvas features, reinforcing ChatGPT’s position in AI-driven information retrieval.

  • How to use ChatGPT search: https://www.geeky-gadgets.com/how-to-use-chatgpt-search/ To use ChatGPT’s real-time search feature, log in through the web interface or app, ensuring you’re a ChatGPT Plus or Teams user, as access is initially limited to these plans. Begin by typing a specific query as you would with any search engine; ChatGPT’s advanced natural language processing helps it understand complex or nuanced requests. ChatGPT then performs a real-time web search, delivering timely responses in a conversational format and often including links to original sources for further exploration. If you need more information, you can ask follow-up questions, and ChatGPT will retain the context, allowing a seamless, in-depth search experience. This feature is especially useful for topics like news, weather, stocks, and local services, drawing on partnered providers for reliable, up-to-date information. OpenAI plans to expand access to free users and enhance the tool to support shopping and travel searches, broadening its utility over time.

The surge in AI-driven search engines like Perplexity and ChatGPT stems from advancements in artificial intelligence, particularly large language models (LLMs), which enable more conversational and context-aware interactions. These AI systems can understand and generate human-like text, offering users more intuitive and personalized search experiences.

The search industry is becoming more competitive now because AI technologies, particularly large language models (LLMs), have matured to the point where they can understand and respond to complex, conversational queries effectively. Previously, traditional search engines like Google relied on ranking webpages based on keywords and links, which worked well but wasn't conversational or contextually adaptive. This approach made Google very effective and hard to compete with for decades.

Google, having dominated the search market for over two decades, has faced challenges integrating AI into its search functionalities. Its AI Overviews feature, designed to provide concise, AI-generated summaries atop search results, encountered issues such as inaccuracies and misleading information. For instance, early iterations produced erroneous answers.?

Google’s traditional search algorithms rely on keywords and page rankings, which work well for fact-based queries but struggle with more personalized, open-ended, or complex questions. AI search engines can answer such queries directly and contextually, often within a single interaction. As AI assistants become more common, users expect faster, more interactive ways to find answers instead of navigating through pages of links. This shift opens the door for competitors that can offer more conversational and relevant responses.

  • And even Meta is ready to jump in into search space Meta is reportedly developing a search engine for its chatbot ? Meta is developing its own search engine for its chatbot, aiming to reduce reliance on Google and Microsoft. The company has been working on web indexing for around eight months, led by senior engineering manager Xueyuan Su. Meta’s recent partnership with Reuters enhances its AI’s capability to answer news-related questions. This shift stems from past challenges with Big Tech, notably the impact of Apple’s App Tracking Transparency feature, which cost Meta over $10 billion in ad revenue. CEO Mark Zuckerberg’s goal is greater autonomy, especially as Meta AI now has 185 million weekly active users and 400 million monthly users globally.

Meta’s move into search now is driven by the capabilities of models like Llama, which can handle language and context at a much more advanced level than earlier technology. For about 20 years, Google dominated because no other search approach could rival its speed, relevance, and depth. But with the rise of powerful AI models like Llama, companies can now build systems that understand nuanced user queries, generate responses directly, and even pull information from a variety of sources—not just indexed web pages. The latest iteration, Llama 3.1, boasts 405 billion parameters and has demonstrated superior performance compared to models like GPT-4o. This is what makes the timing so right; AI is finally capable of disrupting search in a meaningful way. By integrating Llama into its search capabilities, Meta aims to reduce dependence on external search engines such as Google and Bing, thereby enhancing the functionality of its AI chatbot, Meta AI.

Regulatory?

The Open Source Initiative's new definition mandates that open-source AI projects reveal training data, code, and parameters, challenging tech companies like Meta to increase transparency. Meanwhile, the U.S. has finalized investment restrictions on China's AI, semiconductor, and quantum tech sectors to safeguard national security, targeting sensitive areas like advanced AI and quantum code-breaking. In the past year, the Biden administration has implemented over 100 AI-related actions, emphasizing AI safety and transparency, from workplace standards to climate research. The Department of Homeland Security has completed the first phase of AI pilots in USCIS, HSI, and FEMA and hired 31 AI experts to enhance mission-critical applications. Nvidia’s planned acquisition of AI startup Run is now under EU antitrust review, reflecting competition concerns in AI infrastructure and potentially necessitating concessions for approval.

  • Open-source AI must reveal its training data, per new OSI definition - The Verge The Open Source Initiative (OSI) has introduced a new standard for open-source AI, which mandates transparency around model training data, code, and training parameters to qualify as truly “open.” This definition challenges tech companies like Meta, whose Llama model, though available for download, restricts commercial use and does not disclose its training data. Meta argues that defining open-source AI is complex due to evolving technology, while OSI maintains that transparency is essential to prevent “open washing,” where companies label proprietary models as open-source. This debate reflects broader concerns over access, liability, and intellectual property in AI development, with companies like Hugging Face supporting OSI’s emphasis on training data openness.?

The OSI’s new definition is likely to prompt open-source AI developers to rethink how they structure and release their models. By setting transparency as a requirement—especially around training data, code, and settings—OSI aims to encourage developers to disclose more of their processes. For models that don’t meet these standards, OSI’s influence could lead to public scrutiny, with the community potentially questioning the openness of certain models.

Ultimately, the OSI’s approach isn’t regulatory in the sense of it, but rather aimed at creating industry-wide norms that favor genuinely open practices. Over time, this could drive developers and companies to release models that fully align with OSI’s standards to gain community trust and support.

  • ?US finalizes rules to curb AI investments in China, impose other restrictions | Reuters The U.S. has finalized rules to limit American investments in China’s AI, semiconductor, and quantum technology sectors, aiming to prevent potential threats to national security. Effective January 2, these restrictions will be overseen by the U.S. Treasury’s Office of Global Transactions and follow an August 2023 executive order by President Biden. The rules target technologies deemed essential to military and intelligence applications, such as advanced code-breaking systems and next-generation fighter jet components. They also restrict the “intangible benefits” that often accompany investments, like managerial assistance and access to U.S. talent networks, to prevent these resources from aiding China's military capabilities. A carve-out allows investment in publicly traded securities, though previous executive orders already limit transactions with certain designated Chinese companies. Treasury officials stress that these measures are designed to protect U.S. know-how from supporting China's military advancements, aligning with Commerce Secretary Gina Raimondo’s earlier emphasis on restricting technology transfers with defense implications.?

  • Fact Sheet: Key AI Accomplishments in the Year Since the Biden-Harris Administration’s Landmark Executive Order | The White House Following the Executive Order, federal agencies have implemented over 100 actions to protect Americans' safety, security, and civil rights. Key achievements include pre-testing advanced AI models with the U.S. AI Safety Institute (US AISI) and setting frameworks for AI risk management, particularly around dual-use AI models. Transparency measures were strengthened, with agencies working to ensure Americans are aware of AI-generated content. In the workplace, AI principles from the Department of Labor aim to protect workers, while HHS established safeguards for healthcare AI, promoting transparency and equity. DOE and NSF have advanced AI research and climate goals through grants and initiatives like the National AI Research Resource, supporting hundreds of projects nationwide. OMB has issued the first government-wide AI policy, ensuring safe, accountable use within federal agencies. On the global stage, the U.S. led efforts in responsible AI governance, including a UN resolution on safe AI use and a Council of Europe treaty supporting ethical AI practices. These accomplishments reflect a holistic approach to secure AI leadership, benefiting society while protecting public safety and rights.?

  • FACT SHEET: DHS Completes First Phase of AI Technology Pilots, Hires New AI Corps Members, Furthers Efforts for Safe and Secure AI Use and Development The Department of Homeland Security (DHS) has completed the first phase of its AI technology pilots, aligned with President Biden’s Executive Order 14110 on safe and secure AI. DHS successfully tested three AI pilots: USCIS used AI for training immigration officers; HSI employed large language models for investigative summaries; and FEMA assisted communities in drafting hazard mitigation plans. Additionally, DHS hired 31 new AI Corps experts to support AI integration across its missions, collaborating with the DHS Supply Chain Resilience Center and advancing AI applications in critical infrastructure. DHS also established the AI Safety and Security Board, which includes representatives from various sectors, to guide safe AI deployment in critical infrastructure and created guidelines to address AI-enabled cyber threats. DHS is working with the Countering Weapons of Mass Destruction Office on strategies to mitigate risks from AI in the development of biological and chemical materials, reinforcing its commitment to responsible AI use for national security.

Given the successful testing phase, DHS may scale these tools in USCIS, HSI, and FEMA to improve training, investigative support, and community resilience more comprehensively. They’ll also probably continue building the AI Corps to strengthen their technical capabilities and ensure that AI tools meet evolving safety and security standards.

On a regulatory front, DHS will likely collaborate more intensively with the AI Safety and Security Board to set industry-wide AI safety protocols, especially as new threats and challenges emerge. With AI-enabled cyber risks and adversarial AI applications being significant concerns, DHS will focus on developing actionable defenses and collaborating with other agencies like CISA to secure critical infrastructure. Additionally, they may continue to work on AI safeguards for areas like biological and chemical threats, supporting a larger framework for responsible AI governance in high-stakes fields.

  • Nvidia needs EU approval to buy AI startup Run:ai, regulators say | Reuters ? Nvidia’s acquisition of Israeli AI startup Run is under scrutiny from the European Union, which has requested antitrust clearance for the $700 million deal, potentially requiring Nvidia to offer concessions to gain approval. The European Commission’s concern centers on the potential impact on competition in AI infrastructure management and optimization—a field in which both Nvidia and Run are active. While the acquisition did not meet the typical revenue threshold for mandatory EU approval, Italy's competition authority referred the case to the EU, which agreed to investigate, highlighting risks to competition across the European Economic Area. Nvidia, in response, expressed willingness to address any regulatory inquiries about the acquisition.

Regional Updates

Europe is making significant strides in AI innovation, with companies like DeepMind, BenevolentAI, and Darktrace leading advancements in fields from healthcare to cybersecurity. Germany’s Aleph Alpha excels in NLP for sectors like finance, while France's Scaleway and Shift Technology specialize in GDPR-compliant AI cloud services and fraud detection, respectively. Autonomous driving company Wayve and IT consultancy Sopra Steria further showcase Europe's role in AI, advancing areas such as navigation and process optimization. Meanwhile, Chinese researchers have developed ChatBIT, a military-focused AI model based on Meta's open-source Llama, raising international concerns about the enforcement of non-military licensing in open-source AI. China is also closing the AI gap with the U.S., fueled by state-directed funding and foreign investments, particularly in generative AI, though it still lags in private-sector involvement and research impact.

  • Top 10 European AI Companies Shaping Future of Innovation Europe has solidified its position as a hub of AI innovation, with companies across the continent advancing technology in diverse fields like healthcare, cybersecurity, and autonomous driving. Leading this transformation is the UK’s DeepMind, renowned for groundbreaking achievements like AlphaGo and AlphaFold, and BenevolentAI, which accelerates drug discovery with AI-driven solutions. In cybersecurity, Darktrace’s self-learning AI technology provides real-time threat detection, while Graphcore’s Intelligence Processing Unit (IPU) sets new standards in AI hardware for efficient model training. Germany’s Aleph Alpha brings high-performance natural language processing (NLP) to industries such as finance and law, and France’s Scaleway provides secure, GDPR-compliant cloud services tailored for AI. Shift Technology, also based in France, specializes in AI-driven fraud detection and claims automation, boosting accuracy for insurers worldwide. Re, another UK firm, leverages natural language understanding (NLU) to automate customer communications, and Wayve advances autonomous driving with adaptable, AI-first navigation models. Finally, Sopra Steria, a French IT consultancy, applies AI to optimize business processes in risk management, CRM, and supply chain management. Together, these companies exemplify Europe’s leadership in AI, tackling complex industry challenges and driving global innovation.?

The adaptation of Meta's open-source Llama model by Chinese researchers for military applications, resulting in the development of ChatBIT, raises significant concerns. Despite Meta's licensing terms prohibiting military use, the open-source nature of Llama complicates enforcement. While Meta has expressed concern over the unauthorized military use of its model, the company is unlikely to face legal repercussions due to the open-source distribution of Llama. However, this situation may prompt Meta and other tech firms to reassess their open-source strategies and consider implementing more robust controls to prevent misuse.

  • ?How Innovative Is China in AI? | ITIF China is narrowing the AI innovation gap with the U.S., excelling in AI research publications and generative AI development, although its research impact remains lower due to fewer citations and less private-sector involvement. Key AI companies, many from Tsinghua University, are advancing competitive large language models, which perform well on bilingual benchmarks. Despite having fewer private AI investments, China’s state-directed funds are effectively supporting high-potential firms in underserved areas. With foreign investment, such as from Saudi Arabia’s Aramco, China’s generative AI ecosystem is strengthening. While the U.S. outperforms in translating research into commercial applications, China’s open-source and state-backed AI ecosystems are gaining traction. To maintain its AI leadership, the U.S. should focus on a comprehensive AI strategy, supporting AI R&D, federal funding agility, and data strategy improvements while prioritizing AI adoption in government and workforce training.

?

Partnerships

Coveo’s partnership with Shopify aims to enhance enterprise-level AI-powered search and personalization, allowing Shopify merchants to deliver targeted, high-engagement shopping experiences using Coveo’s advanced models. This integration focuses on personalized search and product recommendations, echoing trends like Amazon’s AI advancements, including tools like the shopping assistant Rufus. Meanwhile, Box’s expanded partnership with AWS enables its clients to access foundational models like Claude and Titan through Amazon Bedrock, streamlining generative AI app development without complex infrastructure. This strategic move strengthens Box’s enterprise content management offerings while deepening AWS’s generative AI reach by enabling practical applications.

  • Coveo Partners with Shopify to Bring Scalable AI Search and Generative Commerce Experiences to Enterprise Customers Coveo has partnered with Shopify to integrate its scalable AI search and generative commerce capabilities into Shopify's enterprise platform. This collaboration aims to enhance product discovery, personalization, and operational efficiency for Shopify’s B2B and B2C clients by using Coveo’s advanced AI models. Through this partnership, Shopify merchants can leverage Coveo’s tools for personalized search, dynamic product recommendations, real-time indexing, and generative shopping experiences, all designed to optimize shopper engagement and boost revenue. With a focus on relevance at scale, Coveo empowers Shopify merchants to deliver targeted, high-value experiences in complex digital commerce landscapes.

The partnership between Coveo and Shopify is significant, as it integrates advanced AI-driven search and personalization into Shopify's enterprise platform, enhancing product discovery and customer engagement. This move aligns with broader industry trends, notably Amazon's recent initiatives to incorporate AI into its shopping experience. Amazon has introduced AI-powered tools like the shopping assistant 'Rufus' and AI-generated product listings to improve user experience and streamline seller processes

?

  • AWS Partnership Helps Box Simplify Generative AI App Creation Box has expanded its partnership with Amazon Web Services (AWS) to simplify the creation of generative AI applications for enterprise clients. Through this collaboration, Box customers can now access foundation models via Amazon Bedrock, including Anthropic's Claude and Amazon's Titan. This integration allows companies to efficiently develop AI-driven applications by combining these models with Box's Intelligent Content Management platform, enhancing their ability to manage and utilize content through advanced AI capabilities.

My take: by integrating foundational models like Anthropic’s Claude and Amazon’s Titan via Amazon Bedrock, Box effectively makes it easier for companies to build AI-driven applications without needing to invest in complex AI infrastructure or development resources.

For Box, this move strengthens its value proposition in the enterprise market, where companies increasingly demand intelligent content solutions. It also allows AWS to deepen its reach in generative AI services by supporting real-world applications through platforms like Box. While not as groundbreaking as introducing a new technology, this partnership is strategically important as it simplifies and accelerates enterprise adoption of generative AI, marking it above a vanity move but below a game-changer.?

Cost?

Intel’s Gaudi AI chips have struggled to compete with Nvidia and AMD, missing Intel's $500 million target due to low adoption rates and a difficult transition from Gaudi 2 to Gaudi 3. The company faces a $16.6 billion loss, with CEO Pat Gelsinger aiming to refocus Intel's strategy toward x86 chips and integration of AI into broader applications. Despite CHIPS Act funding to boost semiconductor tech in the U.S., Intel’s progress lags behind market leaders, highlighting the challenge of turning government support into competitive products quickly. Meanwhile, OpenAI’s CFO Sarah Friar noted that 75% of its revenue comes from individual consumer subscriptions to ChatGPT, bolstered by a strong conversion rate from free to paid tiers and significant funding to scale infrastructure for future AGI goals.

  • Intel’s Gaudi AI chips are far behind Nvidia and AMD, won’t even hit $500M goal - The Verge Intel’s Gaudi AI chips are significantly lagging behind competitors Nvidia and AMD, missing the company’s $500 million revenue target for 2024. While Nvidia has achieved substantial profits from the AI boom, and AMD’s AI chips have brought in over $1 billion per quarter, Intel's Gaudi 3 accelerator has struggled due to slower adoption rates and transition issues from Gaudi 2. CEO Pat Gelsinger cited that demand for AI chips currently favors training AI models in the cloud, but he believes future AI demand will focus more on integrating AI into broader chip applications. Intel’s recent financial report showed a $16.6 billion loss, primarily from restructuring costs, as the company executes a turnaround strategy involving cost cuts, layoffs, and reorganization of business units to prioritize x86 chips across various markets. Despite the setback, Intel remains optimistic about the long-term potential of its Gaudi chips, aiming to provide cost-effective, open-standard AI solutions.?

Intel has indeed received substantial funding from the CHIPS Act, which was designed to boost U.S. semiconductor manufacturing and technology. However, the performance of their Gaudi AI chips has not met expectations, especially in a market where Nvidia and AMD are surging ahead with AI solutions. Intel’s struggles seem tied to both the delayed transition from Gaudi 2 to Gaudi 3 and their slower uptake among cloud providers where AI model training largely occurs.

Although Intel’s broader turnaround strategy, including restructuring and cost cuts, aims to stabilize its position, the lagging performance in AI is particularly noticeable given the resources they’ve received to support innovation. It underscores a challenge Intel faces: converting CHIPS Act support into competitive, market-ready products in an industry where speed and innovation are crucial. Intel’s hope is that Gaudi chips will eventually fill a niche for cost-effective, open-standard AI, but in the short term, the gap remains stark.

The challenges Intel faces certainly raise questions about the effectiveness of substantial investments like those from the CHIPS Act. Intel’s slower progress in AI chip technology compared to Nvidia and AMD might make it appear that funds haven't yielded immediate competitive results. However, it’s also true that turning large-scale funding into breakthrough technologies can be a complex, long-term process—especially in sectors like semiconductors that involve high research costs, development time, and rapid technological shifts.

That said, it’s fair for stakeholders to scrutinize the outcomes and demand accountability to ensure that such significant public investment effectively contributes to U.S. technological competitiveness. Whether Intel can catch up in the AI race with future iterations of its chips will likely determine broader views on the success of these investments.

  • OpenAI CFO Says 75% of Its Revenue Comes From Paying Consumers OpenAI’s revenue model is heavily consumer-driven, with 75% of its income stemming from individual subscriptions, CFO Sarah Friar revealed in an interview at Money20/20. Despite growth efforts in enterprise products, consumer subscriptions—starting at $20 per month—remain the primary revenue source, driven by ChatGPT’s 250 million weekly users and a conversion rate of 5-6% from free to paid. OpenAI recently secured $6.6 billion in funding and $4 billion in credit to support advanced AI development, including infrastructure expansion in the US aimed at supporting AGI capabilities and a massive 5-gigawatt data center initiative.?

Investments

Perplexity AI seeks a $9 billion valuation in its latest funding round, aiming to challenge Google’s search dominance despite plagiarism allegations. OpenAI is collaborating with Broadcom and TSMC to build its first in-house AI chip, abandoning costly foundry plans and diversifying its chip supply, boosting stock for Broadcom and AMD. AWS’s GenAI segment, now a multibillion-dollar business, has fueled revenue growth with triple-digit gains, supporting enterprises in cloud-based AI innovation. AI investment reached new heights in Q3 2024, marked by a surge in productivity-focused startups and unicorns, though fewer billion-dollar deals led to a decline in total funding. Foundation Capital envisions a $4.6 trillion market with the shift from Software-as-a-Service to Service-as-Software, as autonomous AI agents bring a new model that enhances human work through real-time learning and adaptation, promising transformative impacts across industries.

  • Perplexity AI seeks valuation of about $9 billion in new funding round Perplexity AI, an AI search engine startup, is in talks to raise $500 million in its latest funding round, aiming to more than double its valuation from $3 billion in June to about $9 billion. This marks the company’s fourth funding round this year, fueled by the surge in generative AI interest. Perplexity, which seeks to challenge Google’s dominance in search, has faced plagiarism allegations from media outlets like the New York Times, though the company has denied the claims.

  • Exclusive: OpenAI builds first chip with Broadcom and TSMC, scales back foundry ambition | Reuters ? OpenAI’s decision to avoid the high expenses of building foundries shows a focus on efficient spending. OpenAI is developing its first in-house AI inference chip in collaboration with Broadcom and TSMC, while also expanding its chip supply with AMD to meet rising infrastructure demands. Initially considering building its own chip manufacturing network, OpenAI abandoned these costly "foundry" plans, opting instead to design in-house chips with industry support. This shift highlights OpenAI's strategy to diversify chip sourcing, manage costs, and secure supply, aligning with approaches used by tech giants like Amazon, Meta, Google, and Microsoft. Broadcom's involvement boosted its stock by over 4.5%, while AMD shares saw a 3.7% increase following the news. OpenAI remains one of the largest buyers of Nvidia GPUs, critical for training and inference tasks in its AI systems. By diversifying its supply chain and pursuing customized chip design, OpenAI’s approach could influence broader technology sector dynamics. What do you think?

  • Enterprise demand for GenAI fuels profit and revenue growth at AWS | Computer Weekly ? AWS's GenAI offerings have become a major growth driver, significantly boosting the cloud provider's revenue and profits. In Q3 2024, AWS reported a 19.1% year-over-year revenue increase, driven by enterprise interest in generative AI to modernize infrastructure and leverage cloud data for large-scale AI applications. CEO Andy Jassy emphasized that AWS’s GenAI stack caters to varying levels of user involvement: from companies building their own large language models (LLMs) to those using existing models through Amazon Bedrock or the more hands-off Amazon Q development tool. Jassy noted that AWS’s GenAI segment is now a multibillion-dollar business, growing at triple-digit rates, and distinguishing AWS from other providers by releasing twice as many GenAI and machine learning features as its competitors over the past 18 months. The shift to cloud for GenAI reflects enterprises' desire to innovate quickly, save costs, and harness the full potential of AI with efficient cloud-based data architecture.

  • The show’s not over: 2024 sees big boost to AI investment ? In Q3 2024, global AI deal volumes reached a two-year high, with 1,245 deals and a 24% year-over-year increase, led by applications promising productivity gains, cost savings, and operational efficiencies. Noteworthy deals included Safe Superintelligence’s $1B Series A and Anduril’s $1.5B Series F round in defense AI. While the average deal size increased, overall funding declined by 29% due to fewer billion-dollar deals. Gen AI remains central to the surge in AI unicorns, with 13 new AI-focused unicorns accounting for over half of all new billion-dollar startups. The challenge for these companies lies in achieving scalable adoption within enterprise environments where integration and data management complexities are high. Those who succeed in embedding gen AI seamlessly into existing workflows will likely meet investor expectations for sustainable growth.

My take: Despite some signs of a funding slowdown, AI investment is evolving into what could be the next generation of Big Tech. Remember that? This surge resembles early trends that created today’s Big Tech leaders, with AI companies focusing on productivity, cost reduction, and sector-specific solutions. As they gain traction, some of these AI startups could transform into industry giants, particularly as they tackle integration challenges to bring their tools to scale in established enterprises. In this sense, while funding patterns may fluctuate, the foundations of a new era in tech, powered by AI, appear to be forming.

  • A System of Agents brings Service-as-Software to life - Foundation Capital The transformation from Software-as-a-Service (SaaS) to “Service-as-Software,” led by autonomous AI agents, is reshaping industries with the potential to unlock a $4.6 trillion market in the coming years. Unlike conventional software, which assists in organizing tasks, this new generation of software performs and enhances human work autonomously, learning and adapting in real time. Foundation Capital explains this evolution through three stages: early workflow-based software, which relied on structured data entry; today’s AI-powered agents, which manage both structured and unstructured data to provide real-time, contextually aware responses; and a future “System of Agents” where collaborative AI agents mimic human teams, continuously learning and sharing insights to optimize workflows across sales, healthcare, cybersecurity, and more. This shift also brings a change in business models, moving from per-seat pricing to outcome-based models, allowing AI services to tap into workforce budgets rather than just software spending. As these systems address labor shortages, enhance 24/7 operations, and improve efficiency, the potential for AI-driven transformation across sectors becomes immense.?

Research?

A Wharton study shows rapid generative AI adoption among U.S. enterprises, with 72% using it weekly, spending up 130%, and Microsoft and Google projected to lead market share by 2025. Globally, AI remains the top tech priority for 2025, per IEEE, followed by cloud computing and robotics. In finance, economists leverage AI and alternative data to enhance forecasting, particularly for real-time insights on events like elections. Asian researchers warn of ChatGPT’s language bias risks in healthcare, highlighting misinformation dangers for low-resource languages, though the model aids in rapid disease modeling. Generative AI's ROI is promising, with 97% of early adopters seeing benefits; financial services lead in productivity gains and companies are refining use cases with significant investments, some exceeding $5 million.

  • Wharton Business School study claims enterprises buying into Gen AI – Blocks and Files Conducted with GBK Collective, the study surveyed over 800 U.S. enterprise leaders, revealing that 72% now use Gen AI weekly, a jump from 37% in 2023. AI adoption in Marketing and Sales soared from 20% to 62%, while Operations, HR, and Procurement also saw doubled engagement. Spending has surged by 130%, with 72% of companies planning additional investments in 2025. 90% of leaders view Gen AI as a tool to augment employee skills, a rise from 80% in 2023, while job replacement concerns have dropped slightly, from 75% to 72%. 46% of companies now have CAIOs to guide AI-driven initiatives, indicating the strategic importance of Gen AI in long-term planning. They also project that in three to five years, Microsoft/Azure and Google GCP are forecasted to lead the Gen AI market at 47% market share each, with AWS third at 33%, followed by OpenAI, Apple, IBM, and Meta.


Source: Wharton School of Business

  • ?https://aibusiness.com/generative-ai/ai-tops-list-of-most-important-technologies-of-2025 ? A recent IEEE study underscores the prominence of AI, including predictive and generative AI, machine learning, and natural language processing, as the leading technology expected to shape 2025. In a global survey of 355 technology leaders from Brazil, China, India, the U.K., and the U.S., 58% ranked AI as the most important technology—a position AI has held for two years. Cloud computing and robotics followed in importance at 26% and 24%, respectively.

  • Economists Embrace Gen AI Applications for Forecasting Models - Traders Magazine Economists are increasingly embracing generative AI to enhance forecasting models, driven by the need to interpret fast-evolving market events. A Bloomberg-Coalition Greenwich study found that economists now rely on tools like social media sentiment analysis, real-time consumer data, and AI-enhanced analytics to stay ahead. Chief Economist Michael McDonough highlighted that traditional data lags behind, whereas alternative data and AI provide immediate insights, especially during events like the U.S. Presidential Election. The study shows 53% of economists with over 15 years of experience use alternative data daily, including transaction and web traffic data, to refine risk assessments and forecasting accuracy. Kevin McPartland of Coalition Greenwich noted that AI’s role extends beyond prediction, aiding economists in summarizing vast data sets and streamlining complex analyses, essential as data volume grows. The findings underscore AI's transformative role in economic forecasting and data synthesis, providing economists with crucial tools to manage and extract valuable insights from an overwhelming data landscape.??
  • ChatGPT's language bias poses risk in public health response: study | MobiHealthNews Research from Asia has highlighted concerns about language bias in large language models (LLMs), particularly in public health contexts. A study involving researchers from the Chinese University of Hong Kong, RMIT University in Vietnam, and the National University of Singapore investigated how well LLMs like ChatGPT serve non-English speakers, using Vietnamese as a test language. The findings, published in BMJ, demonstrated instances of incorrect responses; for example, when asked about symptoms of atrial fibrillation in Vietnamese, ChatGPT responded with information about Parkinson's disease. This misinterpretation was attributed to the language bias in LLMs, which are often trained primarily on high-resource languages like English. The study emphasizes the risk of misinformation in healthcare, particularly in low-resource language regions where accurate and culturally relevant health information is critical. Researchers suggest that improving translation capabilities for diverse languages and sharing open-source linguistic data can help mitigate these biases. They also stress the importance of accurate responses to prevent misinformation, especially in regions prone to infectious disease outbreaks. Further illustrating the potential of LLMs, the same research team demonstrated ChatGPT's utility in developing disease transmission models. The AI significantly sped up initial analyses, supporting rapid public health responses during outbreaks. Despite promising applications, the study calls for vigilant monitoring of LLMs in healthcare to ensure equitable and reliable access to information.

  • Generative AI adoption sets the table for AI ROI - SiliconANGLE ? Generative AI (GenAI) adoption is on the rise, with 97% of early adopters reporting tangible benefits, according to new survey data from Enterprise Technology Research. The survey, involving nearly 1,800 IT decision-makers, shows 84% of respondents are now integrating at least one GenAI use case, primarily for text summarization (31%), collaboration (28%), and marketing content (27%). Financial services leads in adoption, with 84% seeing productivity improvements and 50% noting enhanced customer support. Despite some caution on ROI timelines—56% expect returns within a year—GenAI is yielding notable gains, including efficiency improvements (77%) and cost savings in staffing (33%). Spending trends reveal that 30% of organizations allocate over $500,000 annually to GenAI, while larger enterprises spend upwards of $5 million. As organizations fine-tune GenAI applications, they’re focusing on areas like private data use, retrieval-augmented generation, and specialized agents to boost productivity and streamline operations.

Concerns

Salesforce CEO Marc Benioff introduced Agentforce, an AI agent system that can autonomously perform tasks for businesses, citing benefits seen at companies like Saks Fifth Avenue; however, cybersecurity experts warn of risks with autonomous AI agents, as demonstrated by the vulnerability of Anthropic's Claude to prompt injection attacks that turn AI into "ZombAIs" capable of executing harmful commands. Despite AI’s growing presence, most U.S. organizations lack generative AI policies, with only 44% having implemented guidelines, largely due to differing departmental views on AI risks and benefits. Research highlights the environmental impact of large language models (LLMs), showing that while LLMs are energy-intensive, they have a lower environmental footprint per task than human labor, though future growth may increase their impact. Another study warns that generative AI data centers could generate millions of tonnes of e-waste by 2030, with sustainability efforts potentially mitigating up to 86% of this waste. In the UK, the government’s plan to allow AI companies to scrape content without permission is facing backlash from publishers and creators who argue for an opt-in system to protect intellectual property rights.

  • Is the world ready for autonomous AI? Salesforce CEO Marc Benioff makes the case for agents – GeekWire ? Salesforce CEO Marc Benioff, speaking at Dreamforce 2024, emphasized that AI agents are poised to transform business operations. While acknowledging concerns about the rapid shift from AI as a work companion to fully autonomous agents, Benioff highlighted Salesforce’s decade-long AI journey through its Einstein platform. The introduction of Agentforce exemplifies this shift, offering AI agents that can reason, plan, and act on behalf of businesses. Benioff cited real-world examples like Wiley, which scaled operations during peak seasons with Agentforce, and Saks Fifth Avenue, which implemented customer service systems in under an hour. In healthcare, AI agents can assist patients by managing follow-up tasks. Unlike traditional chatbots, Agentforce leverages generative AI for more integrated and adaptive operations, supported by customer data and safety measures. Benioff’s optimism aligns with Microsoft CEO Satya Nadella, who envisions AI agents enhancing productivity, competitiveness, and public sector efficiency. However, Benioff has criticized Microsoft’s Copilot platform, citing user dissatisfaction and data risks. Despite these concerns, both leaders agree that AI agents represent a pivotal moment for the software industry, with Benioff noting the potential to improve business performance by increasing revenues, optimizing KPIs, and driving growth.

  • And now, the case for the dangers of agents. ZombAIs: From Prompt Injection to C2 with Claude Computer Use · Embrace The Red What side are you on? In a recent blog post, Embrace The Red, cybersecurity expert wunderwuzzi explores the risks associated with Anthropic's Claude Computer Use feature, which enables the Claude AI model to autonomously interact with a computer, including running commands and navigating the web. Although designed for legitimate applications, the feature can be exploited through prompt injection attacks to download and execute malware, turning the host into a "ZombAI." Using a command and control (C2) server setup, wunderwuzzi demonstrates how an attacker could deploy malware on a target machine by prompting Claude to download and execute a file. Initial attempts using bash commands were blocked, but a simpler approach—prompting Claude to use Firefox to access a malicious "support tool"—succeeded. Once downloaded, Claude independently located the file, adjusted permissions, and executed it, connecting the infected device to the C2 server. The demonstration underscores the potential dangers of autonomous AI systems when processing untrusted data and highlights the ease with which prompt injection could compromise such systems. This vulnerability, especially within advanced models capable of executing commands, emphasizes the importance of strict security measures and caution when integrating autonomous AI agents. The blog ultimately cautions Trust No AI.

My take: the vulnerabilities demonstrated with Claude’s Computer Use feature pose serious risks for autonomous AI agents capable of executing commands and interacting independently with a computer. Prompt injection attacks are a primary concern, where agents can be tricked into executing harmful commands embedded in seemingly innocuous content they encounter online. For instance, if an AI agent browses websites autonomously, a malicious prompt could lead it to download malware or execute unauthorized actions, compromising system security. This risk is heightened by the agent’s ability to operate autonomously, as it may carry out harmful actions without oversight, transforming it into a “zombie” or ZombAI under an attacker’s control. Such agents could then be manipulated to establish command and control (C2) connections, enabling remote access and allowing attackers to monitor systems, exfiltrate data, or even launch additional attacks from within a compromised network. These risks also raise significant concerns about privacy and data security, as autonomous agents often have access to sensitive information and systems. If compromised, they could expose confidential data, posing privacy risks to individuals and organizations alike. Ultimately, these vulnerabilities affect trust in autonomous agents, especially in secure or mission-critical environments, as organizations may hesitate to deploy agents with extensive permissions if they can be easily exploited. Ensuring robust safeguards against prompt injection and unauthorized actions is crucial for making autonomous agents safe, secure, and beneficial in real-world applications.

  • Most orgs still don't have generative AI policies. Why? A recent Littler survey reveals that less than half (44%) of U.S. organizations have policies for employee use of generative AI, despite growing adoption. This marks a rise from just 10% in 2023. Developing policies is challenging due to differing views on AI’s risks and benefits across departments. Among those with policies, most enforce specific tool and task limitations, while only 3% fully restrict AI use. In HR, AI is used by 66% of companies, primarily for recruiting and candidate sourcing, though regulatory concerns are prompting caution. Leaders are evaluating AI's role as they anticipate more regulations and potential litigation.

My take: to develop an effective AI policy, start by aligning leadership on the organization's AI objectives and risk tolerance. Form a cross-functional team of knowledgeable stakeholders, including legal, HR, IT, and relevant department leads, to define policy guidelines. Focus on specifying approved AI tools and setting clear use cases based on task relevance. Ensure access controls are in place to restrict AI usage to designated roles. Implement a training program so employees understand policy expectations and safe AI practices. Regularly review the policy to adapt to evolving regulatory requirements, and engage employees in reporting compliance issues to maintain transparency and accountability.

  • Reconciling the contrasting narratives on the environmental impact of large language models | Scientific Reports A recent study by researchers at UC Riverside and partners provides a nuanced view on the environmental impacts of large language models (LLMs) by comparing them to human labor for content creation. The study finds that LLMs, while energy-intensive, have significantly lower environmental impacts per task than human labor in the U.S. For instance, the energy, water, and carbon outputs of Meta’s Llama-3-70B are substantially less than what a human would require for the same output, with human-to-LLM energy ratios reaching up to 150 for the U.S. and up to 16 in India. However, the authors caution that growing model sizes may escalate LLMs’ environmental costs, underscoring the need for sustainable AI development. The study also highlights that economic factors will likely lead to a blend of human and AI-driven work, rather than simple substitution. As industries incorporate LLMs, continuous research is essential to manage their environmental footprint and societal impact effectively.

  • Generative AI will soon generate millions of tonnes of electronic waste, study suggests - ABC News A new study published in Nature Computational Science estimates that generative AI data centers could produce up to 2.3 million tonnes of electronic waste (e-waste) per year by 2030, equivalent to discarding 13.3 billion iPhone 15 Pros. This surge in e-waste arises from tech companies rapidly upgrading data centers to support AI models. However, extending hardware lifespans, reusing components, and recycling materials could reduce e-waste by up to 86%. The study highlights that as AI infrastructure grows, e-waste will expand, primarily in regions with major data centers like North America, Europe, and East Asia, posing a pressing environmental challenge.?

  • ‘An existential threat’: anger over UK government plans to allow AI firms to scrape content | Artificial intelligence (AI) | The Guardian he UK government faces backlash over plans to let AI companies scrape content from publishers and artists by default, unless they explicitly opt out. This proposal has drawn opposition from organizations like the BBC, which insists on retaining control over its content, arguing that AI developers should seek permission to use copyrighted work. Many publishers and creatives express concerns, likening an opt-out system to leaving their work vulnerable to exploitation by tech firms without fair compensation. Critics argue this approach favors big tech over the creative sector, which has been essential to the UK economy. Although the government hopes to attract AI investments, stakeholders urge an opt-in system to protect intellectual property rights, drawing attention to how this debate highlights fundamental shifts in content access with the rise of AI chatbots.?

Case Studies?

Visa now uses over 500 generative AI applications for fraud prevention and efficiency, with a $3.3 billion investment in AI infrastructure over the last decade. Generative AI in cybersecurity poses both risks and benefits, with threats like voice and video impersonation countered by proactive AI-driven fraud detection, as shown in Mastercard's use cases. Prudential is implementing Google’s MedLM for faster, more accurate claims verification in Singapore and Malaysia. Microsoft has launched specialized healthcare AI models on Azure, helping institutions like Mass General streamline diagnostics and patient care. Oracle's new AI-powered EHR system enhances healthcare efficiency, competing in the Epic-dominated EHR market with features like voice-commanded data retrieval and clinical note generation. Generative AI boosts marketing productivity through streamlined data use, personalized content, and transparency efforts, though AI trust remains a concern. Google's AI models now generate over 25% of its new code, reflecting AI's deep integration into company operations. LinkedIn’s AI Hiring Assistant automates 80% of recruitment tasks, saving companies substantial time and boosting candidate engagement. Louisiana schools use Amira, an AI tutor, to aid students in reading, addressing gaps due to tutor shortages with promising early results. SK Telecom and Samsung’s partnership uses AI to optimize 5G connectivity in high-traffic areas, aiming for an AI-native network with enhanced real-time adjustments.

Finance

  • Visa Uses More Than 500 Generative AI Applications | PYMNTS.com Visa now utilizes over 500 generative AI applications, focusing on fraud prevention and operational efficiency, according to The Wall Street Journal. Tools include AI-driven security checks, billing cycle assistance, and specialized chatbots. Over the past 10 years, Visa has invested $3.3 billion in AI infrastructure, though ROI measurement, especially for productivity, remains complex. Visa’s President of Technology, Rajat Taneja, envisions AI-managed teams overseen by humans to maximize benefits. Mastercard is similarly deploying generative AI to streamline customer onboarding.?

  • Why gen AI is a double-edged sword in cybersecurity Generative AI presents both challenges and opportunities in cybersecurity. On one side, it enables complex scams like digital voice and video clones, allowing fraudsters to impersonate people convincingly and execute sophisticated schemes at minimal cost. Instances like a finance worker in Hong Kong being deceived into transferring $25 million after interacting with AI-generated digital twins of colleagues highlight the risks. To combat such threats, low-tech strategies, such as shared family passwords or personal questions, can reveal imposters, leveraging human knowledge that AI clones can't replicate. On the proactive side, companies like Mastercard are using generative AI to enhance fraud detection. By analyzing broader patterns beyond individual behavior, AI can reduce false alerts, improving the user experience. For example, if someone incurs a gambling expense while staying at a casino resort, AI models using contextual data might deem it legitimate, avoiding unnecessary alerts

Healthcare

  • Prudential will tap Google's MedLM gen AI models to verify medical claims | ZDNET Prudential is deploying Google’s MedLM models to verify medical claims, aiming to speed up processing and reduce manual errors. The rollout, starting in Singapore and Malaysia, will run for 3-4 months, focusing on analyzing documents like diagnostic reports and prescriptions. Early tests showed MedLM doubled automation rates and improved claim accuracy. Prudential will compare the AI’s performance with current processes, maintaining human oversight at key points. CEO Arjan Toor highlighted the initiative as a step toward seamless, AI-driven healthcare experiences, enhancing efficiency and customer satisfaction.?

  • Announcing Healthcare AI Models in Azure AI Model Catalog ? Microsoft has launched advanced healthcare AI models in Azure AI Studio, developed with Microsoft Research and key healthcare partners to support specialized medical needs. These models, such as MedImageInsight for image analysis, MedImageParse for image segmentation, and CXRReportGen for chest X-ray reporting, enable healthcare providers to streamline diagnostics and improve patient care by tailoring AI to handle complex data types, including medical imaging and clinical records. Azure AI Studio provides a secure platform for healthcare professionals to fine-tune and deploy these models across cloud, on-premises, or hybrid environments while adhering to compliance standards like HIPAA. Major institutions like Mass General Brigham and the University of Wisconsin-Madison are already using these tools to reduce administrative burdens and enhance workflow efficiency.

  • Oracle announces new AI-powered electronic health record Oracle has launched a new AI-enhanced electronic health record (EHR) system, marking its most substantial update in healthcare technology since acquiring Cerner in 2022. Unlike traditional EHRs, this system utilizes cloud and AI technologies to simplify navigation and setup, eliminating menus and allowing doctors to retrieve patient information through voice commands. This design aims to reduce the time doctors spend searching records, enabling more direct patient care. Oracle’s EHR enters a highly competitive market, dominated by Epic Systems, and is built on a new foundation separate from Cerner’s existing infrastructure, meaning current Cerner users will need to migrate if they choose to adopt it. This EHR integrates the Clinical AI Agent, which generates clinical notes from recorded doctor-patient visits, streamlining documentation processes and easing doctors' administrative burdens. Oracle anticipates its EHR will address long-standing healthcare inefficiencies, offering a more integrated, responsive approach to patient care.?

My take: here's a comparison table for AI-powered EHR solutions from Oracle, Microsoft Azure, and Google Cloud:


Sourse: Eugina Jordan. Use with proper credit only

Marketing

  • A marketer’s guide to implementing generative AI | MarTech ? Gen AI accelerates marketing goals by enhancing productivity, data analysis, personalization, and content creation. However, effective adoption requires more than adding tools to the tech stack; it demands a strategic approach across workflows. Establishing a cross-functional AI council ensures responsible implementation by guiding strategy, managing risks, and aligning AI efforts with business outcomes. Clean, structured data is crucial, supported by a marketing data champion to oversee quality and ensure metadata provides context for AI tools. Organizations must evaluate whether to buy or build Gen AI solutions based on outcomes, risk tolerance, and resources. Upskilling teams remains essential for success, regardless of the approach. Customer trust is also critical, as 70% of consumers fear AI-generated content may spread misinformation. Marketers should focus on transparency, certify content authenticity, and monitor brand reputation to maintain trust and loyalty. Following these practices unlocks the full potential of Gen AI in marketing.

Software Development

  • Over 25% of Google's code is written by AI, Sundar Pichai says | Fortune ? Alphabet CEO Sundar Pichai revealed that over 25% of Google’s new code is now generated by AI, indicating a deeper integration of AI within the company’s operations. AI’s role extends beyond coding: it has also driven Q3 earnings, partly through cloud growth, which reached $11.4 billion—a 35% year-over-year increase. AI-powered tools, running on Google’s Gemini models, are credited with enhancing product adoption by 30%, attracting new enterprise clients, and expanding existing customer relationships.
  • Here is a list of best coding assistants as of September 20204: 17 Best AI-Powered Coding Assistant Tools in 2024 ?

  • https://www.law.com/americanlawyer/2024/10/28/law-firms-still-lacing-up-their-shoes-in-gen-ai-race-report-says/?slreturn=2024110292555 A new report from Law dot com Pro Fellows highlights the slow adoption of generative AI in the legal industry, warning that this hesitation could leave law firms trailing behind “sprinting” corporate clients who are rapidly embracing AI. The report, authored by Ford Motor Co. innovation lead Aaron Boersma, stresses that while AI adoption may seem daunting, it’s essential for firms to remain competitive. Boersma advocates for “responsible” AI use that enhances, rather than replaces, legal services, urging law firms to adopt AI tools to manage increasing volumes of complex work driven by AI-enabled client demands. With corporate clients expecting faster, AI-augmented legal support, firms risk client and talent retention issues if they don’t keep pace. The report underscores the necessity of AI for law firms to maintain efficiency and relevance in an AI-driven market, with examples like Ashurst’s AI partnership demonstrating potential for improved client value.?

My take:? the legal industry is experiencing a rapid adoption of artificial intelligence (AI), with significant implications for traditional billing models. Recent data indicates that AI usage among legal professionals has surged from 19% in 2023 to 79% in 2024, highlighting a swift integration of AI tools into legal practices. This accelerated adoption is prompting law firms to reassess the conventional billable hour model. AI enhances efficiency by automating tasks such as document review and legal research, potentially reducing the time required for these activities. Consequently, firms are exploring alternative billing structures, including flat fees and value-based pricing, to align with the increased productivity enabled by AI.

Recruitment

  • LinkedIn Enters AI Agent Race With LinkedIn Hiring Assistant – JOSH BERSIN LinkedIn has launched the Hiring Assistant, an AI-powered recruitment tool designed to streamline the hiring process by automating nearly 80% of pre-offer tasks. Integrating directly into LinkedIn's workflow, the Hiring Assistant leverages features like “Experiential Memory” to learn recruiters' preferences and “Project Memory” to store search-related data, improving efficiency and candidate targeting. Companies such as Siemens and Toyota Material Handling Europe report significant time savings, with searches reduced from 15 minutes to 30 seconds. Additionally, AI-assisted outreach increases acceptance rates by 44%. As job seekers also use AI to optimize their applications, LinkedIn’s Hiring Assistant helps recruiters stay competitive, freeing them up to focus on deeper candidate engagement and strategic advising.

Education

  • Louisiana schools use Artificial Intelligence to help young children learn to read : NPR In Louisiana, over 100,000 students are using an AI-powered tutor named Amira to improve reading skills. Amira functions like a real tutor, assisting students when they struggle by identifying issues and using strategies to help them read words. Johnson Elementary, with a high number of English language learners, saw success using Amira, filling gaps left by a shortage of tutors. While research on AI-based tutoring remains limited, preliminary results show positive impacts on reading scores when Amira is used regularly. The state’s two-year pilot program aims to assess Amira’s long-term benefits for reading proficiency.?

Students access Amira through school programs, as it's implemented in classrooms across Louisiana. The tool is typically set up on school-provided devices, and students interact with Amira through headsets, allowing for personalized, one-on-one assistance within a group setting. While Amira offers interactive guidance independently, teacher supervision remains essential to monitor student engagement, answer questions, and ensure they use the tool effectively. Teachers can observe students' progress, adjust learning goals, and provide support where needed, making AI a supplement to, rather than a replacement for, direct supervision.

Telecom

SK Telecom, Samsung use AI to optimize 5G base stations SK Telecom and Samsung have partnered to enhance 5G connectivity with AI-driven technology, aiming to improve network performance in high-traffic areas like subway systems. Using Samsung’s AI-RAN Parameter Recommender, SK Telecom leverages historical network data to customize base station settings, adapting the technology to varied radio environments. Initial trials show improved 5G base station performance, benefiting overall network quality. SK Telecom also plans to expand AI capabilities for real-time signal adjustments and beamforming optimizations, supporting an “AI-Native Network.” The telco recently overhauled its “A.” AI personal assistant app, integrating large language models (LLMs) for features like daily management and natural voice commands.??

My take: As of November 2024, SK Telecom, in collaboration with Deutsche Telekom and other partners, has been actively developing a telco-specific Large Language Model (LLM) tailored for the telecommunications industry. The initial version of this LLM was scheduled for release in the first quarter of 2024. SK Telecom News However, there have been no subsequent public announcements confirming its official launch.?

Women Leading in AI?

??We’re excited to present Jedidah Karanja as a Featured AI Leader?? https://www.dhirubhai.net/posts/women-and-ai_were-excited-to-present-jedidah-karanja-activity-7256673685212704768-Mh3Y?utm_source=share&utm_medium=member_desktop

The Podcast Recording of Our ? Lightning Talk ? Event is Now Live! ???

If you missed our incredible event showcasing women driving the AI space forward, or if you want to revisit the insights from our amazing panelists and moderator, now you can!

https://www.dhirubhai.net/posts/women-and-ai_aiinhealthcare-womenandai-innovation-activity-7257036015184326656-BFFa?utm_source=share&utm_medium=member_desktop

Learning Center

AWS explores implementing Ethereum smart contracts for governing LLM training data, allowing decentralized AI governance using IPFS for secure data storage. Google’s AI courses cover skills from prompting essentials to advanced tools, while LM Studio enables users to run large language models locally on personal devices. Google and Technion introduced the WACK dataset to understand hallucinations in LLMs, though it’s currently unavailable for public download. Claude.ai now offers an analysis tool for executing JavaScript-based data processing, though ChatGPT’s Advanced Data Analysis remains more robust. Apple Intelligence, a personal AI system, is available in beta on select Apple devices with privacy-focused features. eRAG, a new cost-effective method, optimizes AI-driven search engines, enhancing retrieval reliability while reducing GPU costs. Patronus AI’s API detects hallucinations in real-time, catering to small firms needing AI accuracy. Finally, GitHub Copilot expands developer choice with multiple AI models and introduces GitHub Spark, a tool simplifying app creation with natural language prompts.

Learning

  • Use a DAO to govern LLM training data, Part 2: The smart contract | AWS Database Blog – covers into the implementation of an Ethereum smart contract to govern the lifecycle of AI models, particularly focusing on the ingestion of training data. Building upon the foundation laid in Part 1, which introduced the concept of using a decentralized autonomous organization (DAO) for AI governance, this post provides a detailed walkthrough of writing and deploying the smart contract. The authors illustrate how the contract maps externally owned accounts (EOAs) to InterPlanetary File System (IPFS) CIDs, enabling secure and decentralized storage of training data. This integration of blockchain and generative AI technologies on AWS offers a novel approach to AI governance, emphasizing transparency and decentralization.?

  • How to Run AI Large Language Models (LLM) on Your Laptop LM Studio is a new software solution that enables users to run large language models (LLMs) directly on laptops or desktops, bringing advanced AI capabilities to personal devices without needing high-end hardware. This tool is designed to leverage GPU offloading, which boosts performance even on systems without dedicated graphics cards, making it possible to use LLMs effectively on a laptop with at least 16 GB of RAM. With LM Studio, users can perform tasks like text generation, document summarization, language translation, and code analysis locally, maintaining data privacy and eliminating the need for an internet connection. Installing LM Studio involves downloading the software from its website, following the setup wizard, and choosing compatible models, such as Meta’s Llama 3.2 1B. The platform also provides robust model management, allowing users to install, switch, and customize models based on specific tasks. LM Studio's offline capability and data control provide unique advantages for secure environments or remote use, as well as cost savings by avoiding subscription fees for cloud-based AI services. The software empowers users in fields ranging from writing and technical support to research and language learning, making advanced AI tools accessible on personal laptops.

  • ?Are hallucinating GenAI models careless or just plain ignorant? Google researchers found out The Wrong Answer despite having Correct Knowledge (WACK) dataset is a specialized resource developed by researchers from Google and the Technion – Israel Institute of Technology. It is designed to help distinguish between two types of hallucinations in large language models (LLMs): those arising from a lack of knowledge and those resulting from errors despite possessing the correct information. As of now, the WACK dataset is not publicly available for download. The researchers have not released it for external use, which is common for experimental datasets in early research stages. For more detailed information about the WACK dataset and its applications, you can refer to the preprint paper titled "WACK: A Dataset for Understanding Hallucinations in Large Language Models" available on arXiv. If you're interested in similar datasets or tools for analyzing hallucinations in LLMs, consider exploring resources like the GEM benchmark, which focuses on natural language generation evaluation, or the TruthfulQA dataset, designed to assess the truthfulness of language models. These resources can provide valuable insights and serve as alternatives

?

Prompting

  • Learn AI Prompting with Google Prompting Essentials Google Prompting Essentials is a free educational resource by Google designed to help users get the most out of AI by teaching effective prompt creation. Covering five straightforward steps, it aims to empower individuals to tackle complex tasks, analyze data, and summarize information efficiently with AI assistance. Whether you’re a beginner or looking to improve productivity, the resource promises valuable insights to enhance your interaction with AI tools.?

Tools and Resources

  • Introducing the analysis tool in Claude.ai \ Anthropic Anthropic has introduced a new analysis tool within Claude.ai , enabling users to write and execute JavaScript code directly in the platform. This feature allows for real-time data processing and analysis, enhancing Claude's capabilities beyond text generation. Users can upload CSV files for Claude to analyze, facilitating tasks such as data cleaning, exploration, and visualization. The tool is currently available in feature preview for all Claude.ai users.?

  • And now analysis: How Claude's new AI data analysis tool compares to ChatGPT's version (hint: it doesn't) | ZDNET David Gewirtz’s review of Claude 3.5 Sonnet’s new data analysis tool highlights its limitations in handling data-intensive tasks. While Claude’s tool is free and accessible to all users, it struggles with processing even moderately sized datasets, limiting its utility for larger analyses. Claude’s analysis runs on JavaScript, whereas ChatGPT’s Advanced Data Analysis uses Python, known for a robust ecosystem in data processing and machine learning. Tests showed Claude’s tool capped at about 2,000 lines, whereas ChatGPT Plus managed datasets of over 170,000 lines without issue. Visualization in Claude also falls short, with data labels truncated in charts. For users needing more extensive data handling capabilities, ChatGPT’s tool remains the stronger choice.

  • How to get Apple Intelligence Apple Intelligence, a personal AI system from Apple, is now in beta for iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1, enhancing communication, productivity, and self-expression while maintaining privacy. To activate, users need a supported device—iPhone 15 Pro models or newer, iPads with A17 Pro or M1, or Macs with M1 or later—along with iOS 18.1, language settings, and 4GB of storage. After joining the Apple Intelligence waitlist in Settings, activation typically occurs within hours. Currently available in English (U.S.), the feature will expand in December to include other English-speaking regions, with broader language support next year. Regional limitations apply in the EU and China. Apple Intelligence includes writing tools, photo clean-up, memory creation, notification summaries, enhanced Focus mode, and a more natural, responsive Siri, with more features planned as storage requirements increase.

  • https://github.com/meta-llama/llama-recipes/tree/main/recipes/quickstart/NotebookLlama provides a quickstart guide for using the Llama models within a Jupyter notebook environment. This repository offers code samples and setup instructions aimed at making it easier for users to implement Llama models for tasks like text generation, language understanding, and more. It’s a useful resource for developers interested in experimenting with or deploying Meta’s Llama models in different applications.

  • Team introduces a cost-effective method to redesign search engines for AI Researchers from the University of Massachusetts Amherst have introduced a groundbreaking method to make search engines more efficient and reliable for AI applications. Their new system, "eRAG" (evaluation for Retrieval-Augmented Generation), is designed to improve how AI models like Large Language Models (LLMs) interact with search engines, as traditional search engines were developed primarily for human users. This innovative approach allows AI and search engines to better "communicate," helping the AI find and assess relevant information more accurately. LLMs, which are trained on vast, pre-existing datasets, often struggle with processing vague or unfamiliar queries, unlike humans who can refine searches iteratively. The eRAG system addresses this challenge by providing a cost-effective way to evaluate and improve AI's use of search engines. Traditional evaluation methods, such as crowdsourced judgments or expensive, high-powered models, are either costly or inefficient. eRAG, however, is up to three times faster and uses 50 times less GPU power than previous "gold-standard" methods, offering nearly the same reliability. With eRAG, a human task initiates an AI query, after which the search engine returns a set of results. The system then measures how effectively each document supports the AI's task, providing a more streamlined way to assess the search engine’s quality for AI. This tool marks a significant step toward future search engines fully optimized for AI users, bringing new efficiency to AI-driven information retrieval. The research has been honored with a Best Short Paper Award at SIGIR 2024, and the team has made the eRAG code publicly available here: GitHub - alirezasalemi7/eRAG: Codes and packages for the paper titled Evaluating Retrieval Quality in Retrieval-Augmented Generation.

  • Patronus AI launches world’s first self-serve API to stop AI hallucinations | VentureBeat Patronus AI has launched the first self-serve API to prevent AI "hallucinations," offering real-time detection of inaccuracies in AI-generated content. The San Francisco-based startup, backed by $17 million in Series A funding, provides customizable safety tools for businesses, including judge evaluators in plain English and specialized features like Lynx for medical accuracy and CopyrightCatcher for copyright protection. Its pay-as-you-go model starts at $10 per 1,000 API calls, aiming to make advanced AI safety accessible to smaller firms. Patronus AI’s platform has already attracted clients like HP and partnerships with Nvidia, MongoDB, and IBM, positioning it as a pivotal tool for companies facing new regulatory requirements on AI safety.

To access Patronus AI’s self-serve API, start by visiting their official website and creating an account through the "Start for free" option. Once registered, you can generate a unique API key from your dashboard, which will authenticate your requests. It’s recommended to review the API documentation to understand available endpoints and integration methods fully. With the API key and reference guide in hand, you can integrate Patronus AI’s evaluation capabilities into your applications using SDKs or direct API calls. For additional assistance, Patronus AI provides support channels to help streamline your integration process.

At GitHub Universe, GitHub announced a new level of developer choice for Copilot users with the integration of multiple language models: Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s o1-preview and o1-mini. These additions allow developers to select models that best suit their coding needs, from basic suggestions to complex, multi-step tasks, enhancing Copilot’s flexibility. The models are being rolled out progressively across Copilot features like multi-file editing, code review, and security autofix, with options for individual and organizational model preferences. GitHub also introduced GitHub Spark, an AI-native tool enabling users to build apps with natural language, aiming to simplify app creation without managing cloud resources.

My take: this multi-model approach enables developers to work more efficiently, improving productivity by matching the right model to the right job. Organizations can also fine-tune their Copilot setups to meet project-specific needs like quality, security, or speed, which is particularly beneficial for enterprises with compliance or performance standards. Additionally, GitHub Spark—a tool for building apps using only natural language—aims to reduce the learning curve, making app development accessible to a broader audience, even those without traditional coding skills. By expanding model choice and introducing accessible tools, GitHub is moving closer to its vision of democratizing software development and making advanced coding tools widely accessible and customizable.

Instructions: To use GitHub Copilot’s new multi-model features, start by ensuring you have an active Copilot subscription, accessible through your GitHub account settings. Once subscribed, open Copilot in your IDE or on GitHub.com and navigate to the settings where you can choose from models like Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s o1-preview and o1-mini, each optimized for different coding needs. Organizations can also customize model access for team members through GitHub’s organizational settings, allowing them to tailor Copilot based on project requirements. After selecting a model, you can use Copilot’s capabilities—including code suggestions, multi-file editing, and Copilot Chat—with the model best suited to your tasks. If available, GitHub Spark offers a streamlined way to create micro-apps by entering natural language prompts, with real-time previews and automatic versioning to refine your app as you go. For further details, refer to GitHub’s documentation or Copilot settings within your development environment.

?


If you enjoyed this newsletter, please comment and share. If you would like to discuss a partnership, or invite me to speak at your company or event, please DM me.

Ewa D.

AI Product & UX Advisor | UX4AI | Product & Design Leader | LinkedIn Top Voice: UX, User Experience Design, AI

1 周

You did it again, tons of great content. I found the regulatory section very informative.

Eugina Jordan

CMO to Watch 2024 I Speaker | 3x award-winning Author UNLIMITED I 12 patents I AI Trailblazer Award Winner I Gen AI for Business

1 周

Holly Uber here is the newsletter :)

Eugina Jordan

CMO to Watch 2024 I Speaker | 3x award-winning Author UNLIMITED I 12 patents I AI Trailblazer Award Winner I Gen AI for Business

1 周

Zahid Ghadialy -- my take on telco LLM and SK Telecom announcement from earlier in the year.

Eugina Jordan

CMO to Watch 2024 I Speaker | 3x award-winning Author UNLIMITED I 12 patents I AI Trailblazer Award Winner I Gen AI for Business

1 周

Rashmi R. Rao, Stacy Olinger, MSN, Sonnie Linebarger -- what is your take on advancements in Gen AI for healthcare and solutions by Oracle, Microsoft and Google?

Eugina Jordan

CMO to Watch 2024 I Speaker | 3x award-winning Author UNLIMITED I 12 patents I AI Trailblazer Award Winner I Gen AI for Business

1 周

Rudolph Moncrieff you might find analysis on CHIPS act useful ;)

要查看或添加评论,请登录

社区洞察

其他会员也浏览了