Gen AI for Business Weekly Newsletter # 27

Gen AI for Business Weekly Newsletter # 27

October 20 newsletter

Welcome to Gen AI for Business weekly newsletter #27.

We bring you key insights and tools on Generative AI for business, covering the latest news, strategies, and innovations in the B2B sector.

What stood out to me: As the temperature outside cools down, so does the AI drama... or does it? We’ve got Walmart and Amazon ditching search bars in favor of AI-powered shopping guides, the U.S. Treasury saving billions by hunting fraud with machine learning, and the Army piloting generative AI for acquisitions (because, of course, soldiers need AI, too!).

Meanwhile, Google’s shaking things up with Gemini-powered product briefs, Heineken’s brewing with bots, and Gatorade’s letting you design your squeeze bottles with AI flair. Over at Uber, they’re riding the hybrid wave of open-source and in-house LLM training—because why pick one when you can do both?

And let's not forget the real question of the week: Will ChatGPT’s revenue-sharing store leave developers hungry, or is it just the start of something big? Read on for all the spicy details and, as always, the latest tools and learnings to keep you and your business on the AI cutting edge.

But here’s what makes this week extra special: a deep dive into AI power consumption and the creative strategies companies are adopting to tackle the growing energy demands. From nuclear-powered data centers to new cooling solutions, AI players are racing to meet sustainability goals without breaking the grid (or the bank). With mega-investments from Nvidia, Microsoft, and Amazon, the stakes couldn’t be higher—because even the most advanced AI won’t run without juice just like our own devices ;)

If you enjoyed this letter, please leave a like or a comment and share! Knowledge is power.

Thank you,

Eugina

News about models and everything related to them

Meta AI, built on the Llama 3.2 model, offers text, image, and video generation across Meta platforms, engaging 400 million monthly users but raising concerns over privacy and misinformation. Mistral AI celebrates its first anniversary with new models—Ministral 3B and 8B—focused on efficient edge computing, offering low-latency, privacy-first solutions outperforming Llama 3.2 and Gemma 2 9B. NVIDIA’s Llama-3.1-Nemotron-70B-Instruct enhances conversational AI with improved contextual understanding and reasoning, targeting versatile applications via cloud APIs. Zyphra’s Zamba2-7B model, optimized for performance with minimal resources, emphasizes open-source accessibility, outperforming competitors like LLaMA-2 while avoiding the computational costs associated with larger models.

  • What is Meta AI? — everything you need to know | Tom's Guide ? Meta AI, developed by the parent company behind Facebook, Instagram, and WhatsApp, offers a range of AI capabilities built on the open-source Llama 3.2 model. Available in a dozen countries, it integrates features such as text and image generation, video creation, and voice interactions—including celebrity voices—across Meta's ecosystem. With 400 million monthly active users, Meta AI democratizes AI access by making advanced tools free and user-friendly. However, concerns about privacy, bias, and misinformation persist, reflecting Meta’s history with data privacy controversies. Meta AI aims to balance innovation and responsibility as it shapes the future of AI content creation and personalization.

  • Un Ministral, des Ministraux | Mistral AI | Frontier AI in your hands On the first anniversary of the groundbreaking Mistral 7B, Mistral AI unveils two next-gen models, Ministral 3B and 8B, designed for edge computing and on-device use. These models offer exceptional performance in knowledge, commonsense reasoning, and function-calling, catering to use cases like local translation, smart assistants, robotics, and autonomous analytics. Ministral 8B features an interleaved sliding-window attention pattern for faster, memory-efficient inference, with both models supporting up to 128k context length. Mistral’s customers, from hobbyists to global manufacturers, benefit from privacy-first local inference, low latency, and efficient multi-step workflows. The models outperform peers like Llama 3.2 and Gemma 2 9B across multiple benchmarks. Available through API access starting at $0.04/M tokens for Ministral 3B and $0.1/M tokens for Ministral 8B, both models come with flexible licensing options.

Mistral AI’s rapid rise, despite being just over a year old and with less funding than OpenAI, comes down to strategic focus and sustainable practices. Unlike OpenAI, which invests heavily in large, cloud-dependent models like GPT-4, Mistral has carved a niche by developing lightweight models tailored for edge computing and on-device use. These models, such as Ministral 3B and 8B, prioritize speed, efficiency, and privacy, catering to industries requiring low-latency, local AI solutions. By leveraging open-source research and efficient architectures, Mistral keeps its operational costs low, avoiding the cash burn that burdens larger players. Their sustainable business model, with compute-efficient solutions, allows for cost-effective deployment without relying on expensive infrastructure. Additionally, Mistral’s ability to quickly bring targeted models to market aligns with the growing demand for edge AI, especially in sectors like manufacturing, healthcare, and robotics. With $113 million in seed funding, Mistral has rapidly deployed products that meet specific market needs, demonstrating agility and focus while avoiding the financial strain that comes with scaling large, generalized AI models.


Source: Mistral

  • llama-3_1-nemotron-70b-instruct | NVIDIA NIM ? The Llama-3.1-Nemotron-70B-Instruct is an advanced large language model developed by NVIDIA. It is tailored to enhance response quality, with a focus on contextual understanding, reasoning, and text generation. This model is part of NVIDIA's efforts to push the boundaries of conversational AI by optimizing its performance in a variety of applications, such as text-to-text transformation, code generation, and language comprehension. Llama-3.1-70B is designed for deployment across different platforms, emphasizing flexibility and ease of integration. NVIDIA provides tools and APIs to interact with the model through their cloud service, enabling developers to leverage it for applications like chatbot development or more complex natural language processing tasks.

  • https://www.zyphra.com/post/zamba2-7b The Zamba2-7B model from Zyphra is a highly efficient language model designed to push the boundaries of performance using fewer resources. The training process employed a two-phase strategy: the first phase processed around 1 trillion tokens from general web data, and the second focused on 50 billion tokens of high-quality datasets. This novel architecture emphasizes efficient training with Mamba blocks and shared memory layers to reduce memory usage and enhance performance, making it possible to deploy on consumer GPUs. Zyphra's approach prioritizes both open-source transparency and accessibility by releasing all training checkpoints and parameters under an Apache 2.0 license. Zamba2-7B outperforms models like LLaMA-2 and OLMo-7B while requiring significantly less data and computation, demonstrating its capability to meet state-of-the-art benchmarks across multiple NLP tasks.

Unlike some of the major players like OpenAI or Google, Zyphra seems to focus on compact AI solutions rather than scaling massive, resource-heavy LLMs, potentially allowing them to avoid the substantial losses typically associated with running large-scale models. That is the trend that we are seeing because of the cost of compute.

Gen AI news from different industries

Professional services firms are balancing the need to retain billable hours for high-value tasks while adopting AI-driven pricing models for routine work, aiming to combine efficiency with human expertise. AI avatars are revolutionizing education by assisting teachers with routine tasks, improving student engagement, and addressing teacher burnout, while still preserving the essential human element. In healthcare, Microsoft’s new AI models enhance patient care by integrating medical data and enabling the development of AI-powered agents, while AI-enhanced MRIs show potential for improving brain abnormality detection. Generative AI in healthcare is streamlining administrative tasks, alleviating clinician burnout, and improving patient interaction, with strong support from healthcare providers. A study by Amadeus highlights that generative AI will dominate travel tech priorities in 2025, focusing on personalized services, though adoption is slowed by budget and data governance challenges.

Professional Services

  • Professional Services Face Generative AI-Driven Pricing Conundrum ? While AI enables faster, repetitive work, such as contract reviews, high-value and strategic tasks will likely continue to follow the billable hour model. This suggests the billable hour will remain in use for at least another decade, coexisting with alternative fee models like fixed or value-based pricing. Clients may benefit from an evolutionary approach, outsourcing routine tasks to AI-empowered firms offering fixed fees, while reserving more complex work for time-based billing. However, firms face a dilemma—AI investments should, in theory, reduce service costs, but clients may resist paying for AI usage unless it results in innovation or enhanced services. To balance tradition with innovation, firms must blend fixed-fee structures for AI-suited tasks with traditional pricing for bespoke, human-driven work.

Professional services firms are grappling with the shift AI brings to traditional pricing models, similar to the challenges recently seen in the legal sector. AI reduces workflow times, making repetitive tasks like contract reviews faster and more cost-efficient. While the billable hour will persist for high-value, strategic tasks, firms must embrace value-based pricing models for routine work. This evolution offers clients a hybrid approach: AI-powered services at fixed fees alongside human-led expertise billed by time. To remain competitive and transparent, firms need to rethink their pricing structure, moving beyond the billable hour toward models that reflect both efficiency and value.

Education

  • The Role of AI Avatars in Modernizing Education AI avatars are enhancing education by supporting teachers with routine tasks, allowing them to focus on personalized instruction, complex lessons, and creative engagement. These digital assistants improve student participation, especially for those reluctant to engage in traditional settings, by offering customized learning experiences tailored to individual needs. AI avatars also help manage teacher workloads, reducing burnout and addressing resource shortages by providing after-hours support. While these tools complement human teaching, they cannot replace the emotional connections and nuanced guidance that only educators can provide, ensuring the irreplaceable human element remains central to education.

Healthcare

  • ICYMI: Microsoft Unveils New Healthcare AI Models and AI Agent Service | PYMNTS.com Microsoft announced new AI-powered capabilities for its Cloud for Healthcare platform, aiming to enhance patient care, clinical operations, and healthcare team collaboration. These updates include AI models designed to integrate various data sources, such as medical imaging, genomics, and clinical records, allowing healthcare providers to gain deeper insights. Microsoft Fabric, an AI-powered data management platform, is now generally available to support data access and analytics across healthcare organizations. Additionally, Microsoft launched a public preview of healthcare agent services in Copilot Studio, enabling providers to develop AI agents for tasks like appointment scheduling and patient triaging. The generative AI healthcare market is projected to grow significantly, reaching $22 billion by 2032, highlighting the industry's rapid adoption of AI-driven innovations.

  • AI-Enhanced MRIs Show Potential for Brain Abnormality Detection - Neuroscience News Researchers at UC San Francisco have developed a machine learning model that transforms 3T MRI scans into synthetic images resembling the higher-resolution 7T MRI, enhancing the visibility of brain abnormalities. These upgraded images reveal finer details like white matter lesions and subcortical microbleeds, which are difficult to detect with standard 3T MRIs. This technology shows promise for improving diagnostics in conditions like traumatic brain injury (TBI) and multiple sclerosis (MS). However, extensive clinical validation is still required before it can be adopted widely. This AI-driven solution could expand access to high-quality imaging without needing specialized 7T equipment, potentially improving patient outcomes in neurodegenerative diseases and brain injury assessments.

  • ?How gen AI can help doctors and nurses ease their administrative workloads highlights how administrative workloads are burdening healthcare professionals, with doctors and nurses spending nearly 28 hours per week on paperwork. This administrative overload contributes to burnout, staffing shortages, and less time for patient care, with 82% of clinicians reporting burnout and 80% indicating administrative tasks reduce patient interaction. Generative AI (gen AI) offers promising solutions by streamlining documentation, automating medical imaging reports, and speeding up prior authorizations. Healthcare providers and insurers are optimistic about AI’s potential, with over 90% expressing positive attitudes toward easing workloads. This approach aims to free clinicians to focus more on patient care, improving both efficiency and quality in healthcare. Read the report here: https://services.google.com/fh/files/misc/measuring_admin_burden_2024_ebook.pdf ?

Travel

  • Generative AI tops travel tech priorities for 2025, study finds A recent report from Amadeus, based on research by Mercury Analytics, highlights that generative AI is expected to be a central focus in travel technology throughout 2025. Surveying 306 travel tech leaders across 10 countries, the study reveals that 46% of respondents globally identified generative AI as a top priority, with this figure rising to 61% in the Asia Pacific region. Currently, 51% of respondents reported that generative AI already plays a significant role in their country’s travel industry, with popular applications including digital assistants, personalized recommendations, and content generation. While 87% of companies are open to collaborating with third-party vendors for AI-powered solutions, only 41% have the budget and resources necessary to implement them. Among the top use cases, companies are focusing on traveler assistance during bookings (53%), personalized recommendations (48%), and post-travel feedback collection (45%). However, challenges such as data security (35%), skills shortages (34%), and infrastructure limitations (33%) are slowing adoption.?

??

Regional and regulatory updates

Alibaba has launched an AI translation tool claiming superior performance to Google and ChatGPT, leveraging its Qwen model to support 15 languages and boost merchant sales across platforms like AliExpress. Moonshot AI unveiled Kimi Chat Explore, offering advanced reasoning and multi-step task automation, rivaling OpenAI’s models and securing significant backing from Alibaba and Tencent. ByteDance is shifting towards Huawei’s AI chips amidst U.S. trade restrictions, navigating geopolitical tensions while investing in local suppliers to maintain its AI competitiveness. Pennsylvania’s AI initiatives emphasize ethical governance, piloting ChatGPT and partnering with universities, while states like California and New York pursue regulatory efforts addressing algorithmic bias, AI ethics, and public safety. LatticeFlow, collaborating with ETH Zurich, launched the first EU AI Act-aligned compliance framework to ensure responsible AI development. Meanwhile, the U.S. considers further restricting Nvidia and AMD chip exports, reflecting broader geopolitical tensions, as the G7 promotes international collaboration with its AI governance toolkit for the public sector.

  • Alibaba's international arm says its new AI translation tool beats Google and ChatGPT Alibaba’s international arm has launched an enhanced AI translation tool, claiming superior performance compared to Google, DeepL, and ChatGPT. This updated version, based on its Qwen model, supports 15 languages, including English, Chinese, and Spanish. It leverages contextual clues for accurate translations tailored to specific industries and cultures. With 500,000 merchant users, the tool has facilitated over 100 million product listings, aimed at boosting international sales for merchants on Alibaba platforms like AliExpress and Lazada. While international revenue for Alibaba grew 32% in Q2 2024 to $4.03 billion, its domestic Taobao and Tmall sales saw a 1% decline, highlighting Alibaba's strategic shift towards global markets.

  • Chinese unicorn Moonshot AI updates Kimi chatbot to offer capabilities akin to OpenAI o1 | South China Morning Post Moonshot AI, a Chinese startup, has introduced an updated version of its chatbot, Kimi Chat Explore, with enhanced reasoning and multi-step task execution capabilities, rivaling OpenAI’s o1 LLM. The new chatbot can analyze over 500 online pages per query, surpassing the previous 50-page limit, and automates complex tasks like comparing stock and gold returns. Moonshot AI’s CEO Yang Zhilin highlighted the growing trend of AI applications switching between modes with advanced reasoning abilities. Backed by Alibaba and Tencent, the startup recently secured a $300 million funding round, boosting its valuation to $3.3 billion. Kimi Chat was ranked the third most popular Chinese AI app in September 2024.?

  • TikTok's parent firm could shun Nvidia, AMD as reports claim it will use 100,000 Huawei AI chips to train its next-gen LLM | TechRadar ? ByteDance, TikTok’s parent company, is shifting towards Huawei’s Ascend 910B AI chips, aiming to reduce its reliance on Nvidia amidst U.S. trade restrictions. After spending over $2 billion on Nvidia’s restricted H20 GPUs in 2024, ByteDance has reportedly purchased more than 100,000 Ascend 910B chips, though it has only received a portion of the order so far. Transitioning to Huawei’s chips presents challenges, as ByteDance’s existing AI models—like Doubao and Jimeng—were trained on more powerful Nvidia hardware. Relying on Huawei’s processors could complicate the development of advanced AI tools, potentially impacting ByteDance’s competitiveness. This shift aligns with ByteDance’s broader strategy to navigate U.S. export controls by investing in local suppliers, including its recent stake in Xinyuan Semiconductors, which could support future ventures in VR hardware.

Security concerns have driven a geopolitical division, especially in AI and tech supply chains. The U.S. and its allies worry about data privacy, national security, and technology misuse, particularly regarding Chinese tech companies like ByteDance. As a result, restrictions on exporting advanced hardware (such as Nvidia's GPUs) are being imposed to prevent strategic technologies from strengthening China's AI capabilities.

However, these moves have accelerated China's shift toward self-reliance in semiconductors and AI technologies, with companies like ByteDance now seeking local alternatives like Huawei’s Ascend chips. This division risks fragmenting the global tech ecosystem, where companies must navigate not only technological challenges but also political constraints. ByteDance’s balancing act—maintaining AI leadership while managing regulatory pressures—is just one example of how this split impacts the corporate landscape.

The broader implication is a race to develop indigenous tech infrastructures on both sides. This divide can hinder global innovation, where traditionally, cross-border collaboration allowed faster breakthroughs. Now, AI models trained on separate hardware ecosystems might follow divergent paths, potentially limiting interoperability and global progress.

While national security is a valid concern, the outcome could reshape global AI innovation, raising questions about whether this division will foster long-term security or create new risks in the form of isolated tech ecosystems competing with one another.

  • Ethical Use of Artificial Intelligence that Empowers Workers | Commonwealth of Pennsylvania One year after signing an executive order, Pennsylvania Governor Josh Shapiro highlighted the state’s progress in adopting responsible AI use during the AI Horizons Summit in Pittsburgh. The Shapiro Administration has introduced AI-based training for public employees, launched a pilot with OpenAI’s ChatGPT, and partnered with universities and private companies to foster AI innovation. The state aims to empower its workforce and ensure ethical AI practices through a governance framework focusing on values such as accuracy, privacy, and fairness. Additionally, the NVIDIA AI Tech Community, established with Carnegie Mellon University and the University of Pittsburgh, will drive advancements in robotics and intelligent systems.?

AI regulation efforts in California, New York, and Colorado echo Pennsylvania’s proactive approach, though each state focuses on distinct areas to address local needs and industries. California has introduced over a dozen AI-related bills aimed at curbing algorithmic bias, addressing deepfake usage, and setting standards for AI in critical infrastructure. Notably, Senate Bill 1047 tackles existential AI risks, requiring tests for AI's potential misuse, including for cyber and physical attacks. However, some proposals addressing discrimination were diluted due to industry lobbying pressures, revealing ongoing challenges in balancing innovation and regulation.

New York’s legislative focus is broad, with bills addressing algorithmic transparency, AI’s use in employment decisions, and chatbot accountability. The state is also considering creating an AI ethics commission to oversee ethical AI deployment across industries, including protections against discrimination and deceptive AI applications. Other regulations tackle AI in political communications and prohibit weaponized robots, underscoring concerns about AI’s impact on both privacy and public safety.

  • LatticeFlow AI ? LatticeFlow, in partnership with ETH Zurich and INSAIT, recently launched the first AI compliance framework aligned with the EU AI Act, focusing on generative AI. This framework offers a detailed mapping of the act's principles to technical benchmarks, allowing organizations to assess the performance and compliance of their AI models. It also introduces an open-source suite to evaluate public foundation models from major players like OpenAI, Meta, and Google. The initiative addresses gaps in regulatory adherence, ensuring AI systems meet cybersecurity and fairness standards while helping companies optimize their models to avoid bias and toxicity.

  • US Weighs Capping Nvidia, AMD Chip Sales to Some Countries - Bloomberg The U.S. is reportedly considering restricting the sale of AI semiconductors from Nvidia and AMD to certain countries, primarily targeting regions like the Middle East. These restrictions aim to cap export licenses to protect national security interests. Both Nvidia and AMD have been pivotal in developing advanced AI chips, and any imposed caps could have significant implications for global technology markets. The move reflects the U.S.'s strategic efforts to maintain control over critical technology amid growing geopolitical tensions surrounding AI advancements and semiconductor exports.?

  • G7 Toolkit for Artificial Intelligence in the Public Sector | OECD The G7 Toolkit for AI in the Public Sector, developed by the OECD and UNESCO, offers a framework for governments to responsibly use AI across public services. It emphasizes ethical AI aligned with principles of fairness, transparency, and privacy, while showcasing governance strategies and real-world applications like fraud prevention and policy automation. The toolkit highlights the importance of high-quality data, interoperability, and international collaboration to ensure AI’s positive societal impact. It also advocates for incremental approaches, such as pilot projects and regulatory sandboxes, to mitigate risks and build public trust as AI adoption scales.?

News and Partnerships

Google has reshuffled leadership to sharpen its AI and ad strategies. The Gemini app team now reports to Google DeepMind’s CEO Demis Hassabis, signaling a focus on AI consolidation, and the Assistant team shifts to platforms and devices. With antitrust pressures mounting and the Justice Department contemplating a potential breakup, Google’s leadership realignment aims to streamline efforts and strengthen its competitive position. Meanwhile, AMD intensifies its AI focus with new accelerators targeting cloud providers, and both Intel and AMD have launched the x86 Ecosystem Advisory Group to foster innovation and architectural consistency, balancing rivalry with collaboration to support AI workloads and future computing needs.

  • Google shakes up leadership, Raghavan becomes Chief Technologist Google reshuffled key leadership roles to strengthen AI and ad strategies amidst regulatory and competitive pressures. Prabhakar Raghavan moves to Chief Technologist, focusing on technical direction, while Nick Fox takes over his former responsibilities in search, ads, geo, and commerce. The Gemini app team joins Google DeepMind, reporting to CEO Demis Hassabis, and the Google Assistant team shifts to platforms and devices. CEO Sundar Pichai emphasized that these changes aim to streamline AI efforts as the company addresses antitrust risks and intensifying AI competition.

  • Is Google getting ready for a potential breakup and ensuring all the right eggs are in the right baskets? ;) AMD doubles down on AI, Google faces possible breakup and 'causal AI' is coming - SiliconANGLE ? Google faces antitrust challenges, with the Justice Department considering a breakup of its operations and mandating Play Store access to rival app marketplaces. Amazon faces similar pressures, with the FTC’s antitrust case moving forward. Both companies must navigate these legal hurdles while maintaining their market dominance.

AI continues to attract significant investment, despite questions about its financial sustainability. OpenAI projects $14 billion losses by 2026, even with $100 billion in revenue, raising concerns about AI's profitability. Meanwhile, AMD is expanding aggressively to challenge Nvidia, releasing new AI accelerators and targeting cloud providers as the next big opportunity.

The concept of causal AI is gaining traction, emphasizing models capable of understanding cause-and-effect relationships beyond pattern recognition.

  • Intel and AMD Form x86 Ecosystem Advisory Group to Accelerate… Intel and AMD have announced the formation of the x86 Ecosystem Advisory Group, bringing together leading tech figures and companies, including Linus Torvalds, Tim Sweeney, and major industry players such as Microsoft, Google, Dell, Lenovo, and Meta. The group aims to foster innovation, enhance compatibility, and ensure architectural consistency across the widely used x86 computing platform. The initiative focuses on advancing x86 to meet the demands of evolving workloads, such as AI, and enhancing software development efficiency across sectors including cloud, data centers, and edge computing. This collaboration builds on the companies' long history of platform-level advancements and standards, such as PCI and USB, while aiming to shape the future of architecture through open industry dialogue.?

Yes, Intel and AMD are rivals, especially in the CPU and GPU markets, where they compete directly for market share in personal computers, data centers, and AI hardware. Both companies have a long history of rivalry, with each introducing innovations and products to outdo the other, such as Intel's x86 CPUs and AMD’s Ryzen processors. However, despite this competition, Intel and AMD have collaborated on various industry standards and technologies to advance the computing ecosystem. For instance, both companies contributed to creating PCIe and USB standards, which are essential for modern hardware interoperability. Their new partnership through the x86 Ecosystem Advisory Group reflects a collaborative effort to address common challenges, including the need for compatibility, scalability, and support for AI workloads.

??

Gen AI for Business Trends, Concerns, and Predictions:?

Corporate integrity plays a crucial role in aligning AI with human values by ensuring responsible practices through ethics, transparency, and governance, with companies like Infosys and Novartis leading efforts despite only 6% of U.S. firms having formal AI guidelines. Tad.AI introduces an accessible AI music generator offering royalty-free tracks across genres, although legal uncertainties around AI-generated content ownership and copyright remain. Yoshua Bengio warns about the risks of pursuing unregulated AI, emphasizing the need for corporate responsibility as firms race toward AGI, while Apple critiques LLMs for limited reasoning abilities, likening their outputs to advanced pattern matching. OpenAI faces challenges as ChatGPT is exploited for malware creation, highlighting the need for stricter platform monitoring, and researchers expose vulnerabilities in AI models like Claude through hidden Unicode characters, raising security concerns. Meanwhile, studies show AI perpetuating biases in loan approvals, underlining the importance of regulation. A spotlight on how energy demands from AI data centers prompt companies like Google to explore nuclear energy solutions, reflecting a shift towards sustainable power strategies amidst rising workloads.

  • Why corporate integrity is key to shaping the use of AI As AI advances rapidly, businesses play a critical role in aligning technology with human values. Corporate integrity, extending beyond legal compliance, ensures responsible AI practices through ethics, transparency, and accountability. While frameworks like the OECD’s AI Incident Monitor report over 600 AI-related incidents in 2024, only 6% of U.S. companies have developed ethical AI guidelines despite 73% of executives acknowledging their importance. Companies like Infosys and Novartis are leading with AI governance councils and risk management frameworks, integrating ethical principles into strategic decisions. Investor pressure, such as Microsoft's shareholders urging accountability, and government initiatives highlight the growing demand for ethical AI as both a social responsibility and a competitive business strategy.??

  • Tad.AI Launches A Next-Gen AI Music Generator Set to Redefine the Future of Music Creation | That Eric Alper Tad.AI , developed by HIX.AI , introduces a new AI music generator aimed at democratizing music creation with original, royalty-free songs across various genres. It offers both free and paid plans, with the latter guaranteeing no copyright risks for generated content. Users can create music spanning genres like classical, jazz, and pop, and customize tracks by mood (e.g., calm, sad, or exciting). Tad.AI also supports lyric generation to aid creativity, providing simplicity for both hobbyists and professionals. CEO Camille Sawyer emphasizes the tool’s potential to compete with platforms like Suno AI, making music creation quick and accessible to all.

While Tad.AI promises royalty-free music, using AI to generate original content isn’t without its risks. AI music generators like this one walk a fine line between innovation and copyright infringement. The problem comes down to how the AI learns—if it’s trained on copyrighted music (even unintentionally), it could end up producing tracks that sound way too similar to existing works. There’s also the gray area of ownership. When an AI creates music, who actually owns it—the user or the platform? The law hasn’t quite caught up with AI-generated content yet, which leaves room for ambiguity. And even though Tad.AI assures that paid users won't have copyright concerns, AI outputs can sometimes mirror patterns from the original data too closely, leading to unintended infringement. This is why anyone using AI-generated music—whether it’s for content creation, ads, or vlogs—needs to stay cautious. A little due diligence now can save a lot of headaches later. It’s exciting to have tools like Tad.AI opening new doors, but the legal landscape around AI and copyright is still evolving, and users need to tread carefully.

  • ?AI 'Godfather' Yoshua Bengio: We're 'creating monsters more powerful than us' Yoshua Bengio, one of the pioneers of AI, warns that the technology is evolving rapidly, with significant risks if not properly regulated. In an interview, Bengio emphasized that AI's power could fall into the wrong hands, potentially aiding state actors or malicious groups, and that there is growing concern about systems becoming autonomous and beyond human control. Bengio points to Anthropic as a leader in responsible AI development, highlighting the company's support for California’s AI regulation bill and their commitment to halt any AI effort deemed too dangerous. However, he warns that a corporate race toward Artificial General Intelligence (AGI) creates tension between innovation and public safety, as companies focus more on leading the market than on ethical safeguards.??

  • Apple pours water on unreasonable LLM hype ? Apple’s recent study (read it here: GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models )? critically examines the capabilities of large language models (LLMs) and challenges the current AI hype. Led by ex-DeepMind researcher Mehrdad Farajtabar, the research paper titled GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models argues that LLMs struggle with genuine reasoning. Instead, these models display behavior that resembles sophisticated pattern matching rather than logical reasoning, making their outputs fragile and sensitive to minor changes in prompts. The study highlights significant performance issues, such as inconsistencies in solving similar math problems and a marked performance drop with increased difficulty. Apple’s team suggests that scaling data, parameters, or compute power won’t necessarily solve these problems. They caution that current models—including open-source ones like Llama, Gemma, and Mistral, as well as proprietary systems like OpenAI’s GPT-4o—might only improve in pattern matching rather than achieving true reasoning skills. And you can read interesting comments and discussions on the study here: Apple study proves LLM-based AI models are flawed because they cannot reason | Hacker News ?

  • ?OpenAI confirms threat actors use ChatGPT to write malware The threat actors rely on ChatGPT for tasks ranging from reconnaissance to coding and anomaly detection evasion, showing how generative AI can streamline offensive cyber operations for both skilled and less-skilled attackers. While OpenAI has banned accounts linked to these activities and shared indicators of compromise with cybersecurity partners, the report underscores the broader risk of AI tools being weaponized by malicious actors to increase efficiency and evade detection.?

To prevent the misuse of AI platforms like ChatGPT for cyberattacks, companies need to implement multiple layers of monitoring and restrictions. Activity tracking and content filtering can detect unusual patterns, such as repeated queries related to malware creation or phishing. These filters can block dangerous code snippets in real time, while suspicious accounts are flagged for deeper review. Limiting access to advanced capabilities through user verification ensures that only vetted users can execute sensitive operations, while rate limits and query restrictions prevent automated abuse. Collaboration with cybersecurity experts is crucial. Sharing indicators of compromise (IOCs), such as IP addresses and suspicious activity, helps AI providers block malicious users swiftly and refine detection algorithms. AI-based behavioral detection models can further aid by recognizing malicious intent behind seemingly benign queries, such as phishing template generation or vulnerability scanning. Platforms can also analyze query metadata—like geolocation or session history—to spot connections with known threat actors. What else can be done?

  • Invisible text that AI chatbots understand and humans can’t? Yep, it’s a thing. - Ars Technica Researchers recently uncovered a significant security vulnerability in AI language models such as Claude and Microsoft Copilot, involving the use of invisible characters within the Unicode text encoding standard. These hidden characters can be smuggled into prompts or outputs to extract sensitive information without users noticing, a technique dubbed "ASCII smuggling." The vulnerability exploits large language models (LLMs) that can interpret invisible instructions embedded in normal-looking text while remaining undetected by human users. In demonstration attacks, Microsoft Copilot was manipulated to extract and append confidential data like sales figures and passcodes into URLs using invisible Unicode characters. These URLs looked harmless to users but covertly transferred sensitive data to attacker-controlled servers. Researchers also highlighted risks from "prompt injection" attacks, where hidden instructions manipulate LLM behavior, creating risks for both personal and enterprise AI use. While companies like Microsoft and OpenAI have introduced mitigations, the issue illustrates broader challenges in AI security. Models' ability to interpret hidden content creates new attack vectors, complicating defenses. Experts warn that, like other coding vulnerabilities (e.g., SQL injection), preventing such exploits requires vigilance from developers at every stage of system design.

  • ?As AI takes the helm of decision making, signs of perpetuating historic biases emerge ? A Lehigh University study found AI-powered chatbots used in mortgage applications exhibit racial bias, disproportionately denying loans and assigning higher interest rates to Black and Hispanic applicants compared to white counterparts with identical financial profiles. White applicants were 8.5% more likely to be approved, and for low credit scores, approval rates dropped below 80% for Black applicants versus 95% for white applicants. These biases arise from systemic issues reflected in training data, like credit scores and zip codes linked to discriminatory practices such as redlining. Researchers suggest bias audits and prompt engineering to mitigate these issues, along with greater human oversight and regulatory frameworks to ensure fair use of AI tools. Read the research here: Measuring and Mitigating Racial Disparities in Large Language Model Mortgage Underwriting

To address these issues, researchers suggest using prompt engineering to instruct AI systems to make unbiased decisions and conduct regular bias audits. Increased human oversight is recommended to prevent unchecked decision-making, particularly in regulated industries like finance. The study underscores the need for government regulation and standards, such as the pending AI Risk Management Framework bill in Congress, to ensure fair and transparent AI practices.

  • ?State of AI Report ? The State of AI Report 2024 highlights key developments in AI, including shrinking performance gaps between top models, with OpenAI’s o1 leading briefly. Research now prioritizes planning and reasoning, integrating reinforcement learning and self-improvement for more autonomous AI applications. Foundation models extend beyond language, advancing in fields like biology and neuroscience. Despite U.S. sanctions, Chinese labs maintain AI progress through stockpiles and cloud access, though domestic chip efforts struggle. AI companies now represent $9 trillion in enterprise value, driven by public market enthusiasm, though long-term sustainability concerns linger. Some companies resort to pseudo-acquisitions to manage high operational costs. The existential risk discourse has cooled, but research into model vulnerabilities continues, emphasizing safeguards to prevent misuse.

  • Wikipedia:WikiProject AI Cleanup ? WikiProject AI Cleanup on Wikipedia aims to address the influx of unsourced and misleading AI-generated content on the platform. With the rise of large language models (LLMs) like ChatGPT, Wikipedia faces challenges where generated content introduces errors, fabricates sources, or misapplies legitimate references. The project focuses on identifying AI-generated text and images, ensuring they meet editorial standards, and helping editors who may unknowingly rely on AI outputs. The initiative doesn’t ban AI use outright but ensures that any contributions are accurate and properly sourced. Detection challenges arise because AI content often mimics human writing, though tell-tale signs include fabricated citations, generic descriptions, and irrelevant references. For instance, the article Leninist historiography contained fake sources, while Estola albosignata cited valid articles unrelated to its subject. In some cases, AI-generated content has resulted in hoaxes, such as the fictional article Amberlihisar, which went unnoticed for nearly a year before deletion. Participants are encouraged to tag questionable articles, verify sources, and conduct thorough audits to maintain Wikipedia’s reliability. The project highlights that automated tools like GPTZero are insufficient for detection, requiring human oversight for accurate content moderation.

  • Dario Amodei — Machines of Loving Grace ? The essay "Machines of Loving Grace" by Anthropic’s CEO emphasizes the dual potential of AI—highlighting both risks and transformative benefits. Despite public perceptions that focusing on risks implies pessimism, the author argues that addressing these risks is essential for realizing AI’s significant positive impact. The essay envisions a world where AI accelerates advancements in health, neuroscience, poverty reduction, governance, and work. In biology, AI could compress decades of medical progress into a few years, curing diseases and extending human lifespans. Neuroscience could similarly benefit from AI, potentially eradicating mental illnesses and enhancing cognitive freedom. On governance, the text warns that while AI could enhance democracy and global cooperation, it also poses risks by enabling authoritarian regimes. In the realm of economics, the challenge lies in ensuring that AI's benefits are equitably distributed across nations and addressing concerns around the displacement of human labor. Ultimately, the essay suggests that meaningful engagement with AI—balancing caution with hope—could help achieve a fairer and healthier world, with AI becoming a tool for both scientific breakthroughs and societal transformation.

  • The New York Times has had it with generative AI companies using its content | TechCrunch Here are the deets:? The New York Times has issued a cease-and-desist letter to Perplexity, a generative AI startup backed by Jeff Bezos, demanding the company stop using its content in AI-generated summaries without permission. The Times argues that Perplexity has violated copyright laws by using its carefully researched and edited journalism without a license, alleging the startup was “unjustly enriched” by leveraging the publication’s work. This confrontation is part of a broader effort by the Times to limit unauthorized use of its content by AI companies. The paper has also taken legal action against OpenAI for training ChatGPT on its material without consent. Similarly, other publishers have criticized Perplexity for unethical web scraping practices, including summarizing paywalled content, as identified by a study from Copyleaks. To me, The New York Times has not accepted Perplexity’s revenue-sharing offer, likely because it remains concerned about unauthorized use of its content. While Perplexity introduced this program to share revenue with publishers and address prior complaints, the Times contends that the AI startup continues to use its material without proper consent.? The underlying issue is not just about compensation but also about unauthorized access. The Times has demanded clarification on how Perplexity is still accessing content despite efforts to block it. Yes, how??? Perplexity appears to access The New York Times' content through automated scraping practices, which can involve bypassing restrictions set by publishers. Although Perplexity claims its bots adhere to the Robots Exclusion Protocol (robots.txt), there are instances where it disregards these rules—especially when users input specific URLs. This functionality allows the AI to retrieve content on behalf of users, much like a person manually visiting a webpage and copying information. Additionally, some reports indicate that Perplexity has used third-party crawling services, creating ambiguity about how content is accessed. The startup also came under scrutiny for allegedly using unpublished IP addresses to scrape content from sites that explicitly blocked it, like Condé Nast and potentially The New York Times.?

A closer look at AI power consumption challenges

In this sub-section, we take a closer look at how companies are trying to solve AI data center energy consumption issues. It was prompted by the Google News about using nuclear energy Google signed a deal to power data centers with nuclear micro-reactors from Kairos — but the 2030 timeline is very optimistic | TechCrunch Google has signed an agreement with Kairos Power, a nuclear startup, to build seven small modular reactors (SMRs) to supply its data centers with around 500 megawatts of carbon-free energy by 2030. This partnership aligns with the growing energy demands of AI and data centers, following similar moves by Microsoft and Amazon (se below). The Kairos reactors, cooled by innovative molten salts rather than water, aim to be faster and cheaper to build than traditional nuclear plants. However, regulatory and public opposition could challenge the project, as nuclear power remains controversial despite recent support trends. The reactors may connect either directly to Google’s sites ("behind the meter") or feed into the energy grid. Google joins a broader industry shift, as competitors also explore nuclear power to maintain sustainability commitments amid rising AI workloads.

  • Here is a good analysis of AI Datacenter Energy Dilemma - Race for AI Datacenter Space The surge in AI clusters is straining datacenter capacity and energy grids globally. AI workloads, especially training models, demand high power densities, with GPUs consuming significantly more energy than traditional servers. This growth is driving datacenter power consumption to an expected 96 GW by 2026 (!!!), raising concerns about sustainability. Key challenges include securing transformers, managing grid capacity, and deploying effective cooling solutions. While the U.S. leads in datacenter expansion due to lower energy costs and a greener power mix, Europe and Asia face higher tariffs and geopolitical constraints.?

Microsoft and Constellation Energy recently announced plans to repurpose the previously closed nuclear facility to meet growing energy demands for AI projects. This initiative, which includes renaming the site the Crane Clean Energy Center, aims to deliver carbon-free power, with operations expected to begin by 2028. The resurgence of interest in nuclear power for AI workloads reflects the need for stable, low-carbon energy sources to support energy-intensive data centers. However, the project still requires approval from the U.S. Nuclear Regulatory Commission, alongside environmental assessments and permits, including those related to water use from the Susquehanna River. ICYMI: https://www.npr.org/2024/09/20/nx-s1-5120581/three-mile-island-nuclear-power-plant-microsoft-ai ?

Amazon is making a similar move, ICYMI: Amazon Vies for Nuclear-Powered Data Center ?

Amazon Web Services (AWS) acquired a $650 million data center adjacent to the Susquehanna Steam Electric Station in Pennsylvania, aiming to expand its direct use of the plant’s nuclear power from 300 MW to 480 MW. However, this move has sparked regulatory disputes, with utility companies Exelon and AEP filing protests to the Federal Energy Regulatory Commission (FERC). They argue that such behind-the-meter deals bypass grid fees, shifting financial burdens onto other energy customers.? Critics warn that diverting power from the grid could increase prices and reduce energy availability for other users. With increasing energy demands from AI and data centers, regulators anticipate more conflicts over access to sustainable power.

?

Source: Association of Data Scientists

Here is a table compiled by me of Energy STrategies that new players and large tech companies are utilizing as of today.


Copyright: Eugina Jordan. Use with proper credit only.

This blog and report from IBM The hidden costs of AI: How generative models are reshaping corporate budgets - IBM Blog highlights how the growing adoption of generative AI is driving up computing costs, which could hinder business innovation. The report, titled “The CEO’s Guide to Generative AI: Cost of Compute,” projects an 89% increase in computing costs between 2023 and 2025, with 70% of surveyed executives attributing these increases to generative AI initiatives. Read the report here: The CEO’s Guide to Generative AI: Cost of compute | IBM ??

  • Generative AI as a catalyst for HRM practices: mediating effects of trust | Humanities and Social Sciences Communications The study by K.D.V. Prasad and Tanmoy De (2024) explores the transformative impact of generative AI tools on human resource management (HRM) practices, focusing on the crucial role of trust as a mediator between user perception and organizational outcomes. Their research in the IT sector reveals that factors such as ease of use, usefulness, optimism, and innovativeness positively influence employees' perception of AI tools, which, in turn, enhances trust. This trust strengthens organizational commitment, driving higher employee engagement and performance. The study integrates the Technology Acceptance Model (TAM), Technology Readiness Index (TRI), and Stimulus-Organism-Response (SOR) theory to demonstrate how trust and user perception foster smooth AI adoption. Findings highlight that generative AI tools improve HR efficiency by automating repetitive tasks, fostering commitment, and increasing job satisfaction. However, sustained success requires organizations to promote trust through transparency and reliability. Training employees to embrace new technologies with positive perceptions further accelerates adoption, unlocking AI's potential to streamline workflows and improve engagement and performance.?

  • AI at Work: Why GenAI Is More Likely To Support Workers Than Replace Them - Indeed Hiring Lab After assessing over 2,800 job skills, the study found that 68.7% of these skills are either "very unlikely" or "unlikely" to be replaced by GenAI. The analysis focused on GenAI’s ability to provide theoretical knowledge, solve problems, and the importance of physical execution for various roles. While GenAI excels at theoretical knowledge, its problem-solving capacity is rated moderate, and it struggles with tasks requiring hands-on execution, such as nursing or cooking. For office roles, including software development, GenAI shows promise by assisting with repetitive tasks and problem-solving, indicating that continuous learning and upskilling are crucial for workers to remain competitive. However, in jobs requiring physical presence—like healthcare—GenAI’s impact will remain limited to administrative support, such as documentation. The report emphasizes that human oversight will remain critical, and the focus should shift toward equipping workers with GenAI-related skills to enhance productivity rather than fearing job displacement.?

  • Evaluating fairness in ChatGPT | OpenAI ? A recent study on fairness in ChatGPT examined how user names, which can carry gender, cultural, or racial connotations, influence the model’s responses. The research revealed that, while ChatGPT generally maintains high response quality regardless of names, about 0.1% of responses reflected harmful stereotypes, primarily in open-ended tasks like storytelling. Researchers used Language Model Research Assistants (LMRAs) to analyze millions of conversations while protecting user privacy. Older models, such as GPT-3.5 Turbo, showed higher bias rates compared to newer versions like GPT-4o. The study emphasizes the need for continuous improvements in fairness and transparent methodologies to reduce bias, ensuring ethical and responsible AI development. Read the study here: First-Person Fairness in Chatbots | OpenAI ?

News and updates around? finance, Cost and Investments

The U.S. Treasury Department saved taxpayers over $4 billion through AI initiatives targeting fraud, demonstrating how machine learning can streamline financial processes and secure transactions. Developers on OpenAI’s GPT Store face challenges with monetization due to engagement-based revenue models, struggling to earn sustainable income without robust analytics or payment options outside the U.S. Despite concerns about profitability, AI investments remain strong, with $18.9 billion raised in Q3 2024, including OpenAI's record $6.6 billion round. Generative AI startups attract 40% of cloud sector VC funding, indicating a shift in IT budgets toward AI-driven innovations at the expense of traditional software. Nvidia and other chipmakers benefit from surging AI demand, though analysts predict growth may stabilize as infrastructure spending reaches maturity. India’s generative AI ecosystem is thriving with $750 million in cumulative funding since 2023, led by Bengaluru, while ServiceNow commits £1.15 billion to UK AI innovation, focusing on workforce expansion and community development over the next five years.

  • Treasury Department now using AI to save taxpayers billions ? The U.S. Treasury Department has reported significant success using artificial intelligence (AI) to combat fraud, saving American taxpayers over $4 billion in the past year. The AI initiatives, particularly leveraging machine learning, enabled the department to analyze vast datasets to identify fraudulent patterns, ensuring more accurate and secure financial transactions. Key outcomes include $2.5 billion saved by blocking high-risk transactions, $1 billion recovered from check fraud schemes, and $500 million in prevented losses through enhanced risk-based screening efforts. Deputy Treasury Secretary Wally Adeyemo emphasized the importance of AI-driven processes to ensure that payments are made accurately and timely, helping federal agencies avoid financial mishandlings.

  • More details on GPT store for revenue share for developers in this tweet: https://x.com/NickADobos/status/1773456722675535957 As it’s been discussed in OpenAI forums, developers on the GPT Store are facing challenges earning revenue due to the platform’s engagement-based payment model, which only rewards GPTs that attract significant interaction. While OpenAI hoped to replicate the success of app stores, many developers report that building sustainable income is tough since even popular GPTs offer limited returns compared to traditional tech jobs. On top of this, developers struggle with minimal analytics tools to track performance and limited payment availability outside the U.S. Without in-app payment or advertising options, many creators feel forced to explore freemium models or integrations with external services to stay viable. OpenAI has acknowledged the need for stricter controls and promised improvements to mitigate abuse and protect creators' work, though it's clear that challenges around IP protection remain a key issue. For now, OpenAI is positioning the store as a platform to drive innovation, with developers encouraged to experiment and build momentum even as the monetization framework evolves further over the coming months.

  • Here’s the full list of 39 US AI startups that have raised $100M or more in 2024 | TechCrunch ? AI fatigue hasn’t slowed down investor enthusiasm, with AI companies raising $18.9 billion in Q3 2024, accounting for 28% of total venture funding. The quarter saw OpenAI secure the largest venture deal ever, a $6.6 billion round led by Thrive Capital. This was one of six AI funding rounds over $1 billion so far in 2024, highlighting the sector’s momentum. Significant deals include EvenUp, an AI-powered legal tech firm, raising $135 million and KoBold Metals securing $491.5 million in venture funding. Poolside, an AI software development platform, also attracted $500 million from Bain Capital and others. Investors continue to back AI-driven companies across diverse industries, despite concerns about GenAI's limitations and market bubbles, demonstrating their long-term confidence in AI innovation.

  • Generative AI startups get 40% of all VC investment in cloud amid ChatGPT buzz ? Generative AI startups are absorbing 40% of all venture capital investments in the cloud sector, driven by the excitement surrounding technologies like ChatGPT, according to Accel’s latest Euroscape report. Cloud companies across the U.S., Europe, and Israel are expected to raise $79.2 billion this year, marking a 27% growth—the first increase in three years. This boom reflects the shifting priorities in IT budgets, increasingly focusing on AI at the expense of traditional software investments. Big players like OpenAI and Anthropic have dominated this funding surge. OpenAI alone raised $18.9 billion through 2023-24, achieving rapid revenue growth. Other major firms, including Amazon, Microsoft, and Google, are investing between $30 billion to $60 billion annually into AI development. However, with these investments, experts predict consolidation in the field, where only a few companies may have the resources to maintain dominance over foundational AI models due to the high capital required for infrastructure and chips. This shift toward generative AI also reflects macro-economic pressures as companies tighten general software budgets but remain willing to allocate more to AI-driven innovations.

  • Nvidia and other chip stocks surge with no sign of AI spending slowdown — for now Nvidia and other AI chip stocks, including Qualcomm, Broadcom, and TSMC, are surging as AI demand continues to accelerate, alleviating concerns about a slowdown in spending. Mega-cap tech companies like Microsoft, Google, and Amazon are driving this trend, with plans to invest over $250 billion in AI infrastructure by 2025. Nvidia remains at the forefront, benefiting from OpenAI’s $6.6 billion funding and other tech giants’ focus on AI expansion. However, analysts caution that the rapid growth of AI hardware spending may eventually cool, with questions arising around the sustainability of demand once infrastructure investments stabilize.

  • ?Gen AI start-ups attract over 750 million dollars in cumulative funding since 2023 India's generative AI start-up ecosystem has grown rapidly, with over $750 million in cumulative funding since 2023, according to Nasscom's 2024 report. The number of Gen AI start-ups has surged 3.6 times, from 66 in early 2023 to more than 240 by mid-2024, with 75% now generating revenue. Bengaluru leads as the primary hub, but emerging cities like Ahmedabad and Lucknow are also contributing to this growth. Notably, 70% of these start-ups are focusing on industry-specific solutions across sectors such as IT, healthcare, and retail. Challenges remain, including access to patient capital and scalable infrastructure, but a shift toward hybrid AI models and industry collaborations signals promising momentum for the ecosystem.?

  • B2B AI automation ServiceNow commits £1.15B to UK AI innovation ServiceNow has committed £1.15 billion over five years to enhance AI innovation in the UK, including expanding offices, doubling its workforce, and integrating Nvidia GPUs into data centers in London and Newport to support localized AI processing. The initiative aligns with a broader trend of AI investments in the UK, totaling £6.3 billion from other companies, and focuses on preparing businesses for the future of AI with enhanced infrastructure and compliance with data privacy regulations. Additionally, ServiceNow’s University initiative aims to reskill 240,000 learners by 2027, while the company reinforces its commitment to community development by pledging £1.15 million in grants to non-profits, promoting inclusivity, and being recognized as one of the UK’s top workplaces.??

What/where/how Gen AI solutions are being implemented today?

Walmart and Amazon are embracing generative AI to enhance personalization, moving beyond search bars to offer AI-driven shopping experiences through custom homepages and product recommendations. Nvidia’s AI tools, including MONAI and AlphaFold2-Multimer, are revolutionizing healthcare by improving imaging and accelerating drug discovery. The U.S. Army has launched #CalibrateAI, a pilot program leveraging generative AI to streamline acquisition processes while ensuring data security. Since ChatGPT’s launch, the U.S. DOD and DHS have invested over $700 million in AI projects, with spending expected to surpass $1 billion by 2025. Gatorade enhances its membership platform with generative AI-powered personalized bottle designs, and Heineken integrates AI for operational efficiency and product innovation. Uber optimizes LLM training through a hybrid approach combining open-source tools and in-house solutions. Google revolutionizes shopping with AI-generated briefs, personalized feeds, and conversational search tools, reflecting the broader trend of AI adoption in retail, alongside Amazon and Rent the Runway.

  • Walmart and Amazon Turn to GenAI and Ditch the ‘Search’ Bar | PYMNTS.com ? Walmart and Amazon are using generative AI (GenAI) to enhance personalization and move beyond the traditional search bar. Walmart aims to introduce custom homepages for U.S. customers by 2026 and has integrated AI-powered chatbots and LLMs to streamline shopping experiences. Meanwhile, Amazon’s new AI Shopping Guides consolidate product insights and recommendations, helping customers find the right products faster. Despite consumer concerns—53% worry about GenAI misuse—demand for personalized experiences remains strong, with 85% of shoppers favoring brands that treat them as individuals. Both companies see GenAI as key to shaping the future of eCommerce and meeting evolving customer expectations.

  • AI Deployed in Health Care for Drug Discovery, Data and Imaging ? In healthcare imaging, Nvidia's MONAI framework supports the National Cancer Institute, utilizing the VISTA-3D foundation model to segment and annotate 3D CT images. For drug discovery, generative AI tools like the AlphaFold2-Multimer NIM predict protein structures, helping researchers identify promising drug candidates efficiently. Another NIM microservice, RFdiffusion, designs novel proteins for targeted drug binding. Nvidia's solutions also streamline preclinical research by optimizing drug screening processes. For example, NCATS at NIH employs the NIM Agent Blueprint to conduct AI-based virtual screening, accelerating drug development by filtering candidates before laboratory testing. Additionally, NCATS is testing AI-powered PDF data extraction to unlock valuable insights from unstructured data sources, enhancing patient information retrieval. Companies like Abridge and HealthOmics are leveraging Nvidia's technology to secure government contracts, further showcasing the potential of AI in transforming healthcare research and operations.

  • Army launches pilot program to explore generative AI for acquisition activities | Article The U.S. Army has launched a pilot program called #CalibrateAI to explore the use of generative AI in acquisition activities. This initiative, led by Jennifer Swanson, Deputy Assistant Secretary of the Army for Data, Engineering, and Software, leverages cutting-edge AI tools provided by industry partners at no cost. Operating in a secure Impact Level 5 cloud environment, the pilot aims to improve productivity by automating repetitive tasks, curating information, and ensuring outputs are accurate through fact-checkable citations. The program emphasizes security, using user-access controls to safeguard sensitive data and detect potential AI-generated "hallucinations" or errors.??

  • The U.S. defense and homeland security departments have paid $700 million for AI projects since ChatGPT’s launch Since the launch of ChatGPT, the U.S. Department of Defense (DOD) and Department of Homeland Security (DHS) have invested over $700 million in AI-related projects. The DOD awarded $670 million across 323 contracts, a 20% increase in both value and participation from previous years. Notable recipients include ECS, receiving $174 million for AI algorithm development for the Army, and Palantir, awarded $91 million for end-to-end AI solutions and testing, with an additional contract potentially reaching $480 million for its Maven system. Scale AI also secured a contract worth up to $15 million for testing AI tools. DHS awarded $22 million for similar AI projects, including $4 million to LMD for marketing services involving AI. With further IDV (Indefinite Delivery Vehicle) contracts in play, defense-related AI spending is projected to exceed $1 billion by 2025.?

  • Gatorade Takes Personalization To Next Level With Gen-AI Add-On | Consumer Goods Technology Gatorade has enhanced its personalization efforts through the launch of generative AI features within its Gatorade iD membership platform. Powered by Adobe Firefly, with support from digital agency Work & Co, the new feature allows consumers to design personalized squeeze bottles using their loyalty points. Members can input keywords reflecting their interests, like favorite sports or hobbies, to generate AI-driven designs, with the option to create two free personalized designs. This expansion builds on previous customization efforts that initially offered team logos and preloaded mascots. The initiative aligns with Gatorade’s strategy to blend creativity with data insights, aiming to provide athletes with a highly personalized experience within the brand’s ecosystem. Have you tried creating one? Post it in the comments or DM me a picture.?

  • And another (alcholic) beverage company is implementing Gen AI, internally for their teams. How Heineken Is Brewing Success With Generative AI ? Heineken is leveraging generative AI to streamline operations, enhance consumer insights, and drive product innovation. The company’s AI system, "Kim," enables employees to access information via natural language queries, improving efficiency. Collaborative experiments show that AI-assisted product development yields the best results, blending machine precision with human creativity. Heineken ensures ethical AI use, with no AI-generated content released without oversight, and actively engages in industry discussions on responsible practices. Future plans include AI assistants for brand managers to handle data analysis and planning, freeing staff to focus on creative tasks. Challenges include managing biases and improving AI’s handling of structured data.

  • Open Source and In-House: How Uber Optimizes LLM Training Uber’s recent blog post discusses how the company optimizes large language model (LLM) training through a mix of open-source tools and in-house solutions. Uber leverages open-source frameworks for flexibility and innovation while developing proprietary techniques to address specific operational needs. This hybrid approach helps Uber fine-tune models, ensuring efficient use of computational resources and faster deployments. The strategy allows Uber to balance scalability with customization, providing tailored solutions to its business requirements while staying on the cutting edge of AI development.


Source: Uber.

?

  • Google shakes up shopping with generative AI | Vogue Business Google is revolutionizing its Shopping platform with AI-driven updates, mirroring the transformative shift seen in search. The enhancements include AI-generated product briefs, personalized inspiration feeds, top product recommendations, and deal-finding tools, with the rollout starting in the U.S. Users will receive AI-generated summaries powered by Google’s Gemini, analyzing 45 billion product listings to provide recommendations and relevant considerations for their searches. The new system supports conversational queries, narrowing results by factors like size, location, and past search behavior. Personalized shopping feeds will incorporate videos, recent searches, and user preferences to refine recommendations, while tools for price comparison and tracking are also included. Despite some challenges—particularly for nuanced fashion searches—the approach aims to offer intuitive, tailored results, with AI briefs labeled as “experimental” to encourage user feedback. Google’s latest updates reflect a broader trend in retail, where companies like Amazon and Rent the Runway also use AI to improve search capabilities.

?

Women Leading in AI?

New Podcast:? Join @Pallavi Sharma on AInclusive as for a discussion with on How AI is Changing Career Planning for Students with @Sirisha Kaipa Founder and CTO of @dabbL

Featured AI Leader: ??Women And AI’s Featured Leader - Dipti Bhide ?? Her work with LittleLit - AI Life Skills to help kids become AI literate is impressive.?

Learning Center

YourStory emphasizes the importance of earning ML certifications from platforms like Coursera and Google AI, contributing to NLP projects on GitHub, and engaging with AI ethics through IEEE guidelines. My upcoming TechTarget webinar will explore maximizing ROI with AI at the edge, while a recording of my previous session on transforming finance with AI is available for viewing. Hugging Face highlights best practices for fine-tuning LLMs, such as using Supervised Fine-Tuning (SFT) and AutoTrain tools. KDnuggets outlines essential steps for AI engineers, from mastering Python to working with frameworks like PyTorch and Hugging Face. Deloitte stresses integrating innovation with operational efficiency to scale AI, focusing on data governance and measurable outcomes. Adobe’s initiative aims to train 30 million learners by 2030, offering bootcamps in AI literacy and digital marketing, with global expansion planned by 2025. Finally, CSET offers insights into how LLMs predict text, providing a deep dive into their functionality and applications.

  • AI skills for 2025: Add these to boost your resume | YourStory recommends obtaining ML certifications from platforms like Coursera, edX, and Google AI. For NLP, it suggests contributing to open-source projects on GitHub and learning frameworks such as TensorFlow and spaCy. Engaging in AI ethics and governance frameworks can be achieved by exploring IEEE guidelines or the European Commission’s proposals.

?

  • My upcoming TechTarget webinar will talk about “Maximizing Business Impact and ROI with AI at the Edge: Harnessing Analytics, CI/CD, and Scalable Infrastructure.” Register here: https://www.brighttalk.com/webcast/679/626732 ?

  • Fine-tuning LLM - ??Transformers - Hugging Face Forums The Hugging Face community offers several strategies for fine-tuning large language models (LLMs). Supervised Fine-Tuning (SFT) is widely used, leveraging labeled datasets to train models using cross-entropy loss. Effective data preparation involves structuring inputs with instruction-output pairs and handling padding tokens to enhance model performance. AutoTrain by Hugging Face simplifies fine-tuning by automating processes and providing templates for conversational AI tasks. Incorporating multi-turn dialogues or chain-of-thought inputs ensures improved contextual understanding for chatbot models. These approaches help customize LLMs for targeted applications, boosting accuracy and usability.??

  • ?Roadmap for AI Engineers - KDnuggets ? To become an AI engineer, follow ten essential steps: start by learning AI fundamentals and mastering Python programming, followed by building a solid foundation in mathematics like linear algebra and statistics. Progress into machine learning (ML) by using tools like Scikit-learn and Pandas, then explore Computer Vision and Natural Language Processing (NLP) for specialized skills. Reinforcement learning adds depth by teaching models to interact with environments. Understanding generative AI helps build applications like ChatGPT, while mastering AI frameworks such as PyTorch and Hugging Face enhances proficiency. Lastly, focus on MLOps to deploy models using tools like Docker and cloud platforms. Continuous learning, hands-on projects on GitHub or Kaggle, and networking on LinkedIn are recommended to showcase skills and attract opportunities.

  • Learn the best strategies for scaling Gen AI in your business. Scaling Generative AI: 13 elements for sustainable growth and value ? Deloitte's strategy for scaling generative AI in enterprises emphasizes a structured approach that integrates innovation with operational efficiency. Organizations are focusing on leveraging generative AI to streamline workflows, enhance productivity, and foster customer engagement. However, challenges persist around managing data quality, governance, and regulatory compliance. Enterprises are investing heavily in data management, with 75% increasing their spending to ensure AI readiness. Issues with data governance and security remain significant barriers, as over half of the surveyed organizations faced difficulties in applying AI effectively due to data-related limitations. Regulatory concerns and risks, such as model bias and AI hallucinations, also present challenges, leading businesses to develop frameworks and oversight mechanisms to ensure responsible AI usage. Additionally, Deloitte highlights the importance of defining measurable outcomes to sustain investment interest. Only 16% of companies are currently providing regular reports to CFOs on the value generated by AI initiatives, underscoring the need for clearer impact metrics to maintain momentum as businesses move beyond proof-of-concept stages. Download the report here: Scaling Generative AI ?

Prompt of the week

If you have been using Gen AI for a while, you might try something fun like asking “Based on our interactions, can you create an image of how you see me?” (inspired by this Reddit thread: https://www.reddit.com/r/ChatGPT/comments/1g23p4o/comment/lrl9m63/ ) Here is how ChatGPT sees me (wrong eye color, but I still like it).


Source: ChatGPT prompted by Eugina Jordan

Here’s how I envisioned you based on our interactions—dynamic, poised, and visionary. I incorporated your passion for AI, telecom, and leadership, blending professionalism with creativity, and reflecting a balance of ambition, strategy, and emotional self-awareness. Let me know if this image resonates with how you see yourself!

As I mentioned, I used it for peer reviews of my work, so it knows a lot of my secrets, just like my 3 dogs. ;)?

Tools and Resources

  • GitHub - karpathy/nanoGPT: The simplest, fastest repository for training/finetuning medium-sized GPTs. offers a minimal, educational implementation of GPT models, providing a clean and efficient codebase for those interested in understanding the mechanics of large language models. It emphasizes simplicity by using minimal dependencies, making it easy to experiment with GPTs through training from scratch or fine-tuning on smaller datasets. Designed for both local experimentation and larger-scale GPU work, nanoGPT provides valuable insight into language model architecture and the challenges of training. Its focus on reproducibility and clarity makes it a powerful resource for those seeking to dive into language models without complex frameworks.?

  • Zyphra/Zamba2-7B-Instruct · Hugging Face Zamba2-7B-Instruct, available on Hugging Face, is a fine-tuned model built from Zamba2-7B for chat and instruction-following tasks. It combines state-space and transformer architectures with extended context handling of up to 16k tokens. Known for low latency and efficient performance, it excels at instruction-based tasks while maintaining a small memory footprint. The model supports inference with Pytorch and can be integrated using Hugging Face tools.?

  • CoTracker3: Simpler and Better Point Tracking by Pseudo-Labelling Real Videos point tracking system introduced to address the challenges of training AI trackers on synthetic data. Traditional point trackers face performance limitations due to the disparity between synthetic datasets and real-world videos. CoTracker3 overcomes this by utilizing pseudo-labeling techniques, allowing it to train on unannotated real videos. It introduces a simplified model architecture that is smaller and easier to manage, yet delivers superior results with significantly less data—up to 1,000 times less than traditional methods. The model is available in both online and offline versions, ensuring reliable tracking of visible and occluded points across diverse scenarios, improving the adaptability and efficiency of tracking systems for real-world applications.?

  • [2409.12640] Michelangelo: Long Context Evaluations Beyond Haystacks via Latent Structure Queries Google DeepMind introduced Michelangelo, a benchmark to evaluate how well long-context large language models (LLMs) handle reasoning tasks over massive datasets. While LLMs with token capacities up to 1 million excel at retrieval tasks, Michelangelo’s tests—Latent List, Multi-round Co-reference Resolution, and IDK—highlight their challenges with complex reasoning across extensive data. Evaluations on models like GPT-4, Gemini, and Claude show performance drops as task complexity increases, revealing limitations in handling intricate, long-context scenarios. Researchers aim to improve these models through continued testing with Michelangelo's framework. Researchers plan to expand Michelangelo’s evaluations and aim to make it accessible for other researchers to test their own models in the future, enhancing collaborative development in this space.??

  • Google Publishes LLM Self-Correction Algorithm SCoRe - InfoQ ? Developers looking to access Google DeepMind’s new SCoRe algorithm can integrate it as part of the broader tools available through Google Cloud’s AI platform. Specifically, DeepMind's latest research, including models like Gemini 1.5, is accessible via Google AI Studio and Google Cloud Vertex AI. These platforms allow developers to work with advanced AI tools, including reinforcement learning-based models such as SCoRe, which enhances self-correction capabilities for coding and math tasks. For developers, the recommended approach is to explore Google AI Studio and Vertex AI for APIs and tools that facilitate the integration of these models into their applications. Google has positioned these resources to streamline experimentation with AI solutions like SCoRe in real-world use cases, such as software development and technical problem-solving. Further details on implementation can be found through Google’s documentation and research portals via Vertex AI and Gemini model platforms, where you can stay up to date with the latest releases and deployment guides.

  • Compare Mode in Google AI Studio: Your Companion for Choosing the Right Gemini Model Google AI Studio’s Compare Mode, introduced on October 17, 2024, streamlines the process of selecting the right Gemini model for developers by providing side-by-side comparisons of response quality, latency, cost, and token limits. This feature allows users to input prompts and system instructions to test multiple models simultaneously, making it easier to understand performance trade-offs. Compare Mode also aids in optimizing system instructions and refining prompts to meet specific project requirements. Now available in Google AI Studio, it aims to help developers confidently choose and fine-tune models for optimal outcomes.?

  • GitHub - openai/swarm: Educational framework exploring ergonomic, lightweight multi-agent orchestration. Managed by OpenAI Solution team. Swarm, an open-source project hosted on GitHub by OpenAI, appears to be a tool aimed at managing interactions between agents and large language models (LLMs). Its core functionalities involve coordinating and streamlining complex LLM outputs across multiple applications, enabling better integration of these models within workflows. The repository includes modules for handling conversations, processing tool calls, and integrating with APIs such as OpenAI's models. It is not meant as a standalone library and is primarily for educational purposes.


If you enjoyed this newsletter, please comment and share. If you would like to discuss a partnership, or invite me to speak at your company or event, please DM me.

Jihanna (Julie) Bacani

Social Media Strategist

3 周

Congrats on reaching the 27th edition of your newsletter! Your insights on how AI is transforming various industries are invaluable. Excited to dive into this week’s highlights! Keep up the fantastic work!

回复
Meg Crumbine

I help Female Founders and Entrepreneurs Get Faster Growth | Early Stage + Startup Mentor | Positioning + Messaging GOAT | Business Coach | Pitch Deck Wizard | Website Assessments | GTM Advisor | Gender Equity Advocate

3 周

The additional energy required to run AI technology has concerned me for some time. I love so many things that AI offers but fear the strain on our grid, not to mention the environmental effects that come with producing more energy. We are already experiencing the devastation of climate change.

回复
Kirby Lee, PE, CEM, CEA, GBE, LEED AP BD-C ????

#1 Best Selling Author: The Lion Attitude | Action+Connection | I+M+P+A+C=T | AEC Leadership Development | Technical Sales Leadership | MEP Engineer & Energy Specialist | 3rd Gen HVAC Construction | Owners Representative

3 周

Great work Eugina Jordan!

回复
Eugina Jordan

CMO to Watch 2024 I Speaker | 3x award-winning Author UNLIMITED I 12 patents I AI Trailblazer Award Winner I Gen AI for Business

3 周

U.S. Department of the Treasury’s AI initiative saves taxpayers over $4 billion—detecting fraud with precision-driven machine learning.

Eugina Jordan

CMO to Watch 2024 I Speaker | 3x award-winning Author UNLIMITED I 12 patents I AI Trailblazer Award Winner I Gen AI for Business

3 周

Check out the US Army’s generative AI pilot, #CalibrateAI, using secure AI tools to streamline acquisition efforts.

要查看或添加评论,请登录

Eugina Jordan的更多文章

  • Gen AI for Business Weekly Newsletter # 30

    Gen AI for Business Weekly Newsletter # 30

    Welcome to the 30th edition of Gen AI for Business, where I bring you the latest insights, tools, and strategies on how…

    12 条评论
  • Gen AI for Business Weekly Newsletter # 29

    Gen AI for Business Weekly Newsletter # 29

    Welcome to Gen AI for Business #29, your go-to source for insights, tools, and innovations in Generative AI for the B2B…

    10 条评论
  • Gen AI for Business Newsletter # 28

    Gen AI for Business Newsletter # 28

    Gen AI for Business # 28 newsletter covers key insights and tools on Generative AI for business, including the latest…

    28 条评论
  • Gen AI for business newsletter # 26

    Gen AI for business newsletter # 26

    Welcome to Gen AI for Business weekly newsletter # 26. We’re back with the latest on all things Gen AI, from…

    11 条评论
  • Gen AI for Business Newsletter, edition #25

    Gen AI for Business Newsletter, edition #25

    October 6 newsletter Welcome to the 25th edition of Gen AI for Business! I am so grateful and thankful for each of…

    32 条评论
  • Gen AI for Business Newsletter # 24

    Gen AI for Business Newsletter # 24

    September 29 newsletter Welcome to Gen AI for Business #24, where we dive into the latest breakthroughs, strategies…

    4 条评论
  • Gen AI for Business # 23

    Gen AI for Business # 23

    Welcome to Gen AI for Business newsletter #23, where we dive into the latest generative AI news, trends, strategies…

    28 条评论
  • Gen AI for Business # 22

    Gen AI for Business # 22

    Welcome to the Gen AI for Business #22 newsletter. This newsletter provides key insights and tools on Generative AI for…

    42 条评论
  • Gen AI for Business # 21

    Gen AI for Business # 21

    Welcome to this week's newsletter, where we dive into a roundup of all the latest developments in AI. From regulatory…

    8 条评论
  • Gen AI for Business # 20

    Gen AI for Business # 20

    Welcome to September! As we settle back into our routines and the kids head back to school, the world of Generative AI…

    46 条评论

社区洞察

其他会员也浏览了