Gen AI for Business Newsletter Edition # 31

Welcome to another edition of Gen AI for Business! This week, we dive into the latest tools, strategies, and real-world applications of Generative AI reshaping the B2B landscape.

What stood out to me?

The growing uncertainty in U.S. regulation under the new administration, set against the accelerating pace of actual implementations across industries like logistics, education, and pharma. From practical tools like IBM's AI Engineering Certificate to insights on navigating ML systems, there's no shortage of actionable knowledge this week.

If you find value in this newsletter, please leave a like, share your thoughts in the comments, or pass it along to someone who might benefit. After all, knowledge is power!

Thank you,

Eugina

Models

Anthropic has launched Claude desktop apps for Mac and Windows, offering seamless access to its AI capabilities directly from your desktop. The data center industry is advancing with initiatives like the Open Compute Project and Oracle’s OCI Supercluster to support scalable AI workloads, while companies like Cerebras and DigitalOcean are simplifying and enhancing AI deployment. Waymo introduced its EMMA model, enhancing autonomous driving with multimodal inputs for safer decision-making and logistics optimization. Near Protocol aims to create a record-breaking 1.4 trillion-parameter open-source AI model, emphasizing decentralization and privacy in AI development. Lastly, Nous Research's Hermes 3, a fine-tuned version of Meta's Llama 3.1, delivers specialized capabilities for industries like healthcare and enterprise automation, positioning itself as a versatile open-source solution.

  • Anthropic launched Claude desktop apps in public beta for both Mac and Windows, bringing Claude's capabilities directly to your preferred work environment. Open Claude right from your desktop for easy access and discovery. Visit claude.ai/download to download today.

  • How Data Centers Are Harnessing AI Workloads for Enhanced Cloud, LLM, and Inference Capabilities The data center industry is evolving rapidly to meet the demands of AI workloads, with recent advancements driving innovation in infrastructure and service delivery. The Open Compute Project launched the Open Systems for AI initiative to standardize AI infrastructure through multi-vendor collaboration, with contributions from Nvidia, Meta, and Vertiv addressing challenges like power density and cooling. Oracle’s OCI Supercluster, developed with Nvidia, supports massive AI workloads, including LLM training, while ensuring compliance with regional data sovereignty. Cerebras Systems set a new benchmark in AI inference, delivering 2,100 tokens per second on the Llama 3.1 70B model, enabling faster real-time applications. DigitalOcean, in partnership with Hugging Face, introduced "1-Click Models" to simplify AI deployment, broadening access for developers. Additionally, Lightbits Labs and Crusoe Energy Systems unveiled climate-conscious AI infrastructure powered by clean energy. These developments underscore the industry’s commitment to scalable, efficient, and sustainable AI solutions.

  • Waymo Launches AI Model for Autonomous Driving Waymo’s End-to-End Multimodal Model for Autonomous Driving (EMMA) can be practically utilized in several impactful ways right now, given its current capabilities and design. For instance, EMMA can be directly integrated into autonomous vehicle systems to enhance motion planning and 3D object detection. Companies developing self-driving technology can use EMMA to process sensor inputs like camera feeds and textual data, enabling vehicles to better interpret complex road scenarios, predict trajectories, and make safer driving decisions in real-time. Additionally, EMMA's ability to leverage multimodal inputs and combine tasks like road graph understanding with object detection makes it a valuable tool for optimizing fleet logistics and delivery operations. Logistics companies could adopt EMMA to improve route planning and efficiency in autonomous delivery vehicles. Its ability to unify data sources and reason effectively across multiple tasks also lends itself to enhancing safety and decision-making in pilot programs for urban public transportation, such as autonomous shuttles or buses. Developers can access EMMA's capabilities for testing and refining autonomous systems through simulation platforms, where they can train and validate vehicle behavior in various scenarios without the risks of real-world deployment. Furthermore, Waymo’s research suggests that EMMA could be offered as part of a service or toolkit for developers working on autonomous technology, enabling them to integrate it into their existing platforms for enhanced multimodal task performance. In terms of infrastructure, municipalities and smart city projects could deploy EMMA in traffic management systems to predict and address congestion issues, enhancing overall urban mobility. These applications demonstrate how EMMA’s practical capabilities can immediately contribute to advancing autonomous driving and related fields.

  • Near plans to build world’s largest 1.4T parameter open-source AI model Near Protocol has announced plans to develop the largest open-source AI model, boasting 1.4 trillion parameters—3.5 times larger than Meta's Llama model. Revealed during the Redacted conference in Bangkok, the initiative will rely on crowdsourced contributions through its Near AI Research hub, starting with a smaller 500 million parameter model. The project will expand across seven progressive models, leveraging encrypted Trusted Execution Environments to protect privacy while incentivizing contributors. The estimated $160 million training cost will be covered by token sales, with returns generated from the model's usage. Co-founder Illia Polosukhin, a pivotal figure in AI innovation, emphasized the importance of decentralized AI for privacy and autonomy. He warned against centralized control of AI, advocating for decentralized solutions to maintain Web3’s philosophical relevance. With contributions from Near's experienced team and potential synergies with decentralized AI initiatives, the project aims to set new standards for AI development while upholding privacy and decentralization principles.

My take: Meta might as well say "hold my beer" after Near Protocol's ambitious announcement. With its Llama models already dominating open-source AI discussions, Meta is unlikely to sit idle while Near targets a 1.4 trillion-parameter model. Given Meta's resources and prior experience with cutting-edge AI, it's only a matter of time before they respond, potentially pushing the boundaries of AI innovation even further.

  • Hermes 3 - NOUS RESEARCH Hermes 3 is an open-source language model developed by Nous Research, fine-tuned from Meta's Llama 3.1 models with 8B, 70B, and 405B parameters. It offers advanced capabilities, including long-term context retention, multi-turn conversation handling, complex roleplaying, internal monologue abilities, and enhanced function-calling. These enhancements make Hermes 3 more adaptable and user-aligned compared to the original Llama 3.1 models, which serve as foundational models without these specialized fine-tunings.

About the company: Nous Research is a private applied research group founded in 2023, focusing on artificial intelligence and machine learning within the technology sector. The company has secured $5.2 million in seed funding from 14 investors. Nous Research fine-tuned Meta's Llama 3.1 to create Hermes 3, targeting specialized markets such as advanced customer support, creative content generation, and complex data analysis where extended context, enhanced conversational dynamics, and function-calling capabilities are critical. By leveraging open-source Llama, Nous achieves faster market entry and cost efficiency, positioning Hermes 3 for niche applications and attracting a developer ecosystem. The model's adaptability makes it suitable for industries like finance, healthcare, education, and enterprise automation. Nous's strategy not only differentiates Hermes 3 from Llama but also positions it for potential collaboration, acquisition, or premium enterprise deployments, filling gaps in general-purpose AI offerings with tailored solutions.

News

Salesforce is hiring 1,000 employees to scale sales of its AI-driven Agentforce product, emphasizing the company’s strategic pivot towards AI-powered growth. Microsoft has revamped its classic Paint app with generative AI tools like fill and erase, transforming the decades-old software into a modern creative platform aligned with its broader AI strategy. X is experimenting with a freemium model for its Grok AI chatbot to expand accessibility, attract users, and boost its xAI initiative despite scrutiny over funding and implementation. OpenAI plans to release "Operator," an AI agent designed for complex task automation, marking a shift towards intelligent productivity tools and reflecting the industry's focus on multi-step AI capabilities.

  • Salesforce to Hire 1,000 People for AI Product Sales Push Salesforce is hiring over 1,000 employees to drive sales of its new generative AI product, Agentforce, which launched last month at $2 per agent conversation. This hiring surge follows the company's pivot toward AI agents capable of automating tasks like customer support and sales development. CEO Marc Benioff highlighted the strong customer feedback and momentum since the product’s release. While Salesforce has recently cut jobs and reduced sales expenses, this move signals a strategic focus on AI-driven growth. The announcement boosted Salesforce shares by 2.5%, reaching a record high of $322.81, marking an 18% year-to-date increase.

  • Microsoft added AI to software it has barely touched since 1985. The results are astonishing | CNN Business Microsoft is integrating advanced AI tools into its classic Paint application as part of a Windows 11 update, transforming the decades-old app. New features include generative fill, which allows users to add AI-generated graphics to their artwork by simply typing a description, and generative erase, enabling seamless removal of objects without distorting the background. These updates leverage Microsoft’s Copilot+ and the DALL-E image generator, expanding AI accessibility to users, particularly in Europe. The enhancements mark a significant evolution for Paint, aligning with Microsoft’s broader strategy to infuse AI across consumer products like Notepad and remain competitive in the AI race.

  • X Experiments With Free Access to Its Grok AI Chatbot | Social Media Today X plans to expand access to its Grok AI chatbot through a freemium model, aiming to increase usage beyond its limited X Premium subscriber base (0.26% of users). This strategy seeks to attract more users, assess Grok’s value, and potentially boost X Premium subscriptions while supporting Elon Musk's xAI initiative. xAI, backed by $6 billion in funding and its Colossus supercomputer, is intertwined with Grok's success, as both entities face scrutiny over funding sources and Musk's possible role in the Trump administration. This expansion highlights X's efforts to showcase AI capabilities amidst regulatory and ethical challenges.

My take: The freemium model for X's Grok AI chatbot holds potential value for enterprises, but its effectiveness will depend on its implementation and capabilities. If Grok can automate workflows, generate actionable customer insights, and enhance communication efficiency, it could significantly reduce operational costs and improve productivity. However, enterprises will carefully assess key factors such as data security, particularly regarding the protection of sensitive corporate information, as well as the chatbot's scalability to handle large volumes of interactions. Additionally, its ability to integrate seamlessly with existing enterprise systems and offer customization to fit specific business needs will be critical. While the freemium model provides an opportunity for businesses to test the chatbot, its ultimate utility will depend on whether it can reliably deliver actionable, enterprise-grade solutions.

  • OpenAI readies AI agent release | LinkedIn OpenAI is set to launch its advanced AI agent, codenamed "Operator," in January, as a research preview and through an API for developers. The agent is designed to autonomously perform tasks such as booking travel, writing code, and other browser-based actions, signaling a shift from conversational AI to practical automation. This move aligns with the broader industry trend of developing intelligent, multi-step task-handling AI tools, as rivals like Anthropic, Microsoft, and Google also advance similar offerings. OpenAI's CEO, Sam Altman, emphasized that agents represent the "next giant breakthrough" in AI, surpassing mere improvements in model performance. "Operator" aims to redefine productivity by automating complex tasks, marking a critical step in the evolution of AI. Read what the LinkedIn community is saying about it.

Regulatory

The DOJ's updated guidance on corporate compliance programs emphasizes managing AI risks like bias, cybersecurity, and transparency while urging proactive oversight and stronger whistleblower protections. OpenAI highlighted the economic and strategic potential of robust AI infrastructure, projecting massive job creation and GDP growth from large-scale data centers but warned about challenges under new political leadership. Ofcom's warning to tech companies mandates risk assessments and user protection measures for generative AI tools under the upcoming Online Safety Act, with hefty penalties for non-compliance starting in December.

  • What DOJ's Latest Guidance on Artificial Intelligence Corporate Compliance Means for Businesses | Parker Poe Adams & Bernstein LLP - JDSupra The U.S. Department of Justice (DOJ) has updated its Evaluation of Corporate Compliance Programs (ECCP) to address AI-specific risks. The new guidance emphasizes risk management for AI and data-driven technologies, data transparency, and whistleblower protections. Key updates include assessing risks such as data confidentiality, cybersecurity, and bias, while requiring proactive monitoring and human oversight of AI systems. The ECCP also highlights the importance of data access for compliance teams, ensuring resources are proportional to technology investments, and strengthening whistleblower protections, including anti-retaliation policies. The guidance signals increased regulatory scrutiny in 2025, urging corporations to adopt adaptable, risk-focused compliance programs.

  • OpenAI’s comments to the NTIA on data center growth, resilience, and security OpenAI has emphasized the critical role of robust data center infrastructure in maintaining the United States' global leadership in artificial intelligence (AI) in its recent comments to the National Telecommunications and Information Administration (NTIA). Highlighting the economic and strategic potential of AI, OpenAI projects that building a single 5GW data center could generate approximately 40,000 jobs across various sectors, including construction, retail, and services, while contributing $17–20 billion to a state’s GDP. The organization underscores the importance of strategic investment in AI infrastructure to ensure continued economic growth, technological innovation, and global competitiveness. Drawing parallels to the transformative impact of past broadband investments, OpenAI argues that modernizing AI-related infrastructure can bolster local economies, support energy grid advancements, and expand semiconductor manufacturing capabilities. Additionally, OpenAI warns that without proactive U.S.-led investment, global infrastructure funding could pivot toward projects that undermine democratic values, highlighting the urgency of sustaining American leadership in AI development. The project’s scale and power requirements are monumental, with OpenAI estimating that AI models may consume up to 100GW of additional grid capacity by 2030, necessitating $50 billion in power generation investments. These efforts align with broader U.S. initiatives like the CHIPS Program, though the future of such policies remains uncertain following recent political changes. OpenAI's focus on partnering with entities like Microsoft and Crusoe Energy highlights its hands-on approach to addressing infrastructure demands critical for advancing generative AI capabilities.

And with the recent election of President Donald Trump, there is uncertainty regarding the future of such initiatives. The incoming administration may reassess or modify existing AI infrastructure projects, potentially affecting the implementation of OpenAI's proposal. Stakeholders need to monitor policy developments closely to understand the new administration's stance on AI infrastructure and related investments.
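Taken at face value, the per-facility figures OpenAI cites scale linearly with capacity. A back-of-envelope sketch (the linear-scaling assumption and the helper names are mine, not OpenAI's model):

```python
# Back-of-envelope scaler for the figures OpenAI quotes in its NTIA comments.
# The constants below come from the newsletter text; the linear scaling
# itself is my assumption, purely for illustration.

JOBS_PER_5GW = 40_000          # jobs per 5 GW data center (quoted)
GDP_PER_5GW = (17e9, 20e9)     # GDP contribution range in USD (quoted)

def scale_to_capacity(gigawatts: float) -> dict:
    """Linearly scale the quoted per-5GW figures to a given capacity."""
    factor = gigawatts / 5.0
    return {
        "jobs": int(JOBS_PER_5GW * factor),
        "gdp_low_usd": GDP_PER_5GW[0] * factor,
        "gdp_high_usd": GDP_PER_5GW[1] * factor,
    }

# Example: the 100 GW of additional grid capacity projected by 2030.
projection = scale_to_capacity(100)
print(projection["jobs"])  # 800000 jobs under naive linear scaling
```

Naturally, real job and GDP effects would not scale this cleanly, but the sketch shows why the 100GW projection implies economic stakes an order of magnitude beyond any single facility.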

  • Ofcom Warns Online Platforms Over Generative AI Tools - Minutehack Ofcom has issued a warning to tech companies regarding their responsibilities under the upcoming Online Safety Act, specifically addressing generative AI tools and chatbots. The regulator's open letter follows reports of concerning incidents involving these technologies, including cases where chatbots were used to impersonate real individuals, including deceased children. Under the new rules, platforms allowing user-generated content, including AI-generated text, audio, images, or videos, or enabling users to create and share chatbots, will be required to protect users, particularly children, from harmful or illegal material. The Act also extends to AI tools that search across multiple databases, categorizing them as search services. Platforms must conduct risk assessments, implement measures to mitigate risks, and provide tools for users to report harmful content. Non-compliance could result in significant fines, potentially reaching billions of pounds. The Online Safety Act's first phase begins in December with risk assessments, and mandatory codes of practice are expected by March 2025. This marks a critical step in regulating AI-driven tools and ensuring user safety online.

Regional Updates

The U.S. has ordered TSMC to stop shipments of advanced AI chips to China, targeting companies like Huawei in a bid to limit China's AI advancements amid export control delays. Google’s AI Opportunity Initiative expands to MENA with $15 million in funding, aiming to train 500,000 people and enhance AI research in healthcare and climate change while supporting regional innovation. At NVIDIA’s AI Summit Japan, CEO Jensen Huang unveiled collaborations on sovereign AI supercomputers, AI-5G networks, and robotics, positioning Japan as a leader in the next industrial revolution.

  • U.S. ordered TSMC to halt shipments to China of chips used in AI applications, source says The U.S. has instructed Taiwan Semiconductor Manufacturing Co. (TSMC) to cease shipments of advanced chips, specifically those of 7 nanometers or more advanced designs, to Chinese customers effective November 11, 2024. This decision, implemented via a Department of Commerce "is informed" letter, aims to prevent advanced AI chips, such as GPUs and AI accelerators, from reaching China, particularly companies like Huawei that are on the U.S. restricted trade list. The move follows TSMC's prior suspension of shipments to Chinese chip designer Sophgo after its chips were linked to Huawei's advanced AI processors, raising concerns about potential export control violations. This action enables the U.S. to bypass lengthy regulatory updates and immediately impose licensing requirements on specific entities. It builds on prior restrictions issued to Nvidia, AMD, and others regarding AI-related chip exports to China. The order underscores bipartisan concern about China's advancements in AI and semiconductors and reflects ongoing U.S. efforts to curtail China's access to advanced technologies that could bolster its AI capabilities. This measure comes amidst delays in updating broader export control rules, which have been in development for over a year but remain unpublished.

  • Our AI Opportunity Initiative comes to the Middle East and North Africa This Google AI initiative focuses on the Middle East and North Africa (MENA), aiming to equip individuals and organizations with essential AI skills and tools to drive economic growth. With $15 million in Google.org funding by 2027, the initiative plans to train half a million people, particularly underserved communities, through partnerships and programs like "Maharat min Google." It also supports AI research in areas like healthcare and climate change, while expanding AI-powered features like Arabic-language Gemini tools. Google's partnership with Saudi Arabia’s PIF for AI infrastructure further highlights its commitment to enhancing AI accessibility and driving regional innovation.

  • ‘Every Industry, Every Company, Every Country Must Produce a New Industrial Revolution,’ Says NVIDIA CEO Jensen Huang at AI Summit Japan | NVIDIA Blog NVIDIA’s AI Summit Japan showcased the country’s pivotal role in the AI revolution. CEO Jensen Huang highlighted the synergy between AI infrastructure and robotics, emphasizing Japan's unique cultural and technological strengths to lead in digital and physical AI. Major announcements included NVIDIA's collaboration with SoftBank to build Japan’s largest AI supercomputer using the Blackwell platform, advancing sovereign AI initiatives. SoftBank also piloted the world’s first AI-5G network integration, signaling new opportunities for telecom providers. The summit also revealed NVIDIA’s partnerships with Japanese firms to bolster national AI infrastructure, targeting industries like healthcare and robotics. Huang praised Japan’s expertise in mechatronics, urging the nation to capitalize on AI breakthroughs. These initiatives underscore Japan's potential to spearhead a new industrial revolution driven by advanced AI technologies.

Partnerships

  • GitHub Copilot will support models from Anthropic, Google, and OpenAI - The Verge GitHub is expanding its AI-powered Copilot tool to support multiple models from Anthropic, Google, and OpenAI, allowing developers to choose the most suitable model for their needs. This multi-model approach reflects GitHub's vision for greater flexibility in AI code generation. New features include Claude 3.5, Gemini 1.5 Pro, and OpenAI’s GPT-4o series, with developers able to toggle between models during conversations in Copilot Chat. GitHub also unveiled Spark, an AI tool for creating web apps using natural language. Spark leverages OpenAI and Anthropic models to generate live app previews, catering to both experienced developers and novices by allowing direct code manipulation or app creation via text prompts. This aligns with GitHub's ambitious goal of enabling 1 billion developers, lowering barriers to software creation. Additional Copilot updates include multi-file editing in VS Code (available November 1), Copilot Extensions (launching early 2025), public preview for Copilot in Xcode, and new code review capabilities. GitHub’s innovations mark a shift towards democratizing development and enhancing AI-assisted programming workflows.

Cost

It’s all about the cost of chips and who can afford it!

  • Nvidia Blackwell 'Superchips' Will Cost Around $70,000 Each: Analyst | Extremetech NVIDIA's next-generation Blackwell architecture is expected to carry steep prices, with analysts estimating individual "superchips" at around $70,000. The company's flagship configuration, the GB200 NVL72 server, with 72 GPUs and 36 CPUs, is projected to sell for approximately $3 million. Midrange options, such as the NVL36 server, are expected to cost $1.8 million. Despite these steep prices, Blackwell's architecture is positioned as a drop-in upgrade for existing H100 setups, ensuring compatibility and performance enhancement. The launch of Blackwell comes amid fierce competition in the AI hardware space. While AMD's MI300 and Intel's Gaudi 3 are emerging alternatives, NVIDIA is anticipated to maintain dominance, with its AI hardware sales projected to reach $50 billion in 2024.

My take: The price tag for NVIDIA's Blackwell architecture—ranging from $30,000 for accelerators to $3 million for a fully configured server—is far beyond reach for small or mid-sized companies. These systems are tailored for the "big guns"—global tech giants, research institutions, and organizations with massive AI workloads and deep pockets. Companies like Google, Microsoft, Amazon, and Meta, as well as specialized players in sectors like healthcare, automotive, and advanced research, are the primary buyers.

Smaller enterprises are likely to rely on cloud service providers like AWS, Azure, and Google Cloud, which can offer fractional access to such high-end hardware. It's a clear signal that the future of cutting-edge AI will largely be shaped by those who can afford to play at this scale. For everyone else, shared or scaled-down solutions will be the only viable option.
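For a sense of the economics behind these price points, here is a rough per-GPU breakdown using the analyst estimates quoted above. The prices are the article's figures; the "effective per-GPU cost" framing is my own illustration:

```python
# Rough per-GPU cost comparison from the analyst estimates quoted above.
# All prices are estimates from the Extremetech piece, not NVIDIA list prices.

SUPERCHIP_PRICE = 70_000     # one GB200 "superchip" (estimate)
NVL72_PRICE = 3_000_000      # GB200 NVL72 server, 72 GPUs + 36 CPUs (estimate)
NVL36_PRICE = 1_800_000      # NVL36 midrange server, 36 GPUs (estimate)

def per_gpu(price: float, gpus: int) -> float:
    """Effective cost per GPU for a given server configuration."""
    return price / gpus

print(round(per_gpu(NVL72_PRICE, 72)))  # ~41667 per GPU in the flagship rack
print(round(per_gpu(NVL36_PRICE, 36)))  # 50000 per GPU in the midrange rack
```

Interestingly, under these estimates the flagship rack works out cheaper per GPU than the midrange option, which is typical of how vendors price scale: the biggest buyers get the best unit economics, reinforcing the point that these systems target hyperscalers.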

Investments

Perplexity AI aims for a $9 billion valuation with a $500 million funding round as it experiments with "sponsored follow-up questions" ads to compete with Google and OpenAI's search offerings. Amazon is investing $110 million in its "Build on Trainium" program to support AI research at universities, while also offering startups up to $300,000 in AWS credits, highlighting its commitment to fostering innovation. Major tech companies like Google, Microsoft, and Amazon are betting on AI-focused startups and research, providing extensive cloud credits and resources to embed themselves in future breakthroughs. The CRN report highlights promising startups like Liquid AI and HiddenLayer, focusing on efficient AI systems, security, and responsible AI development across industries.

  • Perplexity AI seeks valuation of about $9 billion in new funding round Perplexity AI, an AI search engine startup, is in talks to raise $500 million in its latest funding round, aiming to roughly triple its valuation from $3 billion in June to about $9 billion. This marks the company’s fourth funding round this year, fueled by the surge in generative AI interest. Perplexity, which seeks to challenge Google’s dominance in search, has faced plagiarism allegations from media outlets like the New York Times, though the company has denied the claims. That is also why Perplexity is starting its ads experiment this week, The Verge reports. The ads, labeled as "sponsored follow-up questions," appear alongside search answers for U.S. users. This move contrasts with OpenAI's decision to avoid ads in its ChatGPT Search tool but aligns with Google’s approach, which recently introduced ads in its AI search features, including AI Overviews, for specific queries on mobile devices. The experiment reflects broader trends in integrating advertising within AI-driven search experiences. Have you tried this yet?

  • Amazon invests $110 million to support AI research at universities using Trainium chips Amazon is investing $110 million in its "Build on Trainium" program to support university-led AI research, providing access to AWS Trainium UltraClusters for developing advanced AI architectures, machine learning libraries, and performance optimizations. The initiative targets large-scale computational tasks, addressing budget constraints faced by universities, and emphasizes open-source contributions to foster innovation. Participating institutions like Carnegie Mellon and UC Berkeley will leverage Trainium's resources for research on tensor compilation, ML parallelization, and hardware tuning. The program includes the Neuron Kernel Interface (NKI), enabling researchers to fine-tune chip-level computations for optimized performance. Alongside funding, researchers gain access to AWS’s technical education and community support.

AWS offers substantial support to startups through various credit programs. The AWS Activate program provides eligible startups with up to $100,000 in AWS promotional credits to help them build and scale their applications. Additionally, startups developing on AWS's custom AI chips, such as Trainium and Inferentia, may qualify for up to $300,000 in additional credits. (Source: Amazon Web Services)

In June 2024, AWS announced a $230 million commitment to support generative AI startups, offering up to $1 million in credits per startup to facilitate the development, training, testing, and deployment of their AI solutions. (Source: Amazon Press Room)

Major tech companies like Google, Microsoft, and Amazon are actively investing in startups and academic research to drive innovation in artificial intelligence (AI). These initiatives provide substantial resources, including cloud credits and access to advanced AI tools, to support the development of new technologies.

Google's Support for Startups: Google offers the "Google for Startups Cloud Program," which provides eligible startups with up to $200,000 in Google Cloud credits over two years. AI-focused startups may receive up to $350,000 in credits. The program also includes technical support, training, and access to Google's global startup community. (Source: Google Cloud)

Microsoft's Initiatives: Microsoft's "Microsoft for Startups Founders Hub" offers startups up to $150,000 in Azure credits, access to development tools like GitHub Enterprise, and mentorship resources. The program is open to all startups, regardless of funding stage, aiming to democratize access to technology and support innovation. (Source: Microsoft for Startups)

This strategy isn't about immediate profits but about embedding themselves as indispensable to future innovators. Whether it’s the next big generative AI app or a breakthrough in biotech, these companies want to ensure that when the world-changing innovation happens, it's running on their cloud infrastructure. They’re not just offering credits—they’re buying a lottery ticket to the future.

  • CRN's roundup of stellar AI and machine learning startup tool vendors to know in 2024 (https://www.crn.com/news/ai/2024/stellar-startup-ai-and-machine-learning-tool-vendors-to-know-in-2024) highlights nine promising AI and machine learning startups founded since 2018. Cranium AI (2022) specializes in securing AI applications through its AI Exposure Management platform, while Hatz AI (2023) supports MSPs in building AI-as-a-Service offerings with tools like an AI chat assistant and app builder. HiddenLayer (2022) focuses on safeguarding AI models and data for Fortune 1000 companies and government agencies. Liquid AI (2023), a spin-off from MIT, develops efficient general-purpose AI systems with its Liquid Engine framework. Since 2019, MSPbots has addressed MSP challenges by offering integrations with over 70 systems and thousands of widgets and dashboards. Tacilent.ai (2022) delivers its RESSETT SaaS platform to provide strategic insights using clean data and responsible AI for industries like healthcare and aerospace. Tecton (2018) empowers teams to activate data for AI applications efficiently, while Thread (2022) enhances customer service for MSPs with an AI-based collaboration platform. Lastly, Verta (2018) supports organizations in deploying, monitoring, and managing machine learning models at scale.

Research

MIT’s graph-based AI model blends generative AI and computational tools to map interdisciplinary connections, driving innovations like sustainable materials inspired by art and biology. Deloitte's report emphasizes AI agents and multiagent systems as transformative tools for workflow automation and productivity, urging enterprises to adopt these technologies for competitive advantage. Ericsson’s ConsumerLab report shows GenAI users' willingness to pay up to 35% more for high-quality 5G, offering telecom operators a path to monetizing 5G investments through performance-based models. Research highlights GenAI's disruption of the labor market, with freelancing roles in writing, software development, and design seeing declines due to AI adoption. IDC forecasts global AI spending will hit $337 billion by 2025, with CIOs focusing on AI integration, governance, and workforce upskilling to unlock long-term value. Finally, studies on large language models reveal strengths in handling concurrent tasks but emphasize the need for architectural improvements for extended-context applications.

  • Graph-based AI model maps the future of innovation | MIT News | Massachusetts Institute of Technology A novel AI model developed by MIT professor Markus Buehler combines generative AI with graph-based computational tools, uncovering hidden connections between disciplines like science and art. This method employs category theory to teach the AI how to understand abstract relationships, enabling deep reasoning across domains. By analyzing 1,000 scientific papers, the model created knowledge maps that reveal links between concepts and suggest new material designs. For instance, the AI identified parallels between Beethoven’s Symphony No. 9 and biological materials, highlighting shared patterns of complexity. It also proposed a mycelium-based composite inspired by Wassily Kandinsky’s painting Composition VII, combining strength, adaptability, and sustainability. This material could have applications in sustainable construction, biodegradable plastics, and biomedical devices. The study demonstrates the AI's ability to generate novel predictions and inspire innovative material designs by blending insights from diverse fields. Buehler's work showcases how AI-powered knowledge graphs can drive interdisciplinary research and spark transformative discoveries across science, art, and engineering.

  • AI agents and multiagent systems | Deloitte US Deloitte’s latest report emphasizes the transformative potential of AI agents and multiagent systems in driving enterprise productivity and automation, far surpassing traditional generative AI (GenAI) models. AI agents excel in reasoning, planning, and executing multi-step workflows while integrating with external tools and real-time data. They utilize short- and long-term memory to deliver personalized, context-aware interactions, enhancing accuracy and adaptability. Multiagent systems amplify these capabilities by enabling role-specific collaboration, where agents share knowledge, validate outputs, and streamline complex workflows. The benefits include improved productivity, self-learning capabilities, enhanced accuracy, and transparency, with validator agents ensuring reliability. Adoption trends reveal that nearly 1 in 6 leaders saw significant transformation through GenAI by late 2023, and forward-thinking organizations are implementing AI agents to reengineer business processes. Deloitte advises C-suite leaders to prepare for this shift, highlighting AI agents’ potential to unlock new efficiencies, innovate workflows, and redefine industries. Download it here: https://www2.deloitte.com/content/dam/Deloitte/us/Documents/consulting/us-ai-institute-generative-ai-agents-multiagent-systems.pdf

  • Analysis: Gen AI means consumers will pay more for 5G connectivity – Ericsson Ericsson's latest ConsumerLab report highlights the growing potential for 5G monetization, driven by the rise of generative AI (GenAI) applications. The study reveals that a quarter of GenAI users demand guaranteed performance, such as real-time responses, and are willing to pay up to 35% more for high-quality connectivity. Ericsson forecasts a 2.5x increase in weekly GenAI app users within five years, further boosting demand for reliable 5G services. The report advises telecom operators to shift from traditional unlimited plans to performance-based models, potentially increasing 5G average revenue per user (ARPU) by 5% to 12%. However, successful monetization depends on operators addressing coverage challenges, particularly for indoor areas, and embracing 5G standalone infrastructure. Ericsson emphasizes the need for targeted consumer segmentation and tailored offerings to maximize returns on 5G investments.

Read the report here: Ericsson ConsumerLab: Rising use of Generative AI Apps boosts consumer interest in differentiated connectivity The study reveals that 35% of global 5G users are willing to pay more for guaranteed high-quality connectivity, particularly for critical applications like AI-driven apps, video calls, streaming, and online payments. Generative AI users, in particular, are a key driver, with one in four already willing to pay up to 35% more for fast and reliable network performance. Demand for these AI applications is expected to grow 2.5 times over the next five years, signaling a major shift in consumer expectations. Ericsson estimates that communication service providers (CSPs) could boost their Average Revenue Per User (ARPU) by 5-12% through performance-based and platform-driven business models. By exposing Quality on Demand (QoD) APIs to developers, CSPs can enable premium app experiences and unlock new revenue streams.
However, success will depend on improving network reliability, addressing coverage issues, and tailoring services to consumer segments like “Assurance Seekers,” who actively seek elevated connectivity for essential tasks. This strategic transformation positions CSPs to capitalize on rising consumer interest in differentiated connectivity while monetizing their 5G investments effectively.

My take: the telecom industry is facing significant challenges, especially as traditional revenue streams like voice and text continue to decline, and operators struggle to justify massive 5G investments. Monetizing 5G through differentiated, performance-based models tied to generative AI and other high-demand applications seems to be one of the few viable paths forward.

Ericsson’s data underscores this: the willingness of GenAI users to pay more for guaranteed, high-quality connectivity offers a much-needed opportunity for telecom operators to boost ARPU. However, the path to monetization isn't straightforward. It requires operators to adopt 5G standalone infrastructure, address network performance issues (especially for indoor coverage), and develop customer segmentation strategies that cater to "assurance seekers" willing to pay a premium.
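To make the revenue stakes concrete, here is a quick back-of-the-envelope sketch of what Ericsson's projected 5-12% ARPU uplift could mean for an operator. The baseline ARPU and subscriber count below are illustrative assumptions of mine, not figures from the report:

```python
# Illustrative only: baseline ARPU and subscriber base are assumed,
# not taken from the Ericsson ConsumerLab report.
baseline_arpu = 40.0     # assumed monthly ARPU in USD
subscribers = 1_000_000  # assumed subscriber base

for uplift in (0.05, 0.12):  # the 5-12% range Ericsson projects
    new_arpu = baseline_arpu * (1 + uplift)
    added_annual_revenue = (new_arpu - baseline_arpu) * subscribers * 12
    print(f"{uplift:.0%} uplift -> ARPU ${new_arpu:.2f}, "
          f"+${added_annual_revenue:,.0f} incremental revenue per year")
```

Even at the low end of the range, a modest per-user premium compounds into tens of millions in annual revenue at this assumed scale, which is why performance-based tiers are attractive despite the infrastructure cost.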

  • Research: How Gen AI Is Already Impacting the Labor Market Recent research published in Management Science explores the profound short-term impact of generative AI (Gen AI) on the labor market, with particular focus on online freelancing platforms. The study analyzed 1.4 million job posts and found significant decreases in demand for automation-prone roles, such as writing, software development, and graphic design, after the introduction of tools like ChatGPT and AI image generators. Writing jobs saw the largest decline (30%), followed by software development (20%) and graphic design (17%). This trend highlights Gen AI's capacity to replace certain job categories.

  • CIOs to spend ambitiously on AI in 2025 — and beyond Research firm IDC forecasts global spending on AI technologies will reach $337 billion in 2025, more than doubling to $749 billion by 2028. CIOs across industries are ramping up investments in AI to support automation, productivity, and innovation. Two-thirds of AI spending will focus on embedding AI into core business operations, with many leveraging SaaS platforms and pre-trained models from providers like AWS, Microsoft, and Google Cloud. AI governance is becoming a priority, as firms address risks like data security and "shadow AI." CIOs are forming AI committees, establishing guardrails, and training employees in AI tools. While risks persist, the consensus among CIOs is that strategic investments in AI and cloud infrastructure will yield long-term benefits in efficiency, innovation, and competitiveness.

  • Needle Threading A study examining 17 leading Large Language Models (LLMs) evaluated their ability to handle long-context tasks, revealing a gap between their supported context limits (up to 630k tokens) and effective context usage, where performance remains high. Models showed strong "thread-safe" capabilities, maintaining accuracy when following multiple information threads simultaneously. The research identified task-specific challenges: precision declines for single-needle retrieval in mid-context, context length significantly impacts performance in multi-needle tasks, and clustered data improves retrieval in conditional tasks. While forward-moving threads are easier to follow, concurrent thread tracking showed minimal performance loss. These insights highlight the potential and limitations of LLMs for applications requiring extended context processing, emphasizing the need for optimization in model architecture and training.
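To make the needle-threading setup concrete, here is a minimal, hypothetical sketch of the evaluation pattern this kind of benchmark uses: plant several keyed "needles" at random positions in a long filler context, then try to recover each one. The `retrieve` function is a trivial string-matching stand-in where a real benchmark would query an LLM with the full context; all names and the needle format are illustrative, not taken from the paper:

```python
import random

def build_haystack(needles, n_filler=1000, seed=0):
    """Embed needle sentences at random positions in a long filler context."""
    rng = random.Random(seed)
    sentences = [f"Filler sentence number {i} about nothing in particular."
                 for i in range(n_filler)]
    for needle in needles:
        sentences.insert(rng.randrange(len(sentences) + 1), needle)
    return " ".join(sentences)

def retrieve(haystack, key):
    """Stand-in for an LLM call: exact-match lookup of a 6-digit needle value."""
    marker = f"The secret code for {key} is "
    start = haystack.find(marker)
    if start == -1:
        return None  # a real model might hallucinate here instead
    start += len(marker)
    return haystack[start:start + 6]

# Five concurrent "threads", each with its own keyed needle.
needles = [f"The secret code for thread-{i} is {1000 + i:06d}." for i in range(5)]
haystack = build_haystack(needles)
results = {f"thread-{i}": retrieve(haystack, f"thread-{i}") for i in range(5)}
```

With a real model, the interesting measurements are the ones the study reports: how accuracy varies with needle depth, number of concurrent threads, and total context length.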

Concerns

Generative AI's transformative potential hinges on consumer demand, with businesses leveraging structured data, GenAI agents, and scalable platforms to stay competitive. Google's LLM agent demonstrated AI's cybersecurity potential by autonomously fixing a real-world database flaw, though experts warn of dual-use risks. Stanford's study on LLMs highlights their consistency but exposes biases on sensitive topics, suggesting a pluralistic approach to AI values. Researchers successfully jailbroke LLM-driven robots, revealing critical security vulnerabilities in robotics AI. Leading AI firms, including OpenAI, face challenges scaling next-gen models like Orion due to data scarcity and diminishing returns. Only 18% of companies report readiness for GenAI deployment, emphasizing the need for unified data and governance frameworks. AI chatbots revolutionize industries but require strict data privacy measures to balance innovation with security. Patent strategies remain vital for GenAI technology, with China leading in patents and the U.S. emphasizing human contributions in AI-driven inventions. Lastly, GenAI’s embedded biases highlight risks to inclusivity, trust, and innovation, underscoring the importance of fairness and accountability in AI design.

  • The real battle for generative AI in software Consumers, not companies, drive technological disruption. Generative AI’s rise in B2B stems from user demand, sparked by tools like ChatGPT. To stay competitive, businesses must empower developers with AI tools, leverage structured data for seamless workflows, adopt Gen AI agents for smarter decisions, and embrace a platform mindset for scalable innovation. Structured data remains a key advantage, enabling firms to develop Gen AI features that simplify workflows and deliver value directly within existing systems. Organizations must also rethink their processes, leveraging Gen AI agents to automate tasks and enable smarter, faster decision-making. Those who integrate AI as a core strategy will lead the future of tech.

  • Google’s LLM Agent Finds and Fixes Real-World Bug in Popular Database - ClearanceJobs Google's LLM agent, part of the Big Sleep project, made history by autonomously identifying and fixing a vulnerability in the popular SQLite database before it reached an official release. This marks the first time AI has uncovered a real-world software flaw outside of a test environment, demonstrating the potential of large language models (LLMs) for proactive cybersecurity. Big Sleep equips LLMs with tools like a code browser, debugger, and sandbox for human-like vulnerability research, enabling AI to identify and address flaws faster than humans. While this success highlights AI's positive applications, experts warn of dual-use risks. Just as AI can enhance defense by securing software and flagging operational gaps, malicious actors could exploit it to identify vulnerabilities or compromise AI tools themselves. With the scale and complexity of cybersecurity threats outpacing human capabilities, integrating AI into routine code reviews is becoming essential to staying ahead in the security arms race.
  • Can AI Hold Consistent Values? Stanford Researchers Probe LLM Consistency and Bias Stanford researchers have investigated the consistency of large language models (LLMs) across diverse topics to assess potential bias. Testing 8,000 questions in multiple languages, they found LLMs like GPT-4 and Claude generally provide consistent answers, often outperforming humans on neutral topics. However, consistency drops significantly on controversial issues like euthanasia or abortion, highlighting that LLMs do not inherently hold values but reflect varied perspectives. The study suggests that inconsistency on sensitive topics may indicate a lack of bias. Researchers propose training models for value pluralism to represent diverse viewpoints rather than enforcing strict consistency, raising critical questions about how AI should embody societal values.

  • It's Surprisingly Easy to Jailbreak LLM-Driven Robots Researchers have developed RoboPAIR, an algorithm capable of jailbreaking robots controlled by large language models (LLMs), achieving a 100% success rate in bypassing safeguards across various systems. Experiments on robots like Boston Dynamics’ Spot and Nvidia’s self-driving vehicle simulator revealed vulnerabilities that allowed malicious actions, such as collisions or harmful commands. RoboPAIR exploits LLM prompts and APIs to manipulate robots, highlighting security risks in AI-integrated robotics. While the findings raise safety concerns, researchers emphasize the importance of understanding such threats to build robust defenses, advocating for enhanced oversight and interdisciplinary approaches to mitigate risks in real-world AI applications.

  • OpenAI, Google and Anthropic are struggling to build more advanced AI Leading AI companies, including OpenAI, Google, and Anthropic, are facing challenges in developing their next-generation AI models. OpenAI's upcoming model, Orion, has reportedly underperformed in tasks such as coding questions, failing to achieve a significant leap from GPT-4. Similarly, Google's new iteration of its Gemini software and Anthropic's Claude 3.5 Opus model have not met internal expectations, with delays in releases and performance concerns. Key obstacles include the scarcity of high-quality human-generated training data and the diminishing returns on investment for costly advancements in AI systems. OpenAI is currently refining Orion through post-training, incorporating human feedback to improve user interactions and functionality, with a potential release expected in early 2025. These developments highlight the increasing difficulty of achieving major breakthroughs in AI amid growing expectations and costs.

  • Data, data, data!!! The hardest part of deploying gen AI for most companies is having data that's ready While AI capabilities continue to advance, only 18% of surveyed companies report having fully accessible and unified data for deployment, according to a global study of over 1,300 tech and data executives. Organizations face hurdles in unifying siloed data, tagging and classifying it accurately, and implementing robust governance models to manage real-time data securely and ethically. Some companies are adopting AI readiness scores to measure progress, while new roles like data stewards and governance executives are emerging to manage AI systems effectively. Continuous monitoring, feedback loops, and model retraining are also critical to address issues like inaccuracies and hallucinations. The article underscores that achieving AI success requires prioritizing data infrastructure and governance alongside ethical considerations, not just focusing on model development.

  • Behind AI Chatbots: Innovation vs. the Battle for User Privacy and Data Security - Breaking AC AI chatbots are transforming industries by offering personalized, efficient customer interactions powered by Natural Language Processing (NLP) and Machine Learning (ML). These systems analyze vast amounts of user data to adapt and improve responses, driving applications in healthcare, finance, and retail. However, their reliance on data raises significant privacy and security risks, including data breaches, unauthorized access, and third-party sharing. Developers must ensure robust data governance, encryption, and transparent policies to safeguard user information. Users should limit sensitive data sharing, use trusted platforms, and enable two-factor authentication for added security. Advancements in big data and cloud computing continue to enhance chatbot capabilities, but balancing innovation with stringent data protection remains critical to maintaining user trust and maximizing AI's potential.

  • Patenting generative AI technologies: opportunities and challenges | Reuters While trade secrets and copyrights offer some protection, patents remain crucial for excluding competitors. The patentability of GenAI varies among jurisdictions, with the U.S., Europe, Japan, and South Korea providing distinct guidelines. For instance, in the U.S., inventions must contain an "inventive concept" that transitions from an abstract idea to a patent-eligible invention. Patents provide a defined period of exclusion rights but require substantial disclosure. Challenges include potential divided infringement and high costs. The increasing complexity and investment in GenAI underline the need for robust patent strategies and the consideration of standard-essential patents (SEPs). Companies should conduct freedom-to-operate analyses, consider vendor indemnities, and actively engage in standards development to safeguard their interests. Leveraging patents effectively can provide strategic advantages and protect significant investments in GenAI technology.

China has emerged as a leader in the generative AI patent landscape, filing six times more patents than the United States between 2014 and 2023.

The U.S. Patent and Trademark Office (USPTO) has issued guidelines emphasizing the necessity for human involvement in AI-generated inventions. Patent applications must disclose the role of AI in the creation process, and patents will only be granted if there is a "significant" human contribution.

  • Who is GenAI leaving out, and does it matter? - I by IMD By 2026, over 80% of organizations are expected to use GenAI, driving global GDP growth by $7 trillion in the next decade. However, the technology faces significant risks due to embedded biases in its training data and development processes, which can perpetuate stereotypes and discrimination. The study points out that GenAI models often reflect societal biases, including gender and racial disparities. For instance, language models may associate men with professional roles and women with domestic tasks, while outputs related to African American English can be unfairly stereotyped. Additionally, the field of AI development remains dominated by a narrow demographic, limiting diverse perspectives and amplifying these biases. Unchecked diversity bias in GenAI poses risks to brand reputation, customer satisfaction, and organizational decision-making. It can hinder innovation, erode trust, and expose companies to regulatory challenges. Despite 72% of executives acknowledging the risks of bias, only 35% of organizations are actively addressing the issue. The report underscores the need for responsible AI principles embedded in organizational frameworks, emphasizing fairness, transparency, and accountability.

Case Studies

Generative AI is revolutionizing industries with use cases like optimizing logistics at DHL, where AI tools streamline proposals and legal processes, and reshaping education by encouraging AI literacy and ethical policies in higher education. In biotech, DeepMind's AlphaFold3 advances protein modeling with open access, while manufacturing leverages AI-human collaboration to enhance semiconductor production efficiency. Telenor’s AI Factory supports autonomous technology development, and Orange Business capitalizes on GenAI to drive revenue growth through innovative AI-powered solutions. In legal contexts, courts are tackling AI-related accountability, IP rights, and evidence admissibility as GenAI continues to challenge existing frameworks. Meanwhile, AI-powered applications in automotive and pharma industries accelerate R&D, improve safety, and drive operational transformation, while Google’s LLM sets a cybersecurity milestone by autonomously fixing real-world software vulnerabilities.

Use Cases

  • Five real-world generative AI use cases | RCR Wireless News Generative AI is driving real-world innovation across industries. At the DCD Connect Virginia event, experts highlighted key use cases: streamlining healthcare documentation, managing employee transitions by analyzing leftover data, assisting programmers with real-time code suggestions, generating insights from company data faster than traditional consultants, and enhancing customer service through advanced AI chatbots. These applications demonstrate Gen AI’s ability to optimize operations, improve productivity, and deliver impactful results.

Education

  • Generative AI in Higher Education: Navigating Policy, Ethics, and Skills Development - HEPI Generative AI is reshaping higher education, offering immense opportunities but raising ethical and policy challenges. A QS survey of over 1,600 students and academics reveals that over a third of students have been influenced by AI in choosing courses, universities, or careers, with nearly 80% using tools like ChatGPT. However, concerns over privacy, academic integrity, and over-reliance on AI remain significant barriers. The report emphasizes the need for universities to lead in setting ethical standards, reform curricula to include AI literacy, and prepare students for AI-driven industries through partnerships with employers. Faculty also require AI training to integrate these technologies responsibly into teaching. Initiatives like the QS AI Competency Framework and Future Skills Index aim to address these gaps, ensuring students are equipped for the evolving job market.

Pharma

  • Artificial Intelligence Can Help Researchers Develop New Drugs, MRDC Regulatory Experts Forecast Artificial intelligence (AI) is transforming drug development, enabling researchers at the Medical Research and Development Command (MRDC) to accelerate discovery, enhance precision, and reduce costs. AI tools can identify drug candidates in half the time of traditional methods, streamline patient recruitment for clinical trials, and even predict regulatory hurdles to simplify FDA approval processes. With over 300 AI-related drug submissions received by the FDA this year, the agency is updating guidelines to ensure safety, efficacy, and ethical use of AI in medicine. MRDC is leveraging AI for groundbreaking projects, including tools that assess hemorrhage risk and sepsis in trauma and burn patients, showcasing its life-saving potential. However, experts stress the need for bias-free algorithms, robust privacy measures, and regulatory oversight to ensure ethical implementation.

Automotive

  • How AI is transforming the automotive industry The automotive industry is at a transformative juncture, driven by the rise of electric vehicles (EVs), heightened competition, and evolving consumer preferences. To stay competitive, automakers are increasingly turning to artificial intelligence (AI) for innovation across operations. AI is reshaping autonomous driving through advanced driver assistance systems (AD/ADAS), where deep learning and neural networks enhance object detection, decision-making, and path planning. It also powers connected vehicle ecosystems, enabling real-time data exchanges via Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communications to improve traffic safety and efficiency. Generative AI is revolutionizing research and development (R&D) by accelerating processes like material science, software testing, and product design. Many automakers are investing millions in generative AI tools to optimize battery components and structural materials. Beyond R&D, AI drives operational efficiencies in manufacturing, supply chain management, and customer experience enhancements, including subscription-based revenue models. As AI integration grows, automakers face challenges balancing opportunities with complexity. Success will hinge on creating a cohesive ecosystem where AI elevates rather than complicates the driving experience.

Telecom

  • Here’s how Orange Business says GenAI is raising its revenue Historically focused on efficiency through traditional AI applications like network management and fraud detection, the company is now exploring GenAI to achieve top-line growth. In 2024, Orange Business reported a 7% year-on-year revenue increase in digital, data, and AI services, with GenAI products projected to drive double-digit annual growth in the future. Key initiatives include a GenAI-powered Threat Intelligence Platform for predictive cybersecurity, a Trusted GenAI Platform for on-premises AI solutions, and innovative applications like an SMS-based conversational AI system for public services. The company also plans to launch virtual assistants for voicemail and call handling to support small businesses. Orange's GenAI strategy emphasizes partnerships, leveraging diverse AI models via a "middleware" approach. Collaborations with providers like OpenAI, LightOn, and Google Cloud enable tailored solutions, such as LLM-as-a-service for sovereign French enterprises and off-the-shelf AI models customized with Orange's data. This multimodal and multi-cloud approach positions Orange Business to capitalize on GenAI's potential for transforming the telecom sector.

  • Telenor Launches AI Factory with Hive Autonomy as First Customer | TelecomTV Telenor Group has launched its AI factory to accelerate AI adoption across industries, with an initial investment of 100 million NOK. The facility leverages NVIDIA’s AI computing platform, including H100 Tensor Core GPUs and the NVIDIA AI Enterprise software. Hive Autonomy, a Norwegian autonomous technology company, is its first customer. The AI factory is designed to enhance local AI computing capabilities in the Nordic region, enabling businesses to optimize operations, reduce costs, and improve safety through AI-driven solutions. Hive Autonomy plans to advance its autonomous systems, focusing on safety and efficiency, supported by Telenor's infrastructure.

Logistics and Supply Chain

  • DHL Supply Chain experiments with GenAI DHL Supply Chain is integrating generative AI (GenAI) to enhance its data management and customer service capabilities, in partnership with Boston Consulting Group. The company has introduced two GenAI applications targeted at specific user groups to refine logistics solutions. One application streamlines business development by analyzing customer data to craft personalized proposals more efficiently, while the second improves solution design by sorting and interpreting available data for tailored logistics concepts. The GenAI tools also help summarize customer queries and process legal documents, boosting productivity and operational efficiency. Managed through a "product funnel approach," DHL is piloting these applications before full deployment. This initiative aligns with DHL's broader efforts to use advanced technology, including automation, to enhance warehouse management and workforce retention, further cementing its role as a leader in innovative supply chain solutions.

Biotech

  • AI protein-prediction tool AlphaFold3 is now more open AlphaFold3, the latest iteration of DeepMind’s groundbreaking AI tool for predicting protein structures, is now partially open to the scientific community. Researchers can download the software code for non-commercial use, and academics can request access to training weights. This shift follows criticism of DeepMind’s initial decision to restrict access to the tool, which limited researchers’ ability to model proteins interacting with drugs. AlphaFold3 represents a significant advancement, allowing the modeling of proteins in complex interactions, such as those with DNA and potential drug molecules. The open availability of the tool’s code marks a key step in promoting reproducibility and innovation in the field. However, full commercial use remains restricted, and only certain elements of the tool, such as training weights, are available upon request. This move comes amid increasing competition in the AI protein-prediction space. Rivals like Baidu, ByteDance, and Chai Discovery have developed their own AlphaFold3-inspired models, although most remain restricted to non-commercial applications. Open-source projects like OpenFold3 aim to further democratize access by enabling drug companies to retrain models with proprietary data, potentially enhancing performance. Here is the source code: GitHub - google-deepmind/alphafold3: AlphaFold 3 inference pipeline.

Manufacturing

  • Generative AI and human decisions in high-tech manufacturing While traditional data science methods often struggle with the complexity and scale of semiconductor production data, generative AI bridges these gaps by integrating human expertise with advanced analytics. It helps engineers optimize processes, analyze sensor data, and reduce defect rates. However, challenges like security risks, ethical concerns, and the transition from pilot projects to production persist. Organizations are increasingly adopting a hybrid approach, blending AI with human oversight to contextualize insights, ensure compliance, and maintain ethical standards. Tools like Spotfire’s visual data science platform exemplify this integration by enabling engineers to analyze and visualize multi-source datasets, turning them into actionable insights. This human-AI collaboration is reshaping high-tech manufacturing, driving productivity, innovation, and efficiency while ensuring decision-making remains informed by both data and domain expertise.

Legal

  • AI on trial: How courts are litigating the GenAI boom - Thomson Reuters Institute Courts are increasingly addressing the legal complexities posed by the rapid advancement of generative AI. Cases like Huang v. Tesla have tested the limits of accountability, as Tesla argued that Elon Musk’s public statements could not be trusted due to the prevalence of deepfakes. The court rejected this, emphasizing that public figures remain responsible for their statements despite AI-driven misinformation. In the realm of intellectual property, AI-generated content is under scrutiny. Copyright infringement cases against music AI generators and OpenAI’s ChatGPT focus on whether AI outputs replicate original works, while the U.S. Supreme Court recently reaffirmed that inventors listed on patents must be human, rejecting claims for AI-generated inventions. Meanwhile, the admissibility of AI-enhanced evidence varies. In State of Washington v. Puloka, AI-enhanced video evidence was deemed unreliable because it altered the original content, though AI-enhanced audio evidence, which clarifies existing material, is more likely to be accepted. As AI becomes integral to industries like healthcare and entertainment, courts will continue grappling with its implications, and the U.S. Supreme Court is expected to take on significant AI-related cases in the near future.

Women Leading in AI

Featured AI Leader: Candace P. Jones

We’re excited to present Candace P. Jones as a Featured AI Leader! Learn more about her in this post: https://www.dhirubhai.net/posts/women-and-ai_featured-ai-leader-candace-p-jones-activity-7261762139537702912-L35q?utm_source=share&utm_medium=member_desktop

Learning Center

A recently updated IBM AI Engineering Professional Certificate equips learners with hands-on skills in machine learning, deep learning, and tools like PyTorch and TensorFlow, offering an accessible pathway into AI engineering careers through Coursera's flexible learning model. The "11 ChatGPT Prompt Frameworks" by ButterCMS empowers marketers with tailored frameworks like RACE and AIDA to streamline content creation, optimize workflows, and enhance marketing strategies. Google's experimental "Learn About" conversational tool reimagines interactive learning by delivering curated, multimedia-rich content, positioning itself as a competitor to Perplexity AI for engaging and citation-backed answers. For ML practitioners, "39 Lessons on Building ML Systems" consolidates foundational principles into a practical and structured guide, emphasizing execution and collaboration over revolutionary concepts.

Learning

  • "39 Lessons on Building ML Systems, Scaling, Execution, and More" offers insights that, while practical and valuable, largely reinforce well-established best practices in machine learning system design, production, and scaling. The advice, such as starting simple, focusing on user feedback, and designing for scalability and evaluation, reflects foundational principles that seasoned ML practitioners would recognize. What sets it apart is the structured presentation and clarity, making it a concise reference for practitioners at all levels. However, it doesn't break new ground or introduce revolutionary concepts. Instead, it consolidates and contextualizes existing knowledge with anecdotes from recent conferences, emphasizing execution and collaboration in real-world applications. It's a well-organized synthesis of known principles.

  • Unlock Your AI Career with the recently updated IBM AI Engineering Professional Certificate! - IBM Learning Blog. The IBM AI Engineering Professional Certificate is a comprehensive program designed to equip learners with the skills necessary for a career in AI engineering. The curriculum covers machine learning and deep learning techniques, including regression, classification, clustering, and recommender systems. Participants gain hands-on experience with tools such as SciPy, Scikit-learn, Keras, PyTorch, and TensorFlow, applying them to real-world challenges like object recognition, computer vision, text analytics, and natural language processing. Offered through Coursera, the program operates on a subscription model, typically costing $49 per month. The total expense depends on the duration of your enrollment, with the program structured to be completed in approximately four months at a pace of 10 hours per week. While Coursera provides a 7-day free trial, full access to course materials and the certificate upon completion requires a paid subscription. For those seeking financial assistance, Coursera offers financial aid options to eligible learners, making the program more accessible.

  • I am speaking on a virtual topic panel, "How AI is changing the landscape for open RAN," on 20 November 2024 at 10:00 GMT. Join me and industry experts for an engaging MWL Unwrapped Webinar exploring the transformative potential of Open RAN and AI in modern telecommunications. Discover how intelligent automation and AI technologies are reshaping network operations, enhancing security, and optimizing resource allocation across the core and edge. The session will provide actionable strategies for implementing Open RAN AI, including fine-tuning AI models, streamlining operations, achieving zero-touch efficiencies, and leveraging AI/ML to enhance customer experience and energy savings. Don’t miss this opportunity to learn best practices and gain insights into cutting-edge advancements in network intelligence—register now! https://view6.workcast.net/register?cpak=7212329617787617

Prompting

  • 11 ChatGPT Prompt Frameworks Every Marketer Should Know. The ButterCMS blog introduces 11 ChatGPT prompt frameworks designed to enhance efficiency in content creation and marketing workflows by guiding AI to produce accurate and actionable outputs. These frameworks cater to various tasks, such as the RACE (Role, Action, Context, Expectation) framework, which is ideal for detailed tasks like crafting customer segmentation strategies by specifying roles, actions, and desired outcomes. Simpler frameworks like TAG (Task, Action, Goal) streamline straightforward tasks, such as optimizing email open rates. For more structured workflows, the TRACE (Task, Request, Action, Context, Example) framework helps break down complex processes like automating lead follow-ups into actionable steps. Creative tasks benefit from frameworks like AIDA (Attention, Interest, Desire, Action) for persuasive content or CRISPE (Capacity, Insight, Statement, Personality, Experiment) to experiment with different campaign variables. Each framework provides marketers with tailored approaches to improve clarity, save time, and maximize AI's utility. While not groundbreaking, these tools empower marketers to effectively harness AI for a wide range of tasks, from strategic planning to creative ideation. Very helpful :)
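Frameworks like RACE are easy to turn into reusable templates you can fill in before pasting into ChatGPT. As a minimal sketch (the function name and the sample values below are my own illustrative assumptions, not taken from the ButterCMS post):

```python
# A minimal sketch of the RACE (Role, Action, Context, Expectation) prompt
# framework. The field names mirror the framework; the sample marketing
# values are illustrative assumptions only.

def race_prompt(role: str, action: str, context: str, expectation: str) -> str:
    """Assemble a RACE-style prompt from its four components."""
    return (
        f"Role: You are {role}.\n"
        f"Action: {action}\n"
        f"Context: {context}\n"
        f"Expectation: {expectation}"
    )

prompt = race_prompt(
    role="a B2B marketing strategist",
    action="Draft a customer segmentation strategy for our SaaS product.",
    context="Our customers are mid-size logistics companies in North America.",
    expectation="Return 3-5 segments, each with a name and one key need.",
)
print(prompt)
```

The same pattern extends to TAG or TRACE by swapping the field names; keeping the labels explicit in the prompt is what steers the model toward the structured output these frameworks promise.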

Tools and Resources

  • Learn About: Google's "Learn About" is an experimental conversational learning tool designed to help users explore and understand various topics through interactive dialogues. Unlike traditional search engines that provide a list of links, "Learn About" offers curated content, interactive guides, and multimedia resources to facilitate deeper learning. This approach aligns with the functionalities of AI-powered platforms like Perplexity AI, which also focus on delivering concise, conversational answers with citations. "Learn About" can be seen as Google's initiative to enhance user engagement and learning, positioning it as a competitor to platforms like Perplexity AI. Try it out and let me know which one you like better and why.




If you enjoyed this newsletter, please comment and share. If you would like to discuss a partnership, or invite me to speak at your company or event, please DM me.

Eugina Jordan

CEO and Co-founder (Stealth AI startup) I 8 granted patents/16 pending I AI Trailblazer Award Winner


Another amazing edition! The shift that Gen AI is bringing is real. That’s why upskilling is so important.


MIT's graph-based AI model is a testament to interdisciplinary innovation, blending science, art, and engineering.


Anthropic’s multi-model support in GitHub Copilot shows the power of collaboration between AI innovators.


Meta's Llama models continue to dominate the open-source AI conversation—excited to see how they evolve next. Renuka Bhalerao
