Gen AI for Business #16

Here are your weekly key insights and tools on Generative AI for business, covering the latest news, strategies, and innovations in the B2B sector.

The week before last in AI was head-spinning. This week I would describe as complicated. Remember how Facebook allowed you to put your relationship status as "it’s complicated"? That’s how this week has been in the Gen AI business world.

What caught my eye this week is the US government's endorsement of open-source AI models (thanks, Meta, for changing the game) and Microsoft listing OpenAI as their competitor.

The EU AI Act is now in full force, and no one knows what it means… yet.

If you enjoyed this letter, please like, comment, and share! Knowledge is power.

Have a wonderful day,

Eugina

News about models and everything related to them

Google DeepMind's Gemini Flash is a lightweight, efficient AI model with multimodal reasoning and a long context window, optimized for diverse applications. Google's Gemma initiative focuses on developing smaller, safer, and more transparent AI models, catering to various tasks while ensuring responsible AI development. Concerns from researchers about generative AI data models underscore the risk of feedback loops degrading data quality. Apple's AI models trained on Google's custom chips demonstrate tech giant collaboration. Galileo's Hallucination Index measures the accuracy of large language models, with OpenAI's GPT-4 excelling in minimizing hallucinations. Lastly, OpenAI's GPT-4o can generate outputs up to 64,000 tokens, offering new capabilities, while Meta's strategy to open-source Llama 3.1 aims to commoditize AI technology and drive engagement.

  • Gemini Flash - Google DeepMind Google DeepMind's Gemini Flash is a lightweight AI model optimized for speed and efficiency. It features multimodal reasoning and a long context window of up to one million tokens, allowing it to process extensive data, such as hours of video or large codebases. Gemini Flash offers sub-second average first-token latency and maintains quality comparable to larger models at a fraction of the cost. This model is designed for scalable deployment, making it suitable for diverse developer and enterprise applications.

  • Smaller, Safer, More Transparent: Advancing Responsible AI with Gemma - Google Developers Blog Google has introduced Gemma, a new AI initiative focused on creating smaller, safer, and more transparent AI models. Gemma aims to advance responsible AI development by improving security measures, reducing model sizes for better efficiency, and ensuring transparency in AI operations. The initiative encompasses several "flavors," or versions, designed for different needs, including models optimized for tasks such as natural language processing, image recognition, and data analysis. Each version aims to maintain high standards of safety, efficiency, and transparency while addressing specific application requirements.

  • https://www.globest.com/2024/07/25/generative-ais-popularity-could-cause-their-data-models-to-collapse/ – highlights concerns raised by researchers from prestigious institutions, including Stanford University and the University of California, Berkeley, about the potential collapse of data models used in generative AI. These experts emphasize that as generative AI systems like OpenAI’s models gain popularity, there is an increased risk of these systems training on their own outputs. This practice can create a feedback loop that degrades the diversity and originality of the data, causing the models to produce lower-quality and biased outputs. The researchers warn that this cycle could eventually lead to the collapse of these AI data models, underscoring the importance of maintaining diverse and high-quality training data to ensure the robustness and reliability of AI technologies.
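The feedback loop the researchers describe can be sketched with a toy experiment: fit a simple model (here, a Gaussian) to data, sample from the fit, refit on those samples, and repeat. This is only an illustration of the mechanism, not the researchers' methodology; the distribution, sample size, and generation count are arbitrary assumptions.

```python
import random
import statistics

def train_generations(n_generations=30, n_samples=200, seed=42):
    """Simulate 'training on your own outputs': each generation fits a
    Gaussian to samples drawn from the previous generation's model and
    returns the fitted spread (sigma) at every step."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # the original "real data" distribution
    sigmas = [sigma]
    for _ in range(n_generations):
        samples = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        mu = statistics.fmean(samples)    # refit the model on its own outputs
        sigma = statistics.stdev(samples)
        sigmas.append(sigma)
    return sigmas

sigmas = train_generations()
print(f"initial spread: {sigmas[0]:.3f}, after 30 generations: {sigmas[-1]:.3f}")
```

With small sample sizes and many generations, the fitted spread tends to drift away from the original distribution, which is the diversity-degradation effect the article warns about.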

  • Apple says its AI models were trained on Google's custom chips Apple has revealed that its AI models were trained on custom chips designed by Google. This collaboration highlights the interplay between tech giants in the development and advancement of AI technologies. Apple's use of Google's custom chips underlines the importance of leveraging specialized hardware to enhance AI performance and capabilities.

  • Galileo Releases New Hallucination Index Revealing Growing Intensity in LLM Arms Race Galileo has introduced a Hallucination Index to evaluate large language models (LLMs) for accuracy and reliability, using proprietary metrics like Correctness and Context Adherence. The index shows that OpenAI's GPT-4 excels in minimizing hallucinations, particularly in general knowledge and long-form text tasks. While closed-source models like Claude 3.5 Sonnet perform well, open-source models such as Meta's Llama-2-70b and Hugging Face's Zephyr-7b are catching up, offering cost-effective alternatives. This index helps enterprises choose the best LLMs, balancing performance and cost.

  • GPT-4o Long Output | OpenAI OpenAI has introduced GPT-4o Long Output, an experimental version of GPT-4o capable of generating outputs of up to 64,000 tokens per request. This allows for significantly longer completions, enabling new use cases. The model, accessible under the gpt-4o-64k-output-alpha name, comes with higher per-token pricing due to increased inference costs: $6.00 per 1M input tokens and $18.00 per 1M output tokens.
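At those rates, estimating the cost of a long-output request is simple arithmetic. A minimal sketch, using the prices quoted above; the token counts in the example are made-up numbers, not figures from OpenAI:

```python
# GPT-4o Long Output alpha pricing as quoted above:
# $6.00 per 1M input tokens, $18.00 per 1M output tokens.
INPUT_PRICE_PER_M = 6.00
OUTPUT_PRICE_PER_M = 18.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD for a single request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Hypothetical example: a 4,000-token prompt with the maximum
# 64,000-token completion.
cost = request_cost(4_000, 64_000)
print(f"${cost:.3f}")  # 0.024 input + 1.152 output = $1.176
```

So a single maximum-length completion runs to roughly a dollar, which is where the "increased inference costs" show up in practice.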

  • Why Big Tech Wants to Make AI Cost Nothing discusses the strategy behind Meta's decision to open-source its Llama 3.1 AI model, introduced a few weeks ago. By making AI models freely available, Meta aims to commoditize AI technology, thereby increasing demand for complementary products such as server space and GPUs. This approach, similar to previous strategies by companies like Microsoft and Google, aims to boost overall engagement with Meta's platforms and drive ad revenue. However, it poses challenges for AI startups that may struggle to compete with freely available, powerful AI models from big tech companies. See why the U.S. government endorsed the open-source approach in the Regional and Regulatory section of this newsletter.

News and Partnerships

Nvidia is rumored to be developing a new Titan GPU based on the Blackwell architecture, which could surpass the performance of the RTX 5090, although production challenges remain a concern. The Perplexity Publishers' Program is introduced, offering revenue sharing, API access, and enhanced data privacy for media partners to align AI technology with quality journalism. Nvidia expands its microservices library to support 3D and robotic model creation, partnering with Hugging Face to offer enhanced AI model deployment. Apple's new AI features, branded as Apple Intelligence, face delays due to EU regulations but promise advanced tools for text and image generation. Hugging Face launches an inference-as-a-service platform powered by Nvidia's NIM, providing efficient AI model deployment. AMD releases the Amuse 2.0.0 Beta for on-device AI image generation on modern AMD hardware. Microsoft now views OpenAI as a competitor despite their $13 billion investment, highlighting the complex dynamics of their relationship. Lastly, Canva's acquisition of Leonardo AI aims to enhance its design software capabilities, integrating advanced AI tools for more efficient and sophisticated content creation.

  • Nvidia’s new Titan GPU will beat the RTX 5090, according to leak Nvidia is rumored to be developing a new Titan GPU, based on the upcoming Blackwell architecture, that would surpass the performance of the RTX 5090. Leaked details suggest this Titan AI card could be 63% faster than the RTX 4090, with the RTX 5090 being 48% faster. The information comes from tech leakers RedGamingTech and @kopite7kimi. However, it remains uncertain whether Nvidia will release this product: similar plans for a Titan based on the Ada architecture were scrapped due to production challenges, including high costs and power consumption issues, which made it difficult to bring the product to market efficiently. As a result, Nvidia decided to focus on refining the Blackwell architecture for future high-performance GPUs, aiming to address the shortcomings encountered with the Ada-based Titan.

  • Introducing the Perplexity Publishers’ Program The Perplexity Publishers' Program is designed to support media organizations and online creators by fostering collective success and leveraging new technology. Key components include revenue sharing, where publishers earn a share when their content is referenced in ad-related interactions, facilitated by a partnership with ScalePost.ai for AI analytics. Publishers also receive free access to Perplexity’s APIs, enabling them to create custom answer engines and integrate related-questions technology into their content. Additionally, all employees of partner publishers get free access to Enterprise Pro for one year, enhancing data privacy and security for research and fact-checking. Initial partners include TIME, Der Spiegel, Fortune, Entrepreneur, The Texas Tribune, and WordPress.com. This program aims to align the interests of AI technology and quality journalism, ensuring trustworthy content remains central in the digital information landscape. Definitely a move in the right direction!

  • Nvidia expands microservices library and support for 3D and robotic model creation - SiliconANGLE Nvidia announced an expansion of its microservices library to support 3D and robotic model creation at the Siggraph conference. The update includes new Fast Voxel Database microservices for three-dimensional modeling and USD-based tools for creating 3D scenes. Additionally, Nvidia has partnered with Hugging Face to offer inference-as-a-service on Nvidia’s DGX cloud, enhancing AI model deployment and efficiency. This expansion aims to streamline the development of AI applications, particularly in robotics and interactive visual AI.

  • Apple's artificial intelligence features to be delayed, Bloomberg News reports | Reuters Apple's new AI features, branded as Apple Intelligence, are set to be delayed until October, missing the initial launch of iOS 18 and iPadOS 18 in September. The delay is attributed to the need for further testing and compliance with new European Union regulations. The AI features, which include tools for generating text, images, and other content, as well as enhancements to Siri and notification prioritization, will first be available to developers through beta versions of iOS 18.1 and iPadOS 18.1. They will be compatible with the iPhone 15 Pro, iPhone 15 Pro Max, and devices with Apple's M1 chip and later. Who will be upgrading their phone because they “accidentally” dropped it? You can read more about Apple Intelligence in this technical paper: Apple Intelligence Foundation Language Models

  • Hugging Face offers inference as a service powered by Nvidia NIM | VentureBeat Hugging Face has launched an inference-as-a-service platform powered by Nvidia’s NIM (Nvidia Inference Microservices), enhancing the efficiency of deploying AI applications. This service offers up to five times better token efficiency, allowing developers to access and run Llama 3 NIM models seamlessly. By leveraging Nvidia’s GPUs, Hugging Face aims to provide a robust solution for AI model inference, making it easier and more cost-effective for enterprises to utilize advanced AI capabilities in their applications.

  • Amuse 2.0 beta released for easy on-device AI image generation on modern AMD hardware AMD has released the Amuse 2.0.0 Beta, a software designed for on-device AI image generation using modern AMD hardware. It supports AMD Ryzen AI 300-series processors, Ryzen 8040 series, and Radeon RX 7000 systems, requiring substantial RAM (24GB or more). The software enables users to create high-quality images, convert paintings and drawings into digital formats, and apply custom AI filters. Key features include AMD XDNA Super Resolution for upscaling images. The beta release highlights the need for caution regarding potential copyright issues in AI-generated content.

  • Whoa! Microsoft says OpenAI is now a competitor in AI and search Microsoft has listed OpenAI as a competitor in its annual report, despite their $13 billion investment and ongoing collaboration. This shift comes after OpenAI's announcement of SearchGPT, which positions it as a direct rival in AI and search technologies. The competitive landscape has intensified, with both companies pursuing innovations in the AI sector, and this dynamic underscores the complex nature of their relationship: cooperation balanced against competition as each strives for market leadership. Microsoft might reassess its investment strategy and collaborative projects with OpenAI, especially as regulatory bodies scrutinize the relationship, and the evolving rivalry could lead to adjustments in how both companies leverage their shared technologies and market approaches.

  • Canva adds a new generative AI platform to its growing creative empire - The Verge Canva has acquired Leonardo AI, a generative AI platform, to enhance its design software capabilities. This acquisition aims to integrate advanced AI tools into Canva’s platform, offering users more sophisticated features for content creation. By incorporating Leonardo AI's technology, Canva plans to improve the automation of design tasks, making the process more efficient and accessible for users, and maintaining its competitive edge in the design software market. The acquisition of Leonardo AI by Canva could suggest a shift towards a more B2B-focused strategy. By integrating advanced generative AI capabilities, Canva can offer enhanced tools that appeal to businesses looking for efficient and sophisticated design solutions. This move likely aims to broaden Canva's market reach, attracting more enterprise customers who require robust design software for their operations. I love Canva, so this is a good move for me!

Regional and regulatory updates

The European Commission has approved the final text of the EU AI Act, set to take effect in 2025, which imposes strict standards to ensure safety, transparency, and accountability in AI applications. This Act classifies AI systems by risk levels, imposing stricter requirements for high-risk applications to foster innovation while protecting fundamental rights. Apple has joined a voluntary U.S. government initiative to manage AI risks, reflecting growing regulatory scrutiny. OpenAI pledges to provide the U.S. AI Safety Institute early access to its next model for thorough safety evaluations, aligning with its commitment to transparency and accountability. Anthropic weighs in on California's AI regulation bill, suggesting amendments to balance safety with innovation. The U.S. Copyright Office urges Congress to outlaw AI-powered impersonation to address potential harms. Additionally, Chinese researchers have developed ShortGPT, a new technique to optimize large language models for resource-limited hardware, highlighting global advancements in AI technology. Lastly, the U.S. is considering new export restrictions on AI and memory chips to China, impacting major U.S. chipmakers like Nvidia and AMD, reflecting the ongoing geopolitical tensions in AI and tech industries.

  • European Artificial Intelligence Act comes into force The European Commission has approved the final text of the EU AI Act, which aims to regulate artificial intelligence by setting strict standards for safety, transparency, and accountability. The Act classifies AI systems by risk level, imposing obligations on providers and users, with stricter requirements for high-risk applications; it comes into effect in 2025. The goal is to foster innovation and trust in AI technologies while protecting fundamental rights. You might ask, “What does this all mean for me and my business?” For companies, it means adhering to new compliance requirements, potentially increasing operational costs, and meeting EU standards to retain market access. Individuals benefit from enhanced protections for their safety, privacy, and rights against AI misuse. The Act also requires AI-generated content to be clearly labeled as such. This transparency measure ensures users know when they are interacting with AI-produced material, helps prevent the use of AI for deceptive purposes, and lets consumers make informed decisions. So, here is my disclosure: a human (me) created this newsletter. It takes me about 10 hours a week to read, curate, and design. I love it, though! I do use AI for input on the final version: I upload it to ChatGPT and ask it to rate the newsletter and suggest improvements.
It normally rates it (content, flow) at about 8-9, which is good for me. ;)

  • Apple signs on to voluntary US scheme to manage AI risks, White House says | Reuters Apple has signed a voluntary agreement with the U.S. government to manage AI risks, joining other major tech companies in this initiative. This scheme, promoted by the White House, aims to enhance the safe and ethical development of AI technologies. The agreement involves commitments to transparency, security, and monitoring to mitigate potential risks associated with AI. This move reflects growing regulatory and public scrutiny over AI advancements and emphasizes the need for industry cooperation in establishing robust AI governance frameworks.

  • And then OpenAI pledges to give U.S. AI Safety Institute early access to its next model | TechCrunch OpenAI has pledged to provide the U.S. AI Safety Institute with early access to its next AI model so that it undergoes thorough safety evaluations before public release. This collaboration aims to identify and mitigate potential risks and improve the robustness of AI systems against misuse and adversarial attacks. OpenAI has also established a Safety and Security Committee of technical and policy experts to oversee critical safety and security decisions and develop recommendations to strengthen its safety practices and regulatory compliance. Working with an independent safety body demonstrates transparency and accountability, builds trust among users and stakeholders, and sets a positive example for the industry in balancing innovation with safety.

  • Exclusive: Anthropic weighs in on California AI bill Anthropic has expressed concerns about California's AI regulation bill, SB 1047, proposing significant amendments. The bill, aiming to hold AI developers liable for misuse and ensuring safety, is seen by Anthropic as potentially harmful to AI safety and innovation. They suggest shifting the focus from "pre-harm enforcement" to "outcome-based deterrence," allowing companies to develop safety protocols and be liable for catastrophes. Anthropic also recommends using the Government Operations Agency for regulation instead of creating a new state agency. These changes, they argue, would better balance safety with innovation.

  • Copyright Office tells Congress: 'Urgent need' to outlaw AI-powered impersonation | TechCrunch The U.S. Copyright Office has urged Congress to outlaw AI-powered impersonation, highlighting the urgent need to address the legal and ethical implications of AI-generated content that mimics real people. This call to action emphasizes the potential harm and privacy violations that can result from such technology, pushing for legislative measures to prevent misuse and protect individuals from unauthorized AI-driven impersonations. If Congress decides to act on the U.S. Copyright Office's recommendation to outlaw AI-powered impersonation, it could take several months to years, depending on the legislative process. Immediate implications include increased scrutiny and potential legal risks for companies developing AI that can mimic real people. With the EU AI Act now in full force, which already regulates AI use and promotes transparency and accountability, there could be stricter regulations and enforcement in the U.S., aligning more closely with the EU's comprehensive AI framework.

  • US mulls new curbs on China's access to AI memory chips, Bloomberg News says | Reuters The U.S. is considering imposing new export restrictions on AI and memory chips to China, aimed at limiting the country's access to advanced technology. These curbs are expected to affect major U.S. chipmakers like Nvidia and AMD, which rely heavily on the Chinese market for revenue. The potential restrictions could lead to a significant impact on their financial results and overall sales to China. Shares of U.S. chipmakers fell following the news, highlighting investor concerns about the financial implications of such restrictions. If implemented, these measures would further tighten the existing controls that already prevent companies like Nvidia from selling their top AI chips to China without a special license.

  • And while we are on the China subject, CHINA Artificial intelligence: Beijing wants cooperation, but censoring content Beijing seeks international cooperation on artificial intelligence while maintaining strict control over content. China's approach involves leveraging AI for technological advancements and economic growth, but it also emphasizes stringent censorship to align with its political and social agendas. This dual strategy reflects the government's desire to balance openness in technological collaboration with tight domestic oversight to prevent dissent and control the flow of information.

  • New U.S. Commerce Department report endorses 'open' AI models | TechCrunch The U.S. Commerce Department has issued a report endorsing "open-weight" generative AI models like Meta’s Llama 3.1. The report highlights the benefits of open AI models in promoting competition and innovation. It also recommends that the government develop new capabilities to ensure the responsible and effective use of these technologies. The endorsement reflects a broader recognition of the potential of open AI models to drive progress in various sectors while emphasizing the need for safeguards to address potential risks.

The U.S. Commerce Department's endorsement of open-weight generative AI models could pose significant challenges for companies relying on closed models, such as OpenAI. This endorsement may lead to increased competition and a shift in customer preferences towards open models due to their transparency and potential for customization. As open models become more attractive, closed model providers might face pressure to adjust their pricing structures, impacting their revenue streams, especially if they are already operating at a loss. To stay competitive, companies like OpenAI will need to differentiate their offerings by emphasizing unique features, superior performance, or enhanced security measures. Additionally, they may need to increase their investment in research and development, which could further strain their financial resources. Strategic partnerships or collaborations could also be essential for closed model providers to bolster their offerings and reach. Overall, the endorsement highlights the importance of flexibility, transparency, and competition in the AI industry, requiring closed model providers to adapt to these changing dynamics to sustain and grow their market presence.

  • Nvidia faces two DOJ antitrust probes over market dominance - The Verge The U.S. Department of Justice (DOJ) has launched dual antitrust probes into Nvidia, focusing on the company's acquisition of Run:ai and its dominant position in the AI chip market. One investigation examines Nvidia's recent acquisition of Run:ai, a startup specializing in optimizing GPU usage for AI applications. This acquisition has raised concerns among regulators about potential reductions in market competition. The second probe investigates allegations of monopolistic practices, including claims that Nvidia pressures cloud providers to prefer its products and charges higher prices for networking gear when customers opt for competitors' AI chips like those from AMD and Intel.

The DOJ's actions are part of a broader effort to ensure competitive practices in the rapidly evolving AI sector, which has seen Nvidia capturing over 90% of the market for GPUs used in training generative AI models. These investigations reflect growing regulatory scrutiny of major players in the AI industry, aiming to prevent a few companies from monopolizing the market and to promote a healthier, competitive environment. The ongoing DOJ antitrust probes into Nvidia could lead to several outcomes. Nvidia might face substantial fines and penalties if found in violation of antitrust laws. Additionally, the company may be required to alter its business practices to foster a more competitive environment, such as changing pricing strategies and distribution agreements. In more severe cases, Nvidia could be forced to divest certain parts of its business, like spinning off Run:ai. Increased regulatory oversight and compliance measures could also be imposed. Competitors like AMD and Intel might benefit from regulatory actions that level the playing field, potentially gaining market share. I am not a financial analyst, but I am thinking that Nvidia's stock could experience volatility based on the perceived severity and outcome of the investigations. What do y’all think about this development?

  • Local AI model is melting pot for African languages | ITWeb A new AI model developed locally in Africa is designed to support multiple African languages, promoting inclusivity and linguistic diversity in AI applications. This initiative aims to address the underrepresentation of African languages in technology, ensuring that native speakers can benefit from AI advancements. The model is a significant step towards preserving cultural heritage and enabling better communication and accessibility across the continent.

Existing AI models and open-source solutions often lack comprehensive support for many African languages, which are underrepresented in global datasets. Developing a local AI model specifically tailored for African languages addresses this gap, ensuring better accuracy and cultural relevance. This initiative aims to provide inclusive technology that respects and preserves linguistic diversity, enabling native speakers to benefit fully from AI advancements.

Gen AI for Business Trends, Concerns, and Predictions:

Runway's Gen-3 AI video generator has come under scrutiny for using unauthorized YouTube videos and pirated media in its training, raising serious ethical and legal concerns about copyright infringement. This issue reflects broader challenges faced by AI companies in sourcing training data ethically. Regulatory responses may include stricter rules and increased transparency requirements for AI training data. The European Union's AI Act, effective in 2025, mandates such transparency and compliance with copyright laws. Meanwhile, the U.S. Copyright Office is urging Congress to outlaw AI-powered impersonation, highlighting the need for legislative measures to protect individuals from unauthorized AI-generated content. This section also touches on the significant energy and water demands of generative AI models, which are straining the U.S. power grid. Efforts to address these challenges include Nvidia's new Grace Blackwell chip and innovative cooling technologies. Additionally, Gartner predicts that 30% of generative AI projects will be abandoned by 2025 due to unrealistic expectations and operational complexities. Lastly, Elon Musk's AI company, xAI, is using data from X (formerly Twitter) to train its Grok models, raising privacy and data security concerns, while AI's role in sales tech is poised to disrupt traditional CRM platforms like Salesforce.

  • Runway’s AI video generator trained on thousands of scraped YouTube videos - The Verge Runway's Gen-3 AI video generator has come under scrutiny for training its model on a vast amount of YouTube videos and pirated media without proper authorization. An internal spreadsheet revealed that the training data included content from major entertainment companies like Disney, Netflix, and Pixar, as well as popular YouTube creators such as Marques Brownlee and Casey Neistat. The use of proxies to evade YouTube's detection raises serious ethical and legal concerns. This situation exemplifies the ongoing issues of copyright infringement and the use of unauthorized data in AI training, a problem also faced by other AI companies like OpenAI.

To address the unauthorized use of YouTube videos by Runway's Gen-3 AI video generator, copyright holders can pursue legal action for infringement, potentially resulting in financial compensation and restrictions on the use of the content. Regulatory bodies might impose stricter rules on AI training data, requiring transparency and proper licensing. Industry standards for ethical AI development could be established, promoting fair use and self-regulation. YouTube and similar platforms can enhance content protection technologies to prevent unauthorized scraping. Advocacy and awareness efforts can also pressure companies to adhere to ethical practices and ensure accountability. Several regulatory frameworks are already in place or being developed to address the use of AI training data, particularly concerning copyright and ethical considerations.

In the European Union, the AI Act, set to be enforced in 2025, includes provisions requiring transparency in AI training data and strict compliance with copyright laws. This Act categorizes AI applications by risk levels, with high-risk applications subjected to rigorous oversight and transparency requirements. In the United States, agencies like the Copyright Office and the Patent and Trademark Office have issued guidance affirming the necessity of human input for copyright protections in AI-generated works, with several ongoing litigations addressing the unauthorized use of copyrighted content by AI models.

  • Generative AI requires massive amounts of power and water, and the aging U.S. grid can't handle the load The rapid expansion of data centers driven by the AI boom is straining the U.S. power grid. Generative AI models like ChatGPT demand significant energy, with one query using nearly ten times the power of a typical Google search. This has led to increased emissions from data centers, with Google's greenhouse gas emissions rising by nearly 50% from 2019 to 2023. Nvidia's new Grace Blackwell chip aims to reduce power consumption significantly, but these measures alone are insufficient. The power demand from AI-specific applications is expected to match or exceed historical cloud computing needs, with data centers projected to consume 16% of U.S. power by 2030. To manage this, companies like Vantage Data Centers are seeking renewable energy sources and building new infrastructure. However, the aging power grid, with its outdated transformers, remains a bottleneck, requiring costly upgrades. Additionally, cooling these data centers poses a significant water usage challenge. Efforts to address these issues include on-site power generation and innovative cooling technologies, but the sustainability of AI growth will depend on effectively balancing these demands.
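To put that "nearly ten times the power" figure in perspective, here is a back-of-the-envelope sketch. The ~0.3 Wh baseline for a traditional web search and the 100-million-queries-per-day volume are my own illustrative assumptions, not figures from the article:

```python
# Assumption: ~0.3 Wh for a traditional web search (commonly cited estimate).
SEARCH_WH = 0.3
AI_QUERY_WH = SEARCH_WH * 10  # "nearly ten times" per the article

def daily_energy_kwh(queries_per_day: int, wh_per_query: float) -> float:
    """Total daily energy in kWh for a given query volume."""
    return queries_per_day * wh_per_query / 1000

# Hypothetical volume: 100 million AI queries per day.
print(f"{daily_energy_kwh(100_000_000, AI_QUERY_WH):,.0f} kWh/day")
# 100 million queries at ~3 Wh each is on the order of 300 MWh per day
```

Even under these rough assumptions, a single popular AI service lands in megawatt-hours per day, which is why data-center power demand is the bottleneck the article describes.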

  • Researchers develop state-of-the-art device to make artificial intelligence more energy efficient | ScienceDaily The new computational random-access memory (CRAM) developed by researchers at the University of Minnesota is highly promising in reducing energy consumption for AI applications. The technology has shown significant potential in laboratory settings, with energy savings of at least 1,000 times compared to traditional methods. However, for widespread implementation, further development, testing, and integration into existing systems are needed. The researchers are optimistic about its scalability and practical application in future AI systems.??

  • Gartner Predicts 30% of Generative AI Projects Will Be Abandoned After Proof of Concept By End of 2025 Gartner predicts that by the end of 2025, 30% of generative AI projects will be abandoned after proof-of-concept. The primary reasons for this high abandonment rate include unrealistic expectations, integration challenges, and the complexity of operationalizing these AI models. Despite the potential of generative AI, many organizations struggle with moving beyond initial experiments to achieve sustainable, practical applications. I guess time will tell …


  • An AI walks into a bar... Can artificial intelligence be genuinely funny? In an experiment to test AI's capability for humor, professional comedian Karen Hobbs performed a set written by ChatGPT. Known for tackling tough comedy scenes, Hobbs was nervous as she followed three human comedians at the Covent Garden Social Club. Despite ChatGPT’s extensive data processing, it defaulted to male-centric jokes and struggled with creating genuinely humorous material, producing clichéd content. Experts like Alison Powell and Michael Ryan highlight AI’s current limitations in understanding and adapting to real-time humor, though advancements are underway. Studies, including one led by Ryan, show potential for AI-generated humor to improve, predicting genuinely funny AI comedy sets in the near future. However, for now, AI humor remains derivative and lacks the nuanced storytelling of human comedians.?

  • Are you surprised or concerned or both: Elon Musk calls Grok 'the most powerful AI by every metric' but 'secretly' trains the new model with your X data by default Elon Musk's AI company, Grok, is reportedly using data from X (formerly Twitter) to train its models. This practice has raised concerns about privacy and data security, as it involves utilizing user data without explicit consent. The integration of social media data aims to enhance the AI's capabilities but also highlights the ongoing debate over ethical AI practices and data usage transparency. Users worried about this can review and adjust their privacy settings, request a copy of the data collected by the platform, opt-out of data sharing if possible, and advocate for stronger data privacy regulations and transparency from tech companies.?

  • And another “yikes” prediction. “Death of a Salesforce”: Why AI Will Transform the Next Generation of Sales Tech | Andreessen Horowitz suggests that advancements in AI could disrupt traditional CRM platforms like Salesforce. AI-driven tools provide more efficient, personalized, and automated solutions for sales teams, potentially outperforming older systems that rely heavily on manual input and processes. As AI continues to evolve, it could render conventional sales platforms less relevant, forcing them to adapt or risk becoming obsolete. Salesforce has been integrating AI into its platform to stay competitive and enhance its capabilities. Through Salesforce Einstein, the company offers AI-driven insights, predictive analytics, and automation tools that help sales teams work more efficiently and personalize customer interactions. This integration aims to maintain Salesforce's relevance in an AI-driven market by providing advanced features that streamline processes and improve sales outcomes.

News and updates around? finance, Cost, and Investments

The current AI boom shows similarities to the dot-com bubble with VC interest and infrastructure investments, but today's AI companies have more sustainable models and cautious investment climates, making a bubble less likely. Microsoft's financial results highlight strong AI-driven growth, particularly in Azure, though the AI payoff may take longer. TikTok's $20 million monthly expenditure on OpenAI services via Microsoft underscores significant AI partnerships, with potential disruptions if TikTok faces regulatory issues. Accenture and Boston Consulting Group lead in profitable generative AI consulting, with large enterprises benefiting the most. Amazon's $16.4 billion investment in cloud and generative AI infrastructure aims to strengthen AWS against competitors like Google and Microsoft. These developments underscore the ongoing transformation and competitive landscape in AI and cloud markets, emphasizing the need for prudent investment strategies and awareness of broader economic factors.

  • AI: Are we in another dot-com bubble? - by Kelvin Mu It’s detailed analysis explores whether the current AI boom mirrors the dot-com bubble of the late 1990s. Drawing parallels, the author notes both cycles have similar ecosystem structures, significant VC interest, and substantial infrastructure investments. However, he highlights critical differences: AI companies are generating meaningful revenue earlier, the economic environment today is more cautious, and funding sources are different, with current investments coming primarily from private markets and big tech rather than public retail investors. Mu concludes that while the AI cycle shows signs of exuberance, it is less likely to be a bubble compared to the dot-com era due to more sustainable business models, reasonable valuations, and a cautious investment climate. Nonetheless, the potential for overinvestment and rapid market corrections remains, emphasizing the importance of prudent investment strategies and awareness of broader economic factors.?

  • Microsoft's slow cloud growth signals AI payoff will take longer | Reuters Microsoft reported strong financial results for the third quarter of its 2024 fiscal year, surpassing Wall Street's expectations. The company achieved a 17% increase in revenue, totaling $61.9 billion, and a 20% rise in profits, reaching nearly $22 billion. Earnings per share were $2.94, exceeding analyst predictions of $2.83 per share. The growth was driven by significant demand for AI technologies, with Microsoft's Azure cloud platform and related services experiencing a 31% revenue increase, partly fueled by AI innovations.?

  • TikTok is throwing $20 million a month at OpenAI via Microsoft TikTok's parent company, ByteDance, spends about $20 million monthly on AI services from OpenAI through a partnership with Microsoft. This partnership significantly contributes to Microsoft's revenue in the AI and cloud services sector. If TikTok were banned in the US, Microsoft's revenue from this partnership would likely be impacted. The ban could lead to a reduction or reallocation of ByteDance's spending on these AI services, potentially causing a noticeable dip in Microsoft's income from one of its significant AI customers.?

  • Who is winning the generative AI war? Accenture. - Sherwood News? – highlights the profitability of generative AI consulting, with major firms like Accenture and Boston Consulting Group leading the charge. These services are typically affordable for large enterprises due to the significant investment required. These smaller companies often struggle to afford the significant investment required for advanced AI solutions, potentially leaving them behind in the competitive landscape. However, as technology evolves, there may be more cost-effective options and services emerging that cater specifically to the needs and budgets of SMBs, helping them to leverage generative AI without the hefty price tag.

  • It just money, right? Move over Google and Microsoft: Amazon's putting $16.4 billion to develop cloud and gen AI infrastructure Amazon is investing $16.4 billion to develop its cloud and generative AI infrastructure. This significant investment aims to enhance Amazon Web Services' capabilities, positioning it as a strong competitor to Google and Microsoft in the AI and cloud markets. The funds will be used to expand data centers, improve AI technologies, and support customer growth, reinforcing Amazon's commitment to leading in the cloud computing and AI sectors. ? The outcome of Amazon's $16.4 billion investment in cloud and generative AI infrastructure depends on execution and market dynamics. Amazon's aggressive expansion aims to strengthen its position against Google and Microsoft in the AI and cloud sectors. Success will depend on how effectively they can leverage this investment to enhance their services and attract customers. Companies that fail to innovate or adapt to market needs may fall behind, potentially wasting resources. The competition will ultimately drive advancements, benefiting consumers with better and more efficient AI and cloud solutions.

What/where/how Gen AI solutions are being implemented today?

OpenAI faces a potential $5 billion loss over the next year due to high operational costs, highlighting significant financial pressures. The generative AI market is experiencing a reality check as investor enthusiasm wanes, with only 25% of planned AI initiatives succeeding, leading to more cautious spending. Taco Bell is expanding its AI drive-thru ordering system to hundreds of locations to improve order accuracy and efficiency, despite challenges faced by McDonald's in similar initiatives. Meta is shifting focus from its celebrity lookalike AI chatbots to more practical AI solutions with AI Studio. Rolls Royce and Conagra are successfully using generative AI for talent development, enhancing HR functions. The NSA has integrated generative AI tools into the workflow of over 7,000 analysts, improving cybersecurity and data analysis. Verizon is using generative AI to streamline customer interactions and enhance business operations. Indian courts are testing AI speech-to-text tools to boost judicial efficiency, and a generative AI tool has been deployed aboard the International Space Station to assist astronauts with various tasks, demonstrating the versatile applications of AI technology.

  • OpenAI might lose $5 Billion in operational costs in the next 12 months - iTMunch OpenAI faces a potential $5 billion loss over the next 12 months, driven by high operational costs, including $700,000 daily to run ChatGPT. The company projects $7 billion in AI training and $1.5 billion in staffing expenses for 2024. Despite revenue streams from ChatGPT subscriptions and LLM access fees, totaling up to $4.5 billion annually, the income falls short of covering the expenses. As OpenAI navigates financial and regulatory challenges, its ability to balance innovation with sustainability will shape the future of AI development. OpenAI's potential $5 billion loss highlights significant financial pressures, posing risks to its AI advancements. High operational costs, particularly for running and training AI models, could limit resources for innovation. Balancing financial sustainability with cutting-edge research may slow the pace of AI development and adoption. OpenAI's focus might shift towards monetizing existing technologies rather than pioneering new breakthroughs, impacting the overall progression of AI capabilities.

  • The End of Investors' Generative AI Honeymoon The generative AI market is experiencing a reality check as initial investor enthusiasm wanes. Despite the high potential of AI technologies, many companies are struggling with the practical implementation of generative AI. A recent study revealed that only 25% of planned AI initiatives have been successful, with significant concerns around high costs, data security, and accuracy issues persisting. This has led to a cautious approach from businesses, with only 63% planning to increase AI spending in 2024 compared to 93% in 2023. However, sectors like tech and retail are seeing some early successes with targeted applications. The shift towards more thoughtful AI adoption, focusing on governance and realistic applications, indicates a maturation in the market. Companies are now balancing the hype with the challenges, aiming for sustainable, long-term AI strategies.?

  • Taco Bell to roll out AI drive-thru ordering in hundreds of locations by end of year Taco Bell is expanding its AI drive-thru ordering system to hundreds of locations by the end of the year. This initiative aims to enhance order accuracy, reduce wait times, and boost profits for franchisees. The use of AI technology in drive-thrus allows for faster order processing and upselling, increasing the average check size. However, there are challenges to widespread adoption. Issues such as inaccurate orders and resistance from older customers need to be addressed. While AI can process more than 90% of orders without human intervention, it still requires improvements in understanding different accents and dialects. McDonald's previously attempted a similar AI drive-thru ordering system but faced challenges that led to the termination of their initial trials. In 2019, McDonald's acquired Apprente, a company specializing in voice-based AI technology, and later rebranded it as McD Tech Labs. McDonald's then sold McD Tech Labs to IBM in 2021 and conducted a larger-scale test in about 100 restaurants. However, the trial did not meet McDonald's standards due to issues such as difficulty in interpreting different accents and dialects, leading to inaccurate orders.

  • ?Meta moves on from its celebrity lookalike AI chatbots - The Verge Meta has decided to shut down its celebrity lookalike AI chatbots due to a lack of user adoption. The company is now focusing its efforts on developing the AI Studio, which aims to provide more versatile and practical AI solutions. This shift allows Meta to better allocate resources towards creating AI technologies that offer broader utility and engagement across its platforms.?

  • 8 months in: How Rolls Royce and Conagra HR teams use gen AI for talent development - WorkLife Rolls Royce and Conagra have been using a generative AI tool called Galileo for talent development over the past eight months. Rolls Royce employs it to upskill HR professionals, creating efficient frameworks for training and development. Conagra uses it to modernize talent development, streamline internal processes, and support succession planning. Both companies highlight the AI's ability to speed up tasks, centralize information, and enhance strategic HR functions while ensuring data security and integration within their specific business contexts.?

  • More than 7,000 NSA analysts are using generative AI tools, director says - Defense One The National Security Agency (NSA) has integrated generative AI tools into the workflow of over 7,000 employees. These tools are enhancing various tasks, from cybersecurity defenses to data analysis. The AI capabilities allow NSA workers to process vast amounts of information more efficiently, improving the agency's operational effectiveness and decision-making processes. This development highlights the growing role of AI in national security and intelligence operations, emphasizing both the potential benefits and the need for robust implementation strategies to ensure security and privacy.???

  • Verizon sees early success using generative AI to answer questions from business customers Verizon is leveraging generative AI to enhance its business operations. The technology is being used to streamline customer interactions, automate routine tasks, and provide more personalized experiences. This implementation aims to improve efficiency and foster stronger business relationships by utilizing AI-driven insights and capabilities. The move highlights Verizon's commitment to integrating advanced technologies to optimize its services and maintain a competitive edge in the market.??

  • Courts in India Test AI Speech-to-Text Tool to Boost Efficiency Courts in India are testing an AI speech-to-text tool to enhance judicial efficiency. This tool aims to transcribe court proceedings in real-time, reducing the time needed for documentation and enabling faster case resolutions. By leveraging AI technology, the Indian judicial system seeks to streamline operations and address the backlog of cases more effectively.??

  • And I save the best for last! Generative AI tool is deployed aboard the International Space Station - Nextgov/FCW The generative AI tool (by IBM) deployed aboard the International Space Station (ISS) is designed to assist astronauts with various tasks. This AI can help with communication, streamline operations, and provide advanced support in managing the complex environment of space missions. It leverages the capabilities of generative AI to improve efficiency and accuracy, crucial for the success of space missions. How cool is that?

Women Leading in AI?

New Blog:? Check out Jedidah K’s guest blog post about what she learned at the Breaking Barriers and Empowering Change panel discussion in San Francisco hosted by Women And AI. Featured AI Leader: ??Women And AI’s Featured Leader - Sanjana Raj ?? Sanjana is leading in AI as she builds her startup. She shares with us that, “As a first-time founder, AI is transforming the way I work.”

Learning Center and How To's

  • How to Write a Generative AI Cybersecurity Policy | Trend Micro (US) To write an effective generative AI cybersecurity policy, organizations should focus on four key areas: prohibiting the sharing of sensitive information with public AI platforms, maintaining clear data separation rules, validating AI-generated information, and adopting a zero-trust posture. These measures help protect the privacy and integrity of corporate data. Additionally, using advanced tools like extended detection and response (XDR) and security information and event management (SIEM) can further enhance AI security by monitoring for abnormal behaviors and minimizing risks.??

  • Building A Generative AI Platform outlines the essential components of a generative AI platform. The architecture starts with a basic model API, progressively adding elements such as context enhancement, guardrails, model routers, gateways, and optimization for latency and costs. The platform emphasizes the importance of observability and orchestration for monitoring and debugging. The post details the role of retrieval-augmented generation (RAG), input and output guardrails, and the use of model routers and gateways to manage complex AI applications securely and efficiently.?


Tools and Resources

  • SlidesGPT is like ChatGPT but for PPT. It can create you a great base presentation that you can build on.?
  • Create Your Own Custom AI With AI Studio | Meta Meta has introduced AI Studio, a platform for creating custom AI characters using Llama 3.1. This user-friendly tool allows the customization of AI personalities and avatars without technical skills. AI Studio supports integration with Instagram, Messenger, and WhatsApp, enabling AI characters to generate content, give advice, and automate responses. This platform is designed for both personal use and enhancing online presence for creators.?


If you enjoyed this newsletter, please comment and share. If you would like to discuss a partnership, or invite me to speak at your company or event, please DM me.

Hervé Poinsignon

IT Architect Salesforce, AI, MBA, Multicloud, SAP, Investor and Entrepreneur

2 个月

Please check new innovation process capability with AI assistance https://innovation-ai-booster.com/

回复

Thanks Eugina Jordan for sharing! It is a great read indeed.

Melissa Cohen

Personal Branding and LinkedIn? Presence Expertise | Build Your Brand, Find Your Voice, Build Your Business | Amazon Bestselling Author | The Good Witch of LinkedIn ?

2 个月

I look forward to this every week Eugina. So many incredible insights and updates. Just incredible.

Melanie Borden

I lead a team of creative, high-performing experts who transform businesses, executives, and leaders by increasing their reach, impact, and brand marketing effectiveness | CEO @ The Borden Group

2 个月

Very helpful! Thanks, Eugina!

Loren Rosario - Maldonado, PCC

I help multicultural leaders shatter barriers, boost confidence, and lead with impact with The C.H.O.I.C.E. Playbook??

2 个月

Great insights Eugina Jordan more examples of AI’s revolutionary impact in how we conduct business.

要查看或添加评论,请登录

Eugina Jordan的更多文章

  • Gen AI for Business Newsletter # 27

    Gen AI for Business Newsletter # 27

    Gen AI for Business # 28 newsletter covers key insights and tools on Generative AI for business, including the latest…

    22 条评论
  • Gen AI for Business Weekly Newsletter # 27

    Gen AI for Business Weekly Newsletter # 27

    October 20 newsletter Welcome to Gen AI for Business weekly newsletter #27. We bring you key insights and tools on…

    17 条评论
  • Gen AI for business newsletter # 26

    Gen AI for business newsletter # 26

    Welcome to Gen AI for Business weekly newsletter # 26. We’re back with the latest on all things Gen AI, from…

    11 条评论
  • Gen AI for Business Newsletter, edition #25

    Gen AI for Business Newsletter, edition #25

    October 6 newsletter Welcome to the 25th edition of Gen AI for Business! I am so grateful and thankful for each of…

    32 条评论
  • Gen AI for Business Newsletter # 24

    Gen AI for Business Newsletter # 24

    September 29 newsletter Welcome to Gen AI for Business #24, where we dive into the latest breakthroughs, strategies…

    4 条评论
  • Gen AI for Business # 23

    Gen AI for Business # 23

    Welcome to Gen AI for Business newsletter #23, where we dive into the latest generative AI news, trends, strategies…

    28 条评论
  • Gen AI for Business # 22

    Gen AI for Business # 22

    Welcome to the Gen AI for Business #22 newsletter. This newsletter provides key insights and tools on Generative AI for…

    42 条评论
  • Gen AI for Business # 21

    Gen AI for Business # 21

    Welcome to this week's newsletter, where we dive into a roundup of all the latest developments in AI. From regulatory…

    8 条评论
  • Gen AI for Business # 20

    Gen AI for Business # 20

    Welcome to September! As we settle back into our routines and the kids head back to school, the world of Generative AI…

    46 条评论
  • Gen AI for Business #19

    Gen AI for Business #19

    Welcome to this week's newsletter—packed with the latest news in Gen AI, tech tools, and some interesting updates on…

    20 条评论

社区洞察

其他会员也浏览了