Gen AI for Business # 15

July 28 newsletter

Welcome to Gen AI for Business # 15. This number signifies leadership and actual action and can be a powerful sign of success. There's also the energy of growth and expansion in there. It can serve as a reminder to pay attention to your goals and to stay focused and motivated.

This week in AI was like a tech showdown! Gemini Advanced flexed its muscles with a record-breaking million-token context window. Not to be outdone, Meta shouted, "Hold my beer!" and dropped Llama 3.1 with an eye-popping 405 billion parameters. Then OpenAI opened up customization of GPT-4o mini and unveiled SearchGPT. And Elon Musk announced that his beast had begun training. Oh my. And we thought that summer was slow for tech news.

And this was just the beginning of the week…

We have curated key insights and tools on Generative AI for business, covering the latest news, strategies, and innovations in the B2B sector from last week.

For those who don't know me, I am a technologist with 12 patents on Open RAN, AI, and 5G; an award-winning CMO who created a new market category in telco; and the author of many industry articles and an award-winning leadership book for women. My newsletter is your go-to resource for a roundup of news, updates on models, regulatory developments, partnerships, and practical insights from the past week.

If you enjoyed this newsletter, please share it with your network!

Thank you,

Eugina

News about models and everything related to them

Meta has introduced Llama 3.1, the largest open-source AI model with 405 billion parameters, designed for practical applications like multilingual conversation and text summarization. Meta emphasizes open-source AI to democratize technology and enhance security, collaborating with companies like Amazon and NVIDIA. In response, OpenAI offers free fine-tuning for GPT-4o Mini, highlighting a shift towards efficiency. French startup Mistral launched specialized AI models for code generation and mathematical reasoning. Elon Musk's xAI is developing the Memphis Supercluster to create the world's most powerful AI by December 2024, despite environmental concerns. OpenAI’s CEO, Sam Altman, acknowledged the need for better product names but emphasized performance over names. Researchers warn against using AI-generated data for training to prevent "model collapse," stressing the need for high-quality data. Google expanded its Gemini AI platform, making it faster and more accessible, especially for educational use.

  • Introducing Llama 3.1: Our most capable models to date Meta has introduced Llama 3.1 405B, the largest open-source AI model in history, boasting 405 billion parameters. This model, significantly larger than its predecessor, aims to compete with leading models like OpenAI's GPT-4 and Anthropic's Claude 3.5. Despite its size, Meta emphasizes Llama 3.1's practical applications, including multilingual conversational agents and long-form text summarization. Trained on 15 trillion tokens and featuring a 128,000-token context length, Llama 3.1 showcases enhanced reasoning capabilities. Meta has prioritized safety, conducting extensive evaluations to ensure the model's outputs are secure and sensible across multiple languages. You can access the Llama 3.1 405B model through various platforms such as Hugging Face, GitHub, and Meta's official channels. Additionally, it is available from cloud providers like AWS, Nvidia, Microsoft Azure, and Google Cloud. However, due to the model's massive size, substantial hardware resources are required to run it effectively.

Source: Meta.

More in this post: Ahmad Al-Dahle on LinkedIn: With today’s launch of our Llama 3.1 collection of models we’re making…

And Mark Zuckerberg's blog Open Source AI Is the Path Forward | Meta Meta's commitment to open-source AI is driving innovation forward. By releasing Llama 3.1, including the powerful 405B model, Meta aims to make AI development more accessible and cost-effective. This move supports developers in creating customized models, ensuring data security, and avoiding dependency on closed systems. Collaborations with companies like Amazon, Databricks, and NVIDIA further strengthen the AI ecosystem. Mark Zuckerberg emphasizes that open-source AI will democratize technology, enhance security, and foster long-term industry advancements.
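Even with Llama 3.1's 128,000-token context length, very long documents for summarization still need to be split to fit the window. Here is a minimal sketch of context-window chunking, assuming a rough heuristic of about four characters per token (the exact count depends on the model's tokenizer, so a real application should tokenize properly):

```python
# Rough chunker: split text on paragraph boundaries into pieces that fit a
# model's context window. Uses ~4 characters per token as a heuristic; swap
# in the model's own tokenizer for exact counts.

CHARS_PER_TOKEN = 4  # heuristic, not exact

def chunk_for_context(text: str, max_tokens: int = 128_000,
                      reserve_tokens: int = 4_096) -> list[str]:
    """Split `text` into chunks, leaving `reserve_tokens` of headroom
    for the instructions and the model's reply."""
    budget_chars = (max_tokens - reserve_tokens) * CHARS_PER_TOKEN
    chunks, current = [], ""
    for para in text.split("\n\n"):
        candidate = (current + "\n\n" + para) if current else para
        if len(candidate) <= budget_chars:
            current = candidate
        else:
            if current:
                chunks.append(current)
            # A single oversized paragraph still becomes its own chunk.
            current = para
    if current:
        chunks.append(current)
    return chunks

# Tiny demo with a deliberately small window so the split is visible.
doc = "\n\n".join(f"Paragraph {i}: " + "lorem ipsum " * 200 for i in range(50))
chunks = chunk_for_context(doc, max_tokens=1_000, reserve_tokens=100)
print(len(chunks), max(len(c) for c in chunks))
```

Each chunk can then be summarized separately and the partial summaries merged, a common pattern for long-form summarization with any fixed-context model.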

  • AI arms race escalates: OpenAI offers free GPT-4o Mini fine-tuning to counter Meta's Llama 3.1 release | VentureBeat On the same day, OpenAI announced that it will now let developers customize GPT-4o Mini. Developers who pay for access to the efficiency-oriented model will now be able to customize and fine-tune it to their needs. According to the latest update, the mini version of GPT-4o now matches the base model's performance while being 20 times cheaper. This signals a shift in the AI industry, where efficiency is becoming as crucial as performance, especially with rising energy costs for startups.
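Fine-tuning a chat model like GPT-4o mini starts with a training file in OpenAI's chat-format JSONL, one example per line. A minimal sketch of preparing such a file locally, with placeholder example content (the actual upload and job creation happen through OpenAI's API afterwards):

```python
import json

# Each training example is a JSON object with a "messages" list in
# OpenAI's chat fine-tuning format. The content below is placeholder data.
examples = [
    {"messages": [
        {"role": "system", "content": "You answer in a formal tone."},
        {"role": "user", "content": "What is our refund window?"},
        {"role": "assistant", "content": "Our refund window is 30 days."},
    ]},
    {"messages": [
        {"role": "system", "content": "You answer in a formal tone."},
        {"role": "user", "content": "Do you ship internationally?"},
        {"role": "assistant", "content": "Yes, we ship to over 40 countries."},
    ]},
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")  # one JSON object per line

# Sanity-check: every line parses and ends with an assistant turn.
with open("train.jsonl", encoding="utf-8") as f:
    lines = [json.loads(line) for line in f]
print(len(lines), all(ex["messages"][-1]["role"] == "assistant" for ex in lines))
```

The resulting file would then be uploaded and referenced in a fine-tuning job; consult OpenAI's fine-tuning guide for the current API calls and exact model identifiers.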

  • Mistral Launches AI Models for Localized Code Generation, Math Reasoning French AI startup Mistral has launched two specialized language models: Codestral Mamba and MathΣtral (Mathstral). Codestral Mamba, with 7 billion parameters, excels in localized code generation and can handle up to 256k tokens, making it ideal for real-time code autocompletion and syntax error detection. Built on Mistral’s Mamba architecture, it processes sequences linearly and outperforms larger rivals like Google’s CodeGemma. Mathstral, designed for complex mathematical reasoning, collaborates with Project Numina and scores highly on benchmarks like MATH and MMLU. Both models are open-source and available on Hugging Face.

Source: Mistral

  • https://x.com/elonmusk/status/1815325410667749760 Elon Musk’s AI company, xAI, began training its new Memphis Supercluster, aiming to create the "world's most powerful AI" by December 2024. This supercluster, a massive network of Nvidia GPUs, is being dubbed the "Gigafactory of Compute": 100,000 liquid-cooled GPUs using up to 150 megawatts of electricity at peak times (enough to power 100,000 homes) and over a million gallons of water daily. The goal is to significantly outpace current supercomputers and train AI models faster, positioning Grok 3 as the top AI by year-end. Investors have pumped over $6 billion into xAI since its launch last year, but local environmentalists are worried about its huge energy use. Musk plans to use the site to train a new version of xAI’s Grok to compete with OpenAI’s latest model, develop AI products for Tesla and SpaceX, and conduct cutting-edge AI experiments. This move fuels the ongoing debate: can bigger computers really lead to significantly better AI models, or are we hitting a point where more power only gives minor improvements? xAI’s gigafactory could soon reveal the answer to that vital question.

  • Sam Altman Admits Its Letters-and-Numbers Salad Product Names Like "GPT-4o Mini" Are Horrible Sam Altman, CEO of OpenAI, acknowledged the need for a new naming scheme for their products, following criticism of cumbersome names like "GPT-4o Mini." Altman humorously admitted on social media that a revamp is overdue. While OpenAI has creative names for other projects, such as Sora and DALL-E, its GPT models' technical names reflect their functionality but lack flair. Despite this, the popularity of ChatGPT suggests any renaming might not happen soon. As a marketer who launched a new brand for a Fortune 500 company, then launched a successful startup and created a new market category, I find this OK. Sticking with technical and straightforward naming like "GPT-4" can maintain a sense of professionalism and clarity about the product's function. Flashy names aren't always necessary if the product's capabilities and reputation speak for themselves. As long as the functionality and performance are top-notch, the specific names may not significantly impact user adoption or satisfaction. He knows his audience, and we do not really care. LOL.

  • AI-generated data causes LLM model collapse: Researchers Researchers have warned that using AI-generated data to train large language models (LLMs) can lead to "model collapse," where the quality of models deteriorates over time due to misinterpretations of reality. This happens because AI-generated data pollutes the training set for subsequent models. To mitigate this risk, it's crucial to maintain access to high-quality, human-generated data and ensure proper data management practices. Coordination among AI developers is essential to preserve data provenance and prevent degradation in model performance.
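The statistical mechanism behind model collapse can be seen in a toy experiment: repeatedly fit a simple model to samples drawn from the previous generation's fit, mimicking training on AI-generated data. This is only a hedged illustration of the effect with a Gaussian, not the researchers' actual LLM experiments:

```python
import random
import statistics

random.seed(0)  # deterministic demo

def fit(samples):
    """'Train' a toy model: estimate mean and spread from data."""
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mu, sigma, n):
    """'Generate' synthetic data from the current model."""
    return [random.gauss(mu, sigma) for _ in range(n)]

mu, sigma = 0.0, 1.0  # generation 0: the "real" data distribution
n = 10                # small samples make the effect visible quickly
for generation in range(300):
    data = generate(mu, sigma, n)  # data produced by the current model
    mu, sigma = fit(data)          # next model trained only on that data

# The fitted spread shrinks toward zero: each generation loses the tails
# of the distribution it was trained on, so diversity steadily vanishes.
print(sigma)
```

Each refit slightly under-represents the tails of the previous generation, and those small losses compound until the "model" has forgotten almost all of the original variation, which is the essence of the collapse the researchers describe.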

  • The best version of Google Gemini is now out for more people, teens included | TechRadar Google has upgraded its Gemini AI platform, making the Gemini 1.5 Flash large language model (LLM) available to more users, including teenagers. The updated version is faster, more accurate, and features a larger context window for longer conversations. It also includes new functions like file uploads from Google Drive and enhanced citation features to combat AI hallucinations. Gemini's expansion aims to improve accessibility and usefulness for a broader audience, particularly in educational settings.

Gen AI news from different industries

AI is revolutionizing digital forensics by improving data analysis speed and accuracy, essential for combating sophisticated digital crimes. B2B brands leverage generative AI like ChatGPT to automate content and enhance customer service, streamlining operations. UNESCO highlights AI's potential in education for personalized learning but warns of risks like biases, advocating for ethical guidelines. In healthcare, AI tools like Unfold AI have shown higher accuracy in detecting cancer compared to doctors, enhancing treatment precision. Despite challenges, over 70% of healthcare organizations are adopting generative AI to boost clinical and administrative efficiency.

Digital Forensics

  • Harnessing the Power of AI: Revolutionizing Digital Forensics for a Tech-Driven World AI is revolutionizing digital forensics by significantly improving the speed and accuracy of data analysis. It can process large volumes of information, detect patterns, and reveal hidden connections, which are crucial for cybersecurity and criminal investigations. AI tools help forensic experts tackle the growing complexity of cyber threats and digital evidence, enhancing the overall effectiveness of their work. This transformation is essential for adapting to a tech-driven world where digital crimes are increasingly sophisticated. That is a good thing, right? Scammers are using AI; let the good guys and gals also use AI.

Social Media

  • How B2B Brands Are Incorporating Generative AI [Infographic] | Social Media Today B2B brands are leveraging generative AI tools like ChatGPT and Jasper to automate content creation, personalize marketing, and enhance customer service with advanced chatbots. These tools enable the production of customized content at scale and improve data analysis for targeted campaigns. The adoption of generative AI is streamlining operations and significantly enhancing customer interactions. Are you using these tools to create your content?

Education

  • Generation AI: Navigating the opportunities and risks of artificial intelligence in education | UNESCO explores the impact of AI on education, highlighting both opportunities and risks. AI can transform learning experiences by personalizing education and enhancing administrative efficiency. However, it also poses challenges, such as potential biases and data privacy concerns. UNESCO emphasizes the need for ethical guidelines and inclusive practices to ensure AI benefits all learners while mitigating risks. The organization calls for strong normative frameworks, transformed educational systems, and responsible private sector investments to steer AI towards serving the public good. Jurisdictions such as the European Union, with its proposed AI Act, the United States, and China are developing or have developed laws and policies to regulate AI in education. These efforts focus on ethical guidelines, data privacy, and ensuring inclusive and equitable use of AI technology in the educational sector.

Healthcare

  • Artificial intelligence detects cancer with 25% greater accuracy than doctors in UCLA study | Fox News A UCLA study found that an AI tool, Unfold AI, detected prostate cancer with 84% accuracy, compared to 67% accuracy by doctors. This AI technology, developed by Avenda Health, uses a sophisticated algorithm to visualize cancer likelihood from clinical data. It helps create 3D cancer estimation maps, allowing for more precise and personalized treatments, potentially avoiding radical prostatectomy. Experts highlight AI's potential to improve cancer diagnoses and treatments but caution that it should complement, not replace, human clinical judgment.

  • Generative AI in healthcare: Adoption trends and what’s next | McKinsey A McKinsey survey reveals that over 70% of healthcare organizations are either using or testing generative AI (gen AI) tools. These tools are primarily enhancing clinical productivity, patient engagement, and administrative efficiency. However, challenges like risk management, data infrastructure, and proof of value remain significant hurdles. Many organizations partner with third-party vendors for customized solutions. Despite the complexities, there's a strong interest in expanding gen AI capabilities to improve healthcare outcomes as you can see in the charts below.


Source: McKinsey & Company


News and Partnerships

Meta and Snowflake are collaborating to optimize Meta's Llama 3.1 models within Snowflake Cortex AI, leveraging Snowflake’s data cloud to enhance scalability and integration. Tech Mahindra launched VerifAI, a GenAI validation solution ensuring data quality and security for enterprises. The Coalition for Secure AI (CoSAI) was formed by major tech companies to promote AI security through standardized practices and tools. GE HealthCare and AWS are developing generative AI applications to transform healthcare, while Microsoft is partnering with Mass General Brigham and the University of Wisconsin-Madison to advance AI models for medical imaging. Accenture, in collaboration with NVIDIA AI Foundry, introduced the AI Refinery framework for creating custom Llama LLMs. OpenAI is testing SearchGPT, an AI-powered search tool integrating real-time web data, which impacted Google's stock. SearchGPT's data privacy FAQ outlines user data management practices, emphasizing user control and privacy.

  • Snowflake Teams Up with Meta to Host and Optimize New Flagship Model Family in Snowflake Cortex AI | Morningstar Meta and Snowflake have teamed up to host and optimize Meta's new flagship model family, Llama 3.1, within Snowflake Cortex AI. This collaboration aims to leverage Snowflake’s data cloud platform to enhance the performance, scalability, and integration of Meta's AI models. By utilizing Snowflake Cortex AI, Meta's models can be deployed more efficiently, providing robust solutions for various AI-driven applications. This partnership highlights the importance of combining cutting-edge AI with powerful data management platforms to drive innovation and operational efficiency.

  • The biggest names in AI have teamed up to promote AI security - The Verge The Coalition for Secure AI (CoSAI) has been formed by major tech companies to enhance AI security. Founding members include Google, Microsoft, OpenAI, Nvidia, Amazon, Intel, IBM, PayPal, and Wiz. These companies aim to develop standardized security practices and open-source tools to ensure safe AI deployment. CoSAI's initiatives focus on identifying and mitigating cybersecurity risks, managing software supply chain vulnerabilities, and creating a comprehensive security framework for AI systems. This collaboration reflects a unified effort to advance AI security and foster trust in AI technologies. While other recent initiatives, such as HiddenLayer's AI Threat Landscape Report and Zscaler's ThreatLabz 2024 AI Security Report, highlight the rising concerns and best practices for AI security, CoSAI distinguishes itself by fostering a collaborative ecosystem involving industry leaders and academia to proactively secure the entire AI lifecycle. This unified approach aims to ensure that AI systems are robust and secure by design, addressing both current and emerging threats.

  • Microsoft collaborates with Mass General Brigham and University of Wisconsin–Madison to further advance AI foundation models for medical imaging Microsoft has announced collaborations with Mass General Brigham and the University of Wisconsin-Madison to advance AI foundation models for medical imaging. The initiative aims to enhance radiology workflows, improve patient outcomes, and support clinical applications through generative AI. These efforts will leverage Microsoft's Azure AI platform and Nuance's radiology applications, focusing on developing advanced multimodal AI models to aid in disease classification, report generation, and data analysis, thereby addressing challenges like physician burnout and staffing shortages.

  • Accenture Pioneers Custom Llama LLM Models with NVIDIA AI Foundry Accenture has launched the AI Refinery framework in collaboration with NVIDIA AI Foundry to create custom large language models (LLMs) using the Llama 3.1 collection. This initiative allows enterprises to refine prebuilt models with their own data, facilitating domain-specific customizations. Key features include a platform for selecting model combinations, an enterprise-wide data index, and autonomous AI systems. This framework aims to help businesses deploy generative AI applications tailored to their unique needs.

  • SearchGPT is a prototype of new AI search features | OpenAI OpenAI is testing SearchGPT, a prototype that combines the AI model's capabilities with real-time web information to provide fast and relevant answers with clear sources. Launched to a select group of users and publishers, this tool aims to improve search experiences and support high-quality content discovery. SearchGPT emphasizes collaboration with publishers and offers controls for content management. The feedback from this prototype will help integrate its best features into ChatGPT in the future.

  • But what is interesting is what will happen to your data if you use SearchGPT. Care to know? SearchGPT Data Privacy FAQ | OpenAI Help Center The SearchGPT Data Privacy FAQ explains how OpenAI handles data for its AI-powered search tool. It shares de-identified (?) search queries and general location data with third-party providers to improve accuracy. Users can control precise location sharing and opt out of using conversations to improve search functionality. Search and chat histories are managed separately, and deleted search logs are removed from systems within 30 days unless required for legal reasons.

Regional and regulatory updates

AI has the potential to significantly boost Africa's economy, adding an estimated $2.9 trillion by 2030 through applications like predictive analytics in agriculture and energy. Despite infrastructure and energy access challenges, local investments and global partnerships are vital for success, as evidenced by Nigeria's Crop2Cash. The IMF assessed countries' AI readiness, with Denmark and the US leading, and warned about AI exacerbating global inequalities, advocating for equitable tech access. Nvidia is developing a compliant AI chip for China amidst U.S. export restrictions, balancing economic interests with national security. The Prompt Augmentation System from Peking University enhances AI model performance by optimizing prompts, with considerations for data privacy. Alibaba's Aidge AI toolkit, used by 500,000 merchants, improves e-commerce through AI tools. The USPTO updated AI patent eligibility guidelines to clarify application evaluations. The Biden-Harris Administration announced new AI safety actions and voluntary commitments from companies like Apple. California's SB 1047 proposes stricter AI regulations, sparking debate on innovation and consumer protection. The UN pushes for a global AI governance framework to unify international regulation and ensure ethical AI use.

  • The IMF assessed countries' readiness for AI integration, considering digital infrastructure and talent. Denmark leads with a score of 0.78, followed by other Western European nations and the US at 0.77. In the Middle East, the UAE and Israel rank highest, while Japan and South Korea top Asia with 0.73. China scores 0.64. The IMF warns of AI widening global inequalities and urges equitable access to advanced technology, supported by a recent UN resolution signed by 123 countries.

Source: IMF


  • Exclusive: Nvidia preparing version of new flagship AI chip for Chinese market | Reuters Nvidia is developing a version of its new flagship AI chip tailored for the Chinese market, designed to comply with current U.S. export restrictions. This strategic move aims to ensure Nvidia can continue to serve the significant demand for advanced AI technology in China despite regulatory constraints. The new chip version is expected to maintain high performance while adhering to export controls, highlighting Nvidia's adaptive approach in a complex geopolitical landscape. The U.S. government has permitted Nvidia to sell certain AI chips to commercial entities in China, but not its most advanced models like the H100 and H800. This move aims to balance economic interests with national security concerns. Nvidia’s adaptations ensure these chips stay just below the regulatory thresholds, allowing continued sales while adhering to U.S. guidelines.

  • PAS finds the best prompting technique for your LLM - TechTalks The Prompt Augmentation System (PAS) developed by Peking University and Baichuan can be accessed by researchers and developers through their official platforms. PAS is available for integration with various large language models, enhancing their performance by automatically optimizing prompts. While it offers significant improvements, users should consider security measures, particularly since it originates from China. Ensuring data privacy and compliance with international security standards is crucial when utilizing PAS.
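To make the idea of prompt augmentation concrete, here is a hypothetical, rule-based sketch of what such a system does: it appends complementary guidance to a user's prompt. PAS itself uses a trained model to generate these additions; this toy function, its keyword rules, and its tips are all illustrative inventions, not PAS's actual behavior:

```python
# Toy prompt augmenter: append complementary instructions chosen by simple
# keyword heuristics. A real system like PAS learns these additions from
# data; this rule table is purely a hypothetical illustration.

RULES = [
    (("calculate", "how many", "solve"), "Show your reasoning step by step."),
    (("summarize", "tl;dr"), "Keep the summary under five sentences."),
    (("code", "function", "script"), "Include comments and handle edge cases."),
]

def augment_prompt(prompt: str) -> str:
    """Return the prompt with any matching guidance appended."""
    extras = [tip for keywords, tip in RULES
              if any(k in prompt.lower() for k in keywords)]
    return prompt if not extras else prompt + "\n\n" + " ".join(extras)

print(augment_prompt("Write a function that parses dates."))
```

The augmented prompt is then sent to the LLM unchanged from the user's perspective, which is why this technique can lift benchmark scores without retraining the underlying model.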

  • Alibaba International’s Aidge AI Toolkit In Use By 500k Merchants Alibaba's Aidge AI toolkit, adopted by 500,000 merchants, enhances e-commerce through AI-powered tools like translation services, text and image generation, and customer service assistance. Available on platforms like AliExpress and Alibaba.com, Aidge has been used for over 100 million product listings, significantly improving content quality, click-through rates, and customer satisfaction. Small and medium-sized enterprises particularly benefit from these advancements, enabling them to streamline operations and reach new customers more effectively.

  • USPTO Issues Guidance Update on Subject Matter Eligibility of Artificial Intelligence | WilmerHale The USPTO has updated its guidance on the subject matter eligibility of AI inventions, aiming to clarify how AI-related patent applications are evaluated. The new guidelines emphasize that AI inventions must meet the statutory criteria for patent eligibility and avoid being classified as abstract ideas, which are ineligible for patents. WilmerHale's analysis highlights that this update seeks to provide clearer boundaries for patent examiners and applicants, aiding in the consistent application of patent laws to rapidly evolving AI technologies. This move is part of broader efforts to keep patent practices in line with technological advancements.

  • California AI bill: Scott Wiener explains the fight over his proposed bill, SB 1047 - Vox California's SB 1047, introduced by state Senator Scott Wiener, aims to regulate AI by imposing stricter liability on companies that develop and deploy powerful AI systems. This bill mandates that companies spending over $100 million on AI research and development implement robust safety measures and be held accountable for any harm caused by their AI technologies. The bill has sparked significant debate within the tech industry, with proponents arguing it is necessary for consumer protection and critics claiming it could stifle innovation.

  • Inside the United Nations’ AI policy grab – POLITICO The United Nations is pushing for a global AI governance framework to fill policy gaps and ensure cohesive international regulation. This initiative aims to address the fragmented landscape of AI policies and provide a platform for inclusive global representation, especially for countries that may be underrepresented in current AI governance structures. Despite criticisms and concerns about competing with existing efforts, the UN believes that a unified approach is necessary to effectively manage AI's global impact and ensure ethical and equitable use.

Gen AI for Business Trends, Concerns, and Predictions

Research from MIT reveals that large language models (LLMs) don't understand language like humans, relying instead on statistical patterns, which can cause logical errors. Addressing this will require advancements in AI, such as integrating common sense reasoning. Meanwhile, a recent Microsoft outage caused by a faulty software update from CrowdStrike affected 8.5 million devices, raising concerns about big tech's reliability. In the entertainment industry, SAG-AFTRA plans a strike over AI's impact on video game performers. The first wave of AI innovation has ended, with current focus on practical applications in enterprise operations. Figma rolled back its "Make Designs" feature due to originality concerns. Academic authors were upset after Taylor & Francis sold their research to Microsoft for AI development without consent, highlighting the need for better legal protections. A Stack Overflow survey showed developers increasingly use generative AI but have trust issues regarding its accuracy and ethics. A Forrester survey found executives misunderstand generative AI's capabilities, stressing the need for better training. A Capgemini report indicates growing investment in generative AI, emphasizing the importance of data governance and ethical considerations to maximize benefits.

  • Large language models don’t behave like people, even though we may expect them to | MIT News – MIT research indicates that large language models (LLMs) don't truly understand language like humans do, due to their reliance on statistical patterns rather than experiential learning. This leads to logical and contextual errors. Fixing these issues will require significant advancements in AI, including better integration of common-sense reasoning and context-aware processing. Researchers are actively exploring ways to bridge this gap, but achieving human-like understanding in AI remains a complex and ongoing challenge.

  • Microsoft Outage Could Hurt Trust in Big Tech to Safeguard AI - Business Insider Remember that little outage last week? Turns out, the Microsoft-CrowdStrike debacle was more than just a blip—it affected around 8.5 million devices and threw banks, airlines, and even pubs into chaos worldwide! The root cause was a faulty software update from CrowdStrike, which left systems unable to restart properly. This incident has sparked serious discussions about the reliability of AI and big tech, potentially shaking consumer trust.

  • Video game performers will go on strike over artificial intelligence concerns SAG-AFTRA, the union representing video game performers, has announced a potential strike over the use of AI in video games. The union is concerned about AI replacing human performers and demands better protections and compensation for voice actors and motion capture artists. This move highlights growing tensions in the entertainment industry as AI technologies continue to evolve and impact traditional roles.

  • The first wave of AI innovation is over. Here’s what comes next - Fast Company The first wave of AI innovation is over, and the focus is now on practical applications, especially using enterprise data to drive insights and efficiency. Companies are integrating AI into real-world operations, enhancing reliability and ethical use. The emphasis is on refining AI tools, ensuring data privacy, and leveraging AI for strategic decision-making. This new phase aims to harness AI's potential to transform various sectors by improving workflows, predicting trends, and optimizing resources.

  • An Update on our Make Designs Feature | Figma Blog Figma introduced the "Make Designs" feature at Config 2024, which used AI to generate UI design drafts based on user prompts. However, due to concerns over originality and similarities to existing designs, the feature was rolled back. Figma explained how this happened and is working on improving the feature to ensure it provides unique and valuable assistance to designers while addressing these concerns.

  • Academic authors 'shocked' after Taylor & Francis sells access to their research to Microsoft AI Academic authors have expressed shock after Taylor & Francis sold access to their research to Microsoft for AI development, without informing them or offering an opt-out option. The deal, worth $10 million, has raised concerns about transparency and authors' rights. The Society of Authors criticized the lack of consultation and potential impacts on copyright, data protection, and traditional sales. Authors are urged to check their contracts and seek guidance on their rights in such partnerships. This deal highlights the lack of laws protecting authors in such situations, as current regulations do not mandate consent or offer sufficient protections. Authors and advocacy groups are calling for better legal frameworks to safeguard intellectual property and ensure fair treatment in AI partnerships.

  • Developers aren’t worried that gen AI will steal their jobs, Stack Overflow survey reveals | VentureBeat A recent survey by Stack Overflow revealed a growing adoption of generative AI among developers, with 70% incorporating it into their work. Despite this, trust in AI-generated content lags, as only 30% fully trust the technology. The survey highlights concerns over accuracy, potential biases, and ethical considerations in AI outputs. This discrepancy between usage and trust suggests that while developers recognize the utility of generative AI, there is still significant hesitation regarding its reliability and impact.

  • https://www.forrester.com/blogs/even-genai-trained-execs-are-confused-about-it/ A Forrester survey found that even executives trained in generative AI (genAI) often misunderstand its capabilities. For example, 82% incorrectly believe genAI models can look up and validate facts, and 70% think these models will always produce the same outputs given the same prompt. This confusion is alarming since these execs are key decision-makers. Forrester suggests more effective training programs to bridge this knowledge gap and ensure proper use of genAI in business strategies. Yikes.

  • https://www.capgemini.com/wp-content/uploads/2024/07/Generative-AI-in-Organizations-Refresh-1.pdf A Capgemini report highlights that 80% of organizations have increased their investment in generative AI since 2023, with 24% integrating it into their operations, a significant rise from 6% the previous year. Generative AI is being adopted across various functions, driving improvements in productivity (7.8%) and customer engagement (6.7%). The report also underscores the importance of robust data governance, strategic partnerships, and ethical considerations to maximize AI benefits while mitigating risks. The emergence of AI agents is noted as a key future trend.

Source: Capgemini


News and updates around finance, costs, and investments

  • India’s Gen AI investment surges, but funding dips: Nasscom - The Economic Times India's investment in generative AI is surging despite a dip in funding, according to a NASSCOM report. The report highlights growing interest in and adoption of generative AI technologies across various sectors in India. However, overall funding for AI startups has seen a decline. The trend suggests that while enthusiasm for AI applications remains high, investors may be becoming more cautious or selective in their funding choices. In contrast, the U.S. generative AI market is experiencing a significant surge in investment. According to the AI Index Report 2024, AI investments in the U.S. reached $67.2 billion in 2023, with generative AI funding specifically nearly octupling to $25.2 billion compared to previous years. This reflects robust and accelerating growth driven by substantial fundraising rounds from major players like OpenAI, Anthropic, and Hugging Face (AI Index) (McKinsey & Company). The active involvement of the U.S. government in promoting AI research and innovation through various initiatives and policies also contributes to this upward trend.

What, where, and how Gen AI solutions are being implemented today

Intel has partnered with the International Olympic Committee to introduce a GenAI RAG solution, including AthleteGPT, for the 2024 Paris Olympics, leveraging Intel's Gaudi accelerators and Xeon processors to assist 11,000 athletes. Google’s Gemini AI will enhance the Olympics broadcast, providing personalized content and interactive features, including segments featuring NBC's Leslie Jones. Meta's Ray-Ban Meta smart glasses offer real-time information and hands-free capabilities. The integration of AI in the Olympics showcases how advanced technology is enhancing the athlete and viewer experience. AI is also making strides in other sectors: Visa combats fraud with AI, five cities enhance smart living with generative AI, social media platforms tackle misinformation, and AI transforms business travel and homecare services. Iconic brands like Nike and Coca-Cola use generative AI for innovative campaigns, and Diageo's Johnnie Walker features a generative AI bottle offering personalized tasting experiences.

  • Intel unveils GenAI RAG solution to support Olympic athletes Intel has partnered with the International Olympic Committee to introduce a GenAI RAG (retrieval-augmented generation) solution for the 2024 Paris Olympics. This includes AthleteGPT, a chatbot on the Athlete365 platform, designed to assist around 11,000 athletes with real-time information. The solution leverages Intel's Gaudi accelerators and Xeon processors to enhance performance and efficiency. This initiative showcases Intel's commitment to using AI to support athletes and drive innovation. I am looking forward to learning whether this tool was helpful and how it was used. If you've heard anything, share it in the comments.


  • Google’s Gemini AI will be all over the Paris Olympics broadcast - The Verge Another Olympic deployment involves Google's AI technology at the 2024 Paris Olympics. Google will use its Gemini AI to enhance ad experiences and support various aspects of the Games, including personalized content for spectators and interactive features to engage fans both at the event and remotely. Gemini is also being used by Leslie Jones, NBC's "chief superfan commentator," to engage viewers by learning new sports and sharing insights during broadcasts, while athletes and fans are using Google Lens, Circle to Search, and immersive features in Google Maps to explore Paris and its Olympic venues. Early feedback highlights the innovative use of AI to create more interactive and informative coverage, though detailed user experiences are still emerging. Meta, for its part, is enhancing the Olympics experience with its latest Ray-Ban Meta smart glasses, which come equipped with Meta AI and respond to voice commands with real-time information about what the wearer is seeing: landmark identification, scientific explanations, and hands-free video calls via WhatsApp and Messenger. Where the glasses focus on hands-free, in-the-moment assistance, Google's Gemini offers deeper AI integration into the broadcast itself, with AI Overviews, AI-driven content during coverage, and Google Maps Platform's Photorealistic 3D Tiles for detailed venue views.
Are you at the Olympics, and are you using any of the AI technology mentioned above?

  • How Visa employed artificial intelligence to check $40 billion in fraud as scammers also take to AI Visa has used AI and machine learning to combat $40 billion in attempted fraud. Developed and refined over several years, these systems detect and prevent fraud by analyzing transaction patterns and flagging suspicious behavior in real time. The advanced algorithms strengthen security measures, ensuring safer transactions for consumers and businesses alike, and Visa's proactive approach underscores the growing importance of AI in maintaining financial security and trust in digital transactions.
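Visa's production models are proprietary, but the core idea of flagging a transaction that deviates sharply from a customer's historical pattern can be illustrated with a toy statistical rule. Everything below — the z-score rule, the threshold, and the sample amounts — is an illustrative assumption, not Visa's actual method:

```python
from statistics import mean, stdev

def is_suspicious(history, amount, z_threshold=3.0):
    """Flag a transaction whose amount deviates strongly from the
    customer's historical spending (toy z-score rule for illustration)."""
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > z_threshold

history = [42.0, 38.5, 55.0, 47.2, 51.3]   # typical card spend
print(is_suspicious(history, 49.0))        # in-pattern purchase -> False
print(is_suspicious(history, 2500.0))      # far outside the pattern -> True
```

Real fraud systems layer many more signals (merchant, geography, velocity) and learned models on top, but the pattern-deviation principle is the same.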

  • These 5 cities are making innovative use of generative AI The World Economic Forum highlights how five global cities are leveraging generative AI to enhance smart city initiatives. Buenos Aires uses a versatile chatbot, "Boti," for public services; Singapore has developed over 100 generative AI solutions, including educational tools; Amsterdam focuses on sustainable material generation; Dallas tests generative AI-powered autonomous vehicles; and Boston employs generative AI to envision bike-friendly infrastructure. These innovations demonstrate how cities are integrating advanced AI to improve urban living.

  • Social Media Platforms Implement GenAI Tools and Policies While Fighting Misinformation - Acceleration Economy Social media platforms like TikTok and LinkedIn are integrating generative AI tools to enhance user and advertiser experiences while tackling misinformation. TikTok’s Symphony AI Suite helps advertisers create effective ads quickly, while LinkedIn's AI features simplify job searches and resume writing. However, platforms face challenges with AI-generated misinformation and deepfakes. Efforts to label and remove such content are ongoing, highlighting the balance between innovation and user protection.

  • 7 Ways AI Is Changing Business Travel AI is transforming business travel by personalizing travel recommendations with tools like FCM Travel Solutions' AI platform, automating expense management through SAP Concur, and enhancing travel safety with International SOS's real-time updates. It streamlines booking processes with Lola.com, predicts travel disruptions using Amadeus’ AI, assists travelers through Egencia’s chatbots, and strengthens data security. These advancements are making business travel more efficient, safer, and tailored to individual needs. Which ones have you tried?

  • Exclusive: How The New York Times' Granular Gen AI Tool Drives Campaign Performance The New York Times' granular GenAI tool was developed in collaboration with IBM Watson Advertising. This tool helps enhance campaign performance by providing highly targeted ad recommendations based on detailed data analysis. The integration of this AI technology demonstrates a shift towards more sophisticated and data-driven marketing approaches in the media industry.

  • From Nike to Coca-Cola, How Iconic Brands Are Innovating with Generative A.I. Iconic brands like Nike and Coca-Cola are leveraging generative AI to enhance their advertising campaigns. Nike's "Never Done Evolving" campaign used AI to feature Serena Williams in an innovative ad. Coca-Cola's "Create Real Magic" contest invited users to create AI-generated artwork, resulting in 120,000 unique interpretations of the brand. Generative AI is helping brands create engaging, personalized content, improve campaign performance, and drive creativity without replacing human input.

  • Diageo launches generative AI bottle for Johnnie Walker Diageo has jazzed up Johnnie Walker Blue Label with a limited-edition "Generative AI Edition" bottle. Scan it with your phone, and you'll get a personalized tasting experience narrated by master blender Emma Walker. This tech-savvy twist blends traditional whiskey tasting with cutting-edge AI, making your drink not just a sip, but an experience! Might make a good present for someone :).

Women Leading in AI

New Podcast: Join us as we dive into "Exploring Emerging AI Trends and Entrepreneurship" with the brilliant Lindsey Witmer Collins, founder of WLCM "Welcome" App Studio. She shares the game-changing potential of distributed AI and how personal AI could revolutionize our online interactions.

Featured AI Leader: Women And AI’s Featured Leader - Madhu Vohra. Madhu is the Founder and CEO of dabbL, a 24/7 guidance-counselor app for students. She shares that the dabbL team is "harnessing AI to make our day to day very effective so every minute is productive."

Learning Center and How To’s

  • AI is confusing — here’s your cheat sheet - The Verge The Verge article explains key AI terminology in simple, human-friendly terms. It covers concepts like machine learning, neural networks, deep learning, and natural language processing, helping readers understand how these technologies work and their implications.

  • Byte-Sized Courses: NVIDIA Offers Self-Paced Career Development in AI and Data Science NVIDIA has introduced self-paced career development programs in AI and data science through its Deep Learning Institute (DLI). These programs include free access to courses, webinars, and certifications designed to enhance skills and career growth in AI. The initiative aims to make advanced AI education accessible to professionals and students alike, emphasizing the importance of networking and leveraging community resources. NVIDIA's offerings highlight the growing demand for AI expertise across various industries. Which ones have you taken, and which will you take next?

Prompt of the week

Prompting - OpenAI Developer Forum The OpenAI Developer Forum's "Prompting" category is a hub for discussing and refining prompts for AI models. Users share insights on optimizing prompts, addressing issues like model behavior and prompt effectiveness. Topics include using system messages, preventing undesired responses, and integrating APIs. It's a valuable resource for anyone! Have you tried it yet?
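The "system message" technique the forum discusses is easy to see in miniature: the system role sets standing rules (tone, format, refusal behavior), and the user role carries the request. The sketch below only builds the request payload in the OpenAI Chat Completions shape — no API call is made, and the model name and rules are illustrative assumptions:

```python
def build_chat_payload(system_rules, user_prompt, model="gpt-4o-mini"):
    """Assemble a chat request where a system message constrains the
    model before the user's question arrives (Chat Completions shape)."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_rules},
            {"role": "user", "content": user_prompt},
        ],
    }

payload = build_chat_payload(
    system_rules=(
        "You are a concise B2B analyst. Answer in three bullet points. "
        "If you are unsure, say so instead of guessing."
    ),
    user_prompt="Summarize the business case for retrieval-augmented generation.",
)
print(payload["messages"][0]["role"])  # -> system
```

Keeping rules in the system message rather than the user prompt is one of the forum's recurring tips for preventing undesired responses, since later user turns are less likely to override it.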

Tools and Resources

  • https://techcommunity.microsoft.com/t5/educator-developer-blog/build-powerful-rag-apps-without-code-using-langflow-and-azure/ba-p/4193542 Microsoft's Tech Community blog details how to build powerful Retrieval-Augmented Generation (RAG) applications without coding using LangFlow and Azure OpenAI. LangFlow is a drag-and-drop framework that allows users to create custom GenAI applications by assembling various components visually. The guide provides a step-by-step tutorial on setting up LangFlow, integrating it with Azure OpenAI, and building applications like a food recommendation system based on dietary guidelines. This approach simplifies the development process, making advanced AI capabilities accessible without extensive programming knowledge.
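LangFlow wires these steps together visually, but it helps to see what a RAG pipeline does conceptually: retrieve the most relevant documents, then augment the prompt with them before the model answers. The sketch below is a deliberately naive stand-in — retrieval is plain keyword overlap rather than the embedding search Azure OpenAI would provide, and the model call itself is omitted:

```python
def retrieve(query, documents, top_k=1):
    """Naive retrieval: rank documents by word overlap with the query.
    (A real RAG app would use embeddings and a vector store.)"""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_rag_prompt(query, documents):
    """Augment the user's question with retrieved context so the model
    answers from the documents instead of from memory alone."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Dietary guidelines recommend five portions of fruit and vegetables daily.",
    "The Eiffel Tower is 330 metres tall.",
]
prompt = build_rag_prompt("What do dietary guidelines recommend?", docs)
print("fruit" in prompt)  # -> True: the dietary document was retrieved
```

The no-code appeal of LangFlow is that each box in its canvas corresponds to one of these functions — swap the toy retriever for a vector store component and the f-string for a prompt template, and you have the food-recommendation example from the tutorial.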

  • https://github.com/hrishioa/mandark The GitHub repository "Mandark" by user hrishioa is an open-source project focused on developing an AI-driven assistant. This project leverages machine learning techniques and is designed to assist with various tasks through a conversational interface. The repository includes documentation and code to help users set up and contribute to the project. It aims to create a user-friendly and efficient assistant capable of handling a wide range of queries.


If you enjoyed this newsletter, please comment and share. If you would like to discuss a partnership, or invite me to speak at your company or event, please DM me.

Ewa D.

AI Product & UX Advisor | UX4AI | Product & Design Leader | LinkedIn Top Voice: UX, User Experience Design, AI

3 months ago

Another great issue, Eugina Jordan. A study suggested 90% of marketing assets will be created by GenAI by 2025 (that's very soon). Are CMOs concerned about the quality and effectiveness of AI-generated creatives? How will they ensure that these assets align with their brand identity and marketing strategy?

Melissa Cohen

Personal Branding and LinkedIn Strategy | Build Your Brand, Find Your Voice, Build Your Business | Amazon Bestselling Author | The Good Witch of LinkedIn

3 months ago

Thank you for this recap Eugina. There is so much going on and so many advances at such a rapid pace that it can be difficult to keep up! I appreciate your newsletter keeping me up to date.

Uzma khan

Freelance Community Builder | PR words | Content writer

4 months ago

Eugina Jordan, thank you for another insightful edition of Gen AI for Business! The updates on AI models, partnerships, and regulatory shifts are invaluable for staying ahead in the rapidly evolving tech landscape. The spotlight on women leading in AI and the trends and predictions section are particularly inspiring. Keep up the fantastic work in advancing AI knowledge and leadership.

Jenny Kay Pollock

Fractional CMO | Driving B2C revenue & growth ?? ?? | Keynote Speaker | Empowering Women in AI

4 months ago

Another great read thanks for helping me stay up to date with all things AI!
