This Week In Gen AI - Week 8

This Week in Generative AI, brought to you by ChatGPT: a summary of generative AI news from Week 8, starting on February 19, 2023.


https://generativeaiconference.org/news?


Some ways for generative AI to transform the world

Author: Bryan Alexander

  1. Generative AI is predicted to improve in quality and grow in instances, becoming better at making content, such as text, images, audio, video, games, 3D printing, and XR.
  2. Humans will increasingly fill the world with AI-generated stuff and new categories of generative AI will emerge, resulting in another great round of new media cultural reformations.
  3. The nature of truth in the media might come to the fore, and passable fakes like GAI-crafted evidence might cause problems in courts, politics, intimate betrayals, business fraud, etc.
  4. Generative AI will affect various domains and markets, and new businesses and cultures will emerge, while others will take hits or fail completely, depending on how they respond to the generative AI age.
  5. Governments will fumble with policy and regulation, and militaries and spy agencies will exploit generative AI, leading to legal disputes, while political actors will increasingly use GAI to make propaganda in various media and forms.

Generative AI News Rundown 2 - Bing Goes Wild, Google Bard Code Red, and More

Author: Bret Kinsella

  1. Voicebot.ai and Synthedia produced a weekly show called the Generative AI News Rundown.
  2. Topics include Microsoft Bing's early success and misadventures, Google Bard testing ramps up inside of Google, new generative AI enterprise solutions from Jasper AI and Veritone, and how large language models will change web browsing.
  3. Eric Schwartz from Voicebot.ai joined Bret Kinsella for a live discussion about the generative AI news of the week.
  4. Jasper AI goes from writing assistant to enterprise API, while Veritone introduces two new generative AI solutions for HR and media.
  5. Announcements from Opera and Yext are more evidence that web browsing and SEO practices are about to change.


Early thoughts on regulating generative AI like ChatGPT - Brookings

Author: Alex Engler

  1. Generative AI models, which aim to produce aesthetically pleasing text, images, or audio outputs by mimicking patterns in their underlying data, have gained public attention and interest from policymakers, given the risks of both legitimate commercial use and malicious use.
  2. Generation is not the same as comprehension, as exemplified by ChatGPT's and DALL-E 2's failures on basic tests of understanding; text generation models may produce authoritative-sounding text without truly knowing much about the world.
  3. The commercial risks of generative AI stem from the joint development process, where the downstream developer may overestimate the generative AI model's capacity, leading to errors and unexpected behavior, especially for high-stakes decisions that can affect people's lives. Policymakers should carefully scrutinize the use of generative AI for such applications, and generative AI companies should proactively share information with downstream developers.
  4. The malicious use of generative AI can lead to non-consensual pornography, hate speech, targeted harassment, disinformation, and scams. Tech companies that provide generative AI models can take measures to prevent malicious use, such as manipulating input data, monitoring users, and watermarking generated text.
  5. The efficacy of risk management efforts in preventing malicious use should be considered more like content moderation, where the best systems only prevent some proportion of banned content.
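The watermarking idea mentioned in point 4 can be made concrete with a toy sketch. This is an illustrative, hypothetical implementation of the "green list" approach, not any vendor's actual scheme (real systems bias the model's token logits during decoding; the function names, the pair-hash seeding, and the 50% green fraction here are all assumptions for demonstration): a generator prefers tokens that hash onto a context-dependent green list, and a detector flags text whose green fraction is far above chance.

```python
import hashlib
import random

def on_green_list(prev_token, token, vocab_frac=0.5):
    """A token is 'green' for a given context if a hash seeded by the
    previous token lands in the green portion of the vocabulary."""
    h = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return h[0] < 256 * vocab_frac

def watermarked_sequence(start, candidates, length, vocab_frac=0.5):
    """Toy 'generator': at each step, pick a candidate token that is
    green for the current context (falling back to the first candidate)."""
    seq = [start]
    for _ in range(length):
        prev = seq[-1]
        pick = next((c for c in candidates if on_green_list(prev, c, vocab_frac)),
                    candidates[0])
        seq.append(pick)
    return seq

def green_fraction(tokens, vocab_frac=0.5):
    """Detector statistic: fraction of consecutive token pairs that are green.
    Unwatermarked text hovers near vocab_frac; watermarked text is near 1."""
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(on_green_list(p, t, vocab_frac) for p, t in pairs)
    return hits / max(1, len(pairs))
```

A watermarked sequence scores a green fraction near 1.0 while randomly chosen tokens score near the baseline 0.5, which is what makes the signal statistically detectable, and also why, as point 5 notes, detection works probabilistically, like content moderation, rather than perfectly.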

Could Big Tech be liable for generative AI output? Hypothetically 'yes,' says Supreme Court justice

Author: Sharon Goldman

The article discusses the potential liability of Big Tech companies for the output of generative AI systems, based on a recent Supreme Court hearing. Here are the main points:

  1. During a hearing about a Google case that could impact online free speech, Supreme Court Associate Justice Neil M. Gorsuch raised the issue of potential liability for generative AI output.
  2. The case in front of the Court involves a family of an American killed in an ISIS terrorist attack arguing that Google and YouTube did not do enough to remove or stop promoting terrorist videos.
  3. In lower court rulings, Google won with the argument that Section 230 of the Communications Decency Act shields it from liability for what its users post on its platform.
  4. Gorsuch used generative AI as a hypothetical example of when a tech platform would not be protected by Section 230, suggesting that AI-generated content that goes beyond picking, choosing, analyzing or digesting content might not be protected.
  5. The article notes that legal battles have been brewing for months over generative AI, including a class-action lawsuit against GitHub, Microsoft, and OpenAI, and the first class-action copyright infringement lawsuit around AI art. Questions about liability may move to the forefront of legal issues surrounding Big Tech and generative AI.


Coca-Cola Signs As Early Partner for OpenAI's ChatGPT, DALL-E Generative AI

Author: Lisa Johnston

  1. The Coca-Cola Company has announced it will use OpenAI’s generative AI technology for marketing and consumer experiences, making it one of the first major consumer goods companies to use this technology.
  2. Coca-Cola will work with OpenAI and Bain & Company to use OpenAI’s ChatGPT and DALL-E platforms to develop personalized ad copy, images, and messaging.
  3. Coca-Cola sees opportunities to enhance its marketing efforts and is also exploring ways to improve business operations and capabilities using generative AI.
  4. OpenAI and Bain & Company have formed an alliance to bring these technologies to Bain’s clients, and Coca-Cola is the first company to sign on.
  5. While generative AI has received some criticism for hallucinating and potentially infringing on copyrights, companies like OpenAI and Microsoft are working to improve the models and make the technology safer, and CMOs are expected to identify accountability for ethical AI in marketing as a top concern.


Andrew Ng: Even with generative AI buzz, supervised learning will create 'more value' in short term

Author: Victor Dey

  1. VentureBeat conducted an interview with Andrew Ng, a prominent figure in the AI industry and founder of Landing AI, where he discussed his thoughts on generative AI, its future prospects, and his perspective on how to efficiently train AI/ML models.
  2. Ng believes generative AI is similar to supervised learning, a general-purpose technology, and its specific use cases to transform different businesses are yet to be figured out.
  3. Landing AI is currently focused on helping users build custom computer vision systems with supervised learning, but has internal prototypes exploring use cases for generative AI.
  4. The ethical questions and concerns about generative AI need to be taken very seriously and companies are still figuring out the best practices for implementing this technology in a responsible way.
  5. Ng encourages non-AI-centric companies to start small and try to get things working with actual data, rather than only thinking about it, and he disagrees that deep learning is reaching its limits.


How Nvidia dominated AI — and plans to keep it that way as generative AI explodes

Author: Sharon Goldman

  1. Nvidia's GPUs were instrumental in the development of AlexNet, the neural network that won the 2012 ImageNet competition, leading to major AI successes over the past decade, including Google Photos, Google Translate, Uber, Alexa and AlphaFold.
  2. Many experts predict that Nvidia's AI dominance will continue, with Nvidia's ecosystem of hardware and software optimized for its chips, its strong platform strategy and software-focused approach making it difficult to beat. However, competitors such as AMD and Google, as well as geopolitical forces, pose potential threats.
  3. In 2014, Nvidia's CEO, Jensen Huang, fully embraced the AI mission, realizing that AI was the future of the company, and they bet everything on it. For the past seven years, Nvidia has focused on building deep learning software in libraries and frameworks that abstract the need to code in CUDA, putting CUDA-accelerated libraries into more widely used Python-based libraries.
  4. By 2018, GPUs were used for inference, and Nvidia became more of a software company than a hardware company. Nvidia optimized AI development standards and frameworks for GPUs and embraced the ecosystem to provide industry-defining software. With Nvidia's channel strategy and the ecosystem growth flywheel, customers are buying into the Nvidia value chain.
  5. Nvidia's lead in the AI chip market has diminished with new players such as Habana Labs and Google's TPU v4, but Nvidia is expected to launch its Hopper H100 chip with a new architecture designed for AI infrastructure.
  6. Nvidia's software strategy, including its industry-defining software and higher-level use-case specific software, may help maintain its position in the market.
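The strategy in point 3 of wrapping CUDA-accelerated code behind familiar Python APIs can be sketched as a dispatch pattern. This is an illustrative sketch, not Nvidia's actual code: CuPy is a real NumPy-compatible CUDA library, while the pure-Python fallback here is a deliberately naive stand-in for a CPU code path.

```python
def _matmul_cpu(a, b):
    """Naive pure-Python fallback, standing in for a CPU code path."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

try:
    # If a CUDA-capable backend is installed, dispatch the same API to the GPU.
    import cupy as _cp

    def matmul(a, b):
        return _cp.asnumpy(_cp.asarray(a) @ _cp.asarray(b)).tolist()
except ImportError:
    matmul = _matmul_cpu
```

Callers just write `matmul(a, b)`; whether the work runs through CUDA is hidden behind the library boundary, which is the kind of abstraction the article credits for pulling Python users into Nvidia's ecosystem.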


Generative AI Changes Everything We Know About Cyberattacks

Author: Patrick H.

  1. Threat actors are using generative AI and chatbots like ChatGPT to launch cyberattacks, increasing the speed and variation of their attacks and modifying the appearance of video and voice.
  2. New AI technologies like OpenAI's ChatGPT, Google's Bard, and Microsoft's AI-powered Bing will be adopted by organizations and cybercriminals alike as they become available, with one side using them to perpetrate cybercrime and the other to stop it.
  3. The abuse of AI for malicious intent is in its infancy, but it is heating up everywhere and fundamentally changing everything we know about how cybercriminals develop and deploy attacks with increased speed and variations.
  4. Organizations can prepare by fighting AI cyber threats with AI cybersecurity technology, including AI-based cybersecurity products to manage detection and response, and developing cyber defenses capable of stopping malware and business email compromise (BEC) threats developed with generative AI.
  5. The abuse of AI is something we have long known would happen, and organizations should expect an increase in cyberattacks targeting people on LinkedIn, Microsoft Teams, Messenger, and Slack, taking advantage of the most vulnerable part of any organization: its people.
