AI Prompt Engineering: Too Long; Didn't Watch (TL;DW), or How to Create Amazing Video Summaries with Google Bard
Jocelyn DeGance
Customer Success focused Marketing Leader, AI Strategist & Board Advisor
If you are using an AI chatbot that has direct internet access, such as Google Bard, you can do more than just summarize webpages and docs; you can summarize videos, including asking for key moments, quotes, and time stamps. In this post, I'll walk you through how to do this step by step.
While watching "The AI Dilemma", I wondered if I could get a full summary, not a pre-written transcript or overview of the content. So I prompted:
Review this video and create a summary: https://www.youtube.com/watch?v=xoVJKj8lcNQ
Here’s what I got back:
"The AI Dilemma" by Tristan Harris and Aza Raskin. In this talk, Harris and Raskin argue that the development of artificial intelligence (AI) is facing a major dilemma. On the one hand, AI has the potential to solve some of the world's most pressing problems, such as climate change and poverty. On the other hand, AI also has the potential to be used for harmful purposes, such as creating autonomous weapons systems or mass surveillance tools.
Harris and Raskin argue that the current approach to AI development is not sustainable. They say that we need to fundamentally rethink the way we develop and deploy AI, in order to ensure that it is used for good and not for harm. They propose a number of ways to do this, including:
A great start, but a bit stale in terms of content, so I prompted:
What is the most controversial viewpoint that this video conveys, use quotes to highlight
"The most controversial viewpoint conveyed in the video "The AI Dilemma" by Tristan Harris and Aza Raskin is that we are facing a "moral alignment problem" with AI. They argue that AI systems are currently designed to maximize engagement and profit, regardless of the consequences. This could lead to AI systems being used to manipulate people, spread misinformation, or even harm them.
They highlight this viewpoint with the following quotes:
This viewpoint is controversial because it challenges the conventional view of AI as a neutral tool that can be used for good or bad. Harris and Raskin argue that AI is not neutral, and that it is important to consider the moral implications of its development and use.
This viewpoint is also controversial because it suggests that we need to take action to prevent AI from being used for harmful purposes. This could involve regulation, self-regulation by the tech industry, or public education about the risks of AI.
Whether or not you agree with Harris and Raskin's viewpoint, it is important to consider the potential risks of AI and to have a conversation about how to ensure that it is used for good."
Even better. Now for the real test: isolating the part of the video that I wanted more info on. I knew from watching the first 20 minutes that the speakers referred to something called 'Gollum class AIs'. So I prompted:
In the talk "The AI Dilemma", Tristan Harris and Aza Raskin refer to "Gollum class AIs", what does that mean?
"Gollum is a new class of AI that is capable of generating text, images, and videos that are indistinguishable from human-created content. They argue that this new class of AI poses a significant risk, because it could be used to create deepfakes, propaganda, and other forms of disinformation.
The name "Gollum" is a reference to the character from J.R.R. Tolkien's Lord of the Rings trilogy. Gollum is a creature who is corrupted by the One Ring, a powerful magical artifact. He is obsessed with the Ring and will do anything to possess it, even if it means harming others.
Harris and Raskin argue that Gollum class AIs are like Gollum in that they are driven by a single goal: to generate content that is as realistic as possible. They say that this goal is so important to these AIs that they are willing to sacrifice other considerations, such as truth or morality.
The potential risks of Gollum class AIs are significant. They could be used to create deepfakes that are indistinguishable from real videos, or to spread propaganda that is designed to manipulate people's opinions. They could also be used to create content that is harmful or offensive.
It is important to be aware of the potential risks of Gollum class AIs. We need to be careful about how we use this technology, and we need to develop safeguards to prevent it from being used for harmful purposes.
Here are some of the specific risks associated with Gollum class AIs:
It is important to be aware of these risks and to take steps to mitigate them. Some of the things we can do include:
The development of Gollum class AIs is a significant technological advancement. However, it is important to be aware of the potential risks associated with this technology and to take steps to mitigate them."
(This is all correct; nice job, Bard, and I really like the Tolkien reference.) For the final test, I prompted:
At what point in the video do they talk about Gollum? give me the exact time stamp and duration
"Sure. They talk about Gollum at 17:30 to 17:50 in the video. The exact quote is:
"We're building systems that are like Gollum. They're obsessed with one thing, and they're willing to sacrifice everything else to get it. And that one thing is engagement."
Pretty close: they actually talk about it from 17:44 to 18:12. Still, it's enough of a pointer into an hour-long discussion to find and watch the part I was looking for.
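If you want to sanity-check a timestamp the chatbot gives you, the arithmetic is simple. Here's a small sketch (the `to_seconds` helper is my own, not anything from Bard) comparing Bard's reported start time against the actual one:

```python
# Quick check of how far off Bard's timestamp was, assuming plain MM:SS
# notation. Bard said 17:30 to 17:50; the segment actually runs 17:44 to 18:12.

def to_seconds(ts: str) -> int:
    """Convert an MM:SS timestamp to a total number of seconds."""
    minutes, seconds = ts.split(":")
    return int(minutes) * 60 + int(seconds)

bard_start = to_seconds("17:30")      # 1050 seconds
actual_start = to_seconds("17:44")    # 1064 seconds

print(actual_start - bard_start)      # prints 14: Bard's start was 14 seconds early
```

For a long video, being off by a few seconds is still plenty accurate to jump to the right spot.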
Not sure if Tristan and Aza will love or hate that I used AI to summarize their video and to experiment with my new prompts, but here it is. Hope you enjoyed the AI power user tip!
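For readers who want to reuse this workflow, the four prompts above follow a repeatable pattern: summarize, probe for controversy, drill into a term, then ask for a timestamp. Here's a minimal sketch of that chain as templates; the `send_prompt` function is a hypothetical placeholder, since Bard is used through its chat UI rather than a script:

```python
# Sketch of the TL;DW prompt chain from this walkthrough. The prompts are
# real; send_prompt() is a stub standing in for however you reach the chatbot.

VIDEO_URL = "https://www.youtube.com/watch?v=xoVJKj8lcNQ"

def build_prompts(url: str) -> list[str]:
    """Return the four-step prompt chain: summary, controversy, term, timestamp."""
    return [
        f"Review this video and create a summary: {url}",
        "What is the most controversial viewpoint that this video conveys? "
        "Use quotes to highlight.",
        'In this talk, the speakers refer to "Gollum class AIs". '
        "What does that mean?",
        "At what point in the video do they talk about Gollum? "
        "Give me the exact time stamp and duration.",
    ]

def send_prompt(prompt: str) -> str:
    """Placeholder: in practice, paste each prompt into the chatbot's UI."""
    return f"(response to: {prompt[:40]}...)"

for prompt in build_prompts(VIDEO_URL):
    print(send_prompt(prompt))
```

Swap in any video URL and adjust the third prompt to whatever term or moment you want to drill into.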