Practical AI for Marketers: Where to begin?
My daily dose
AI. I love it. I use it all day, every day. It acts as my creative foil, sounding board, editor, and questionable fact checker. It helps me generate healthy recipes and understand my dogs’ weird behaviors. It’s one thing to create recipes, though, and it’s another to put AI to work practically in our roles as marketers.
AI is top of mind for many of my friends and colleagues in Marketing. We all seem to agree that AI is going to have a major impact, but where opinions seem to diverge is on the value. Is it ready for prime time? Is it being overhyped? Can it actually help?
The answer to all three of these questions is, “Yes!”
Getting my start with AI
When I joined Corero last April, I was just learning about AI and could clearly see how it was going to change the marketing game. At the same time, I was grappling with how our small team and modest budget could achieve the business's ambitious goals. It was clear that AI would be critical to our success, so I wanted to get started yesterday. But it wasn't so simple. Resources were, and still are, scant, and it's been a slog.
After a year of plugging away, I thought I'd share my experiences in hopes that they may help others. So, where do we start?
Step 1
Start with realistic expectations.
Your success, or lack thereof, will hinge upon the expectations you set at the beginning and how malleable those expectations are as you continue on your journey. To put it bluntly, if you are not finding success, the issue does not lie with the technology. It is more likely due to unrealistic expectations.
Here’s what I’ve come to learn:
Bring me a rock
If you’re unfamiliar with the “Bring me a rock” metaphor, it goes like this:
Imagine that you say to one of your team members, “Bring me a rock.” They go outside, grab a rock, and bring it to you. It’s not what you wanted, so you say, “This isn’t the one I was thinking of. Please try again.” They come back with another rock, but it’s not what you wanted, so you send them back out for a different rock. And on it goes to the mutual frustration of both parties.
Prompting an AI large language model (LLM) like ChatGPT with a broad prompt like, “Write me a solution brief about our firewall” is the AI equivalent of “Bring me a rock.” The more specific the prompt, the better the results. Think of the LLM as a summer intern with no knowledge of your business. What guidance and detail would you provide to help them be successful for any given assignment? Now apply that approach to your prompts. The quality of your prompts will directly affect the quality of the output you receive.
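To make the "summer intern" briefing concrete, here is a minimal sketch contrasting a vague prompt with a briefed one. All of the product details in the briefed version are placeholders I made up for illustration, not real specs.

```python
# The "bring me a rock" prompt: no audience, no facts, no format.
vague = "Write me a solution brief about our firewall."

# The briefed-intern prompt: audience, product facts, structure, and tone.
# Every product detail below is a placeholder, not a real spec.
specific = "\n".join([
    "You are a B2B cybersecurity copywriter.",
    "Write a one-page solution brief aimed at network security engineers.",
    "Product (placeholder specs): an on-premises DDoS protection appliance that",
    "- detects and blocks volumetric attacks in seconds, and",
    "- deploys inline with no changes to existing routing.",
    "Structure: problem, solution, three benefits, call to action.",
    "Tone: technical but accessible; avoid marketing superlatives.",
])
```

The second prompt takes a minute longer to write, but it is the difference between getting a rock and getting the rock you wanted.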
Think big. Start small.
There are so many possibilities with AI that just the idea of actually using it becomes overwhelming. In my case, I could see the highly automated, AI-powered marketing team of the future. And man, what a beautiful future. We humans could focus on the strategy while the machines handled the tedium of execution. The one obstacle? I had absolutely no idea how to start.
I take lots of notes in meetings, but not when I'm the one speaking, and I talk a lot. So in my weekly team meetings, I often missed action items that never made it into my notes, and I was too busy afterward to write up minutes. I knew I was not following up on important tasks. Here was my first AI use case: let ChatGPT create the meeting minutes and action items from a transcript of the meeting.
Now that I had a use case, I identified the tool I'd use to start the project.
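If you want to try the same use case, the heart of it is just a well-briefed prompt wrapped around your transcript. A minimal sketch (the wording is illustrative; swap in your own client call to ChatGPT, Claude, or whatever tool you pick):

```python
# Sketch of the meeting-minutes use case: turn a transcript into a prompt
# that asks for minutes plus action items. Pass the result to your LLM
# client of choice; the instructions below are illustrative, not canonical.

def build_minutes_prompt(transcript: str) -> str:
    return (
        "You are a meticulous meeting scribe.\n"
        "From the transcript below, produce:\n"
        "1. Meeting minutes: one short paragraph per topic discussed.\n"
        "2. Action items: a bulleted list with owner and due date, "
        "marked 'TBD' when the transcript doesn't say.\n"
        "Do not invent topics or owners that aren't in the transcript.\n\n"
        f"Transcript:\n{transcript}"
    )
```

Note the guardrail at the end: telling the model not to invent owners or topics won't eliminate hallucinations, but it helps, and it reminds you to check the output against the transcript.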
This changes everything!
Go to YouTube, search for AI news, and look at how often someone proclaims that the latest update to whatever has changed everything. Is this a lack of creativity? Is it just clickbait? Sure, there's some of that, but everything in AI is changing... all the time.
Don’t waste your time trying to pick the best tool. Pick one tool and stick with it. The goal here is to build skills, not to use the latest and greatest. That awesome tool you’ll choose today will be eclipsed by something “better” within a week or less.
Case in point: Prompting. Having an understanding of how LLMs work will help you to write more effective prompts. (You don't need to be a data scientist to write effective prompts any more than you need to be an auto mechanic to drive a car. A rudimentary understanding is sufficient.) Once you gain a level of proficiency, you can make informed choices when it comes to testing other tools.
In my meeting minutes use case, I started with ChatGPT and was not impressed. This compelled me to dig deeper into prompting best practices and to continuously iterate on my prompts. Finally, I reached the point where I felt confident that the issue was less my prompts and more ChatGPT itself. It was then that I started comparing ChatGPT and Claude 2.
Today, I use both knowing which performs better for different use cases. It was because I’d invested the time to build my knowledge with one tool that I could articulate my requirements in testing another and find a workable solution.
Rinse and repeat and repeat and repeat
The state of AI today is, at best, in its alpha testing stage. Expect plenty of inconsistent results and weirdness to creep in here and there.
These things are going to happen, but that doesn’t mean the value in these tools has been diminished. It just means you have to develop a sense of their limitations. They can’t do everything. So, if a tool fails repeatedly for one of your tests, you’ve still made a constructive discovery. Now move on to a different use case and test. Each trial will help you identify what can be, and what shouldn’t be, in scope.
It's still all about us
By now, I think we’ve all heard the story of the attorney who used ChatGPT as his basis for filing a lawsuit. Imagine his surprise when he found out that all of his citations were bogus. Had he bothered to check, he would have known this before he submitted his brief to the court. While an extreme example, it’s the epitome of why humans need to stay in the loop. We are the subject matter experts, and we always need to be sure to check AI’s work.
LLMs are predictive models that use complex calculations to string their responses together. They have no ability to understand truth or facts. That's OK for many use cases but it doesn't negate the need – the requirement – for a human to oversee the LLM's work.
Here's a personal example. I wanted to compare and contrast Corero's threat intel report with some of our competitors' reports. I popped three reports into Claude 2 and asked it to produce key findings for each report. At first, I was excited to see how quickly the results came back. This was going to save me so much time. Then I noticed that some of our key findings were being attributed to a competitor, while some of our competitors' key findings were being attributed to us. I caught the error only because of my knowledge of our research. Now I know: Only summarize one document per chat session and fact check everything.
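If you script this kind of comparison, the "one document per session" rule is easy to enforce in code: make one isolated call per report so there is no shared context for findings to bleed across. A minimal sketch, where `llm_call` is a hypothetical stand-in for whatever client function you actually use:

```python
# One isolated LLM call per document, so findings can't cross-contaminate.
# "llm_call" is a hypothetical placeholder: any function(prompt) -> summary.

def summarize_reports(reports, llm_call):
    """reports: dict mapping source name -> report text.
    Returns a dict mapping source name -> summary, one fresh call each."""
    summaries = {}
    for source, text in reports.items():
        # Fresh prompt per document: nothing from the previous report
        # is in scope, and attribution is pinned to a single source.
        prompt = (
            f"Summarize the key findings of the threat intel report below, "
            f"published by {source}. Attribute findings only to {source}.\n\n"
            f"{text}"
        )
        summaries[source] = llm_call(prompt)
    return summaries
```

This doesn't remove the need to fact check the summaries against the originals, but it eliminates the specific failure mode I hit: findings from one vendor's report showing up under another's name.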
What next?
Start. Don’t overthink it. Just plant a flag in the ground and start.
In the meantime, I can't recommend enough taking advantage of the work that the Marketing AI Institute does.
And no, I have no affiliation with, nor do I receive any financial incentives from them. They just do really great work.
Feel free to drop me a line if you have a question or want to compare notes. It’s fun for me to nerd out on this stuff and my family and co-workers, I’m sure, would appreciate the break.
Have fun and please share your experiences with others.