4 actions marketers can take to drive responsible AI
This Weekly Blast is guest-written by Rebecca Freedman, MBA.
Innovative marketers are in a love affair with AI tools. AI is quickly becoming the Swiss Army knife of our marketing toolkit. But what's both fascinating and frightening about this newest AI boom, driven by large language models trained on everything ever posted on the internet, is that we can already foresee the ethical challenges on the horizon.
Take those large language models, for instance. They've been game-changers, from content generators that can produce a month's worth of social media ideas to business-personalized AI chatbots. But we are also still very much in the honeymoon phase, and issues of copyright infringement, privacy, and, oh… the end of humanity loom large. So what should we in the marketing world be doing?
The following are some practical actions you can take as a marketer to ensure you are being responsible as you venture into AI tool exploration. While they are intended to help you do your part in driving ethical use of AI, they might also have the side effect of getting you noticed as the 'AI guy' in your organization.
Get educated
I use the phrase "I know just enough to be dangerous" a little too frequently. To avoid becoming the sorcerer's apprentice, I'm starting to put in the time to learn about AI ethics. I think this is the most actionable item on this list because it is available to everyone. We can attend webinars, read academic papers (yes, the ones with footnotes), and listen to podcasts where people drop terms like "algorithmic equity." When I posted about this recently, a connection shared a great podcast from MIT, "In Machines We Trust," and I recommend starting with the episode "Concerning AI Ethics." "Freakonomics Radio," my very favorite podcast, also did a three-part series called "How to Think About AI" that covers a lot of ethical ground as well.
Keep data clean
If data is the oil in our marketing machine, let's make sure that oil isn't contaminated. You wouldn't put bad gas in your car, so why feed your AI models bad data? And by bad, I mean biased, inaccurate, or collected from people without their knowledge. If the use of questionable data is a concern, consider leading the creation of a data quality framework for your organization, one that includes audits, consent logs, and validation procedures.
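To make that a bit more concrete, here is a minimal sketch of what one automated check in such a framework might look like, in Python with pandas. The column names ("email", "consent") and the hypothetical contact list are illustrative assumptions, not a prescription; the point is simply that audit and validation checks can be routine and scriptable.

```python
# Minimal data-quality sketch for a hypothetical marketing contact list.
# Column names ("email", "consent") are illustrative assumptions.
import pandas as pd

def validate_contacts(df: pd.DataFrame) -> dict:
    """Run a few basic quality checks and return a summary for an audit log."""
    total = len(df)
    report = {
        "total_records": total,
        # Records missing an email address can't be contacted or deduplicated.
        "missing_email": int(df["email"].isna().sum()),
        # Records without an explicit consent flag shouldn't feed AI models.
        "no_consent": int((df["consent"] != True).sum()),
        # Duplicate emails skew any model trained on this list.
        "duplicate_email": int(df["email"].duplicated().sum()),
    }
    # Usable records: contactable and explicitly consented.
    report["usable_records"] = int((df["email"].notna() & (df["consent"] == True)).sum())
    return report

if __name__ == "__main__":
    sample = pd.DataFrame({
        "email": ["a@example.com", None, "a@example.com", "b@example.com"],
        "consent": [True, True, False, True],
    })
    print(validate_contacts(sample))
```

A report like this, run on a schedule and stored alongside consent logs, is the kind of lightweight evidence trail an audit would look for.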
Be transparent
Legal and marketing need to be in lockstep on this (which makes me chuckle just writing it, because in my experience they butt heads so frequently), but a public policy on how and when AI is being used will become increasingly important in the coming months. And marketing, education, and communications teams need to be thinking about how to humanize whatever legalese gets produced. The end result is transparency. When you use AI to interact with customers, make it clear when they're talking to a machine. If you are using AI-generated content, make sure your ethical considerations are outlined publicly.
Create a council
Want another way to become a leader in the AI space at your organization? Assemble a multidisciplinary team of sharp minds and contrarians to regularly review and guide AI practices. This is a great place to hold conversations about the aforementioned data validation and policy considerations. Think about what parameters need to be in place in the organization: transparency, approved use, approved and unapproved tools, education programs, and bias and algorithm auditing.
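To give the "bias and algorithm auditing" item some shape, here is a minimal sketch of one common check a council might request: comparing how often different audience segments receive an offer from a targeting model, and flagging large gaps. The column names ("segment", "received_offer") and the 80% threshold are illustrative assumptions, not a standard the council would be bound to.

```python
# Minimal bias-audit sketch: compare per-segment offer rates from a
# hypothetical targeting model and flag segments that fall well behind.
import pandas as pd

def audit_offer_rates(df: pd.DataFrame, min_ratio: float = 0.8) -> pd.DataFrame:
    """Report offer rates per segment and flag any segment whose rate
    falls below `min_ratio` of the best-served segment's rate."""
    rates = df.groupby("segment")["received_offer"].mean().rename("offer_rate")
    report = rates.to_frame()
    report["ratio_to_best"] = report["offer_rate"] / report["offer_rate"].max()
    report["flagged"] = report["ratio_to_best"] < min_ratio
    return report.sort_values("offer_rate")

if __name__ == "__main__":
    data = pd.DataFrame({
        "segment": ["A", "A", "B", "B", "B", "C", "C"],
        "received_offer": [1, 1, 0, 1, 0, 0, 0],
    })
    print(audit_offer_rates(data))
```

A flagged segment isn't proof of bias on its own, but it gives the council a concrete, repeatable trigger for a deeper review.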
Closing thoughts
The short of it is that while we haven't had a chance to see many of the side effects that large-scale AI use might have, there are some things we can do right now to be on the side of good. I'm sure there are more ways to behave responsibly than the four above, and I would love to get your feedback and thoughts. And of course, if you would like my mind as part of your AI council, I'm on the market for a new marketing opportunity.
The Fusion Syndicate understands the critical importance of human insight and oversight when it comes to AI tools, and supports like-minded thinkers in expressing their insights. Let us know if you’d like to see more!