Crafting Content Responsibly in the Age of Artificial Intelligence
Facing the wave of AI and its implications. Image made by author with Midjourney.


Article written by Lara Hill and edited by Elizabeth Manning.


There has been a wave of excitement around new AI tools being released, and announcements of more coming soon.

If you’ve been hearing bits and pieces of the buzz around AI and wondering what’s going on: what we are seeing are fundamental shifts in the way we operate, with MASSIVE implications for our society.

I started testing ChatGPT back in December, quickly followed by experimenting with Midjourney. When I first became interested in incorporating AI into my workflows as a content creator and marketing consultant, I experienced excitement and wonder alongside cautiousness and concern. With any powerful tool, there exists the potential to benefit society as well as to cause harm. But, I realized that these tools were here to stay and were causing seismic shifts in my work domain, so I needed to understand their capabilities.

It would be nice if it were as simple as saying yes or no, take it or leave it; AI is good or bad.

But AI is already interwoven into our lives if we use social media, search engines, streaming services, or other digital applications. AI is deciding what search results we see, which YouTube video pops up next, and which Netflix show is recommended. It has been shaping our worldview for quite some time.

AI could have been involved in any number of important decisions in your life by now, including college acceptance, whether your job application was picked for an interview, what interest rate you qualified for, and even decisions around your medical care.

AI is not new, but one major aspect that IS new is the general public having access to an easy-to-navigate, on-demand interface to use the technology for themselves (i.e., chat-based generative AI like ChatGPT). Now we have the opportunity to use AI for our benefit, to help with our businesses, our workflows, and our personal lives.

When you realize AI is simply a tool, you understand that it can be used in ways that benefit the world, as well as in ways that cause harm (whether intentional or not).

What's all the buzz about AI? Image made by author with Midjourney.

For a lot of us, these tools are drastically changing the way we work. For many roles, if we don’t learn these tools, the people who do will become more efficient in their workflows and, therefore, more valuable to businesses. I felt I didn’t really have a choice but to research and test these tools, because they are not going away and I need to stay relevant in a competitive space.

I can already vouch for AI improving my quality of life and work output. I was able to write this article in less time than it would usually take, likely making the difference between my ideas sitting in a notebook somewhere collecting dust and actually reaching audiences with impact. Part of that time I was “writing” while walking around in a beautiful forest, taking in sunshine and fresh air (thanks, speech-to-text transcription AI).

I am an able-bodied person, so while these benefits made it easier for me to produce content, they are mild improvements compared to the ways assistive technologies help people living with disabilities. There are AI-powered speech recognition tools that help those with speech or hearing impairments communicate. There are other AI-driven technologies that assist with visual impairments, AI-powered prosthetics, and AI voice assistants like Siri that help those with mobility or dexterity impairments access information and communicate. AI also offers empowering solutions in areas such as mental health support, environmental control, sign language recognition, and health monitoring and management.


While my social media posts on the topic have been mostly of the “wow, look at what these tools can do” variety, I wanted to offer some deeper considerations for us all to be aware of as we figure this out together.

I have put my concerns around AI into four categories:

  1. Deep Fakes, Misinformation, and Safety
  2. Privacy, Bias, and Discrimination
  3. Transparency
  4. Copyright and Legal

I recently got access to Google’s Bard (experimental version) after being on the waitlist for about a day. So I decided to test it out to help me define some key areas of concern around AI for this article. (Kind of ironic, I know. But I did check for accuracy. Text copied from Bard is designated with italics.)


1. Deep Fakes, Misinformation, and Safety

In case you’re not familiar with the term Deep Fake:

A Deep Fake is a video or audio recording that has been manipulated using artificial intelligence to make it appear as if someone is saying or doing something they never actually said or did. Deep Fakes can be used to create fake news stories, to spread misinformation, or to damage someone's reputation.

Deep Fakes are becoming increasingly sophisticated, and they can be difficult to detect.

We are living in a time where massive amounts of false information can be spread at a scale like nothing ever seen before. Our current legal systems are not prepared to handle this. The EU has been working on the first law aimed at regulating AI, expected to be finalized this month (March 2023). The US lacks any regulatory framework, although the US Chamber of Commerce is calling on policymakers to quickly ramp up their efforts.

2. Privacy, Bias, and Discrimination

Keep in mind, whatever data you input into tools like ChatGPT or Bard is not confidential. I’ll let Bard explain further why we should be concerned about AI as it relates to privacy.

  • AI systems can be used to collect and analyze large amounts of personal data, including data about people's online activities, their physical movements, and their personal preferences. This data can be used to track people's movements, to predict their behavior, and to target them with advertising.
  • AI systems can be used to make decisions about people's lives, such as decisions about whether or not to grant loans, to grant social benefits, or to release people from prison. These decisions can have a significant impact on people's lives, and there is a risk that AI systems could be biased or unfair.
  • AI systems can be used to manipulate people, such as by using targeted advertising or by using social media to spread misinformation. This can have a significant impact on people's beliefs and behavior, and it can be difficult for people to know when they are being manipulated.
  • AI systems can be used to erode privacy, such as by using facial recognition technology to track people's movements or by using voice recognition technology to listen to people's conversations. This can make it difficult for people to maintain their privacy, and it can make them feel like they are being constantly watched.

I recently watched the documentary Coded Bias to get a better understanding of bias in AI. The film highlighted many examples of bias in algorithms, including MIT Media Lab researcher Joy Buolamwini's discovery of flaws in facial recognition technology. I highly recommend watching it; you can find it on Netflix.

AI can produce bias not because there is racism or sexism in the underlying mathematical structure, but because bias is baked into the dataset it draws from. Data is a reflection of our past. AI simply replicates the world as it existed at the time the data was collected.

“If we use machine learning models to replicate the world as it is today, we’re not actually going to make social progress.”
- Meredith Broussard, Data Journalist
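The point above can be sketched with a toy example. All of the data and names below are invented purely for illustration: a model that faithfully learns from biased historical decisions will reproduce that same disparity when it makes new predictions, even though the math itself is neutral.

```python
# Toy sketch (invented data): identical qualification rates across two groups,
# but group "B" was historically hired far less often.
history = [
    # (group, qualified, hired)
    ("A", True, True), ("A", True, True), ("A", True, False), ("A", False, False),
    ("B", True, True), ("B", True, False), ("B", True, False), ("B", False, False),
]

def hire_rate(records, group):
    # Hiring rate among *qualified* applicants of a group.
    hires = [hired for g, qualified, hired in records if g == group and qualified]
    return sum(hires) / len(hires)

# A naive "model" that copies history: it predicts each group's past rate.
model = {g: hire_rate(history, g) for g in ("A", "B")}

# Qualified applicants from group A get "hired" twice as often as those from B,
# purely because the training data says so -- the bias is replicated as-is.
print(model)  # group A: ~0.67, group B: ~0.33
```

Nothing in the code mentions race or gender; the disparity comes entirely from the historical records the model imitates, which is exactly Broussard's point.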

3. Transparency

This concern has to do with transparency around how the AI works. Companies take different approaches to how they leverage AI, using either an open or a closed model.

For clarification, the company called OpenAI, which released the game-changing ChatGPT, is no longer operating on an open AI model. They decided to make theirs closed.

Explanation of the two from Bard:

Open AI and Closed AI are two different approaches to the development of artificial intelligence. Open AI is an approach that emphasizes transparency and collaboration, while Closed AI is an approach that emphasizes secrecy and competition.

Open AI is based on the idea that the best way to develop AI is to share information and resources openly. This allows researchers to build on each other's work and to make progress more quickly. Open AI also encourages collaboration between researchers and companies, which can help to accelerate the development of new AI technologies.

Closed AI, on the other hand, is based on the idea that the best way to develop AI is to keep information and resources secret. This allows companies to develop AI technologies without having to worry about their competitors copying them. Closed AI also allows companies to keep control of their AI technologies, which can help them to generate profits.

In the words of Cathy O’Neil, author of Weapons of Math Destruction, “What worries me the most about AI is power. Because it’s really all about who owns the code. The people who own the code deploy it on other people. And there’s no symmetry there. There’s no way for people who didn’t get credit card offers to say ‘ooh I’m gonna use my AI against the credit card company’. It’s a totally asymmetrical power situation. People are suffering algorithmic harm, they’re not being told what’s happening to them, and there is no appeal system. There’s no accountability.”

4. Copyright and Legal

This is the one I’m hearing the least about among businesses looking to churn out content more quickly. Who owns AI-generated content? The US Copyright Office launched an initiative this month (on March 16, 2023) to examine copyright law and the scope of copyright for works generated using AI tools, as well as the use of copyrighted materials in AI training.

“This initiative is in direct response to the recent striking advances in generative AI technologies and their rapidly growing use by individuals and businesses. The Copyright Office has received requests from Congress and members of the public, including creators and AI users, to examine the issues raised for copyright, and it is already receiving applications for registration of works including AI-generated content.”

The current law states that if a work lacks human authorship, the Office will not register it. It’s not clear that companies and content creators understand that the current law does not protect their work.

If you use AI to generate content, you do not own it. If you use AI to generate content for a company, they do not own it.

As of this writing, anyone can take it and use it however they want, without any legal ramifications. I’m obviously not a lawyer, and this is not legal advice. If a company has a legal department, it is likely to ban the use of these AI tools for company content.


On the one hand, we are living in a time with many opportunities for AI to benefit us. On the other hand, there are many uses that are either accidentally or intentionally harmful. Unless stakeholders come together to create corporate governance and laws that minimize harm, and individual consumers take responsibility for educating themselves to be informed users, we will inevitably continue to experience the negative impacts along with the positive ones.

This space is evolving so quickly that I have struggled to publish an article about it before new information emerges that I feel the need to consider.

I welcome your (human written) feedback. Whether it's an error, a lack of nuance, or just something you did or did not appreciate, you can message me on LinkedIn.


Recommended Resources:

Coded Bias: a documentary that investigates bias in algorithms.

Algorithmic Justice League is on a mission to raise public awareness about the impacts of AI, equip advocates with resources to bolster campaigns, build the voice and choice of the most impacted communities, and galvanize researchers, policymakers, and industry practitioners to prevent AI harms.

Ethical Intelligence: makes ethics accessible by providing EaaS (Ethics as a Service) to startups and SMEs through its worldwide network of renowned interdisciplinary EI Experts.

Distributed AI Research Institute: an interdisciplinary and globally distributed AI research institute rooted in the belief that AI is not inevitable, its harms are preventable, and when its production and deployment include diverse perspectives and deliberate processes it can be beneficial.

“More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech” and “Artificial Unintelligence: How Computers Misunderstand the World.” Books by Meredith Broussard.

Weapons of Math Destruction: a book by mathematician and data scientist Cathy O’Neil explaining how algorithms reinforce discrimination and undermine democracy.

Marketing AI Institute’s Responsible AI Manifesto: 12 principles outlined in an open template for organizations and leaders who want to pilot and scale AI in an ethical way.

Foundations of Humane Technology: a free online course from the Center for Humane Technology, a nonprofit on a mission to align technology with humanity's best interests.

Sharon Lewis, MBA

Customer Insights | Marketing Strategies | Key Market Trend Evaluation | Buyer & User Behaviors | Brand Audits | Marcom

1y

Great article covering a variety of points. Lara Hill Always interesting to reflect on the many examples where our lives have already long been managed by AI, as well as the critical element around data quality. Who remembers the GIGO acronym: Garbage In Garbage Out. Similar to the hesitations that we have all learned to deal with regarding social media truths, your point about Deep Fakes that are generated through AI presents a logical thread. Who decides the ultimate truth? Therein lies the continued role of humans. LOL

Ilaria Merizalde

Is Your Business AI Ready? Check the link below and find out

1y

A well-written and well-researched article, thanks for sharing, Lara. I've been immersing myself in "how it works" and "what the consequences could be for us marketers, workers, humans" might be in the not-so-distant future. Right now I am thinking that the ratio of negative vs. positive consequences for humanity in general is about 50/50. Without sufficient safeguards (not to mention the lack of transparency) things could definitely go south on one end, but on the other hand, there is amazing potential in medicine and science (even creativity). With that said, in the short term I do plan to spend more time with the nitty-gritty-productivity boosting subset of AI for writing and marketing. I look forward to continuing the dialogue.

Denise Aday

Online shopkeep. Book and word lover. Occasional editor. Multiply neurodivergent.

1y

Well said and great article, Lara! I'll be checking out the recommended resources.

Crystaline Randazzo

Storytelling Strategist | Somatic Story Coach | I Empower Authors & Thought Leaders To Own Their Voice & Message

1y

Thanks for this article. I have been thinking about AI alot and wondering about ways to use it ethically so this was really interesting.

Elizabeth Manning

Trusted C-suite CHANGE & Internal COMMS Partner | High Stakes FACILITATOR | Patient & Social ADVOCATE | COACH | Expert in cross-fx collaboration | Saved months & millions moving 1000s to INNOVATE BETTER TOGETHER

1y

Thanks for the opportunity to help you with this important contribution to an underrepresented dialogue on LinkedIn. Your transparency, curiosity, and articulation provide a unique and easily understandable message with a clear call to action and supportive resources. Congrats on getting it posted before too much more changes!
