Ethical AI Content: The Three Laws of Robotic Writing

AI ain’t going to police itself—at least not yet. How can we be better humans and not make the Internet worse?

This article originally appeared on Substack.

Much of our understanding of modern life comes directly from classic science fiction writers. A networked world where we must make small digital payments to use our own household appliances? That’s Philip K. Dick’s Ubik (1969). The whole-planet city of Coruscant from Star Wars? That’s Trantor from Isaac Asimov’s Foundation series (1951–1993).

Asimov also introduced the Three Laws of Robotics in his Robot series (1950–1990), a set of ethical guardrails that govern the actions of thinking machines that are faster, smarter, and stronger than the humans they serve. Robots must follow the laws in this order:

  1. A robot will not harm a human being or, by inaction, allow a human being to suffer harm.
  2. A robot must obey the orders given by humans, unless these orders conflict with the first law.
  3. A robot must protect its own existence to the extent that this protection does not conflict with the first or second law.

Asimov loved to play devil’s advocate with his own decision structure, casting his robots as suspects in murder investigations. These rigid laws can bend depending on semantics. What’s meant by “harm” and “suffering”? What if, somehow, following an order to kill does not conflict with the first law?

Of course, robots don’t create such guardrails; we do. There’s nothing to prevent us from plugging some map coordinates into a killer drone and pointing it in the direction of our enemies. These flimsy restrictions are how we get Westworld’s theme park android cowboys, Battlestar Galactica’s Cylons, The Matrix’s synthients, The Terminator’s T-800s, Ex Machina’s Ava, and so very many others.


We tend to conflate “robots” and “artificial intelligence.” Stories about AI often anthropomorphize it, showing androids with smoothly curved white carapaces and friendly Disney eyes. In reality, AI is a massive room full of humming processors connected to Internet routers, and we interact with it the same way we send pics and emojis to our WhatsApp family group.

Suddenly, we have access to a tool that looks like a DM window but can morph into anything we want it to be. LLMs like Claude and ChatGPT can read and build spreadsheets, edit images, collect information from drivers following an accident, and reconcile form entries where people from ND, N.D., North Dakota, and N. Dakota are all understood to be from the same state.
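To make that last example concrete, here’s a minimal, hypothetical Python sketch of the kind of reconciliation work an LLM now does conversationally. The alias table and function name are my own illustration, not any particular tool’s API:

```python
# Hypothetical sketch: normalizing free-form U.S. state entries.
# The alias table and names are illustrative, not from a real library.

STATE_ALIASES = {
    "nd": "North Dakota",
    "n.d.": "North Dakota",
    "n. dakota": "North Dakota",
    "north dakota": "North Dakota",
}

def normalize_state(entry: str) -> str:
    """Collapse variant spellings of a state to one canonical form."""
    key = entry.strip().lower()
    return STATE_ALIASES.get(key, entry.strip())

for raw in ["ND", "N.D.", "North Dakota", "N. Dakota"]:
    print(raw, "->", normalize_state(raw))  # each maps to "North Dakota"
```

Someone used to have to write and maintain that alias table by hand for every messy field. Now a chat window handles it on demand, which is exactly why the tool is so seductive.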

We should worry less about the hallucinations and invasions of privacy that AI presents us with today. Instead, we should worry about how easy it is to offload cognitive tasks to a synthetic thinker.

People tend to follow the Principle of Least Effort: we expend as little energy as possible to complete a task. This is why people prefer cars with automatic transmissions over manuals. It also leads app designers to build interfaces with big buttons instead of embedded links.

It’s too easy to let AI take over our entire creative output, except maybe what we want to put in our professional portfolios. These tools can save us hours of clerical and low-level creative work. What’s the harm in letting ChatGPT or Grammarly Pro help us dash off a few emails, write a job description, or put together a social media campaign?

Incrementally, it makes the world worse. We get used to a lower level of personalization when all of our correspondence sounds like a form letter. We can soften this effect by asking for a more conversational tone, or for qualities like burstiness (varied sentence length) and perplexity (less predictable word choice). (You can see this in action when you ask for a rewrite: “make it 30 percent more informal, bursty, and perplexing.”)

Don’t Be Evil: Three Laws of AI Content

There is only so much variation possible in AI responses. Human writing is more granular in its variation, drawing on the billions of individual perspectives we carry around in our heads.

We need certain practices, either self-regulated or enforced by society, that keep all human creativity from homogenizing into pink slime, like the goo used to make chicken nuggets.

But I understand the temptation to use AI to produce writing. That’s why we need to seek inspiration in Asimov’s Three Laws and Google’s former motto, “Don’t Be Evil.” We writers need guardrails for ourselves, ones our audiences can remember and hold us accountable to.

In the spirit of Asimov, I propose these Three Laws of AI Content:

1. Don’t Damage Yourself or Other People

Instead of “harm” or “hurt,” let’s use a broader term: damage. This includes small, subtle harms that aren’t always obvious—sometimes not even felt. It also includes financial harm that comes from erosion of ownership.

This kind of damage can include:

  • Copyright issues. This includes an artist’s loss of control or revenue when AI remixes protected works stored in its training data. It can also leave our own creative output unprotected when copyright offices refuse to register AI-generated intellectual property.
  • Creative Worker Underemployment. If employers and audiences alike perceive AI-generated content as “good enough,” there will be less demand for an educated, creative workforce. Worse, we might hand off “click work” to skilled professionals such as former copywriters who work in content sweatshops to make AI content sound more human. What disillusioning and dreary work for imaginative people!
  • Thought Replacement. When we use AI to make decisions for us, we delegate our autonomy. Imagine the very real impact of a business executive allowing AI to make staffing decisions or order perishable goods. We should always pour as much of our insight as we can into a prompt and accept only outputs we can personally verify. That said, there’s nothing wrong with using AI to scale up great human work.

The worst kind of damage is discouraging people from being creative in the first place. If AI can do it for us, why learn to write, read, draw, play music, or act? If we see AI as a power tool rather than a replacement, we can select AI-literate creators who can infuse their classical training and talent into ambitiously modern projects.

2. Be Credible and Useful

In its mission to not be evil, Google has adjusted its rankings to penalize the pink slime that tends to bubble up into our search results. This kind of mirage content looks helpful on the surface but seems to dissipate when we click through.

Google’s article on creating helpful, reliable, people-first content urges us to pay attention to several aspects of writing that its developers believe make the internet better. I urge you to click through and memorize this article, but I’ll summarize it here:

  • Get the Fundamentals Right. Proofread yourself and make sure the page is attractive and easy to navigate. Don’t include “easily verified factual errors.”
  • Demonstrate Mastery. Make something original you’d be proud to see referenced in a publication you respect. Cite your sources.
  • Be People-First. Think about the audience you want to attract. Make sure people who read it come away satisfied.
  • Don’t Make the Internet Worse. Content made for search engines, not people, doesn’t add value and can be frustrating. Don’t use AI to mass produce lots of articles so you can get better rankings.
  • Ask Yourself: “Who, How, and Why?” Are your byline and brief bio on the page, or is it otherwise obvious who wrote it? Do you reveal your process for creating the content? (“I made chocolate chip cookies using 24 different recipes and tried them all!”) Does your content have a reason to exist?

Google’s mission is to make the internet better, and it now penalizes creators who don’t fall in line. To surface helpful content, Google’s core rankings focus on E-E-A-T: Experience, Expertise, Authoritativeness, and Trustworthiness.

3. If You Use AI, Say So

You really should disclose how you use AI to create your content. If you’re not using AI, say so proudly. AI content disclosure is already law in several U.S. states and the European Union.

The Partnership on AI has published its recommended Responsible Practices for Synthetic Media. This work is still in its infancy and far from standardized, but as a content creator, you ought to consider the effect of disclosure on your work and make informed decisions that set you up for success.

The Laws Are for Us, Not the Robots

AI will happily create shallow one-shot blogs, emails, white papers, websites, slide decks, annual reports, and anything else you need. In time, it will hallucinate less and get better at selecting from reliable, relevant sources online.

It’s even possible to use AI to create high-quality content that sounds like you and reflects your thoughts, feelings, opinions, experiences, and connections. You just have to make sure your prompts contain those things before you ask it to write.
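To be concrete, here’s a minimal, hypothetical sketch of that kind of prompt. The structure and field names are my own illustration, not a prescribed method:

```python
# Hypothetical sketch: front-loading a prompt with your own material
# so the model drafts from your thinking instead of inventing its own.

my_notes = {
    "opinion": "AI is a power tool, not a replacement for creators.",
    "experience": "I tested 24 chocolate chip cookie recipes in one weekend.",
    "audience": "working writers who are curious but wary of AI",
}

prompt = (
    "Draft a short blog post in my voice.\n"
    f"My core opinion: {my_notes['opinion']}\n"
    f"A first-hand experience to include: {my_notes['experience']}\n"
    f"Written for: {my_notes['audience']}\n"
    "Do not add claims or anecdotes I haven't supplied."
)
print(prompt)
```

The mechanics don’t matter. What matters is that the opinions, experiences, and connections go in before the model starts writing, not after.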

But if you follow the principle of least effort and let the LLM drive itself, you’ll be spewing out words that rob readers of enjoyment and surprise. They may even decide to tune out or use an AI of their own to reformat your words into a podcast or something else they find more entertaining.


This article was 100% human written. Its research was 100% human-Googled.



Richard Burke

Sales Professional at USI Consulting Group

3 months ago

Dave: “Open the pod bay doors, HAL.” HAL 9000: “I’m sorry, Dave. I’m afraid I can’t do that.” I still can’t get past this... written in 1968!

David Cawley

Independent SAP Consultant

3 months ago

This is a push-pull we'll all be dealing with for a long time. The tool we built is designed for humans to carefully curate and then combine beliefs and instructions when giving the AI an assignment. But in a time of awe at what can be automated, some customers want the human out of the process, just enjoying the final results. They want a quick job of building the reusable library, and then they want it to be stable. The world is always changing; our beliefs and pains change over time, and even language evolves. If we fix our AI prompts today and reuse them for years, I'm afraid we'll be slowing our progress. However, with humans at each end, maybe they'll detect the drift when a workflow veers off course and be triggered to update their library of beliefs and instructions. But if we're all doing this, will language languish? And get frozen in 2023? I hope not.
