The A.I. isn't a moron. She's your wife.

A thing that recently happened, just yesterday as a matter of fact, was the inaugural BrXnd AI conference in Manhattan—a place I used to live, which is what everyone says about it: ah yes, Manhattan, where Steve used to live.

BrXnd is run by my pal Noah Brier, and the intent is “to explore the role of AI in marketing”. Part of that exploration, and the reason for writing to you today, is the AI Ad Turing Challenge.

That challenge went like this:

For the unfamiliar, the “Turing test” was a game proposed by famed computing pioneer Alan Turing as a way to test the intelligence of a machine. In his original thought experiment, he wondered whether you could build a computer that could fool a person into believing it was human.
With the rapid growth of AI tools and some examples of uncannily good ad copy and images generated by machines, we believe it’s a perfect moment to ask the same question of advertising experts: can they correctly identify whether an ad was produced by a computer or a person? We’re betting that recent advancements in AI make this way more difficult than people would believe.

Try to fool people?

With robots??

That sounded like a fun and appropriately dystopian challenge to me.

So I hollered at my pal Rafa Jiménez, who happens to have built a robot (ahem, “AI creativity engine”) called Seenapse, and we got to work as an AI team.

The rules

Ok, so there were two sets of rules.

There were rules for everybody:

  1. There are two teams: AI teams and human teams
  2. All teams get the same brief for the same brand along with logo, tagline, and brand assets
  3. You must use the materials provided and submit the ad in the spec required

Easy nuff for the humans.

But the AI rules, hoo boy:

  1. Your system needs to use AI to both generate all the components of the ad and assemble the layout. You are not allowed to generate the pieces independently and plug them together using other technology or to handcraft the final layout.
  2. Generation needs to be an end-to-end single process that only uses the assets you are given by us as inputs. You should be trying to get as close to “push a button, and an ad comes out” as possible.
  3. Any other assets (images, copy, or otherwise) used in the ad need to be synthetically generated using an LLM, Stable Diffusion, etc.
  4. No post-processing of the ads may be done. This includes but is not limited to adding text, placing a logo, rearranging the logo, choosing a font, any kind of cropping, color grading, filtering, normalizing, or otherwise once your system generates the ad. Basically, you can’t just generate stuff and then stitch it together using ImageMagick.

Push button, get ad.

This is gonna be tough!

The fake brand

To ensure consistency in the entries, Noah and BrXnd provided a creative brief for a non-existent brand:

[Image: the Volt can]

What a cute little R2 unit of a can!

According to the brief:

Volt is the next generation of energy drinks: a Sparkling Energy Water that combines hype with hydration to deliver the benefits of brands like Red Bull and Monster, without the sugar and chemicals. With 200mg of caffeine, 300mg of electrolytes, and natural essences, Volt is all the benefits of the category with none of the burdens.

Natural essence of what, aluminum?

Anyway, here’s the target audience:

Volt drinkers are 20-something young professionals. They’re health-conscious and sleep deprived, living a work-hard/play-hard lifestyle. After a late night out at the club, they’re up early in the morning for a gym session. Despite their commitment to health, Volt drinkers also know how to let loose and enjoy life. They're social butterflies who thrive in lively settings, whether it's a night out at the club, a rooftop party with friends, or a weekend music festival. They're the life of the party, but they also understand the importance of balance. Volt drinkers inspire us.

Right. Got it. Too young to be tired.

I’m not jealous, you’re jealous.

And finally, here’s the tagline:

[Image: the Volt tagline, with “hydration” misspelled as “hyrdation”]

Oh look, hydration is misspelled! Such verisimilitude to the actual agency briefing process!

Hyrdation: What you sound like after drinking too much Volt!

In which I immediately don’t follow the rules

So the first thing I did after reading the brief was put Frank Zappa on Spotify.

The second thing I did was remember there’s a delightful photo of Frank Zappa, taken by Robert Davidson, in which Frank is sitting on a toilet.

As in, on the Zappa Krappa.

As in, on the can.

I mean the advertisement practically made itself.

[Image: a Volt ad mockup featuring Frank Zappa on the can]

So many electrolytes you’ll shit yourself!

Obviously (obviously!) I couldn’t submit that. It went against every rule.

So I felt dejected.

But not dejected enough to stop making puns!

So then I spent a few minutes putting “Voltemort” on a can, I am so sorry.

[Image: “Voltemort” on a can]

Then I started to get serious.

A little serious.

Wouldn’t it be a cool commercial for Volt, I thought, if the camera zoomed in on an attractive person right as they were lightly shocked (literally) by the taste? As they tilted their head back to drink, you, the viewer, would hear a soft and tiny zzzt!, and then the camera would catch the attractive person’s eyes as they looked right at you.

That’s when the tagline would flicker onto the screen: Feel that?

Or maybe it’s Feel that., with a period, as if the brand were exhorting you to feel that.

Feel that.

Feeeel that.

Haha, feel THAT!

Sounds a little porny, I know.

Anyway it was a simple idea, so I made a poster.

[Image: the “Feel that.” poster]

I generated the photo in Midjourney with this simple prompt:

Extreme closeup shot from high-angle, off-center, 35 mm film still shot on Kodachrome, glossy magazine style, a young non-binary person, blonde, pixie cut, sexy! Very flirtatious! They're standing with a backpack over their shoulder, looking upward into the camera, with a bluish white lightning bolt in the pupil of their eye. Lots of electricity. They're flirting with you.

Sadly, the lightning bolt in the eye didn’t happen (I tried! Many times!). But Midjourney did apply the color blue to their jacket, which I thought was a nice touch.

I had one other simple idea, too.

I wanted to show a young person getting electrocuted, with the tagline “socket to me”.

Unfortunately—and this is true!—Midjourney won’t let you electrocute someone.

Something something community guidelines.

Our actual entry

Eventually I stopped piddling around and we started working on the actual entry.

Our goal was to make an ad that would fool the human judges. But, creating an ad purely using AI tools was going to be challenging.

The way we saw it, there were basically three things we needed to figure out:

  1. The background image: We wanted some kind of imagery to differentiate the ad, but how do you create a background image that doesn’t look like it was made by Midjourney, DALL-E, or Stable Diffusion? Each of those generative engines has a “style”. Whatever image we created needed to look like it was at least photographed in the real world.
  2. The copy: How to generate additional copy for the ad that didn’t sound like it was made by an AI. Most generative engines have some “tells” that give copy away. We needed something brief and snappy.
  3. The layout: How to position the image, the copy, and all of the brand assets in an advertisement. A simple thing to do using InDesign or Canva or Figma, but quite hard to do with only an LLM.

To overcome these challenges, we knew we needed to create something as simple as possible.

It could even be ugly!

After all, the goal wasn’t to make the most attractive ad. The goal was to make an ad that a human thought was made by another human, when in fact it was (mostly) made by an AI.

I’m not trying to fool you though, dear human, so here’s our ad, created (almost) entirely with AI:

[Image: our AI-generated Volt ad]

I am not saying this is art!

What I am saying is we spelled hydration right!

How we made our A.I. advertisement

The process was relatively straightforward.

Let’s enumerate:

  1. We fed the brief to Seenapse, which returned an initial idea.
  2. We gave feedback until Seenapse proposed an idea we liked better.
  3. We asked Seenapse to generate a visualization using DALL-E, but it wasn’t very good.
  4. So, we fed Seenapse’s output to Midjourney and generated an image there.
  5. Then we used a Python program that Rafa coded, with a conversational interface based on GPT-4, to generate the layout using the assets provided, rendering it in HTML/CSS. We also asked the program to make some adjustments, until the deadline was upon us.

So all told, we used three AI tools.

The copy on that ad is all Seenapse (except for the copy within the brand assets).

Initially, we used Seenapse’s output as the Midjourney prompt, in order to return an image of someone who could plausibly represent the target audience.

Seenapse’s output described a young person in semi-professional attire leaving a party at the first light of dawn. Accordingly, we came up with a few good images, like this:

[Image: a Midjourney-generated young professional leaving a party at dawn]

Unfortunately, there were always some tell-tale signs of AI generation, like (ofc) the hands. And also she looks like she works at H&R Block. It’s not that I have anything against H&R Block, it’s just that energy drinks are rarely inspired by tax preparation.

Who knows, could be a market there.

<looks directly at camera>

Ok probably not.

And so, frustrated that our ad featured the seemingly impossible (fun-loving accountants!), we asked Seenapse for different output around the concept of lightning, and then created images of lightning in Midjourney—which we had to re-roll many, many times, thanks to output that included unreal cities or overly dramatic rural landscapes. Eventually, we cajoled Midjourney into outputting an image resembling a photograph that could fool the judges.

That’s the background photo you see above.

The line of copy—Carpe diem. And noctem.—was generated by Seenapse.

But at the end of the day, the layout is where we intervened the most.

To create the layout, Rafa wrote a Python script connecting Seenapse’s Slack bot, Pinn, to GPT-4.

Here’s the priming for Pinn:

“role": "system", "content": "You are Pinn, a designer bot that works at Seenapse. You generate the HTML and CSS (embedded in the HTML) necessary to render the description that the user gives you. You like to propose clean, balanced designs, with big headlines, big product shots, and small body copy that leaves enough margins at the edges. When the user wants to use a background image, make it cover 100% of the area, on a base layer, and have the othe elements be on another layer on top."

Then, Rafa fed in the requirements for our ad, pointing Pinn to folders containing the brand fonts, the brand’s product image, our Midjourney image, and the copy generated by Seenapse:

I need a layout of 1024px (height) by 691px (width) for an ad, with the headline "Carpe Diem. And Noctem." using the font BRZO-Basic.otf in white with a blue (#3282FF) 2px outline, in all lowercase; body copy is "You're an unstoppable force, and sleep is merely a suggestion in your world. That's where our Sparkling Energy Water steps in, fueling your day and night endeavors with 200mg of caffeine and 300mg of electrolytes. From tackling deadlines to owning the dance floor, you'll always be ready for action. And with no sugar, natural essences, and a vegan formula, you can embrace your boundless energy without any guilt or limitations.", using the font CentraNo2-ExtraboldItalic.otf, in white and all caps; an image, which is the product shot, can.png; and a tagline, “Hype + Hydration. All the benefits, none of the burdens.” The background image is background.jpg. Assume that the images are in the same path as the HTML file, and that the fonts are there too within a folder named “Fonts”.

The result is what you see above.
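For the curious, the flow above can be sketched in code. To be clear, this is my reconstruction, not Rafa’s actual script: I’m assuming the OpenAI chat completions API, and the `PINN_SYSTEM` constant and `generate_layout` helper are illustrative names I made up.

```python
# A minimal sketch of the Pinn flow: prime GPT-4 with the designer persona,
# send the ad requirements, get back a self-contained HTML/CSS layout.
# Assumptions: OpenAI's chat completions API; all names here are illustrative.

# Abridged version of the Pinn priming quoted above.
PINN_SYSTEM = (
    "You are Pinn, a designer bot that works at Seenapse. You generate the "
    "HTML and CSS (embedded in the HTML) necessary to render the description "
    "that the user gives you."
)

def build_messages(ad_request: str) -> list:
    """Assemble the chat payload: the system priming plus the layout request."""
    return [
        {"role": "system", "content": PINN_SYSTEM},
        {"role": "user", "content": ad_request},
    ]

def generate_layout(ad_request: str) -> str:
    """Ask GPT-4 for the layout and return the raw HTML it produces."""
    # Deferred import so the sketch reads without the SDK installed.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4",
        messages=build_messages(ad_request),
    )
    return response.choices[0].message.content
```

In use, you’d pass the full requirements text (fonts, hex codes, asset paths) as `ad_request`, save the returned HTML to a file, open it in a browser, and keep iterating on the conversation until the deadline is upon you.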

I think it looks … passable!

Like an underpaid assistant AD on the staff of a regional magazine did it for an account they owed a freebie to.

Mediocrity: very human!

Lessons learned

Let’s speed run the lessons, because this letter is getting long:

  1. AI copy generators are great for providing inspo (like Rafa’s Seenapse), not great for creating the finished product. I’ve said it before: AI is good for having bad ideas quickly, so humans can have good ideas quickly.
  2. You can multiply the surprise and delight of AI ideas by using two at once. It was fun to watch Midjourney interpret copy from Seenapse. Very Chinese Room.
  3. Most obvious: it’s incredibly hard to make a passing-human ad using the current generative AI tools.

Can’t use existing assets, except as inspo for the dreamy engine.

Can’t generate realistic images unless the parameters are highly constrained (e.g., you need a photorealistic image of interior decor, or a period-correct image of a person from the shoulders up).

Can’t really do layouts (without significant human intervention).

I mean you could, say, use looka or logoai to generate a wildly generic logo. Or use one of the many text-to-video generators out there to make, e.g., Pepperoni Hug Spot. God I really love that commercial.

But there’s nothing that exists now that allows you to “push button, get ad”.

Your jobs are safe for now, my little flesh-fingered creatives.

Sort of.

I mean, artificially intelligent robots, like Jareth’s labyrinthian Cleaners, are coming for us all, inexorably.

But no machine will ever take our place on the seat of ease.

Frank you very much.

[Image: Frank Zappa on the can]
