The Creative Brief: AI Imaging Update—Midjourney 5.2, Adobe’s Firefly Flack, and Two New AI Courses

Welcome to another edition of The Creative Brief, my roundup of news and trends for graphic designers, video producers, and creative professionals of all kinds. These roundup editions publish every other week, and I publish topic-specific “sidebar” editions in between.

In this issue: AI update—Midjourney keeps getting better, Adobe gets some Firefly flack, and two great new AI imaging courses.

Midjourney’s progress is zooming

The Midjourney AI imaging tool just keeps getting better. Version 5 brought a dramatic leap forward in imaging quality, including a greater likelihood that the people you render will have the correct number of fingers.

Version 5.2 is out, and among other things, it introduces a Zoom Out feature. After you create an image, you can tell Midjourney to zoom out—to create a new version of the image as seen from a more distant vantage point.
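For anyone who hasn’t tried it, the whole thing happens in Discord. Roughly, the workflow looks like this—a sketch based on how version 5.2 presents the feature, so exact button labels and parameter ranges may change as Midjourney evolves:

```
1. /imagine prompt: <your prompt> --v 5.2
2. Upscale one of the four results (U1–U4).
3. Click Zoom Out 1.5x or Zoom Out 2x beneath the upscaled image,
   or choose Custom Zoom and edit the --zoom value yourself, e.g.:
   <your prompt> --v 5.2 --zoom 2
```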

Here’s an example I created. (My prompt—because I love you—was “a graphic designer at a layout table.”)

Here’s my original:

[Image: the original Midjourney render]

Then I zoomed out 2x:

[Image: the 2x Zoom Out version]

Midjourney did a beautiful job extending the image, maintaining the lighting and perspective.

And then I zoomed that version:

[Image: the result of zooming out again]

Pretty impressive, but a problem surfaced: the perspective of those new windows on the left side of the image doesn’t match the angle of the wall.

Clearly, in some cases, there’s a limit to how far you should zoom.

The Zoom Out feature is similar to the outpainting feature in DALL-E and the Generative Fill feature in the Adobe Photoshop beta. It’s a great new addition. I still wish Midjourney didn’t rely on Discord for its user interface, but I can’t argue with the quality of its results.

Firefly's flack

Much has been said about the more ethical approach that Adobe is taking in training its Firefly AI imaging technology—rather than using images scraped from the web, Adobe is using images from its vast Adobe Stock library.

And that's a problem, according to some photographers who have contributed to Adobe Stock. A recent article in VentureBeat puts it this way:

A vocal group of contributors to Adobe Stock, which includes 300 million images, illustrations and other content that trained the Firefly model, say they are not happy. According to some creators, several of whom VentureBeat spoke to on the record, Adobe trained Firefly on their stock images without express notification or consent.

Ah, but there's a problem. When a creator signs up to sell their work through Adobe Stock, they agree to this clause in the terms of service (emphasis mine):

You grant us a non-exclusive, worldwide, perpetual, fully-paid, and royalty-free license to use, reproduce, publicly display, publicly perform, distribute, index, translate, and modify the Work for the purposes of operating the Website; presenting, distributing, marketing, promoting, and licensing the Work to users; developing new features and services; archiving the Work; and protecting the Work.

Boom.

Hot take: In my previous career life, I was a senior contributing editor and columnist for Macworld magazine. Every now and then, the magazine would require its authors to sign an updated contract. One year, language was added that gave the magazine's parent company the right to publish my writing “in all media now known or hereafter devised.”

This was in the late 1980s and early 1990s, when print publications were toying with the idea of creating interactive magazines on CD-ROMs. (Kids, ask your parents.) It seemed like print wasn't going away, so I—and legions of other writers—agreed to terms like those.

Then the web came along, and just like that, publishers had a new way of monetizing my work without paying me anything extra.

The takeaway: when creators sign away rights to their content for just about any use, there's a good chance that the content will be used for something else down the road. Like training an AI platform.

To its credit, Adobe says it's working on a system for compensating Adobe Stock contributors. According to a tweet from March 21, “We are developing a compensation model for Stock contributors and will share details once Firefly is out of beta.”

Adobe Stock contributors can complain, but those five words in the terms of service—developing new features and services—give Adobe every right in the world to use Adobe Stock images for Firefly training.

It’s time to learn DALL-E...

We just published two courses covering two major players in generative AI imaging.

In DALL-E: the Creative Process and the Art of Prompting, chief creative director Chad Nelson breaks down the art of writing prompts for DALL-E and explores the tool in the context of the creative process—how DALL-E and similar tools can greatly streamline ideation and brainstorming. It’s a thoughtful look at gen AI imaging from one of the many creatives who are embracing this technology.

Chad also explores some common problems and shares strategies for avoiding them.
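On a related note: if you’d rather experiment with DALL-E prompts programmatically than in the web interface, here’s a minimal sketch using OpenAI’s Images API. It reflects the openai Python package as of mid-2023 and isn’t drawn from Chad’s course.

```python
# Minimal sketch: generating an image with DALL-E via the OpenAI Images API.
# Assumes the openai package (pre-1.0 interface, circa mid-2023) and an
# OPENAI_API_KEY environment variable. Not taken from the course.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Prompt specificity matters: subject, setting, lighting, and style all help.
response = openai.Image.create(
    prompt="a graphic designer at a layout table, warm window light, 35mm photo",
    n=1,
    size="1024x1024",
)

print(response["data"][0]["url"])  # URL of the generated image
```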

...and Stable Diffusion

DALL-E and Stable Diffusion form a fascinating study in contrasts when it comes to the different approaches tech companies can take on their path to the marketplace.

DALL-E is something of a walled garden: a friendly, approachable user interface but rather restrictive in terms of the kinds of images you can create. (Forget using DALL-E to create a deepfake photo of the Pope wearing a puffy coat.) Nor can you train DALL-E with your own images or expand DALL-E to add new features, such as the ability to generate video.

If DALL-E is a walled garden, Stable Diffusion is a chaotic anarchy of creativity. It's open source, and a lush crop of extensions and other add-ons has sprouted in its soil. And unlike its major rivals, Stable Diffusion will run on a desktop PC equipped with sufficiently meaty graphics hardware.

In Stable Diffusion: Tips, Tricks, and Techniques, Ben Long dives into this happy but often complicated chaos, covering the various ways you can run Stable Diffusion (through a web service or locally on your own PC), how to write great prompts, how to train Stable Diffusion on your own images, and much, much more. At a total running time of six hours (!), there isn't a more detailed exploration of this eminently expandable generative AI platform.

This video is a great illustration of just how different Stable Diffusion can look depending on how you access it.
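If you want to try the local route yourself before diving into the course, here’s a minimal sketch using the open source Hugging Face diffusers library—one of several ways to run Stable Diffusion, and not necessarily the setup Ben demonstrates. It assumes a CUDA-capable GPU with enough VRAM and the diffusers, transformers, and torch packages installed:

```python
# Minimal local Stable Diffusion sketch using the Hugging Face diffusers library.
# Assumptions: pip install diffusers transformers accelerate torch, plus a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

# Download the model weights (several gigabytes) and move the pipeline to the GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Generate a single image from a text prompt and save it to disk.
image = pipe("a graphic designer at a layout table").images[0]
image.save("designer.png")
```

Swapping in a different model checkpoint, scheduler, or community extension is where the anarchy Ben navigates in the course really begins.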

I'm taking a couple of weeks off, so The Creative Brief will return in late July. If you're hitting the road over the US Independence Day holiday, you might want to check out the summer travel edition of this newsletter. See you in a few!

___

I’m a senior content manager for the Creative library at LinkedIn Learning, responsible for planning courses on graphic design, video production, art and illustration, and related topics. The Creative Brief is my biweekly newsletter spotlighting news, trends, and data points of interest to creative professionals. Please consider sharing it, and thanks for reading!

Neil Rhodes

Digital Restoration Artist @ Image-Restore.co.uk | BA (Hons) Photography

1 year ago

I love Stable Diffusion; the problem with any course on the subject is that SD is changing so fast. Techniques for training faces using many images (DreamBooth, LoRA, and model merging) have now been surpassed by new techniques that do a better job. Things change in a matter of weeks. SDXL is about to be launched, advancing on SD. So much is happening that I feel courses will find themselves behind the minute they are released, teaching us old, inefficient ways. ControlNet, a big power tool of SD, changes every other week with new tools and methods. A difficult subject to stay up to date on.

Leonard Rodman, M.Sc., PMP®, LSSBB®, CSM®, CSPO®

IT System Administrator | AI Implementation Analyst | Agile Project Manager | 44k followers & 20M views/16mo | 8k followers on Twitter | 5k on Instagram | 4k newsletter subscribers | ChatGPT, Midjourney, Runway and more!

1 year ago

The whole reason I bought Photoshop Beta is it was advertised that the AI was ethically trained. Turns out it’s a joke!
