Refine Your Image Outputs in Dall-E with the New Select Tool - The Daily Dose of Digital - 05/04/23
James Gray
VP Experience Marketing | 20+ Yrs as an Innovative Digital, Marketing & Tech Professional
I'll be the first to admit I am more of a MidJourney fanboy than a Dall-E one - the outputs I have been able to generate in MidJourney typically surpass those from similar prompts in Dall-E, and for my most common use cases MidJourney outputs are, for the most part, more polished and less 'cartoony'.
However, when it comes to the ability to 'edit' and amend outputs, MidJourney's region adjustment and 'image weight' techniques leave a lot to be desired in both output and user interface.
This is why I think Dall-E's new Select Tool and editing solution could be a step in the right direction. Here's OpenAI's post demonstrating how it works on a cute puppy...
As can be seen in the demo, the ease of simply highlighting a section of an image and instructing Dall-E to "add a dash of creativity here" or "enhance the vibrancy there" makes editing outputs pretty intuitive. The era of swooping paintbrush icons and fingertip enhancements is upon us.
Testing the new Dall-E Select Tool
I ran an example myself, using the initial prompt: "Create an image of a young professional male marketer, dark brown hair swept to one side, wearing clear square framed glasses, wearing a black polo shirt and ripped black jeans, holding a macbook pro, full length view, standing in an office. Make this hyper realistic and the output should be in 9:16 ratio. "
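If you would rather script this step than use the ChatGPT interface, here is a minimal sketch of the same generation request via OpenAI's official openai Python package. The model name, size, and file handling here are my assumptions rather than anything from OpenAI's demo; Dall-E 3 exposes 1024x1792 as its closest portrait option to a 9:16 ratio.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Sketch of the generation step; prompt shortened for readability.
prompt = (
    "Create an image of a young professional male marketer, dark brown hair "
    "swept to one side, clear square-framed glasses, black polo shirt and "
    "ripped black jeans, holding a MacBook Pro, full length, standing in an "
    "office. Make this hyper realistic."
)

result = client.images.generate(
    model="dall-e-3",
    prompt=prompt,
    size="1024x1792",  # closest supported size to a 9:16 portrait ratio
    n=1,
)

print(result.data[0].url)  # URL of the generated image
```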
The first output:
I then selected a couple of areas to edit to test the Select Tool:
I then asked Dall-E to change the MacBook to a Windows laptop (I know... I know... once you go Mac you don't go back... but... for the sake of experimentation), to change the glasses to thicker-framed brown versions, and to change the shirt to red.
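The Select Tool itself lives in the ChatGPT interface, but the closest programmatic analogue I know of is OpenAI's image edits endpoint, which takes the original image plus a mask whose transparent pixels mark the region to regenerate. Below is a hedged sketch of one such edit; note that at the time of writing the API's edit endpoint targets dall-e-2 rather than dall-e-3, and the file names are placeholders of my own.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# marketer.png   - the original generated image (PNG)
# laptop_mask.png - same dimensions, with the laptop area made fully
#                   transparent; transparent pixels tell the model
#                   where to repaint
with open("marketer.png", "rb") as image, open("laptop_mask.png", "rb") as mask:
    result = client.images.edit(
        model="dall-e-2",  # the API edit endpoint currently supports dall-e-2
        image=image,
        mask=mask,
        prompt="The same marketer, now holding a Windows laptop",
        size="1024x1024",
        n=1,
    )

print(result.data[0].url)  # URL of the edited image
```

In practice you would repeat the call with a different mask for each region (glasses, shirt, laptop), which is roughly what the Select Tool does for you interactively.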
This is the output:
Not too bad at all, though I still think this isn't as realistic as a MidJourney output might be.
The future of Gen-AI image refinement?
I can see this being used even more naturally in future, where, on a tablet, users can simply outline the area to be edited with a finger and then use a voice command to update it. Workflow efficiency will improve alongside the quality of the visual outputs.
This means we now have Dall-E, MidJourney, and even Generative Fill / Edit through Firefly in Photoshop, all taking the ability to refine image generation to a new level.
What is your tool of choice? Or do you take MidJourney images and feed them into Photoshop for editing, for example? Let me know in the comments.