Adobe Firefly will put an end to the struggle of finding the perfect stock image.

I spent some time this weekend exploring the beta of Adobe's Firefly integration in Photoshop. For those who don't know, Firefly is Adobe's generative AI art tool. Adobe positions it as an ethical AI art generator, trained on licensed images from the Adobe Stock library along with open-source and out-of-copyright images. This means the legal issues being debated around generated content don't apply to Firefly, and images generated through it are cleared for commercial use.


With that in mind, I decided to take it for a test run. Using a selection of my own images, I put the beta release through its paces, checking how well it handled a range of tasks across different kinds of images.


The first test took an iPhone photo of Melbourne's 2022 MPavilion and turned a summer night scene into an arctic winter. Across a series of four text-based prompts, I added a bank of snow to the front and right sides, a build-up of snow along the roof line, snow across the ground in the background and, to finish it off, an aurora borealis over the Melbourne city skyline.


Melbourne’s 2022 MPavilion. Image: Stephen Rollestone


Melbourne’s 2022 MPavilion after scene change using Adobe Firefly. Image: Stephen Rollestone


It's important to note that the image took a few rounds of prompt adjustment to look natural. For example, the snow prompt needed to include 'with red glow' to match the lighting (to my surprise, Firefly correctly identified the location of the red light source), and to preserve the skyline and tree line, each section had to be selected individually. I found that odd, as Adobe's Lightroom has an AI sky-selection tool, yet Firefly does not.


The second test was to introduce a new element into an image. I took a photo of Victoria Harbour in Melbourne, selected an empty patch of water and prompted Firefly to add a breaching whale. It was with this test that I realised Firefly will choose to be lazy where it can. Where the generated whale overlapped the shoreline, Firefly generated a new background to replace the existing one rather than cutting around it. This removed some of Melbourne's recognisable buildings and replaced them with buildings you would more likely find in Hong Kong. On the plus side, generated elements arrive as new layers with layer masks, so it was already set up for me to start refining the mask's edges and removing the background intrusions.

What this particular image did was make me realise how much easier it will be for people to make believable fakes with minimal effort. It's also notable that for the term 'whale' Firefly automatically went for orcas, and while they are found in Australian waters, I would hazard a guess that most people in Australia (particularly on the east coast) would think of humpback whales first. Possibly a sign that the image set Firefly is trained on isn't yet broad enough.


Victoria Harbour, Melbourne. Image: Stephen Rollestone


Victoria Harbour, Melbourne with AI Orca added with Adobe Firefly. Image: Stephen Rollestone


In further tests I was able to give this Magpie Lark a friend at its puddle. Notably, Firefly struggled to generate a seagull that looked real and natural; this image was the ninth option it produced for me. It was created by selecting the area for the new bird, including part of the puddle, and prompting 'add seagull and show reflection in puddle'.


Image of Magpie Lark at puddle on hot day. Image: Stephen Rollestone


Image of Magpie Lark at puddle on hot day with AI seagull added with Adobe Firefly. Image: Stephen Rollestone



Next I took a high-contrast black-and-white photo and asked Firefly to add a horse. While it didn't get the shadows perfect, it did cast the shadow at the right angle for the image. (In truth I attempted to add a unicorn, but it never quite got past the horse.)


High contrast black and white image of Victoria Harbour Promenade. Image: Stephen Rollestone


High contrast black and white image of Victoria Harbour Promenade with horse added through Adobe Firefly. Image: Stephen Rollestone


Then, just as I was feeling confident that Firefly could produce practically anything I asked of it, it ran into trouble. I took a photo of a building courtyard looking up at a clear blue sky and wanted to add a plane as if I were standing directly underneath it as it flew overhead, the kind of image where you can only see the plane's underbelly. Either Firefly could not comprehend the request or its training material was limited: it simply could not get the angles right or the plane flying in the direction I wanted, and sometimes the result just didn't look like a normal plane. I produced about 30 variations; here is a small selection of what emerged.


Courtyard looking up. Image: Stephen Rollestone


Despite the failure of the plane, I felt confident enough to try something more challenging. I pulled up a photo I had taken of the Westgate Bridge and decided I wanted to see a giant tentacle reach out of the water and wrap itself around the bridge, like a B-grade monster movie. This was an interesting experiment that revealed a few more of Firefly's limitations. Firstly, the prompt struggled with the concept of wrapping around an existing element in a photo (less of an issue when Firefly generates its own image, as I found in later experiments).


Westgate Bridge, Melbourne. Image: Stephen Rollestone


Westgate Bridge, Melbourne with giant tentacle added with Adobe Firefly. Image: Stephen Rollestone


Secondly, it was during this image that I realised Firefly avoids generating content near the edge of your selection area. To create this one, the initial prompt was 'giant tentacle emerging from water and wrapping around bridge'. Admittedly that is a complex prompt, and I could have simplified it. What it produced, though, was a floating tentacle that extended from the base of the road up to the curl at the top of the arm. No amount of rewording would make the tentacle emerge from the water. So I extended the selection area, leaving the curl of the tentacle alone, and repeated the process with a simpler prompt, 'giant tentacle'. This extended the tentacle to where it needed to be, though I still had to edit the mask so the arm passed behind the pylon and the fence line.


Finally, as part of this exercise I challenged Firefly to extend two images: a stock photo of a couple moving house, and a high-resolution photo stitch I made of Monet's Vétheuil painting. I won't go into much detail on how I did this, as it was basically just extending the canvas and pressing Generative Fill.


Stock image of young couple moving house. Image: Shutterstock


Stock image of young couple moving house, where the image has been expanded and elements added via Adobe Firefly's Generative Fill. Image: Shutterstock and edited by Stephen Rollestone


Hi-Res photo stitch of Monet's Vétheuil. Image: Stephen Rollestone


Monet's Vétheuil expanded using Adobe Firefly. Image: Stephen Rollestone


The results speak for themselves. The only issue I faced was with the Monet painting: I wanted to add a sun, but none of the painted options it returned matched the Impressionist style. Again, an indication that there are limits to the data set the algorithm is trained on. I think this ability to extend backgrounds is going to be useful for most designers in quite a few areas. Extending backgrounds manually is a time-consuming process that doesn't always deliver the results you want, but this can do it in seconds.


So what's the verdict? Yes, Adobe Firefly has limitations, but it is also the first integration of generative AI into a professional tool whose results can be used commercially without concern. At the moment, I don't think it's coming for anyone's job either; it's going to help those who already use Photoshop work more efficiently. While I would love to see Firefly publish a reference list with any works produced using it, for the sake of transparency, it currently seems best placed for modifying and enhancing a base artwork rather than creating an image from scratch. That keeps the designer an integral part of the process: the tool feels more like using a prompt as a paintbrush than conjuring an instant image.

It will definitely save designers time in selecting stock imagery. No longer will you find the perfect image only to be limited by the way it's cropped, or find yourself wishing the model wore a leather jacket instead of a suit jacket. What's more exciting is that Adobe is currently working on auto-resize prompts, 3D element output, a sketch-to-image function and prompt-to-vector. For someone like me who has been equally excited about and cautious of generative AI art, it feels like Adobe has managed to find the right balance in implementing it.