The latest trend in photography is all about prompts
Gloria Maria Cappelletti
Editor in Chief & Creative Director at RED-EYE
Faced with an unprecedented flood of digital images, we must ask whether photography’s traditional status as a means of documenting the world has been irrevocably altered. New technologies, from digital cameras and computer editing to the latest generative AI tools, are reshaping how we view photographic images today.
Photography’s transition from an analogue medium, reliant on chemically developed light-sensitive emulsions, to one that captures and stores images digitally began in the late 1980s with the first consumer digital cameras, followed in 1990 by the first version of Adobe Photoshop, a software program for editing and manipulating digital image files. Because digital images could be transmitted and edited far faster, by decade’s end almost every newspaper and magazine had switched to a digital workflow, and their photographers were using digital cameras designed for professionals. Many artists working in photography developed creative approaches that took advantage of the seamless alterability of digital images, expanding on a long history of photographic collage, double printing, and other pre-digital forms of manipulation.
Digital cameras and social media have allowed organizations and businesses to make photographs accessible to larger and more diverse audiences. As the field becomes more mainstream, both the kinds of digital photographs and the photographers making them are diversifying. Meanwhile, the convergence of still photographs with video, together with the proliferation of web design tools for animation, motion control, and sound editing, has created a creative field in which photography is just one tool for building a media experience. This ushered in a new culture of images, in which photographs took on new prominence in our digitally mediated world, above all on photo-sharing platforms such as Instagram, where an image’s worth is measured in likes, comments, and reposts, all monitored by algorithms.
This aesthetic evolution and re-creation has been enabled by the way many modern apps (such as KitCam, Camera+, and Instagram) incorporate editing options and filters whose algorithms manipulate the pixel grids of digital images to emulate the styles of traditional photographic technologies such as the daguerreotype, Kodachrome, and Polaroid. From Pompeii portraits through Kodachrome slide shows, home movies, the singular intimacy of Polaroid photography, and digital selfies, models of self-imagery and representation have been established, sustained, and evolved in tandem with emerging technologies. Yet despite photography’s rapid growth from niche curiosity to mass medium over the last century and a half, there is something unmistakably, undeniably different about digital-age visual culture: something unique and profoundly embedded in the very core of the photo-image itself.
Today we face a new, radical evolution: DALL-E 2, the latest AI image generator created by OpenAI, one of the leading companies in artificial intelligence. DALL-E 2 is an artificial intelligence system that can create lifelike images from text descriptions, a capability that opens a world of possibilities to artists, designers, and anyone else looking to give visual form to their ideas.
OpenAI has released a few examples of images generated by DALL-E 2, including a dog walking along the beach, a product design for a toothbrush, and a three-dimensional model of a cityscape. The results speak for themselves: DALL-E 2 can produce images in virtually any type and style you can imagine, and it keeps doing so as your descriptions change.
Last, but definitely not least, DALL-E 2 can produce an array of new images that resemble existing images you upload, and you can ask it, in natural language, to perform a particular modification on them. Currently, users can upload an image and generate variations, but we cannot control how they are altered: there is no accompanying text prompt, so DALL-E simply plays a game of telephone with itself, creating sibling images that are similar but not identical. Perhaps a future version could fingerprint the images generated from an upload so that these variations could be steered later as well.
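The “no text prompt” limitation is visible in the interface itself. As a minimal sketch (the endpoint path and field names follow OpenAI’s public Images API documentation; the helper function and file name are our own illustration, not part of any official client), a variations request carries only the source image and output options:

```python
# Illustrative sketch, not an official client: the endpoint path and
# field names follow OpenAI's public Images API; build_variation_request
# is a hypothetical helper for this article.
VARIATIONS_ENDPOINT = "https://api.openai.com/v1/images/variations"

def build_variation_request(image_path: str, n: int = 2, size: str = "1024x1024") -> dict:
    """Describe a variations request. Note the absence of a 'prompt'
    field: the model alone decides how the sibling images differ."""
    return {
        "image": image_path,  # the uploaded PNG is the only creative input
        "n": n,               # how many sibling images to produce
        "size": size,         # 256x256, 512x512, or 1024x1024
    }

request = build_variation_request("portrait.png")
# There is no text field here to steer the output -- exactly the
# limitation described above.
```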
DALL-E could optionally be augmented with another AI focused specifically on making faces look less strange, either in the initial query or as a side tool applied to targeted areas of an image. DALL-E 2 is capable of creating photorealistic faces from any text prompt, its one limit being that it will not generate recognizable famous faces, a safeguard against fake news and similar problems, and it works extremely well. The capability DALL-E 2 is best known for, however, is generation: you enter a description, and DALL-E 2 creates an artwork based on what you have entered.
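To make “you enter a description” concrete, here is a minimal sketch of how a prompt could be packaged for OpenAI’s image-generation endpoint. The endpoint path and parameter names (prompt, n, size) follow OpenAI’s public Images API documentation; the helper function and example prompt are our own illustration, not an official client.

```python
import json

# Illustrative sketch: the endpoint and parameters mirror OpenAI's
# documented Images API; build_generation_request is a hypothetical
# helper written for this article.
GENERATIONS_ENDPOINT = "https://api.openai.com/v1/images/generations"

def build_generation_request(prompt: str, n: int = 1, size: str = "1024x1024") -> dict:
    """Package a natural-language description into the JSON body the
    generation endpoint expects. Sent with a valid API key, this body
    would yield URLs of the generated images."""
    return {
        "prompt": prompt,  # the text description drives the whole image
        "n": n,            # number of images to generate
        "size": size,      # 256x256, 512x512, or 1024x1024
    }

payload = build_generation_request("a dog walking along the beach")
print(json.dumps(payload, indent=2))
```

The design point is that, unlike the variations workflow, everything here flows from the text field: the prompt is the creative act.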
Anybody with an idea for a visual can use DALL-E 2 to turn it into reality. OpenAI says this wider availability is made possible by new approaches for mitigating bias and toxicity in DALL-E 2’s generations, and by evolving policies governing images created by the system.
When users ask DALL-E 2 to create an image that includes a group of people, it now draws on a dataset that OpenAI says is more representative of global diversity; according to OpenAI’s own tests, users are 12 times more likely to say that DALL-E 2’s output includes people from different backgrounds. Still, sometimes we find an image, or a series of images, with a particular style that we do not know how to put into words, and even if we did, we could not be sure that DALL-E would interpret those words the same way.
Once, photography was a silence made of light and space; today it is becoming a two-dimensional textual dialogue handed over to the algorithm. If you’re interested in using the software, join the waitlist.
AI-Generated text edited by Gloria Maria Cappelletti, editor in chief, RED-EYE
All images generated with DALL-E 2
RADAR by RED-EYE is the first disclosed AI co-generated newsletter, exploring everything we need to know about the future.