Dadaist Dreaming, Prompt Wars, and Sentient Systems in an era of Infinite Artefacts
Prompt: "system of variations in modular design elements, details, classical drawing, timeless beauty" (Dream Studio)

Creative AI in Art & Design – is this the beginning of the end for creatives, or simply the dawn of a new era?

By now you will surely have noticed the influx of creative AI experiments in your social feeds, many of them shared here on LinkedIn.

First it was Craiyon and DALL-E 2, then Midjourney; most recently we’ve seen the emergence of the open-source platform Stable Diffusion. User adoption in this space is mind-blowing.

The Midjourney community alone currently has 1M+ active users on Discord.

Above: Concept artwork by Jeff Han, created with Midjourney, that went viral in the early days of summer.


What is Creative AI?

These are AI-driven tools that generate creative output based on prompts, text, or image inputs from their users. The current mainstream discussion focuses primarily on AI art (imagery), but this narrative will evolve and new forms of expression will be established moving forward.

Above: Three variations of imagery, based on the same prompt across three different AI engines. Comparison by Fabian Stelzer.

The AI is taught principles of style, structure, and composition through data sets and learning models drawn from the worlds of imagery, art, photography, and illustration. It improves with each user interaction. It can “see”, understand, and generate artistic output in real-time.
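
To make that loop concrete, here is a minimal sketch of the prompt-to-image interaction using the open-source diffusers library. The model ID is illustrative, a CUDA GPU is assumed, and the prompt is borrowed from this article's cover image:

```python
# A minimal text-to-image sketch, assuming Stable Diffusion weights and a
# CUDA GPU are available; model ID and prompt are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# The prompt is the only creative input; the engine turns words into pixels.
prompt = "system of variations in modular design elements, classical drawing, timeless beauty"
image = pipe(prompt).images[0]
image.save("artefact.png")
```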

What started out as light-hearted provocations and novelty juxtapositions in visual expression has quickly turned into profound aesthetic discourse around the future of image making and artistic storytelling.

A couple of weeks ago, a series of AI-generated artworks won first prize in the Digital Art category of a fine arts competition in the US, sparking outrage on Twitter and opinion pieces across the media.

Artistic communities are torn.

Is this a positive or negative evolution for art and artistic expression? Is the machine serving the creator, or is it the other way around? Is this the beginning of the end of human craft? Is this the beginning of a creative renaissance, or simply the starting point for a dystopian nightmare?

Whatever your opinion is, now is a good time to start paying attention.

We have seen similar experiments online over the past few years, but this time the difference is truly felt and seen in the quality of the output. This new wave of work is evocative, emotional, and at times even beautiful. It feels relevant and enticing. Raw and poetic.

Above: Renaissance-style artwork created by Nin and a machine.

And this is just the beginning.

In a few years we will find ourselves in what could be considered one of the most transformational eras for creativity, where we will collectively work alongside these AI-powered tools to dream and imagine new futures in completely novel ways. It seems like an undeniable consequence of what we’re witnessing right now.

The way we make things (the craft) will arguably evolve to a new dimension, a point of no return even, and it will eventually touch every form of creative expression and craft out there: product design, fashion, digital concept art, writing, film making, animation, poetry, photography, music, sound design, art direction, typography, architecture, production, fine art, 3D production. You name it.

Above: AI explorations are expanding into new fields of creativity where different inputs can generate unexpected outcomes; here in music and fashion.

While some might debate which art forms and modes of creative discussion are “safe” from being overtaken by these technologies, others are starting to think about where new tools and processes might take us a few years down the line, assuming things pick up in pace.

This is a space that inspires me. Given their inevitable impact, I believe it’s our collective responsibility to help shape these technologies so they move us in the right direction.

I personally believe our creative processes, moving from inspiration to idea, from idea to sketch, from sketch to artefact, from artefact to system, will be completely re-imagined. And I believe it will happen in the coming 3-5 years.

It will happen gradually, then suddenly. Buckle up.

So, what are some of the evolving themes during this transformation? What should we pay attention to? Here are a few themes I’m observing…

Hyper-democratization

Obvious, but worth pointing out. Anyone, anywhere, with a curious mind and an Internet connection will be able to generate high-quality art, photography, and design in real-time. Peak inclusion. The mainstream tool stack will explode and grow significantly. Instagram, TikTok, and Snapchat all brought new creative tools to the masses and helped shape a maker mindset in the creative economy.

Creative AI will take this wave of democratization to a whole new level and make previously impossible feats a reality. When the world of Midjourney meets the world of TikTok, there will be no turning back. I can’t even imagine what will happen, but the barriers to creating mind-blowing stories and narratives will be completely gone; the only restriction will be our collective imagination and appetite for the new.

Everything is a Training Model

Anything and everything we interact with will ultimately learn from us and feed something back into whatever system we’re using. Every piece of software, every digital product, every service, every user interaction, every repository of tagged assets is a potential training model to make an AI engine stronger, in a continuous loop of input and output.

Right now, every product team creating design tools is thinking about how to bring DALL-E- or Midjourney-type intelligence into their tools and onto their platforms. As I’m writing this, users are already building the plug-ins to guide the way.

Above: Figma and Photoshop users are building plug-ins that bring Stable Diffusion onto the software platform and into the design process.

The next few iterations of your go-to creative production tool will start to incorporate prompts and new forms of intelligence and guidance to ease your creative process. A tool will incorporate general-level intelligence to propose creative suggestions and variations throughout the process, and it will develop specific-level intelligence about the user, helping them improve their skills over time. Ultimately the tool will learn continuously from its community of users, feeding insights and learnings back to the general AI engine for refinement.
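
What might such an integration look like under the hood? A hedged sketch of the plug-in round trip, where the endpoint URL, payload shape, and canvas API are all assumptions rather than any real product's interface:

```python
# Hypothetical sketch of a design-tool plug-in round trip: the user's prompt
# goes to a hosted image engine, and the result lands on the canvas. The
# endpoint URL, payload shape, and place_image() are assumptions, not a real API.
import requests

ENGINE_URL = "https://image-engine.example.com/v1/generate"  # placeholder

def generate_and_place(prompt: str, canvas) -> None:
    resp = requests.post(
        ENGINE_URL,
        json={"prompt": prompt, "width": 1024, "height": 1024},
        timeout=120,
    )
    resp.raise_for_status()
    canvas.place_image(resp.content)  # hypothetical plug-in canvas API
```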

In adjacent areas of application, businesses like Pinterest, Getty Images, and Unsplash might eventually pivot to use their massive collections of tagged visual assets to 1) train one or multiple AI engines, and 2) generate new outputs for users based on their search queries (prompts), rather than showing the typical search results. Every single tool you use will learn from you and give you a better experience in return.

Above: What comes after art... Architectural inspiration? Interior Design? How soon can we turn these images into schematics and 3D renderings?


The Prompt Engineer

We’ll see a new breed of creatives and designers who focus primarily on driving unexpected outcomes via text prompts. This skill will sit somewhere between creative writing, pure design, and continuous ideation/iteration.

Above: A discussion around prompt keeping and the craft of creating prompts. An AI artist keeps the magic to themselves, and a traditional artist makes a sarcastic remark.

It might seem ridiculous to speak about the “craft” of developing prompts, but a narrative is already forming there. Some artists right now can reach dimensions that others aren’t able to reach, simply through the choice of words they use. There are different areas of future application here: moments of “live performance” vs. iterative in-depth exploration.

Prompt engineers (or whatever they will be called) will develop specific vocabularies for specific areas of application, helping to drive the technology deeper into new fields (3D modelling, architecture, UX design, sound design). They will help establish the semantic framework around creative expression that a specific challenge needs, and they will help bridge the interoperability between AI-led tools and standardized software platforms.
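
As a toy illustration of such a vocabulary, consider reusable modifier sets curated per field of application; every term below is invented for the example:

```python
# A sketch of field-specific prompt vocabularies: reusable modifier sets that
# a prompt engineer curates per area of application. All terms are illustrative.
VOCABULARIES = {
    "architecture": ["axonometric view", "concrete and glass", "golden hour light"],
    "sound_design": ["granular texture", "slow attack", "wide stereo field"],
}

def build_prompt(subject: str, field: str) -> str:
    """Combine a subject with the curated modifiers for one field."""
    return ", ".join([subject, *VOCABULARIES[field]])

print(build_prompt("coastal library", "architecture"))
# -> coastal library, axonometric view, concrete and glass, golden hour light
```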

There’s already a vibrant discussion around the limitations of language in this context.

Prompt Keeping + General vs Specific AI

Artists and creatives will eventually start to safeguard, promote, and protect their personal styles. They will develop, package, and sell their own “worlds” (imagery, audio, writing, systemic design), both manually and through the AI. Each style will feel like the equivalent of a toolkit or a design system. They might license them to brands, they might evolve new hybrid expressions through partnerships with other artists or brands, and some will blacklist the usage of their work/style to avoid it spreading (see Counter-AI Artist Communities below).

Above: Amazing work by Francien Krieg. The artist had already established an artistic process and style the "classical" way (through drawing, sketching, and painting). She now uses AI to explore variations and new motifs in her own style. The art is therefore deeply human, while machine-assisted. A perfect example of a specific application of applied AI.

Artists and creatives who refuse to share their prompts, to avoid others copying their style, have already labelled the phenomenon prompt keeping. They want to keep certain passages and nomenclature personal, to ensure a more distinctive style and their ownership of a specific expression. Brands will eventually do the same to keep a distinguished expression in their applications. This will seem laughable at first but will eventually be standard practice among emerging artists.

Above: The prompt as artistic technique. "Can you share your search term?" rather than "Can you share your process?"

Platforms, engines, and networks (Stable Diffusion being the perfect example) will put their focus on general AI, keeping models focused on general principles of creative expression and articulation. Consider it a body of work, and the tools that outline the sum of all human achievement in art and science and its many expressions.

General intelligence will be brought into every single software suite as an input for learning, understanding, and guiding a user’s creative process. Think of it as the basic clay to start with, guiding the parameters of creating: a generic type of input that previously had to be learned and understood by the practitioner.

Specific intelligence will be developed as niche articulations of models that guide specific expression and discovery. It will be governed by the brand, the designer, the artist, the musician. It will outline expressive territories and vectors that are unique to an author.
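
A conceptual sketch of that split, where the fine-tuning step is a stub and all names are invented: a "specific" model is simply a general checkpoint adapted on an author-owned corpus.

```python
# A conceptual sketch of the general/specific split. The ModelSpec fields and
# the fork are illustrative stubs, not a real training pipeline.
from dataclasses import dataclass

@dataclass
class ModelSpec:
    base_checkpoint: str   # the general engine everyone shares
    style_corpus: str      # the author's protected training set
    owner: str             # who governs this specific intelligence

def fork_specific_model(spec: ModelSpec) -> str:
    """Stub: in practice this would fine-tune the base model on the corpus."""
    return f"{spec.base_checkpoint}::{spec.owner}-style"

print(fork_specific_model(ModelSpec("general-diffusion-v1", "studio_corpus/", "studio-x")))
# -> general-diffusion-v1::studio-x-style
```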

Counter-AI Artist Communities

A community of creatives will opt out of this evolution based on their own ethics, philosophies, and beliefs. They will ban the use of their names (which can be used as style prompts for exploration) across engines and software platforms. We will start to see the first lawsuits by artists against the engines and platforms that promote and exploit their work. We’ll see stronger divergence in the discourse and narratives around produced work, and more toxicity in the debates between camps.

Above: "100% Human Art" as a signifier for personal achievement and quality.

The AI-rejecting artists will come to celebrate their craft and human abilities further in their work. They will seek expression that is more difficult to imitate or generate with prompts. They will watermark their output as “non-AI created”. Ironically, this could be one of the few categories of work that would be relevant for NFTs in the long term; but arguably the archetypal artist who would reject generative AI based on their values would also resent monetization of their work based on blockchain technology.

Different schools of thought will split the artistic communities into philosophical camps. We’ll see the dawn of sacred aesthetics generated by the human explorer, without support from the machine.

We’ll enter an era of infinite artefacts. Anything and everything can be produced in real-time; this will create an absolute abundance of creativity and creative output. The cultural and monetary value of the artefact will be diffused, and we’ll see artistic exploration playing with concepts across the full spectrum between scarcity and abundance.

Will anyone want to own an instance of an artefact, and what will be the dynamics of ownership (think: NFTs)? Will there be beauty in the ephemeral and disposable? Will artists create artistic systems, rather than artistic pieces?

These are interesting questions to explore.


Vibe extensions & IP escape

Media brands and platforms will ultimately turn to AI to generate and explore in-house production of content (AI-driven automation). Businesses like Spotify and Netflix are well positioned to start experimenting with in-house content that can be implemented in the flow of algorithmic programming. Think of it as transitional content. Ambience, or vibe extensions.

How long until Spotify’s “Lo-fi Beats” or “Ambient Essentials” playlists start training and deploying AI to incorporate new music that bridges the artists on the platform? Build an engine that taps into a mood, a tempo, a style of orchestration, a beat, and extends it by learning from the style of the playlist. Moods are moods, vibes are vibes, and at some point I, as a listener, will have no way to distinguish a beat by a human creator from one made by an AI engine. Maybe it’s already happened?
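
One naive sketch of such an engine, assuming each track is described by a feature vector (say mood, energy, tempo): characterize the playlist by the centroid of its tracks, then pick the generated candidate that sits closest to it. The features and candidates below are invented.

```python
# A sketch of "vibe extension": the playlist's vibe is the mean of its track
# feature vectors; the nearest generated candidate becomes the bridge track.
import numpy as np

def playlist_centroid(track_features: np.ndarray) -> np.ndarray:
    # (n_tracks, n_features) -> (n_features,)
    return track_features.mean(axis=0)

def pick_bridge_track(candidates: np.ndarray, centroid: np.ndarray) -> int:
    # Index of the candidate nearest to the playlist's vibe.
    return int(np.linalg.norm(candidates - centroid, axis=1).argmin())

playlist = np.array([[0.8, 0.2, 110.0], [0.7, 0.3, 108.0]])   # mood, energy, BPM
generated = np.array([[0.9, 0.1, 140.0], [0.75, 0.25, 109.0]])
print(pick_bridge_track(generated, playlist_centroid(playlist)))  # -> 1
```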

What will the human experience be of realizing that a large percentage of my Spotify likes are machine-generated? How will it make us feel?

In-painting & Interoperable tool stacks

Coming back to the world of art and graphics: “in-painting”, popularized by DALL-E 2, allows the user to take a generated image and remove or re-work the parts of the image they are unhappy with. The crop can be altered, the composition can be scaled out, an object within the composition can be removed or exchanged. The process lets the user generate a perfected image, step by step.
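
The same idea can be sketched in the open-source stack rather than DALL-E 2 itself, via the diffusers in-painting pipeline; the model ID, file names, and prompt are illustrative, and white mask pixels mark the region to re-work:

```python
# A minimal in-painting sketch with diffusers, assuming a CUDA GPU;
# model ID, file names, and prompt are illustrative.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

init = Image.open("draft.png").convert("RGB")
mask = Image.open("mask.png").convert("RGB")   # white = region to re-work

result = pipe(
    prompt="replace the vase with a small marble sculpture",
    image=init,
    mask_image=mask,
).images[0]
result.save("draft_v2.png")
```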

In-painting creates a natural bridge for interoperability between AI-driven tools (image generators) and traditional software stacks. In the coming year you will see creative processes that start with prompts and end in AutoCAD, Photoshop, or Figma, and vice versa. Users will want layered Photoshop files from their AI tools, and they will want to import three-dimensional renderings into their AI tools. The software platforms that ease this process will win.

Above: "Outpainting" uses AI to create the visual bridge between two different input artworks. Van Gogh and Hokusai finds a middle ground in an endless composition.

This essentially means that AI will be integrated into the creative process at distinct moments, depending on context. It doesn’t need to be a one-size-fits-all meta-application; different types of AI will serve different moments and grow in different directions organically. The open-source community will drive this; users will lead the innovation.


Image making vs. Systems design

The obvious gap in the body of work we see today, which ultimately needs to be addressed before creative AI can serve as an aid in the process of designers (product designers, UX designers, interaction designers, architects), sits in the output of the tools.

Images/pictures/graphics are the outputs of creatives who work with static artefacts: artists, illustrators, photographers. These are static, controlled, finite assets.

A designer’s desired output, however (similar to that of the architect), will be a multi-layered, dynamic component (the schematic, the layout, the flow, the system). The “clay”, as such, will need to be systemic atoms defined in a design system.

Above: The designer of a digital product doesn't generate "images" as such; they generate systemic components for interaction. When will the AI be able to make this distinction? Images from Dribbble.

Once AI engines can generate intelligent components rather than pictures, we’re ready to transition to the next stage. A DLS (design language system) can be seen as the clay to design with, using prompts to start the production process; it can also be seen as an ever-transforming dynamic engine that optimizes itself and evolves in real-time, based either on user interaction or on customer feedback. This evolution will ultimately disrupt the whole design process and value chain of product designers.

Sentient Brand Systems

Once the creative AI has been trained to understand and output system components according to systemic design principles (think Atomic Design), we’ll start to see the ripple effect across digital products. Brands will develop their own specific AIs as engines to accelerate and expand design systems. Brand standards and experience principles will be encoded into brand-specific “forked” AI engines, meaning that designers (including customers and communities) can generate product experiences and system instances based on prompts.

Airbnb prototyped some of these ideas 5-6 years back, but using computer vision.

Above: The team at Airbnb used computer vision to turn a hand-drawn UI sketch into a fully working prototype in real-time. This was 5-6 years ago. One might imagine a similar interaction for developing UX prototypes from prompts, moving forward.

Moving forward, I imagine a designer prompting Figma to create a “2-column image module grid, using listing page template, with filter option, using Airbnb DLS” to start the design process for such a page. Flows will be automated according to presets, and explorations of a new product feature (and variations of it) will be generated in a matter of seconds.
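
The key shift is that the engine's output would be a structured component tree drawn from the DLS, not a flat image. A sketch of what that output could look like, where the schema and the hard-coded "parse" are illustrative stand-ins for a real engine:

```python
# A sketch of prompt-to-component generation: the output is a component tree
# of DLS atoms rather than pixels. Schema and atom names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Component:
    type: str                                  # a DLS atom, e.g. "grid"
    props: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

def prompt_to_component(prompt: str) -> Component:
    """Stub: a real engine would map the prompt onto DLS atoms."""
    return Component(
        "grid",
        {"columns": 2, "template": "listing-page"},
        [Component("image-module"), Component("filter-bar")],
    )

page = prompt_to_component(
    "2-column image module grid, using listing page template, with filter option"
)
```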

From a brand expression point of view, brands will be able to oscillate between multiple instances of brand style sets, testing and deploying the most efficient and impactful ones to customers in real-time. Think style transfers, but on a systemic level, in real-time.

Above: Janne Aukia is using text-to-image engines to generate a more expansive environment for brand exploration. He describes the aesthetic universe around the desired end state and outputs brand identity, color palettes, imagery, and layout.

System level brand expression tweaks will be instantaneous and seamless. Adaptation and customization to specific target audiences or user archetypes will happen in real time. Usability and accessibility features will auto-adjust and adapt in real time.

Above: Dr. Neill Campbell's Manifold of Fonts (left) showcases more or less likely variations of a typeface or font shape. Antti Oulasvirta asked "Can computers design?" in 2016 and provided a framework (right) for assessing the least to most appropriate design solutions for a specific problem. As we start using prompts and AI-driven tools for design, we'll also be able to assess the quality of their output through data and analytics.

Different modes and forms of expression will be easily mapped and rated based on efficiency and efficacy. We’ll be able to see which design solution is the most appropriate, and which is the least. This will not only be a tool for creating “good” design, but also a way to pragmatically explore opposites or negatives, as provocations.


Self-optimizing product experiences

Once systems are intelligent and the generation of system components is automated, it will be easier to imagine self-optimizing product experiences that can render a user journey or a product flow more efficient, independently of the designer’s input.

A broken e-commerce checkout experience, for instance, can be deployed and tested with customers, with variations of it deployed in parallel. The product system split-tests the different variations with users and re-deploys the most efficient one.

Multiple instances of the product co-exist; the system automatically deploys the best version of itself, for each customer, at any moment.
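
In spirit this is a multi-armed bandit. A minimal epsilon-greedy sketch, with variant names and conversion tracking invented for illustration:

```python
# A sketch of a self-optimizing flow as an epsilon-greedy bandit: variants run
# in parallel and traffic drifts toward the best conversion rate.
import random

class VariantOptimizer:
    def __init__(self, variants, epsilon=0.1):
        self.stats = {v: [0, 0] for v in variants}  # [conversions, trials]
        self.epsilon = epsilon

    def choose(self) -> str:
        if random.random() < self.epsilon:          # explore occasionally
            return random.choice(list(self.stats))
        def rate(v):
            conv, trials = self.stats[v]
            return conv / trials if trials else 0.0
        return max(self.stats, key=rate)            # otherwise exploit the best

    def record(self, variant: str, converted: bool) -> None:
        self.stats[variant][1] += 1
        self.stats[variant][0] += int(converted)

opt = VariantOptimizer(["checkout_a", "checkout_b", "checkout_c"])
v = opt.choose()
opt.record(v, converted=True)
```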


So, what's next for us?

As highlighted above, I believe these tools and creative AI in general, will bring profound change to our day-to-day work in coming years. What might feel provocative and disruptive today will quickly become common knowledge and commodity.

The biggest concern (beyond the ethical usage and appropriation of AI) will be not thinking big enough. In all the examples I list above, I reference common cases that are already known and understood by our creative community. There's an opportunity, moving forward, to push these tools beyond what we know, beyond what we've seen, to create some truly remarkable new stories and experiences.

I hope to see things in the coming years that are completely novel; new forms of expression, new ideas, new concepts that push craft into a new dimension.

I think we'll see the emergence of "total design", where a truly holistic approach to creative expression can be adopted. A single person will be able to create worlds and universes on their own. Teams will be able to think bigger than ever. Film, animation, music, art, design, architecture, fashion and poetry. These will all be within reach, accessible to all.

The generalist, in such a scenario, will expand their reach, touch more things, and go broader in their exploration. The specialist will be more attractive than ever, going deep into contextual exploration and driving niche work on the cutting edge.

We'll see novel, real-time experiences and multi-disciplinary art in abundance. The artefacts will be infinite. The value and narrative around artefacts will be completely disrupted. Content will be automated and generated in real-time.

The Metaverse at scale won't be designed by designers in front of computers. It will be designed and generated in real-time by AI systems, adapted to its users and their preferences. The only limits will be GPUs and bandwidth.

Ultimately, we’ll start to think of authors in art, design, music, and film as creative directors in the truest sense. Anyone will be able to create anything; every creative artefact and every single form of expression will be a commodity.

Then what?

Pratik Shanbhag

Field Marketer | Niti Aayog LiFE Top 75 | Aspiring social entrepreneur

1y

Really interesting read Andreas! Being new to the world of AI, I'm amazed at its capabilities. I truly agree with your perspective on "where we will collectively work alongside these AI powered tools to dream and imagine new futures". Content writing and creation will no longer be the same! If you’re up for some more in-depth discussion on the topic, I highly recommend checking out our flagship event at Yellow.ai on the Future of Generative AI for Enterprises. Here’s the link: https://www.dhirubhai.net/feed/update/urn:li:activity:7026182573827518464

Jan van der Asdonk

Inventor, Designer & Futurist: Innovation Director [Ex-Nike & frog]

2y

Enjoyed reading your perspectives on the matter Andreas, well done. Got me late-night thinking, will one version of future creativity primarily sit with guiding AI into novel and exciting territories, beyond today's specific niches? Prompt Engineer turns into AI Designer? Perhaps tied with a future fear of keeping every AI in check by not letting it wander on its own?

Roberto Veronese

Health & BIO venture accelerator

2y

An angle I suggest keeping is the difference between art and design: art provokes (thoughts and emotions); design designates (problems and solutions). Both create (and do overlap) but with different intentions. So far, the technologies you reviewed make the most sense in the art world and, to some extent, in industrial design to explore the "form follows emotion" predicament. Besides design inspiration, I still don't see any significant answers to the design needs I articulated in this old (and a bit naive) piece https://www.core77.com/posts/36472/Advancing-the-Next-Paradigm-Shift-in-Design-Automation. Generating a functional artifact requires a different approach to "prompt engineering." A prompt such as "2-column image module grid with filter option" is disproportionately less helpful to a designer than "portrait of an old man, eternal wisdom and beauty" is to an artist. Initially (discovery phase), the input should be a question rather than a description of the expected output, and should be fueled by research insights. Maybe, it should be contemporary to design research as we used to sketch out in the field after talking with customers. Assuming technology can answer design questions, what design needs is "question engineers."

Tim O'Neill

Co-founder at Time Under Tension

2y

Great article, and something I am thinking a lot about too (plus building some experiments). To add to the examples of design tools incorporating text-to-image, Canva have in the past day announced their t2i app powered by Stable Diffusion - https://texttoimage.app https://twitter.com/timwoneill/status/1570337302504943616

Fabio Sergio

Chief Design Officer, Design & Digital Products Practice Lead, Managing Director, Accenture Song ICEG (Italy, Central Europe, Greece) | Visiting Faculty, Politecnico di Milano & Copenhagen Institute of Interaction Design

2y

Compelling overview and reflections Andreas!
