Artists Get Creative With Generative AI: The Reality Gap #3
Image generated with OpenAI DALL-E


Welcome to the Reality Gap newsletter, which focuses on synthetic data, generative AI, and data-centric computer vision. If you'd like to be notified about the next edition, just click "Subscribe" at the top of this page.

Will AI replace creative professionals?

With the popularization of generative image models, there have been a lot of concerns in recent weeks about AI replacing creative artists. And there are a lot of valid reasons to believe that the creative industry will go through a major transformation.

Without a doubt, we are moving closer to the point where parts of the creative process will be augmented or automated.

Does this mean the end for the creative industry?

Not so fast.

Take a look at this widely shared series of animations by Markos Kay.

"Neural Gardens"? by Markos Kay
"Neural Gardens" by Markos Kay (click the image to view the original post)

Although there were posts advertising this work as "AI generated," an extensive amount of work was put in by the artist to prepare the final piece.

The images in this series were first generated with the Midjourney diffusion model, downselected from the many variations created, and then fine-tuned and post-processed for a 3D depth effect using more conventional image editing tools.

In other words, an AI model has helped provide a spark of creativity in the early stages of the workflow, guided by the artist's direction and followed by a lot of manual fine-tuning.

Simply put, the artist was still in the driving seat, using AI to enhance the creative process.

And I don't know how you feel about this, but I still think this is not the AI's work; it is Markos' work.

We are already seeing a huge boost in the multidisciplinary creative process around the industry. Here is a handful of examples:

  • Turning Lego builds into real objects (Link)
  • Creating full-body animated digital humans (Link)
  • Generating product photos in a setting (Link)
  • Product prototyping (Link)
  • Creating new characters and dialogue (Link)
  • Profile avatar generator (Link)
  • Rendering brand logos in a variety of different materials (Link)
  • Typography creation (Link)
  • Designing fashion (Link)
  • ...and even peeking into Gandalf’s bag (Link)

And this list will only grow as engineers build better tools and AI models, and creators find ways to mix and match them.

And this is one thing I am particularly excited about:

Jon Porter put together a list of the Top 100 AI Art Creators and Innovators, and as I was going through this list of people actively using generative image technology, I did not find much engineering talent. There are just so many artists finding ways to use simple interfaces and existing tools to create art that humankind had no ability to create before.

You no longer need to know how to code or train machine learning models to use AI to build breathtaking art. But you still need to be an artist to direct the abundant creativity of AI.

And this is why I still believe in a bright future for creative professionals armed with new AI-enhanced tools.

And now, let's dive into the news headlines of recent days!

Google's Generative AI Strategy

Google hosted its AI@'22 event last week, giving a peek into its generative model roadmap for text, code, audio, images, and video.

Besides several model demos (that are still not available to the public), it was nice to learn about Google's approach to responsible generative AI. Douglas Eck, Principal Scientist at Google Research, talked about how Google's vision is about augmenting human creativity, not replacing it.

“It’s no longer about a generative model that creates a realistic picture. It’s about making something that you created yourself. Technology should serve our need to have agency and creative control over what we do,” Eck said.

OpenAI DALL-E 2 Makes Time Magazine's List of the Best Inventions of 2022

New AI Research at MIT: Machine Learning Models Trained on Synthetic Data Can Outperform Models Trained on Real Data in Some Cases

A group of researchers from MIT, the MIT-IBM Watson AI Lab, and Boston University created a synthetic dataset of 150,000 video clips capturing a variety of human actions, and used it to train machine learning models. The researchers then tested the models' ability to recognize actions in real-world videos across six different video datasets. For videos with fewer background objects, they found that the synthetically trained models outperformed those trained on real data.

NVIDIA Omniverse 2022.3 Beta Release

NVIDIA has announced a new 2022.3 beta release of its Omniverse platform for 3D development and rendering. This release focuses on making it easier to ingest large, complex scenes from multiple third-party applications, and on improving real-time rendering, path tracing, and physics simulation.

Deploy Stable Diffusion on Amazon SageMaker

You can now deploy Stable Diffusion models for generating images from text using Amazon SageMaker JumpStart.
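For a sense of how little code this takes, here is a minimal sketch using the SageMaker Python SDK's JumpStart interface. The model identifier and instance type below are illustrative assumptions rather than the official example; check the JumpStart catalog and your account limits before running it.

```python
# A minimal sketch (not an official example) of deploying a Stable Diffusion
# text-to-image model from SageMaker JumpStart with the SageMaker Python SDK.
# Assumptions: the model_id is illustrative (look up the exact identifier in
# the JumpStart catalog), and your AWS account has a SageMaker execution role
# plus quota for the chosen GPU instance type. The endpoint incurs cost while
# it is running.
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="model-txt2img-stabilityai-stable-diffusion-v2-1-base")

# Provision a real-time inference endpoint backed by a GPU instance.
predictor = model.deploy(initial_instance_count=1, instance_type="ml.g5.2xlarge")

# Query the endpoint with a text prompt; the exact request/response schema
# depends on the specific model version, so check the model's example notebook.
response = predictor.predict({"prompt": "a cozy cabin in a snowy forest, golden hour"})
print(type(response))

# Tear the endpoint down when finished to stop paying for the instance.
predictor.delete_endpoint()
```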

Midjourney Can Now Combine Two Images

...and the results are hilarious!

OpenAI Now Allows Access to DALL-E Through an API

You can now integrate the DALL-E 2 image generation model into your application for about 2 cents per image.
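For a sense of the integration effort, here is a minimal sketch using the openai Python package as it existed around this announcement (pre-1.0 releases). The prompt and parameters are just examples; newer SDK versions expose the same images endpoint through a different client interface, so adapt accordingly.

```python
# A minimal sketch of generating an image with the DALL-E API, assuming the
# pre-1.0 openai Python package available at the time of this announcement.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # never hard-code the key

response = openai.Image.create(
    prompt="an isometric illustration of a tiny robot watering a bonsai tree",
    n=1,                # number of images to generate
    size="1024x1024",   # 256x256, 512x512, or 1024x1024
)

# Each generated image comes back as a temporary URL you can download.
print(response["data"][0]["url"])
```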

More data is not always better

Gartner is calling for organizations to focus on metadata and synthetic data to reduce their data liability and address privacy challenges.

During the keynote address at the Gartner Data and Analytics Summit in Sydney, Australia, Peter Krensky, director and analyst at the research firm’s business analytics and data science team, said that although organizations have been integrating volumes of big data, not all of that data is useful. Krensky said that such "big data thinking" – that more data is always better – is outdated and counterproductive.

To reduce data risks and identify useful data, organizations can create synthetic data, which is artificially created data with similar attributes to the original data. Gartner estimates that synthetic data will completely overshadow real data in AI models by 2030.

Vision-Language Foundation Models for Medical Imaging

In this seminar, the authors investigate the effectiveness of a large pre-trained latent diffusion model (Stable Diffusion) at generating medical domain-specific images.

Interview with Yashar Behzadi, CEO and Founder of Synthesis AI

Yashar Behzadi, CEO and Founder of Synthesis AI, who is well known in the synthetic data community, gives a deep dive into the technology stack and applications for synthetic data generation.

And that's a wrap! See you next week!

Andrey

Markos Kay

Multidisciplinary Artist & Director

2 years ago

Thank you this is exactly what I was thinking about the last few days.. when you call something AI generated it denotes that the AI just did it directly even though in my case and in many others a lot of manual work and direction or even prompt craft is involved. This is not the same when you call something computer generated, nobody thinks the computer just did it on its own. So there is a kind of dismissive bias/misapprehension going on with AI art. We need a term for AI-aided work that indicates that the AI is part of a larger process. Any ideas? Ps. Have a look at my last post where I put up a process video showing how much work goes into these

Yashar Behzadi

CEO & Founder at Synthesis AI

2 years ago

Andrey Shtylenko another great summary. So much going on! Thanks for the mention.

Saad Bedros, PhD

Engineering Management | Program Management | Strategy and Partnerships

2 years ago

Partner and competitor

Ryan Swanstrom

Product Content Creator

2 years ago

"augmenting human creativity, not replacing it. " So true

Ryan Swanstrom

Product Content Creator

2 years ago

Oh this is right in my current wheelhouse. I am exploring automated content creation.
