Using Stable Diffusion to Enhance Mobile Game Art Production
Created using Midjourney


Have you ever considered using AI to enhance your art production? While some of us may have started by creating interesting pictures in DALL-E and Midjourney, there are other options to explore.

Our Concept Designer Sabina tested how she could incorporate Stable Diffusion into her art production process. The goal was to show that even if AI doesn't initially produce a perfect image, it's too early to give up. You can refine the output manually.

To begin, she downloaded the desktop version of Stable Diffusion, which provides a greater level of control and freedom in artistic expression, enabling you to train your own models based on your library of works. This feature makes it possible to create multiple production-level quality models in various art styles.
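If you'd rather script this locally than use a desktop UI, here's a minimal sketch of how a custom-trained checkpoint could be loaded with the Hugging Face diffusers library. The file path is a placeholder, not Sabina's actual model.

```python
# A minimal sketch (not Sabina's exact setup): load a custom-trained
# Stable Diffusion checkpoint from a local .safetensors file with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "models/concept_art_custom.safetensors",  # placeholder path to your own model
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # move to the GPU if one is available
```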

For this example, she used a model created by SPYBG, specifically trained for concept art production.

Let's take a look at her process!

Step 1: Text2Image

Sabina's model is trained to produce concept art, so she creates a simple prompt: “Portrait of a 15-year-old (female) warrior, ethnic outfit, concept art”.

She also checks prompt galleries such as Lexica and OpenArt for inspiration. After receiving the four results below, she selects number 4, which has the most flaws and therefore requires the most hands-on guidance from a human.

“Portrait of a 15-year-old (female) warrior, ethnic outfit, concept art”

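For context, a comparable text-to-image call with diffusers might look roughly like the sketch below; it generates four candidates from one prompt, similar to the four results Sabina picked from. The model name and sampler settings are illustrative assumptions, not her exact configuration.

```python
# A sketch of Step 1 (text2image): generate four candidates from one prompt
# and save them for review. Model and settings are illustrative only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # stand-in for a concept-art model
    torch_dtype=torch.float16,
).to("cuda")

prompt = "Portrait of a 15-year-old (female) warrior, ethnic outfit, concept art"

images = pipe(
    prompt,
    num_images_per_prompt=4,   # four candidates, as in the article
    num_inference_steps=30,
    guidance_scale=7.5,
    height=512,                # 512 is the nearest SD-friendly size to 500x500
    width=512,
).images

for i, image in enumerate(images, start=1):
    image.save(f"warrior_candidate_{i}.png")
```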
Step 2: Image2image (Inpaint)

Using the Inpaint feature, Sabina reworks specific parts of the image. By adjusting the "denoising strength" setting, she controls how much the AI is allowed to change. For example, images 1 and 2 were generated with low values, while 3 and 4 used high ones.
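In script form, this inpainting step might look roughly like the sketch below; the `strength` argument plays the role of the denoising-strength slider, and the model name and file names are placeholders.

```python
# A sketch of Step 2 (image2image / inpaint). The mask marks the regions to
# rework; `strength` corresponds to the denoising-strength setting:
# low values stay close to the input, high values change more.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # stand-in inpainting model
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("warrior_candidate_4.png").convert("RGB")  # placeholder
mask_image = Image.open("warrior_face_mask.png").convert("RGB")    # white = rework

prompt = "Portrait of a 15-year-old (female) warrior, ethnic outfit, concept art"

result = pipe(
    prompt=prompt,
    image=init_image,
    mask_image=mask_image,
    strength=0.35,            # low: only subtle changes to the masked area
    num_inference_steps=40,
).images[0]
result.save("warrior_inpainted.png")
```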


Step 3: Finding a common language

Sometimes the AI doesn't understand what you need, so you have to help it out a bit. In Photoshop, Sabina paints over the areas that need adjusting, such as removing the pigtails.
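In code, this step boils down to feeding the hand-edited image back through a plain image-to-image pass so the model reinterprets the covered areas; here's a rough sketch with placeholder file names.

```python
# A sketch of Step 3: run the manually painted-over image back through a plain
# img2img pass so the model rebuilds the covered areas. Paths are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # stand-in model
    torch_dtype=torch.float16,
).to("cuda")

edited = Image.open("warrior_pigtails_painted_over.png").convert("RGB")

result = pipe(
    prompt="Portrait of a 15-year-old (female) warrior, ethnic outfit, concept art",
    image=edited,
    strength=0.5,   # moderate strength: let the model rework the covered areas
    num_inference_steps=40,
).images[0]
result.save("warrior_no_pigtails.png")
```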


The AI now understands what she wants, but the output is still not perfect.


Step 4: De-Bugging

Sabina works at a 500x500 px resolution, which is a good starting point, especially if you don't have a powerful graphics card. From there, she upscales the image part by part and fixes odd artifacts along the way.

She "inpaints" the character's clothing step by step, starting with the shoulder armor, then the cloth itself, and finally the belt. She doesn't give the AI much freedom, since she wants to keep the original design, but she does give it more pixels to work with.
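As a rough illustration of this region-by-region approach, the sketch below crops one area (say, the shoulder armor), enlarges it so the model has more pixels to work with, runs a low-strength image-to-image pass so the design stays intact, and pastes the result back. Coordinates and file names are made up for the example.

```python
# A sketch of Step 4: refine one region at a time. Crop the area, enlarge it,
# run a low-strength img2img pass that preserves the original design, then
# paste the result back. Coordinates and file names are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # stand-in model
    torch_dtype=torch.float16,
).to("cuda")

full_image = Image.open("warrior_no_pigtails.png").convert("RGB")

box = (60, 180, 316, 436)                       # hypothetical shoulder-armor region
crop = full_image.crop(box).resize((512, 512))  # give the model more pixels

refined = pipe(
    prompt="ornate shoulder armor, ethnic warrior outfit, concept art",
    image=crop,
    strength=0.25,             # low strength: keep the original design
    num_inference_steps=40,
).images[0]

# Scale the refined patch back to the original crop size and paste it in place.
refined = refined.resize((box[2] - box[0], box[3] - box[1]))
full_image.paste(refined, box)
full_image.save("warrior_refined.png")
```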


And there you have it! The final result isn't perfect and still needs overpainting by a real artist. However, the whole process takes less than a day (roughly 3 hours in Sabina's case) rather than weeks.

By using AI in her art production pipeline, Sabina is able to save time and focus on the creative aspects of her work. She's not replacing herself with a machine, but rather using technology to enhance and support her creative process.


Today's Insights

Mira Murati, CTO at OpenAI, thinks that AI should be regulated. Here’s what she shared with Time on the subject:

“It’s important for OpenAI and companies like ours to bring this into the public consciousness in a way that’s controlled and responsible. But we’re a small group of people and we need a ton more input in this system and a lot more input that goes beyond the technologies—definitely regulators and governments and everyone else.”

What are your thoughts on potential regulations? Are you interested in hearing a lawyer’s take on the legal aspect of AI?


Mind-blowing News


Let’s see what next week will bring. Stay tuned (and subscribe)!
