Reimagining the Avatar Creation Pipeline in Games with Generative AI

At Ready Player Me, AI tools are helping us streamline the avatar content creation process — from the first concepts to the final 3D model.

Seemingly overnight, artificial intelligence (AI) is everywhere, transforming our personal and professional lives. For artists and game designers, the impact on creation pipelines offers huge opportunities to streamline previously time-consuming tasks. However, care must be taken to understand when and where AI fits into the creative process, and what its limitations are.

Traditional avatar creation pipelines have remained relatively unchanged over the past 5 years: sketch a rough design, polish final concept designs, translate them into 2D art, create a 3D model, apply materials, rig and skin, and export into tools to create marketing materials, sizzle video trailers for avatar releases, and so on. Over the past year, our internal art team collaborated with R&D to investigate how they could leverage generative AI tools to reimagine the avatar creation pipeline for games and offer innovative new AI-based services to developers.

In addition, as we roll out new monetization tools, the art team needed a way to improve velocity without sacrificing avatar quality while, critically, enhancing the creativity of individual artists.

The Avatar Collection Pipeline

Concept Art

Concept art typically starts with a creative brief describing a collection's demographics, motivations, mood, and stylistic intent (fantasy, manga, dark, etc.). The teams will create a narrative, backstory, and mood board for inspiration alongside the brief. At this point, the concept artist begins sketching.

After experimenting with a number of generative AI tools like Midjourney to create concept art, the team decided that AI is not ready to replace this stage of the pipeline. The generated concept sketches were not unique enough, each one too derivative of the last. We do not want AI to replace individual creativity but rather to offload repetitive tasks, giving artists more freedom to use their imagination and create. At this time, AI is not suited to the ideation stage and is better applied further down the pipeline.

Intelligent Textures

Depending on the collection, the art team will work on new avatars, on assets such as clothing and hairstyles, and often both. After the 3D model is created in Blender, textures need to be applied. Artists traditionally spend time hand-crafting textures to create a limited selection of options for players. Generative AI offers the potential for unlimited options via user-generated content. Not only does this reduce the workload for technical artists, it also provides a better user experience and more customization options. Our initial experiments, launched as Ready Player Me Labs, relied on updates to the avatar creator that allow the user to describe the pattern of the avatar's clothing: “A jacket with the colors of the rainbow”, or “pants with hotdogs on them.” On the backend, these textual prompts are interpreted by Midjourney to generate a texture, as in the sketch below.
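Midjourney does not expose a public API, so as a rough sketch of this prompt-to-texture step, here is what the backend flow might look like using Stable Diffusion via the Hugging Face diffusers library as a stand-in. The checkpoint name and the prompt wrapping are illustrative assumptions, not our production setup.

```python
# Minimal prompt-to-texture sketch using Stable Diffusion as a stand-in
# for the Midjourney backend (which has no public API). Illustrative only.
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical checkpoint choice; any SD text-to-image model works here.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def generate_clothing_texture(user_prompt: str, size: int = 512):
    """Turn a player's description into a square texture image."""
    # Steer the model toward flat, repeatable fabric patterns.
    prompt = f"seamless tileable fabric pattern, {user_prompt}, flat lighting"
    image = pipe(prompt, height=size, width=size).images[0]
    return image  # PIL.Image, ready to save as an albedo map

texture = generate_clothing_texture("a jacket with the colors of the rainbow")
texture.save("jacket_albedo.png")
```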

This experiment was a success: users loved the ability to create any crazy pattern they could think of. However, the art team and designers recognized that patterns were not being applied correctly to textures; shadows and highlights looked unrealistic and did not follow body lines. Working in conjunction with our R&D team, we developed a series of custom algorithms and models to create intelligent textures that look incredibly realistic when generated. The result is more freedom of personalization for end users and significant time savings for technical artists. These intelligent textures and AI models are currently being incorporated into the Avatar Creator and will be available to all players and developers in future releases of Ready Player Me.
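Our production approach uses custom models, but a minimal sketch of the underlying idea is to modulate the generated pattern with a shading map baked from the garment's geometry (such as an ambient occlusion bake in the same UV layout), so that darkening follows body lines. The file names and the single multiply step are illustrative assumptions.

```python
# Simplified "intelligent texture" compositing: modulate a generated
# pattern with a shading map baked from the garment mesh so shadows
# follow body lines. The real pipeline uses custom models; this is a sketch.
import numpy as np
from PIL import Image

def apply_baked_shading(pattern_path: str, shading_path: str, out_path: str):
    # Generated pattern (albedo) in the garment's UV layout.
    pattern = np.asarray(Image.open(pattern_path).convert("RGB"),
                         dtype=np.float32) / 255.0
    # Grayscale ambient-occlusion / shading map baked in the same UV space.
    shading = np.asarray(Image.open(shading_path).convert("L"),
                         dtype=np.float32) / 255.0
    # Multiply the flat pattern by the baked shading so creases and folds
    # darken the fabric exactly where the 3D geometry says they should.
    shaded = pattern * shading[..., None]
    Image.fromarray((shaded * 255).astype(np.uint8)).save(out_path)

# Hypothetical file names for illustration.
apply_baked_shading("jacket_albedo.png", "jacket_ao_bake.png", "jacket_final.png")
```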

Asset Generation

With the groundwork laid by intelligent textures and user-generated content, the team shifted focus to whether an entire asset, not just a texture, could also be generated. By partnering with CSM.ai, we have created a process that delivers high-quality 3D assets from a 2D image, unlocking the potential for User Generated Content (UGC), the most requested feature from developers using the Ready Player Me Platform.

At the time of writing, we have an internal prototype tool, AI3dBeast, that combines a number of AI tools with internal texture management. Users start from a textual prompt, which Midjourney or Stable Diffusion turns into a 2D image; CSM.ai turns that 2D image into 3D; and a magic-box combination of automatic re-meshing, scaling, skinning, and texture mapping produces a game-ready asset. We are currently exploring the best way to make this available to developers, and once it has undergone more testing, we plan to roll it out to the platform.

AI3dBeast currently works in the following steps; a sketch of the flow follows the list:

1. Select the article of clothing you want to target: top, dress, pants, etc.

2. Type a prompt (“Lolita style dress, black and white”) and four variations will be generated.

3. Choose the variation you prefer, and view the model from each side.

4. Generate the 3D model of your outfit (notice the realistic textures and shadows thanks to intelligent texturing).

5. Equip it to your avatar.
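As a sketch of how these steps compose, the snippet below wires them into one function. Every helper here is a hypothetical stand-in: CSM.ai's actual API and our internal re-meshing, scaling, skinning, and texture-mapping services are not public, and text_to_image, image_to_mesh, and make_game_ready are names invented for illustration.

```python
# Hypothetical sketch of the AI3dBeast flow. None of these helpers are
# real public APIs: text_to_image, image_to_mesh, and make_game_ready
# stand in for Midjourney/Stable Diffusion, CSM.ai, and the internal
# re-meshing/scaling/skinning/texture-mapping services respectively.
from dataclasses import dataclass

@dataclass
class GameAsset:
    slot: str         # "top", "dress", "pants", ...
    mesh_path: str    # rigged, game-ready mesh
    texture_path: str

def text_to_image(prompt: str, n_variations: int = 4) -> list[str]:
    """Step 2: generate candidate 2D images from the prompt."""
    raise NotImplementedError("stand-in for Midjourney / Stable Diffusion")

def image_to_mesh(image_path: str) -> str:
    """Step 4: lift the chosen 2D image to a raw 3D mesh."""
    raise NotImplementedError("stand-in for CSM.ai image-to-3D")

def make_game_ready(raw_mesh: str, slot: str) -> GameAsset:
    """Auto re-mesh, scale, skin, and texture-map for the target slot."""
    raise NotImplementedError("stand-in for internal services")

def ai3dbeast(slot: str, prompt: str, pick: int = 0) -> GameAsset:
    variations = text_to_image(prompt)      # four candidates (step 2)
    chosen = variations[pick]               # user picks one (step 3)
    raw_mesh = image_to_mesh(chosen)        # 2D -> 3D (step 4)
    return make_game_ready(raw_mesh, slot)  # equip-ready asset (step 5)
```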

Avatar Generation

The next step in the pipeline is to determine how much of the avatar itself can be generated by AI. This is a complex area with many separate 3D models interacting: the neck and appendages need to connect properly with outfits, the head and hair must appear in symmetry, and much more. Experimentation so far suggests that AI tools are not yet at a point where they can handle full avatar generation. Every attempt produces a single fused mesh (hair, body, outfit, etc.) that moves as one unit, and the resulting avatar doesn't feel lifelike enough for the quality we aim for. We will continue to investigate and closely monitor AI tooling, which improves almost daily.

Marketing Material

In the final stage of the avatar collection pipeline, artists and designers use the 3D models to create 2D marketing material, such as banner ads and sizzle video trailers. The current generation of AI tools is perfectly suited to this phase of the pipeline. Our designers can use Midjourney to generate background images, import them into Photoshop along with the avatar model, and use Firefly to tweak appearances and adjust overlays, lighting, foliage, and more. For each current avatar collection, we are seeing an average of two days to create marketing materials, versus five days before adopting AI generation tools in the pipeline.

Conclusion

AI tools can make a significant impact on the avatar creation and 3D game design pipeline. The current generation of tools is better suited to the mid and late stages of the pipeline than to the highly freeform, imagination-driven early phases of concept design. These tools are also currently better suited to static 2D images and textures, but with the intelligent textures created by our internal R&D team, we see huge potential for individual asset generation. Whilst generative AI tools are not yet ready for complex, interacting models such as full avatar generation, they are improving at a blistering pace. By the time you read this article, they may well be.


Learn more about Ready Player Me.
