How AI is Changing the Game: Avatar Customisation Evolved
Data Reply UK
In the world of gaming, personalisation and character customisation backed by in-game microtransactions have driven some of the most successful games in production. Consider “PlayerUnknown's Battlegrounds” (PUBG), which grossed $744m from weapon customisations (Clement, J. 2022). Players yearn for the ability to create unique avatars that reflect their individuality: a study of Miis on the Wii found that the majority of individuals opted to change their avatars to resemble themselves (McArthur, 2018). Traditional customisation options, however, often fall short of providing truly infinite possibilities, limited as they are to the pre-made options that developers shortlisted through market research. This is where Stable Diffusion has gained traction, offering some respite.
What does this offer businesses and gamers?
In this article we will walk you through the tools and technologies that can be leveraged to develop something like this; we have experimented with a few avatars as part of our Generative AI R&D. Before that, though, we need to understand the AI model being used, so let's dive straight in.
Understanding Stable Diffusion and How to Get Started
Stable Diffusion is a deep learning model developed by Stability AI and initially released in August 2022. Given an input, it can generate or modify highly detailed images with as much creativity as the person using it. Stable Diffusion uses a type of diffusion model called a latent diffusion model, which can be thought of as a sequence of denoising autoencoders. A conceptual example of the denoising process is sketched below.
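To make this concrete, here is a minimal sketch of the reverse-diffusion loop, assuming the `unet` and `scheduler` components exposed by Hugging Face's `diffusers` library; it illustrates the principle rather than the full pipeline.

```python
# Conceptual sketch of latent-diffusion denoising, assuming diffusers-style
# unet/scheduler components and a precomputed prompt embedding (text_emb).
import torch

@torch.no_grad()
def denoise(unet, scheduler, latents, text_emb, steps=50):
    scheduler.set_timesteps(steps)
    for t in scheduler.timesteps:
        # the U-Net predicts the noise present in the latents at timestep t
        noise_pred = unet(latents, t, encoder_hidden_states=text_emb).sample
        # the scheduler removes a portion of that predicted noise
        latents = scheduler.step(noise_pred, t, latents).prev_sample
    return latents  # denoised latents, ready for the VAE decoder
```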
Stable Diffusion has three primary features when it comes to generating content: ‘txt2img’, ‘img2img’, and ‘depth2img’. Each feature takes different parameters for creating or modifying images, and the majority of these are covered later with examples.
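As an illustration of the simplest of these, the sketch below runs txt2img through Hugging Face's `diffusers` library. The model name, prompts, and parameter values are our own illustrative choices, and exact arguments may vary between library versions.

```python
import torch
from diffusers import StableDiffusionPipeline

# load a base Stable Diffusion checkpoint (model choice is illustrative)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="portrait of a game avatar, intricate armour, studio lighting",
    negative_prompt="blurry, deformed hands",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("avatar.png")
```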
The technique of inpainting is part of Stable Diffusion's img2img capabilities. The user draws a ‘mask’ on an existing image, indicating the area to be regenerated by the model. Inpainting can be used to fix errors or imperfections, as well as to generate objects with new colours or styles.
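A minimal inpainting sketch using the dedicated `diffusers` pipeline follows; file names are illustrative, and in this pipeline white mask pixels mark the region to regenerate.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init = Image.open("avatar.png").convert("RGB")
mask = Image.open("mask.png").convert("RGB")  # white = regenerate, black = keep

out = pipe(
    prompt="butterfly tattoo on the shoulder",
    image=init,
    mask_image=mask,
).images[0]
out.save("avatar_tattoo.png")
```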
Provided below are some examples of the potential of generative AI for personalised customisation of an avatar. Beside each image are the positive and negative prompts used alongside inpainting to create the output. In the Stable Diffusion web UI, () is used to increase the model’s attention to certain keywords, while [] does the opposite, reducing attention when you want the model to focus less on a specific input. Each bracket applies roughly a 10% change, so (prompt) is equivalent to a 10% increase in attention to the prompt, and [prompt] to a 10% decrease. Multiple brackets can be nested to multiply the effect.
In the slides below, (+) refers to a positive prompt and (-) refers to a negative prompt.
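For illustration, here is how the bracket syntax composes, assuming the web UI's roughly 10% step per bracket level (a multiplicative factor of about 1.1); the prompts themselves are illustrative.

```python
# Attention-weight arithmetic as used by the Stable Diffusion web UI:
# each ( ) level multiplies the weight by ~1.1, each [ ] divides by ~1.1.
def attention_weight(levels: int) -> float:
    return round(1.1 ** levels, 3)

print(attention_weight(1))   # (prompt)   -> 1.1
print(attention_weight(2))   # ((prompt)) -> 1.21
print(attention_weight(-1))  # [prompt]   -> 0.909

# example prompt strings using the syntax
positive = "portrait of an avatar, ((butterfly tattoo)), [plain background]"
negative = "blurry, extra fingers, deformed hands"
```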
Stable Diffusion exposes a number of parameters which can be fine-tuned to work towards your ideal output image. It also has many open-source checkpoints: model weights produced by training on specific datasets, which offer different aesthetic styles ranging from realistic, to anime, to cartoon. These often come with their own recommended parameter values, and combined they are extremely useful for producing a variety of results. Below is a screenshot of the parameters used in the inpainting process.
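As a sketch, loading a community checkpoint and setting the usual knobs might look like this in `diffusers`; the checkpoint path, prompts, and values are illustrative, not recommendations.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# load a community checkpoint from a local .safetensors file (path is illustrative;
# from_single_file requires a reasonably recent diffusers release)
pipe = StableDiffusionImg2ImgPipeline.from_single_file(
    "./checkpoints/anime_style.safetensors", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("avatar.png").convert("RGB")
generator = torch.Generator("cuda").manual_seed(1234)  # fixed seed for reproducibility

out = pipe(
    prompt="avatar portrait, butterfly tattoo, anime style",
    negative_prompt="blurry, deformed hands",
    image=init_image,
    strength=0.6,            # denoising strength: how far to stray from the input
    guidance_scale=7.0,      # CFG scale: how strongly to follow the prompt
    num_inference_steps=40,
    generator=generator,
).images[0]
out.save("avatar_styled.png")
```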
Implications and thoughts
The introduction of generative AI into gaming brings a number of benefits and drawbacks. The example of avatar content generation shows the potential to bolster creativity, allowing users to mould avatars to their will, limited only by their imagination. Rather than choosing from the typical collection of preset changes, there would be a genuine increase in diversity amongst avatars. The same approach could also be used to generate other objects in the digital world, perhaps allowing for more elaborate, expansive, and inventive worlds.
However, there are obvious downsides and potential issues with this technology. It would have to be limited in some form, with a trade-off struck between the freedom to create and protection from inappropriate content. That barrier could be set independently from game to game, adjusted to developer expectations and age ratings. Aside from this issue, there are also limitations on the technology side. Even with carefully fine-tuned prompts (prompt-crafting would perhaps need to be a lesson in itself for users of the technology), Stable Diffusion struggles with text generation, as well as with certain aspects of object generation, such as human hands and mouths. Given the speed at which generative AI is improving, though, this is unlikely to remain an issue for long. Lastly, images are typically rendered at a resolution of 512x512 (768x768 support was added with Stable Diffusion 2.0), but rendering at this resolution combined with upscaling can produce images of high enough quality for gaming use cases.
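As a rough sketch of that upscaling step, Stability AI's 4x upscaler can be driven through `diffusers`; file names are illustrative, and VRAM requirements grow quickly with input size.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res = Image.open("avatar.png").convert("RGB")  # e.g. a 512x512 render
upscaled = pipe(prompt="game avatar portrait", image=low_res).images[0]
upscaled.save("avatar_4x.png")  # 4x upscale: 512 -> 2048
```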
Future Work and Enhancements
It is clear that there is quite a large stumbling block on the horizon. After all, what Stable Diffusion essentially works with is 2-dimensional images of the avatars. So the question remains: how do we take all these 2D customisations and map them onto a 3D object? Quite a few techniques could assist with this; we explore the two main ones.
The first, and simplest, is to use warping and a pre-defined mask to transform the image onto a 3D Blender object. An example is a test grid being mapped onto an object using a UV map (Applying textures — blender manual, n.d.). The downside to this technique is that Stable Diffusion, quite remarkably, takes into account the lighting angles visible in the 2D image. It might render a tattoo, for example, with a light reflection that would be problematic in the transformation, requiring further preprocessing. This also assumes that a predefined Blender avatar object is available for the game.
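A minimal Blender-side sketch of applying a generated texture through a UV map, using Blender's `bpy` API, might look like the following; object, material, and file names are illustrative, and the avatar mesh is assumed to be already UV-unwrapped.

```python
import bpy

# assumes the avatar mesh is the active object and already UV-unwrapped
obj = bpy.context.active_object
mat = bpy.data.materials.new(name="AvatarSkin")
mat.use_nodes = True

# wire the generated texture into the Principled BSDF's Base Color input
bsdf = mat.node_tree.nodes["Principled BSDF"]
tex = mat.node_tree.nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load("//textures/generated_tattoo.png")
mat.node_tree.links.new(bsdf.inputs["Base Color"], tex.outputs["Color"])

obj.data.materials.append(mat)
```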
The second, slightly more complicated, method uses another AI technique called Neural Radiance Fields (NeRF for short), which takes multiple pictures of the same object from different angles (NeRF: Neural radiance fields, n.d.). The result is a 3D model of the object in question. This is particularly useful when you want to change the shape of the avatar entirely (for example, its whole body type). The main problem with this method is that Stable Diffusion is a probabilistic model: where a picture from one angle generates one butterfly tattoo, another angle might generate a different version of it, so mapping the result onto a 3D object could be problematic. Fixing a seed within the model controls this somewhat, but the outcome remains uncertain. Thankfully, NeRF models can also be combined with other models to estimate depth from a single image, which brings us back to the first method.
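As a small sketch of that single-image depth-estimation step, assuming a DPT/MiDaS-style model served through Hugging Face's `transformers` (model choice and file names are illustrative):

```python
from PIL import Image
from transformers import pipeline

# single-image depth estimation; returns a dict containing a PIL depth map
estimator = pipeline("depth-estimation", model="Intel/dpt-large")
depth_map = estimator(Image.open("avatar_front.png"))["depth"]
depth_map.save("avatar_depth.png")
```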
Conclusion
The gaming world stands on the cusp of an exciting era where creativity and individuality will no longer be boxed in by the limits of pre-designed options. Generative AI, and Stable Diffusion models specifically, presents a promising future for game developers and players alike, enabling the creation of highly personalised game avatars and objects. These models harness the power of deep learning not only to mend and enhance existing images, but also to create entirely new ones from scratch, making them an invaluable tool for in-game character customisation.
While the journey to this AI-enhanced gaming future is not without challenges, the benefits certainly outweigh them. The gaming industry has always been about pushing boundaries, exploring new realms, and crafting imaginative experiences. With the advent of generative AI and models like Stable Diffusion, these boundaries can be pushed even further, opening the door to a more immersive, personalised, and vibrant gaming universe.
So, are you ready to elevate your gaming project with cutting-edge AI technology?
At Data Reply, our expert team is on hand to guide you through integrating limitless character customisation. Get in touch with us today. Let's explore together how we can enhance your gaming experience and set industry standards.
Get in touch at [email protected] or contact our Data Science Manager, Perumal Kumaresa
References