WHAT IF 99% OF THE METAVERSE IS MADE BY AI?
Copenhagen Institute for Futures Studies
An independent, non-profit futures think tank — founded in 1969. We equip you to act on the future, today.
One of the most interesting comments at the 2021 Web Summit in Lisbon came from Ivan Nikitin, Director of Sensorium Galaxy. He claimed that the future of the Metaverse might not be up to us humans to create, but will instead be created by general AI. After returning from Web Summit, I kept wondering how much of the content on the internet – and eventually the Metaverse – will be created by AI in the future.
Obviously, any attempt to forecast a particular percentage would be a futile exercise, but it puts the issue into perspective. As always, we work with different scenarios, but I agree with my colleague from the Copenhagen Institute for Futures Studies, Timothy Shoup, when he states that “in the scenario where GPT-3 ‘gets loose’, the internet would be completely unrecognizable”. In that scenario, he would bet on 99% to 99.9% of online content being AI-generated by 2025 to 2030. The development of automated content creation will change the way the internet looks, and probably make it even harder to navigate the massive amount of content. In many ways, it seems like a scary scenario.
It is no secret that the development of automatically generated content with natural language processors and generators like GPT-3 is booming. GPT stands for Generative Pre-trained Transformer, and it is a language model trained on hundreds of billions of words from the internet. It was created by OpenAI, which was co-founded by Elon Musk and funded by Microsoft, amongst others. Briefly put, it is a language model that uses deep learning to produce human-like text. The full version of GPT-3 contains 175 billion parameters, but even larger models have since been released, such as Wu Dao 2.0, which has 1.75 trillion (!) parameters.
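To make the idea concrete, here is a minimal sketch of prompt-based text generation. GPT-3 itself is only accessible through OpenAI's API, so this example uses the freely downloadable GPT-2 model via the Hugging Face transformers library as a stand-in for the same principle: a pre-trained transformer that simply continues whatever prompt it is given. The prompt text is my own illustration.

```python
from transformers import pipeline

# GPT-3 is API-only, so GPT-2 stands in here to illustrate the principle:
# a pre-trained transformer that continues a text prompt.
generator = pipeline("text-generation", model="gpt2")

prompt = "The Metaverse will be built by"

# Ask the model for three possible continuations of the prompt.
outputs = generator(prompt, max_length=40, num_return_sequences=3)

for out in outputs:
    print(out["generated_text"])
```

Scaling this same mechanism from GPT-2's 1.5 billion parameters to GPT-3's 175 billion is what turns simple sentence completion into content that is increasingly hard to distinguish from human writing.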
Earlier this year, OpenAI released DALL-E, which uses a 12-billion-parameter version of GPT-3 to interpret natural language inputs and generate corresponding images. DALL-E can create images of realistic objects as well as objects that do not exist in reality.
Many other AI models are being released on an ongoing basis, and they are changing the way content is produced and distributed. And just to add to the complexity, this does not stop at simple text and pictures. One of the latest models from OpenAI is called Codex: a new system that automatically translates natural language into code. So what does that mean for the software industry?
According to GitHub, the software development platform that has already integrated Codex into its programming tool Copilot, the tool now helps programmers write up to 30% of their code. On the one hand, tools like Codex are a step further towards no-code development platforms, in which you do not need to know how to code to create software. They will make developers more efficient and empower human creators to build simple software. On the other hand, they are also a step towards a future scenario in which machines might eventually do a better coding job than humans do.
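To illustrate the workflow rather than the model itself, here is a hypothetical sketch of how a Codex-style assistant is typically used: the developer writes a natural-language description as a comment or docstring, and the tool proposes an implementation. The function name and suggested body below are my own illustration, not actual Codex or Copilot output.

```python
# Illustrative sketch only: the developer supplies the docstring below as a
# natural-language prompt, and a Codex-style tool suggests the body.

def average_order_value(orders):
    """Return the average 'total' of a list of order dicts, or 0.0 if the list is empty."""
    # --- everything below is the kind of completion such a tool might suggest ---
    if not orders:
        return 0.0
    return sum(order["total"] for order in orders) / len(orders)


print(average_order_value([{"total": 20.0}, {"total": 30.0}]))  # 25.0
```

The point is less the specific suggestion than the shift in the division of labour: the human describes intent in natural language, and the machine drafts the code.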
“The Metaverse represents one of the most important technological landscapes of our generation”, says fellow futurist and Chief Metaverse Officer Cathy Hackl. “It will serve as the backdrop for how society will work, play, and collectively conduct our day-to-day lives.”
The big question is what will happen if you mix the attention economy of the current internet with a future in which AI creates the dynamic environment of the Metaverse. And what happens when you combine this with the development of synthetic media and virtual beings in the Metaverse? The dystopian scenarios could be mind-blowing, with deepfakes, fake news and misinformation flooding the Metaverse. But I also see a range of positive scenarios in which AI could provide built-in ethical content creation, making the Metaverse a collective virtual shared space based on a new set of values and an ethical code of conduct.
There is a need to start a dialogue about the logic being built into future content models. How do we make sure that the models are built upon valid data? What about the bias in the historical data? Do they reflect how we want our future to be built, if general AI creates the dynamic environment in the Metaverse? What happens when automated content (which will undoubtedly have many flaws and a large margin of error) becomes the very foundation upon which future models are built? Grasping the implications for the future media landscape seems an almost impossible task, but it is something we will need to be very aware of when creating the infrastructure and logic of the Metaverse.