Microsoft has Exclusive Rights to OpenAI's GPT-3
Michael Spencer
A.I. Writer, researcher and curator - full-time Newsletter publication manager.
Microsoft's interest in OpenAI was very clear when it invested $1 billion in July 2019. Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text.
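"Autoregressive" means the model generates text one token at a time, conditioning each prediction on everything it has produced so far. A minimal toy sketch of that loop, using a bigram count table as a stand-in for GPT-3's transformer (all names and the tiny corpus here are illustrative, not anything from OpenAI's actual system):

```python
import random
from collections import Counter, defaultdict

# Toy autoregressive "language model": learn bigram counts from a
# tiny corpus, then generate text one word at a time, each choice
# conditioned on the previous word. GPT-3 runs the same generation
# loop, but with a 175-billion-parameter transformer predicting the
# next token instead of a count table.
corpus = "the model writes text and the model reads text".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, length=5, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        candidates = bigrams.get(out[-1])
        if not candidates:
            break  # no continuation seen in training data
        words, counts = zip(*candidates.items())
        out.append(rng.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))
```

The qualitative leap in GPT-3 comes not from a different loop but from the scale of the predictor inside it.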
If you enjoyed the topic of this article, subscribe on the right side to the Last Futurist.
At that time, Microsoft and OpenAI announced a new partnership to build artificial general intelligence capable of tackling more complex tasks than today's AI. OpenAI claims to want to help discover and enact the path to safe artificial general intelligence, which bodes well for Microsoft's own AI-for-good mandate.
However, GPT-3 is not without controversy. OpenAI started with a very idealistic vision that has quickly become a little suspect. The way this scales could be problematic for the written word online. The largest version of GPT-3 has 175 billion parameters, the learned numerical weights the model tunes during training to make more precise predictions. GPT-2 had 1.5 billion.
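A back-of-the-envelope calculation makes the jump from GPT-2 to GPT-3 concrete. The sketch below assumes weights stored at 16-bit (2-byte) precision and counts only raw weight storage, not optimizer state, activations, or training compute:

```python
# Parameter counts reported for the largest versions of each model.
GPT2_PARAMS = 1.5e9    # 1.5 billion
GPT3_PARAMS = 175e9    # 175 billion

# Raw storage for the weights alone at float16 precision
# (2 bytes per parameter); everything else about training
# and serving costs more on top of this.
BYTES_PER_PARAM_FP16 = 2

gpt2_gb = GPT2_PARAMS * BYTES_PER_PARAM_FP16 / 1e9
gpt3_gb = GPT3_PARAMS * BYTES_PER_PARAM_FP16 / 1e9

print(f"GPT-2 weights: ~{gpt2_gb:.0f} GB at fp16")    # ~3 GB
print(f"GPT-3 weights: ~{gpt3_gb:.0f} GB at fp16")    # ~350 GB
print(f"Scale-up: ~{GPT3_PARAMS / GPT2_PARAMS:.0f}x")  # ~117x
```

A model whose weights alone run to hundreds of gigabytes cannot fit on a single GPU, which is why the supercomputer Microsoft built for OpenAI, discussed below, matters as much as the model itself.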
Microsoft also has a new supercomputer in the works to power OpenAI. Microsoft noted that the supercomputer was built "exclusively" for OpenAI. OpenAI's stated mission is to ensure that artificial general intelligence (AGI), meaning highly autonomous systems that outperform humans at most economically valuable work, benefits all of humanity.
However, artificial intelligence has not been well regulated, as the algorithmic internet of 2020 makes clear. What happens when those algorithms continue to evolve beyond our legal, regulatory, and human capacity to understand them? OpenAI has made a name for itself by training very large models, and the point at which this research turns into commercial applications should concern us.
Microsoft announced yesterday that it will exclusively license GPT-3, one of the most powerful language understanding models in the world, from AI startup OpenAI. What happens to the internet in a world of computer generated human-like text? We've already seen what happens when algorithms for profit are let loose on social networks.
Microsoft is teaming up with OpenAI to exclusively license GPT-3, allowing it to leverage OpenAI's technical innovations to develop and deliver advanced AI solutions for its customers (think Azure), as well as create new solutions that harness the power of advanced natural language generation. But with innovation comes great responsibility, and AI brings a great deal of uncertainty to the world; without better AI regulation, it will pose real risks.
Microsoft recognizes the enormous scope of commercial and creative potential that can be unlocked through the GPT-3 model, with genuinely novel capabilities, most of which we haven't even imagined yet. Yet it fails to mention any of the dangers of what that technology could become. In an age where companies call data the new oil and AI the "invention of fire" moment for a technological society, I think we need to imagine how this technology will actually be used. We should not simply glorify the business aspects and praise AI blindly.
What Elon Musk helped found in OpenAI has more or less already gone astray. Such is the power of the profit motive to commercialize AI. In February 2020, Elon Musk criticized OpenAI, one of the world's top AI research organizations, saying it lacks transparency and that his confidence in its technology's safety is "not high." Backed by huge funding from Microsoft, AI-as-a-service is becoming one of the main on-ramps for cloud customers.
When you consider the psycho-babble of the Singularity, OpenAI and DeepMind bear watching. OpenAI is racing to be the first to build a machine with the reasoning powers of a human mind. Alphabet's pressure on DeepMind to "monetize" must be extreme. What will this push companies like Baidu, Huawei, or even Apple to do in order to keep up?
GPT will evolve. Transparency in AI research is not taking place. AI regulations at a global level that can protect humanity in the 21st century will need to be formed; otherwise, AI could become as existential a threat as climate change is slowly being recognized to be. Few in 2020 seriously consider the possibility that AI could morph into something dangerous. Every blog post by these companies reads like a triumphant PR piece, without any serious macro overview of what these tools will become.
We need to be more honest about the technologies we are developing. Of course Google, Facebook, and Microsoft want to tell us AI will be great for us. GPT-3 will hopefully directly aid human creativity and ingenuity in areas like writing and composition, describing and summarizing large blocks of long-form data (including code), and converting natural language from one language to another; the possibilities are limited only by the ideas and scenarios that we bring to the table. But what are the risks? What will the internet become? What of the journalists, the media, the content creators of this world?
If the history of the internet has taught us anything, it's that AI has as much chance of being weaponized as of empowering the human condition. We've created a world that's more artificial, but not necessarily more intelligent. GPT-3 is incredible, but the fact that Microsoft and OpenAI don't even know what it could become is also frightening.
OpenAI is acting like it's the pioneer of beneficial AGI. What could possibly go wrong? Microsoft has developed its own family of large AI models, the Microsoft Turing models, which it has used to improve many different language understanding tasks across Bing, Office, Dynamics, and other productivity products. If Microsoft eventually acquires OpenAI, it won't just make Sam Altman richer; it will bring new products to market that could change the world as we know it.
When you claim exclusive rights to a groundbreaking technology that just a few years ago was considered too dangerous to be released, you aren't just raising eyebrows. You are participating in an AI arms race that could end very badly. Clearly, Microsoft plans to leverage the capabilities of GPT-3 in its own products, services, and experiences, and to continue working with OpenAI to commercialize the firm's AI research.
OpenAI has no idea what its technology will finally be used for. Silicon Valley having mission statements that contradict how it operates is nothing new. If you say you prioritize transparency and then don't, how are we supposed to trust the AI that you have created? Does this sound like a transparent company to you? How much longer will we be duped by Silicon Valley propaganda? Something has to change, or else AI will eventually catch up with our lack of integrity.
Training massive AI models requires advanced supercomputing infrastructure: clusters of state-of-the-art hardware connected by high-bandwidth networks. Microsoft is only too happy to enable OpenAI to complete its manifest destiny. However, it may be time we stop pretending AI will democratize opportunity, create a fairer version of capitalism, or make a healthier world for people.
Elon Musk tweeted about this, saying it was the opposite of being open and transparent. It's ironic how his fears about AI are coming true.