Bots Thinking For Us - Microsoft's Investment in OpenAI and Imperfect Automation

With Microsoft reportedly looking to make a $10 billion investment in OpenAI, the startup that created the viral chatbot #ChatGPT, it felt important to discuss the potential ramifications of mainstreaming this technology.

Beyond incorporation into Bing (which, believe it or not, still exists), the obvious step here would be to significantly enhance the Microsoft Office 365 products - which is probably good news if you find creating PowerPoint slides a massive time suck.

This could be a huge upgrade to Bing: answering queries in natural language (we've come full circle and are now back to Ask Jeeves), so that rather than getting a list of links for further research, you get a complete plain-language paragraph providing an 'answer' to the question.




Which raises the question - why hasn't Google already done this? After all, OpenAI built the #GPT3 model on the Transformer architecture, which came out of Google's research labs. LaMDA - which is focused on dialogue rather than general text generation, and is rumored to be even more powerful than ChatGPT - has been a key focus for them over the last several years.

Which means there's probably a good reason they didn't go in this direction. Namely, that machine learning models have a tendency to produce factually incorrect content, reflecting the biases of the input data they consumed during training. Human beings created the content that taught them, which means the output will quite frequently be a hodgepodge of conflicting information - or will at least carry the same perspective biases (or worse) as the original content. AI isn't true intelligence. It is just a complex, adaptable model that has no real understanding of the content it is creating.


Which leads to another question - is this the future we really want?

This results in even further disintermediation from original source material than exists today - which has already created problematic outcomes for the large portions of society who frequently read only the headlines. Creating a veneer of accuracy and confidence in the output doubles down on an already questionable approach to obtaining information.

Without the human review layer - being presented with options, assessing their validity, and filtering out the noise - we are often left with inaccurate information. Even worse, over time we may lose the ability, as a society, to think critically and assess the validity of the information we are presented with. With the digital age already creating troubling trends in information overload, yet another abstraction layer further removes society at large from source material and raw data. While more often than not this is fine, it feels inevitable that the opacity of these processes will exacerbate skepticism and become a lightning rod for perpetuating disagreement. In a world where institutional distrust is at an all-time high, added complexity and greater distance between source material and the end consumer are likely only to add fuel to the fire in many instances.

All that said, this is likely inevitable.

GPT-4 is right around the corner, and early indications are that its inputs are exponentially larger - adding significantly more complexity and more nuanced capabilities to the model - so we will be presented with a host of new options that no one can fully predict today.


Process automation is a broad societal trend that feels unlikely to slow. Answering emails automatically in Outlook seems like a massive time saver for everyone who lives in that world every day (myself included). Integrating GPT language models into Word would make it easier for people to summarize information for their teams, generate new business plans, and create sales templates quickly - leading to a massive increase in overall productivity. Text-to-image deep learning models such as DALL-E can further enhance things from a graphics perspective and could be used to create full-blown visual presentations in PowerPoint (or at least a good first draft).
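To make the email-summarization idea concrete, here is a minimal sketch of what such an integration might look like under the hood. The request format follows OpenAI's public chat-completions API; the helper names (`build_summary_request`, `summarize_email`) and the Outlook use case are my own illustration, not a documented Microsoft feature.

```python
import json
import urllib.request

# OpenAI's public chat-completions endpoint
API_URL = "https://api.openai.com/v1/chat/completions"

def build_summary_request(email_text: str, model: str = "gpt-3.5-turbo") -> dict:
    """Build a chat-completion payload asking the model to summarize an email."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Summarize the following email in two sentences."},
            {"role": "user", "content": email_text},
        ],
        "temperature": 0.2,  # low temperature keeps summaries more predictable
    }

def summarize_email(email_text: str, api_key: str) -> str:
    """Send the request and return the model's summary (makes a network call)."""
    payload = json.dumps(build_summary_request(email_text)).encode("utf-8")
    req = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

A real product integration would of course hide all of this behind a button in the mail client, but the core loop - wrap the user's text in a prompt, send it to a hosted model, display the response - is this simple.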



At the very least, natural language models will be a massive step forward in reading emails aloud, turning almost anything into an audiobook, and improving digital assistant capabilities.

While there are genuine concerns about creating a world where the vast majority of communication is simply bots having conversations with other bots - removing humanity and unique personality from content creation and communication - there are likely ways to mitigate this as well. Personalized models can learn each user's personality traits and communication style, meaning that your 'bot' will come closer and closer to resembling you and your individuality with each passing day. That likely creates a host of other concerns around 'synthetic reality' that I simply don't have time to outline here.


Pursuing efficiency through automation is nothing new. And while there are genuine concerns to consider, the productivity gains (assuming that constantly reviewing the output doesn’t take more time than is saved) likely means that at least in many areas, GPT-style models will play a significant role in our daily lives going forward.

The hope is that we just don’t outsource our thinking too much and lose the ability to figure things out for ourselves. Critical thinking is a learned skill that takes continual practice. We must be careful not to automate that away as well.
