How long until the Singularity? AI and Speech Recognition

One day, we may all be coexisting with robots that think and act in ways miles ahead of our own.

Even now, the developments in the technology industry related to AI are astonishing, from Google’s AlphaGo defeating the Chinese Go master Ke Jie in 2017 to the UK’s Royal Astronomical Society using AI to assess whether we could live on other planets.

AI being truly human-like

Above all, however, the most popular question related to AI has been “how close are we to AI being truly human-like?”

One area where this is becoming clearer is the use of AI in speech recognition. A prime example of a current product that showcases this is Amazon’s Alexa; most of us know the problems users have faced with incorrectly placed orders and misunderstood commands.

A possible improvement to conversational AI like Alexa could come from a competitor: Microsoft. At the tail end of 2017, ZDNet and other outlets reported that Microsoft had set a record in AI-driven speech recognition.

Microsoft’s AI can now hear and transcribe speech as well as a human. The metric used, according to ZDNet and Microsoft, was a word error rate of 5.1%, on par with the average human error rate on the same task. The figure was measured on a corpus of 2,400 telephone conversations between US speakers with neutral accents.
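To make the 5.1% figure concrete: word error rate is the word-level edit distance (substitutions, insertions, and deletions) between a reference transcript and the system’s output, divided by the number of reference words. A minimal sketch, with hypothetical example sentences:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("timer" -> "time") over six reference words: 1/6, about 16.7%.
print(wer("set a timer for five minutes", "set a time for five minutes"))
```

So a 5.1% word error rate means roughly one word in twenty is transcribed wrongly, relative to what a human transcriber would produce.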

This brings to mind a key question: what results would we get if the sample were drawn from telephone conversations worldwide? Answering that requires another discussion entirely, but suffice it to say this is a move in the right direction. AI can now understand and transcribe neutral accents as well as humans can; Microsoft and others aim to push past this hurdle so it can do the same for all accents in all situations.

Conversational User Interfaces

Even so, Microsoft’s work does not stand alone in this space. Another step forward in speech recognition and AI is the work being done on Conversational User Interfaces (CUIs). CUIs are systems like Siri, or any kind of chatbot, that recognise what you say and respond with a set of preset options.
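The preset-based behaviour described above can be sketched as a simple lookup: the transcribed utterance is matched against hard-coded intents, and the reply comes from a fixed set of options. All intents and replies here are hypothetical examples, not any real product’s logic:

```python
# Toy preset-based CUI: fixed intents, fixed replies (hypothetical examples).
PRESETS = {
    "order status": "Your order is on its way.",
    "business hours": "We are open 9am to 5pm, Monday to Friday.",
}

def respond(transcript: str) -> str:
    """Match a transcribed utterance against preset intents."""
    text = transcript.lower()
    for keyword, reply in PRESETS.items():
        if keyword in text:
            return reply
    # Anything outside the presets falls through to a canned apology.
    return "Sorry, I don't have an answer for that yet."

print(respond("Can you tell me your business hours?"))
```

The limitation is plain: anything not anticipated in the presets gets the fallback answer, which is exactly the roadblock that learning-based CUIs aim to remove.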

The first goal is CUIs that can genuinely learn from what humans ask and develop better answers on that basis. In other words, these CUIs would need to run fully on artificial neural networks, which are, in layman’s terms, computer models loosely inspired by the human brain.

For this to happen, CUIs need to overcome two roadblocks: relying on preset answers to customer queries, and lacking fully built-in transcription capabilities. In this way, progress on CUIs depends on companies like Microsoft reaching human parity in transcribing any human accent, at a 5.1% word error rate or better.

The field of Artificial Intelligence is full of uncertainty and can be daunting. We hope that this series of articles will make clear how AI can and will affect your life, judging by leading work in the field.

Calero and Artificial Intelligence

At Calero, we are working on automating the submission of supplier invoices and matching them in real time against the purchase order. This aims to reduce or even eliminate the manual data-checking process that occurs in organisations.

Calero has also realised that improving productivity is essential to any organisation, so we are working on a speech-to-text interface that would enable credit memos, purchase orders… to be created and sent to your clients in seconds, with no human involvement beyond the spoken command.

Find out more about Calero and our Token Launch at our site: Calero.io

Join us on Telegram: t.me/CaleroToken

Follow us on Facebook and Twitter for the latest news and developments.
