A History of AI Mini Series: Part 2 – 2000–2014
Welcome to our mini-series documenting the arrival of AI in the mainstream over the past 23 years. For those new to the world of AI, understanding its history can unlock a greater level of confidence.
This 5-part series includes a brief introduction to the technology and explores AI in the mainstream across 4 different eras.
At Jade Kite, we have been studying AI for over 20 years. It is our mission to empower people to feel confident in navigating this new world of technology and using it in qualitative research as quickly as possible. That is why we are committed to delivering the need-to-know information in bitesize chunks.
2000-2014
In the early 2000s, most AI applications centered on the identification and simple analysis of medium-sized data sets. Natural language processing (NLP) laid the foundation for many of the breakthroughs we are now seeing. This gave rise to the first mainstream adoptions of AI, including speech recognition, navigation, and machine translation.
NLP-led
Speech recognition
A defining advancement of this time was speech recognition. Dragon NaturallySpeaking improved steadily after its first (incredibly hard to use, calibration-intensive, not very accurate) release in 1997, and by the early 2010s it was reaching favorable effort-to-reliability ratios. The launch of Apple’s Siri and Google’s Voice Search in 2011 truly felt like a giant leap into the future at the time. These products allowed users to speak naturally to their devices and receive answers without the need for typing.
Sentiment analysis
In the online marketing world, sentiment analysis was starting to make waves by determining the emotional tone of content – mostly social media and consumer feedback. At the time, this only went as far as recognizing ‘positive’ and ‘negative’ content and largely lacked emotional nuance. While basic, this still empowered researchers and advertisers to analyze social media sentiment, customer feedback, and product reviews.
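To illustrate just how simple that era of sentiment analysis was, here is a minimal sketch of a lexicon-based polarity classifier. The word lists are hypothetical placeholders, not any real tool's lexicon; this only shows the general "count positive vs. negative words" approach described above.

```python
# Hypothetical, tiny sentiment lexicons for illustration only.
POSITIVE = {"great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "hate", "terrible", "angry"}

def classify(text: str) -> str:
    """Return 'positive', 'negative', or 'neutral' by simple word counting."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify("I love this excellent product"))  # positive
print(classify("terrible service"))               # negative
```

Note how much nuance this misses: sarcasm, negation ("not bad"), and intensity are all invisible to a word-counting approach, which is exactly the limitation early researchers ran into.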
Machine translation
While machine translation had been around for many years prior to this period, it was during this time that it began to improve significantly. Google Translate launched in 2006 but underwent significant improvements between 2010 and 2014, making it a more reliable and widely used tool. As with many things in AI, this wide usage was a large part of what enabled it to progress: more inputs, reactions to those inputs, and corrections meant an army of millions of volunteer “trainers”.
Fun fact: Google Translate and modern translators do not actually translate directly from language A to language B, but from language A to its own non-human language, and then from that to language B. I would LOVE to study that universal language somehow.
Machine learning-led
Recommendation systems
The appearance of efficient, at-scale data collection, storage and analytics of consumer behavior transformed the online world forever. Amazon and Netflix, amongst many others, were quick to invest in and adopt this technology to offer content recommendations created by sorting consumers into segments. This gradually started to influence the way consumers are segmented across marketing: not by demographics but by behavior.
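The "people who bought X also bought Y" idea behind those early recommendation systems can be sketched in a few lines. The purchase histories and item names here are invented, and real systems used far more sophisticated collaborative filtering, but the core behavioral logic looked something like this:

```python
from collections import Counter

# Hypothetical purchase histories, for illustration only.
histories = {
    "ann": {"book_a", "book_b", "book_c"},
    "ben": {"book_a", "book_b", "book_d"},
    "cat": {"book_c", "book_e"},
}

def recommend(user: str) -> list[str]:
    """Suggest items owned by behaviorally similar users, weighted by overlap."""
    mine = histories[user]
    counts = Counter()
    for other, items in histories.items():
        if other == user:
            continue
        overlap = len(mine & items)  # shared purchases = behavioral similarity
        if overlap:
            for item in items - mine:  # only suggest things the user lacks
                counts[item] += overlap
    return [item for item, _ in counts.most_common()]

print(recommend("ann"))  # ['book_d', 'book_e']
```

The key point is that nothing here looks at who the user *is* (age, location, gender), only at what they *do*, which is exactly the shift in segmentation described above.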
While adoption may have been slow at the time, these use cases and applications laid the groundwork for many AI tools we see today. Indeed, many tools that today claim to be fueled by cutting-edge AI in fact use only the techniques above.
Tune in next week for Part 3: 2015-2020