2019 in review: What just happened in the world of Artificial Intelligence?
David Foster
Founding Partner, ADSP | Author of world's first textbook on Generative AI - Generative Deep Learning (O'Reilly)
2019 has certainly been a busy year. The speed at which Artificial Intelligence (AI) advancements make the headlines fills our everyday lives with moments of awe and pride, and with other moments dominated by the nagging thought that this technology is finding our society ill-prepared.
Has 2019 been a year of progress or of disillusionment in AI? With researchers quickly conquering benchmarks that previously felt unattainable, can we say today that the field is on a steady track?
At Applied Data Science Partners, we wanted to take a step back and put the AI events of 2019 into order and perspective. With the spotlight on, it is important to separate the interest a work initially attracts from its actual gravity and its lasting influence on the field. For this reason, this article unfolds the parallel threads of the AI story and attempts to isolate their significance. Thanks to our amazing content writer Elena Nisioti for narrating this story so wonderfully.
Grab yourself a mince pie and cup of tea, sit back and enjoy a review of the AI year that was...2019.
Fields experiencing a renaissance
If we had to pick one sentence to describe AI in 2019, it would probably be: “reinforcement learning returns and looks like it’s here to stay”.
By now, most of us are probably familiar with supervised learning: someone gathers a lot of labelled training data, feeds it to a machine learning algorithm and lets it come up with a model that can, among other things, predict and classify for us. Some of us may even have the impression that Artificial Intelligence is synonymous with supervised learning. However, this is only one of the many types of machine learning we have today.
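To make this loop concrete, here is a minimal, purely illustrative sketch using scikit-learn (the dataset and model choices are our own assumptions, not anything referenced in this article): labelled examples go in, and a model that can classify new, unseen examples comes out.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Supervised learning in a nutshell: labelled examples go in,
# a model that can classify new, unseen examples comes out.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)                           # learn from labelled training data
print("test accuracy:", model.score(X_test, y_test))  # evaluate on unseen data
```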
In reinforcement learning (RL), agents learn through trial and error, by interacting with an environment that returns rewards for their actions. When more than one agent is involved, they form a multi-agent reinforcement learning system.
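To illustrate the trial-and-error loop, here is a minimal tabular Q-learning sketch on a made-up "corridor" environment; the environment, constants and code are illustrative assumptions only, not taken from any of the works discussed here.

```python
import random

# A toy, hypothetical "corridor" environment: the agent starts on the
# left-most of 5 cells and earns a reward of 1 only when it reaches the
# right-most cell.
n_states, actions = 5, [-1, +1]              # actions: step left / step right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.2        # learning rate, discount, exploration

def step(state, action):
    next_state = min(max(state + action, 0), n_states - 1)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward, next_state == n_states - 1

for episode in range(500):
    state = 0
    for _ in range(100):                     # cap the episode length
        # epsilon-greedy: usually exploit the best known action, sometimes explore
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            qvals = {a: Q[(state, a)] for a in actions}
            best = max(qvals.values())
            action = random.choice([a for a in actions if qvals[a] == best])
        next_state, reward, done = step(state, action)
        best_next = max(Q[(next_state, a)] for a in actions)
        # Q-learning update: move the estimate towards reward + discounted future value
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state
        if done:
            break

# After training, the greedy policy in every non-terminal cell should be "step right" (+1).
print({s: max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)})
```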
This field has been around for decades and, conceptually, sounds like a more plausible learning mechanism for creating intelligence than supervised learning. However, it wasn't until 2015 that it gained traction, when DeepMind used Deep Q-learning, a combination of a classical reinforcement learning algorithm with deep neural networks, to create agents that play Atari games. In 2018, OpenAI also established itself in the area by solving Montezuma's Revenge, an Atari game considered particularly hard.
In the past few months, things escalated: OpenAI Five beat the reigning Dota 2 world champions, DeepMind's AlphaStar reached Grandmaster level in StarCraft II, and OpenAI trained a robot hand to solve a Rubik's Cube.
These works have restored the research community's faith in RL, which in the past was considered too inefficient and simplistic to solve complex problems, or even games.
Another application that took off this year was Natural Language Processing (NLP). Although researchers have been working in this area for decades, the text generated by NLP systems of the recent past did not sound natural enough. Since the end of 2018, attention has shifted from the word embeddings of the past to pre-trained language models, a technique that NLP borrowed from computer vision. Training these models is performed in an unsupervised manner, which enables contemporary systems to learn from the enormous amounts of text available on the internet. As a result, these models become "knowledgeable" and develop the ability to understand context. Their performance on specific tasks can then be further improved using supervised learning. This practice of improving a machine learning model by training it on different tasks belongs to the area of transfer learning and is believed to hold great potential.
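As a rough illustration of this transfer-learning recipe, the sketch below assumes the Hugging Face `transformers` library: a language model pre-trained on large amounts of text ("bert-base-uncased" here) is given a fresh classification head and then fine-tuned on a tiny, made-up labelled dataset. The data and hyperparameters are toy assumptions, not anything from the systems mentioned in this article.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load a pre-trained language model and attach a new 2-class classification head.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

texts = ["I loved this film", "A complete waste of time"]   # tiny toy dataset
labels = torch.tensor([1, 0])                               # 1 = positive, 0 = negative

inputs = tokenizer(texts, padding=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# One supervised fine-tuning step on the task-specific labels.
outputs = model(**inputs, labels=labels)
outputs.loss.backward()
optimizer.step()
print("fine-tuning loss:", outputs.loss.item())
```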
NLP has been building momentum since systems such as Google's BERT, ELMo and ULMFiT were introduced in 2018, but this year's spotlight was stolen by OpenAI's GPT-2, whose performance has raised discussions about the ethical use of NLP systems.
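To get a feel for what such a pre-trained language model does, here is a short, hypothetical sketch that generates text from the publicly released GPT-2 weights, again assuming the Hugging Face `transformers` library; the prompt is our own.

```python
from transformers import pipeline

# Text generation with the publicly released GPT-2 weights.
generator = pipeline("text-generation", model="gpt2")
result = generator("2019 has been a busy year for artificial intelligence because",
                   max_length=40, num_return_sequences=1)
print(result[0]["generated_text"])
```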
Practices achieving maturity
This year has also seen some of the recent deep learning techniques reaching their maturity. Applications that employ supervised learning, and in particular computer vision, have given birth to successful real-life products and systems.
Generative Adversarial Networks (GANs), a pair of neural networks in which a generator network attempts to deceive a discriminator network by learning to generate images that imitate the training data, have reached a remarkable level of maturity. Creating artificial yet realistic images of people and objects is no longer a frontier for AI. A side-by-side comparison of faces generated at the introduction of GANs in 2014 and by StyleGAN, open-sourced by NVIDIA in 2019, is probably the best way to grasp the progress in the field.
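The adversarial setup itself can be sketched in a few lines. The PyTorch snippet below is a deliberately minimal illustration of the idea (the architectures, dimensions and data are toy assumptions, nowhere near StyleGAN): the discriminator is trained to tell real images from generated ones, and the generator is trained to fool it.

```python
import torch
import torch.nn as nn

# Toy dimensions: the generator maps random noise to flattened "images".
latent_dim, img_dim = 64, 28 * 28

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images):
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real images from generated ones.
    fake_images = G(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(D(real_images), real_labels) + bce(D(fake_images), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator so that its samples are classified as "real".
    g_loss = bce(D(G(torch.randn(batch, latent_dim))), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Stand-in "real" data; in practice this would be a batch of training images.
print(train_step(torch.rand(16, img_dim) * 2 - 1))
```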
2019 has even seen AI-generated art departing from hypothetical discussions of past years to be part of today's museum installations and auctions.
Computer vision has also been adopted in areas of significant commercial and social interest, including autonomous vehicles and medicine. The adoption of AI algorithms in these areas is naturally slow, as they directly interact with human life. At least for now, these systems are not fully autonomous; their aim is to support and augment the capabilities of human operators.
Research groups are working intensively alongside hospitals on developing AI systems for the early prediction of diseases and for organising the vast archives of health data, with a notable example being the ongoing partnership between DeepMind Health and UCLH. However, most of these works are still at an experimental phase and, to date, SubtlePET, software that uses deep learning to enhance medical images, is the only AI-enabled system to have received FDA clearance.
The sleeping giant
AutoML is a sub-field of machine learning that has been around since the '90s, attracted significant interest in 2016, and somehow never managed to make the headlines, at least not in the way other AI trends did. Perhaps this is due to its not-so-fancy nature: AutoML aims to make the practice of machine learning more efficient by automating decisions that data scientists today make through manual, brute-force tuning.
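As a toy illustration of the idea (not of any particular AutoML product), the sketch below uses scikit-learn's RandomizedSearchCV to automate the hyperparameter tuning a data scientist would otherwise do by hand; the model, search space and dataset are our own assumptions.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Let the machine search the hyperparameter space instead of tuning it manually.
X, y = load_iris(return_X_y=True)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": [50, 100, 200],
        "max_depth": [None, 3, 5, 10],
        "min_samples_split": [2, 5, 10],
    },
    n_iter=10,          # try 10 random configurations
    cv=3,               # score each with 3-fold cross-validation
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```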
In the past three years, our understanding of this area has evolved, and today most major companies offer AutoML tools, including Google Cloud AutoML, Microsoft Azure, Amazon Web Services, and DataRobot. This year, interest turned towards evolutionary approaches, with the Learning Evolutionary AI Framework (LEAF) becoming the state of the art. However, AutoML has yet to reach the level of maturity that would allow a fully automated AI system to perform better than a team of AI experts.
Concerns about AI
Despite the overwhelming number of successes, the world of AI also gave us some discouraging stories this year. A major issue was bias in machine learning models, a problem that came to the fore in 2018, when Amazon discovered gender bias in its automated recruiting system and COMPAS, a tool widely used to inform sentencing decisions in US courts, was also found to be biased with respect to race.
This year the number of cases increased, which arguably reveals that the public and institutions are becoming increasingly suspicious of existing AI systems used to automate decisions. Here is a small part of the picture:
- Hospital algorithms were found to be biased against black patients in October
- The AI system used to grant UK visas was accused by a rights group of being racially-biased in October
- Apple's credit scoring system was accused by its customers of being gender-biased in November
Bias is a particularly alarming problem, as it lies at the core of supervised deep learning: when models are trained on biased data and their predictions are not explainable, we cannot really tell whether bias is present. The reaction of the research community has so far focused on developing techniques for understanding the reasons behind the decisions of deep models, but experts warn that many of our problems could be solved if we just adopted the right practices. Google Cloud Model Cards are a recent attempt at organising the community towards open-sourcing models accompanied by a clear description of their nature and limitations.
Another alarming realisation this year was that the more sophisticated a technology becomes, the higher the chances that it will be misused. Deepfakes are the dark side of GANs, where deep learning algorithms are used to create pictures or videos of real people in purely fabricated scenarios. It doesn't take much far-sightedness to see how this technology can be used to spread false news, from political propaganda to bullying. This problem cannot be solved by scientists alone, whom history has shown to be bad at predicting the real-life implications of their discoveries, let alone controlling them; it requires a dialogue among all parts of society.
Just how big is AI today?
Quantifying the value of AI today is hard, but one thing is certain: AI has left the realm of science fiction and avant-garde computer science and is now an integral part of a society that is investing heavily in it.
Earlier this year, three major deep learning researchers received the Turing Award, a long-awaited recognition of AI as an established field of computer science. And a look at the exponential curves in the 2018 AI Index annual report is enough to reveal the excitement of both the research community and industry.
What's in store for AI in 2020? In our next post, we will attempt to see how the history of this field and recent developments will influence its near future.
Follow our company page for updates and more cutting-edge AI content.
Applied Data Science Partners is a London based consultancy that implements end-to-end data science solutions for businesses, delivering measurable value. If you're looking to do more with your data, please get in touch via our website.