Here Are The Most Controversial AI Moments of 2020

Artificial intelligence was the buzzword of 2020, and even with the benefits of the technology evident all around us, AI has had its share of controversies. From algorithms unfairly discriminating against women in hiring to students complaining about unrealistic grades, there is no doubt that AI evolved in 2020, and as 2021 beckons, it is time to take stock of the year. With GPT-3, deepfakes, and facial recognition making headlines in 2020, there are many arguments surrounding privacy and regulation.

In this article, I will look at the following controversial AI incidents of 2020 and then consider the future prospects of artificial intelligence and how 2021 is shaping up:

- Facial recognition
- Deepfakes
- AI-based grading system
- NeurIPS reviews
- GPT-3

Facial Recognition

Clearview AI provides organizations, predominantly law enforcement agencies, with a database that can match an image of a face against more than three billion facial pictures scraped from social media sites.
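At its core, this kind of service is an embedding-based nearest-neighbour search over face images. Below is a minimal sketch of that idea using the open-source face_recognition library; it is not Clearview AI's system, and the file names and matching threshold are illustrative assumptions.

```python
# Minimal sketch of embedding-based face matching (illustrative only; not
# Clearview AI's system). Uses the open-source face_recognition library.
import face_recognition
import numpy as np

# Hypothetical paths: a "probe" photo and a small gallery of scraped images.
probe = face_recognition.load_image_file("probe.jpg")
gallery_paths = ["scraped_001.jpg", "scraped_002.jpg", "scraped_003.jpg"]

probe_encoding = face_recognition.face_encodings(probe)[0]  # 128-d embedding

# Encode every gallery face, then rank by distance to the probe embedding.
gallery_encodings = []
for path in gallery_paths:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:  # skip images where no face was detected
        gallery_encodings.append((path, encodings[0]))

distances = face_recognition.face_distance(
    np.array([enc for _, enc in gallery_encodings]), probe_encoding
)
for (path, _), dist in sorted(zip(gallery_encodings, distances), key=lambda x: x[1]):
    # A distance below ~0.6 is the library's conventional "same person" threshold.
    print(f"{path}: distance={dist:.3f}  match={dist < 0.6}")
```

A production system replaces the simple loop with an approximate nearest-neighbour index so the lookup scales to billions of images, but the matching principle is the same.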

The company has recently been hit with a series of reprisals from social media platforms, which have taken a hostile stance in response to Clearview AI’s operations. In January, Twitter sent a cease-and-desist letter and requested the deletion of all data Clearview AI had harvested from its platform. YouTube and Facebook followed with similar actions in February.

Clearview AI claims it has a First Amendment right to public information and defends its practice on the grounds that it assists law enforcement agencies in the fight against crime. Law enforcement agencies themselves are exempt from the EU’s #GDPR.

Clearview has received multiple cease-and-desist orders from Facebook, YouTube, Twitter, and other companies over its practices, but it is not clear whether the company has deleted any of the photos used to build its database, as those orders directed. In addition to a lawsuit in Illinois, Clearview is also facing legal action in California, New York, and Vermont.

Deepfakes

Deepfakes superimpose people’s faces onto existing bodies. While many look near-genuine, the technology still hasn’t reached its full potential. Even so, experts have noted its misuse in pornography and politics.

The start of 2020 came with a clear shift in the response to deepfake technology, when Facebook announced a ban on manipulated videos and images on its platforms. Facebook said it would remove AI-edited content likely to mislead people, but added that the ban does not cover parody. Lawmakers, however, are skeptical as to whether the ban goes far enough to address the root problem: the ongoing spread of disinformation.

The speed and ease with which #deepfakes can be made and deployed have many worried about misuse in the near future, especially with a U.S. election on the horizon. Many in America, including military leaders, have voiced similar worries, and these concerns are heightened by the knowledge that deepfake technology is improving and becoming more accessible.

Microsoft announced the release of technologies to combat online disinformation on its official blog. One of these is the Microsoft Video Authenticator, which analyzes a photo or video and provides a confidence score indicating whether the media is likely to be fake. It has performed well on examples from the public Deepfake Detection Challenge dataset.
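Microsoft has not published the tool's internals, but the general shape of such a detector is easy to sketch: score individual frames with a trained classifier and aggregate the scores. The snippet below is a purely illustrative sketch of that pattern; the model file is a hypothetical stand-in, not Video Authenticator.

```python
# Illustrative sketch of a frame-level "fakeness" confidence scorer, in the
# spirit of tools like Video Authenticator (this is NOT Microsoft's code).
# Assumes a hypothetical binary classifier saved as "deepfake_model.pt" that
# outputs a single logit for "manipulated".
import torch
import torchvision.transforms as T
from PIL import Image

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = torch.jit.load("deepfake_model.pt")  # hypothetical pretrained model
model.eval()

def fakeness_score(frame: Image.Image) -> float:
    """Return a 0-1 confidence that the frame is manipulated."""
    batch = preprocess(frame).unsqueeze(0)   # shape (1, 3, 224, 224)
    with torch.no_grad():
        logit = model(batch).squeeze()       # single logit
    return torch.sigmoid(logit).item()

# A video-level score can then be an average over sampled frames.
frames = [Image.open("frame_000.jpg"), Image.open("frame_030.jpg")]
scores = [fakeness_score(f) for f in frames]
print(f"mean fakeness confidence: {sum(scores) / len(scores):.2f}")
```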

AI-Based Grading System

Ofqual, the UK’s exams regulator, chose to use an algorithmic grading system in place of the A-level examinations for university entrance, which were canceled. The UK has since dropped it after parents and students complained that it was unethical and biased against disadvantaged students.

Thousands of A-level students were given a grade lower than their teachers predicted, sparking a nationwide backlash and protests on the streets of London. The government has since buckled and announced that it is abandoning the formula and giving everyone their predicted grades instead.

The backlash to Ofqual’s algorithm was only matched by its complexity. The non-ministerial government department started with each school’s historical grade distribution. Then, Ofqual looked at how results typically shift between the qualification in question and students’ previous achievements. For larger cohorts, teachers’ predicted grades were largely set aside: students were instead ranked within their class and slotted into that adjusted historical distribution.

The number of downgrades wasn’t the only problem, though. The reliance on historical data meant that students were partly shackled by the grades awarded to previous year groups. They were also at a disadvantage if they went to a larger school, because their teacher’s predicted grade carried less weight.
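To make the mechanism concrete, here is a drastically simplified sketch of that kind of standardization. It is not Ofqual’s actual model; the grade labels, cohort-size cutoff, and quota logic are illustrative assumptions, but it shows why small cohorts (common at private schools) kept their teacher predictions while large cohorts were bound to past results.

```python
# Drastically simplified sketch of the standardization logic described above
# (illustrative only; not Ofqual's actual model). Grade labels, the cohort-size
# cutoff, and the historical distribution passed in are made-up assumptions.
GRADES = ["A*", "A", "B", "C", "D", "E", "U"]  # best to worst

def assign_grades(historical_distribution, ranked_students, predicted_grades,
                  small_cohort_cutoff=15):
    """historical_distribution: fraction of the school's past students per grade.
    ranked_students: names ordered best to worst by their teachers.
    predicted_grades: teacher-predicted grade for each student."""
    n = len(ranked_students)

    # Small cohorts: the statistical model is unreliable, so teacher
    # predictions carry (almost) full weight.
    if n < small_cohort_cutoff:
        return {s: predicted_grades[s] for s in ranked_students}

    # Large cohorts: slot the teachers' rank order into the school's
    # historical grade distribution, largely ignoring predicted grades.
    results, index = {}, 0
    for grade, fraction in zip(GRADES, historical_distribution):
        quota = round(fraction * n)
        for student in ranked_students[index:index + quota]:
            results[student] = grade
        index += quota
    for student in ranked_students[index:]:  # rounding leftovers get the lowest grade
        results[student] = GRADES[-1]
    return results
```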

At a time when society is examining how technology reinforces its race and class issues, many realized that the system, regardless of Ofqual’s intentions, had a systemic bias: it rewarded learners who went to private institutions with small class sizes and penalized poorer students who attended larger schools and colleges across the UK.

NeurIPS Reviews

This year, the thirty-fourth annual Conference on Neural Information Processing Systems, NeurIPS 2020, is being held virtually from 6 to 12 December. Paper submissions were up 38% on last year, and 1,903 papers were accepted, compared with 1,428 in 2019.

The review period began in July, and in August the popular #artificialintelligence conference sent out the paper reviews for this year’s event. This has placed the machine-learning gathering amid controversy once again, with authors claiming that many of the reviews were of poor quality: unclear, containing incomplete sentences, and so on.

This is not the first time that controversy has scarred the conference’s reputation. In 2018, the organizers of this popular #machinelearning event changed its name from NIPS to NeurIPS after a debate over whether “NIPS” was an offensive acronym.

GPT-3

OpenAI released its latest language model, GPT-3, in June, surpassing its predecessor GPT-2 with 175 billion parameters. It has raised many concerns about poor generalization, unrealistic expectations, and the ability to write human-like text for nefarious purposes. Elon Musk, an OpenAI co-founder, also criticized OpenAI’s decision to give Microsoft exclusive access to the model.

Many advanced Transformer-based models have evolved to achieve human-level performance on a number of natural language tasks. The authors argue that the Transformer-based approach behind many recent language-model advances is limited by its need for task-specific datasets and fine-tuning. GPT-3, by contrast, is an autoregressive model trained with unsupervised learning that focuses on few-shot learning, in which a handful of demonstrations of a task are supplied at inference time.

Scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model.
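In practice, "specifying a task via text" simply means building a prompt that contains a few worked examples followed by a new query. Below is a minimal sketch of that pattern against OpenAI's 2020-era Completion API; treat the engine name and parameters as assumptions rather than a definitive recipe.

```python
# Minimal sketch of few-shot prompting: the task is described entirely in text,
# with demonstrations followed by a query (illustrative; engine name assumed).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# A few demonstrations of English-to-French translation, then a new query.
prompt = (
    "Translate English to French.\n"
    "sea otter => loutre de mer\n"
    "peppermint => menthe poivrée\n"
    "cheese =>"
)

response = openai.Completion.create(
    engine="davinci",   # GPT-3 base engine (assumed name)
    prompt=prompt,
    max_tokens=10,
    temperature=0.0,
    stop=["\n"],
)
print(response.choices[0].text.strip())  # expected output: "fromage"
```

No gradient updates happen here: the model infers the task from the pattern in the prompt alone, which is exactly the few-shot behaviour described above.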

On NLP tasks, #GPT3 achieves promising results in the zero-shot and one-shot settings, and in the few-shot setting it is sometimes competitive with, and occasionally even surpasses, the state of the art.

Future Prospects

The important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. Designing smarter AI systems is itself a cognitive task, so such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion that leaves human intellect far behind.

By inventing revolutionary new technologies, such a #superintelligence might help us eradicate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the AI’s goals with ours before it becomes superintelligent.

