Stanford's report on AI: Gaining Strength, Gathering Storm
Dmitry Doshaniy
CEO, Founder | NNTC - Bespoke innovations, Industrial Metaverse, Turnkey IT projects || Bizdev.ae - outsourced sales for tech companies in MENA ||| Ex-IBMer ||| Wannabe philosopher, boxer, comedian
The Stanford AI100 project has released its second report on the new capabilities of artificial intelligence. It summarizes how AI affects people's lives, what the new benefits and threats are, and how we should live with it.
The AI100 expert group at Stanford University has been monitoring the field of artificial intelligence since 2015, and its reports are among the most important in the field. The first document was published in 2016; the second, under the rather poetic title "Gaining Strength, Gathering Storm," was published in September 2021. The group intends to produce a report every five years for at least the next hundred years, a commitment enshrined in the name (AI100).
The goal is to use the index to quantify the development of AI as comprehensively as possible (similar to how the Dow Jones Industrial Average tracks U.S. stock markets). Such an assessment is needed to understand the risks AI applications pose to national security, ethics, politics, psychology, and science. Previously, no such comprehensive assessments of AI existed.
The benefits of rising AI capabilities
In the five years since the last report, the effectiveness of artificial intelligence has grown significantly. New directions and challenges for AI have emerged: recognizing the images a person sees or imagines, as well as commands, via external neural interfaces, and "machine cooperation": two algorithms that train each other in, say, recognizing and creating fake materials. For example, Facebook held a competition to uncover deepfakes: in just three months, participants used neural-network training techniques to identify almost all the fakes in a huge database of fake and real videos created for the task.
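The "two algorithms training each other" idea can be illustrated with a toy numeric sketch. This is not a real GAN (a real one uses neural networks and gradient descent); every name and number below is invented for illustration. A "generator" tries to produce values that pass for real data, while a "discriminator" keeps re-learning a boundary between real and fake, and each player's improvement forces the other to adapt.

```python
import random

random.seed(1)

REAL_MEAN = 5.0   # the "real data" the generator tries to imitate
LR = 0.05         # step size for both players

g = 0.0  # generator parameter: it emits samples near g
t = 0.0  # discriminator threshold: it calls a sample "real" if sample > t

for step in range(2000):
    fake = g + random.gauss(0, 0.1)
    real = REAL_MEAN + random.gauss(0, 0.1)
    # Discriminator: nudge the threshold toward the midpoint of what it just
    # saw, trying to keep fakes below the line and real samples above it.
    t += LR * ((fake + real) / 2 - t)
    # Generator: whenever the discriminator rejects its sample, move toward
    # the real data until it can no longer be told apart.
    if fake <= t:
        g += LR

print(round(g, 2))  # the generator has drifted close to the real data's mean
```

The same arms race plays out in deepfake detection: every improvement in detectors pushes forgers to produce material the detector can no longer separate from the real thing.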
Modern algorithms have excelled at facial recognition; in the past two or three years the technology has improved radically and has begun to power government surveillance systems. Its effectiveness has grown to the point where even COVID face masks do not always help a person hide.
The accuracy of human-pose recognition in photos has increased by 30%, and action identification in videos and many other areas of computer vision are also progressing quickly. All of this is happening alongside a massive reduction in programming cost and training time.
Compared with the results of 2016, neural networks have achieved better results in translation and speech recognition, and ordinary people now have access to these benefits (Google Translate, voice input, and so on).
One of the most intriguing applications of AI is in applied biology. Until recently, this was considered a dormant, knowledge-gathering field that could deliver a breakthrough once the technological level allowed. Applications include visual pattern recognition and genome decoding, but the most promising direction is recognizing protein functions and creating new amino acid sequences or chemical substances with specified functions (for example, for drug design). This may open the door to freely modifying cell functionality. Such a breakthrough could only be compared to humanity's transition to metalworking, and it may come quite soon: the coronavirus pandemic has spurred research in this direction.
AI Threat to Society: The Problem of Trust and Misuse
Until recently, popular culture promoted the distant, frightening image of an artificial intelligence that acts independently and poses a real threat to humans. By now, that image is almost buried under a pile of real problems that the widespread use of AI is already causing.
Instead of Skynet, we face the fact that it is humans who turn the capabilities of modern algorithms and neural networks, which so far only resemble "canonical" artificial intelligence, against other humans. Examples include combat drones instead of assassins, fake evidence in courts and political affairs, fake materials created for threats or blackmail, "voice stealing", non-existent artificial virtual identities (even free web applications are quite good at generating new faces), and many other attempts to gain power, and more often money, at all levels of society.
The authors of the report (still US-centric, by the way) see a threat in the tendency of neural networks to reinforce discriminatory patterns. A striking example: Microsoft's Twitter chatbot Tay, which overnight turned, with the help of users, into a radical racist proposing a race war. As a result, the bot was taken offline.
There is a more serious problem related to the spread of AI, which processes raw data, in place of classical mathematical approaches in areas where accuracy and transparency are especially important. A neural network is essentially a "black box": even its authors do not know how it arrives at a result, and in this sense trained neural networks have become similar to the brain.
That is why some research focuses specifically on "cracking" these "black boxes". Such hacking is necessary to avoid threats: without a complete understanding of how the data is processed, it is impossible to detect leaks, losses of important information, or the places where errors occur. And yet such algorithms are already used in medical genetics, in asteroid hazard assessment, and even in judicial practice!
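One common family of approaches to probing a black box is perturbation-based: perturb one input at a time and watch how much the outputs move. Here is a minimal sketch of permutation-style importance, where the `black_box` function is a hypothetical stand-in for a trained network whose internals we cannot inspect (it secretly depends only on features 0 and 2):

```python
import random

# A hypothetical "black box": we can query it but not look inside.
def black_box(features):
    return 3.0 * features[0] - 2.0 * features[2]

def permutation_importance(model, samples, n_features):
    """Estimate each feature's importance by shuffling it across samples
    and measuring how much the model's outputs change on average."""
    baseline = [model(s) for s in samples]
    importances = []
    for f in range(n_features):
        shuffled = [s[f] for s in samples]
        random.shuffle(shuffled)
        drift = 0.0
        for s, v, base in zip(samples, shuffled, baseline):
            perturbed = list(s)
            perturbed[f] = v          # replace only feature f
            drift += abs(model(perturbed) - base)
        importances.append(drift / len(samples))
    return importances

random.seed(0)
data = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(200)]
scores = permutation_importance(black_box, data, 4)
print(scores)  # features 0 and 2 stand out; 1 and 3 contribute nothing
```

Probing like this reveals which inputs actually drive a model's decisions even when its inner workings are opaque, which is exactly the kind of transparency the report calls for in high-stakes domains.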
In addition, since 2016, the fear that people will be unable to compete with AI and will lose their jobs has grown. This is already happening in some areas, such as assembly and quality control in manufacturing, sales, and financial services. However, the authors of the report consider this the least significant of the problems created by AI, including in the long run. They are confident that artificial intelligence is being made a scapegoat for nothing: its use does not explain the overly slow decline in unemployment or other manifestations of financial crises. Moreover, according to the index, in areas related to security (digital, physical, national, financial), a growing proportion of organizations consider the use of AI risky.
Much more often, AI is used alongside production processes to optimize them rather than as a full replacement for some kind of work, and even where work is replaced, people are still needed to maintain and develop the corresponding tools. At the same time, the share of jobs directly related to AI is growing slowly.
AI in the media
In the five years since the last report, the buzz around AI has increased dramatically, as has public awareness of the subject. The contrast in opinions about AI in publications and at conferences has grown sharply. The previous report noted a decline in the "neutrality" of AI evaluations since the beginning of the century, and this trend has only intensified in the last five years. In 2019, 70% of AI conferences addressed ethical issues at the title level; previously, far fewer did.
The earlier bias toward positive evaluations of AI, once significant, is no longer evident. This is because, since 2016, technologies based on neural networks, the most relevant embodiment of the idea of AI, have become real, and people have encountered them in their own lives. In places, such technologies make life more comfortable, but in many cases their unconditional usefulness is in doubt.
Uncertainty will only increase - along with the growing influence of AI on the labor market, information security and other economic and social processes. Although AI's ability to develop independently and cross information barriers is not yet even on the horizon, its superiority over humans in many areas is rapidly accumulating, and it will have unique capabilities in the future, AI100 experts conclude.
This report should be read by everyone, not just IT folks and technology geeks. AI is a major factor in today's life; it is important to understand the trends and adapt to them!
"Machines should work; people should think," as IBM's founding father Thomas J. Watson used to say. Enjoy this food for thought)
No Risk No Glory : No Failure No Story | Horse Fall Survivor | Founder @ Touchforce : In constant BETA : Cloud Services | App Dev | Managed Cyber Security | EdTech | IT Infra | BlockChain | AI | UAE, Africa, US, UK
3y Dmitry Doshaniy ... excellent read!! At first sight I thought it was Atom back from Real Steel ... the statement "free modification of cell functionality" ... this is epic. As highlighted, solutions to identify deepfakes, which took FB three months, could be improved; seeing this as an opportunity to bring solutions that could do it on the fly. One side of the coin, as you said, is Gaining Strength and the other is Gathering Storm... brilliant article
CEO Advisor | Artificial Intelligence | Tech Marketing | GTM strategy | Board Advisor
3y Thanks for the summary Dmitry!