What will a machine think when it looks us in the eye?


Artificial intelligence will do all of our work for us – it will even find a cure for cancer, claim the enthusiasts. It will render humans useless, respond the pessimists, prophesying the doom of humanity at the hands of conscious machines. Either way, today’s AI developers will determine our fates in the next dozen or so years, especially since they have the financial means to do the job.


About machine consciousness

In reply to a journalist’s question on whether machines will ever be able to feel, Oren Etzioni, co-founder and CEO of the Allen Institute for Artificial Intelligence, an organization established two years ago by Microsoft co-founder Paul G. Allen, said: “The short answer is no. An expanded one is: no, they won’t – people have an overblown perception of what computers can do in this day and age”. Another notable quote came from Stuart J. Russell, a scientist credited with major contributions to AI research and the author of many publications on the subject: “The biggest obstacle to the development of AI is we have absolutely no idea how the brain produces consciousness. If you gave me a trillion dollars to build a sentient or conscious machine, I would give it back. I don't get the sense we are any closer to understanding how human consciousness works than we were 50 years ago”.

Why don’t we look at some undeniable facts that offer an undistorted view of where we stand on developing artificial intelligence? Is the experts’ skepticism justified?


Enthusiasts riding the wave

Not a month goes by without reports of global corporations such as Google, Amazon, IBM, Facebook, Apple, Microsoft and Samsung investing in AI, or of the proliferation of start-ups developing ultra-modern smart technologies. The trend is as popular with mainstream media as it is with niche technology portals, and it is accompanied by an ongoing debate on the possible consequences of the spread of artificial intelligence. The enthusiasts are quick to enumerate the benefits they expect to see within a decade or two: computers responding to human facial expressions, emotions and voice; computer systems that improve themselves through successive passes over data sets (machine learning); implantable nanobots capable of seeking out and destroying cancer cells in the human body; computer decision support systems; smart technologies in our homes; and autonomous vehicles. Even today, the effort to prolong human life, the drive for greater data processing power and the ongoing personalization of computers provide fuel of sorts for global business, and are no longer the domain of the Hollywood directors who have exploited AI for years.


Money drives intelligence

Continued research in the field and greater funding will inevitably result in the gradual commercialization of AI. Within a horizon of just a few years, the involvement of big corporations can be expected to produce revolutionary changes in both medicine and business. Forecasting agencies predict a quantum leap in the next five years. During this time, the funds invested in AI-related projects will grow by tens of percent, while fascination with artificial intelligence is poised to snowball. According to CB Insights, in 2015 alone the global financial market saw the arrival of approximately 300 new companies whose mission statements featured keywords such as artificial intelligence, machine learning and neural networks. According to a report by the market research agency TechSci Research, the United States AI market will grow by 75 percent between 2016 and 2021. The money will go to making AI better suited to consumer electronic devices, scientific research, autonomous cars and R&D activities in the healthcare industry. Meanwhile, BCC Research, a company specializing in technology market research, projects that the global market for smart machines (neurocomputers, expert systems, autonomous robots, intelligent assistants) will grow to US$ 15.3 billion by 2019, at an annual growth rate of 19.7 percent. Without a doubt, this is the fastest-growing segment of the technology industry.


Intellectual backing

This global drive in business would not be possible without research backing. Today’s AI investment boom would never have happened without the involvement of science and technology institutes, research organizations, technology hubs and non-profits. The aforementioned Allen Institute for Artificial Intelligence employs dozens of scholars and technology experts in various fields. Its mission, as proclaimed on the Institute’s website, is “to contribute to humanity through high-impact AI research and engineering”. Besides research institutes, artificial intelligence projects receive financial support from technology corporations, which vie to outdo one another in launching new high-technology projects. The last four years have brought about a breakthrough. As early as 2012, Google was engaging in AI ventures, and in 2014 it announced an investment of hundreds of millions of dollars in the start-up DeepMind. The same year, Mark Zuckerberg, Elon Musk and Ashton Kutcher joined forces to invest substantial amounts in Vicarious FPC, a company dedicated exclusively to the most visionary AI projects. As declared in its mission, Vicarious FPC aims to “build a unified algorithmic architecture to achieve human-level intelligence in vision, language and motor control”. One of Facebook’s strategic objectives is to create a powerful data processing system and develop computer-based facial recognition technology. Huge resources (money, intellectual capital and human labor) have been deployed to develop AI. All this, in my view, may soon put the equivalent of a simple AI in the laboratories of large companies and research institutes.


The time of intelligent change

In view of the projected investment in research, it is interesting to look at some of the forecast areas of AI development. Consider the most recent report on AI by the analytical company Gartner. Here are a few examples. According to Gartner, within the next few years IT systems will be able to make autonomous economic decisions: by 2020, a staggering 5 percent of the world’s economic transactions will be carried out by software algorithms capable of drawing conclusions from the data sets fed into computers. Gartner also predicts that by 2018 an astounding 20 percent of business content and information will be authored and published by computers. We are therefore talking about intelligent, or nearly intelligent, processing of data across many categories. Speaking of which, note the companies already on the market that offer systems which autonomously convert data into reports meaningful to humans. One of them is Yseop, a provider of a service that may soon revolutionize the jobs of accountants, stock exchange analysts, business strategists and managers. With the help of a simple interface, the user enters numbers, charts and infographics into a computer, and the machine automatically compiles, collates and processes the data. Even today, the Associated Press relies on computers that autonomously create reports which are then used by its journalists; their quality is very high and only some need to be edited by humans.
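
To give a rough sense of what such data-to-text systems boil down to, here is a minimal sketch in Python. It is not Yseop’s or the Associated Press’s actual technology: the function, field names and figures are invented for illustration, and real systems layer linguistic variation, validation and domain knowledge on top of this basic idea.

```python
# A toy illustration of data-to-text reporting: figures go in, a readable
# sentence comes out. The company name and numbers below are made up.

def quarterly_report(company: str, revenue: float, prev_revenue: float) -> str:
    change = (revenue - prev_revenue) / prev_revenue * 100
    direction = "rose" if change >= 0 else "fell"
    return (
        f"{company} reported revenue of ${revenue:,.0f} for the quarter, "
        f"which {direction} {abs(change):.1f}% against the previous quarter."
    )

print(quarterly_report("Acme Corp", 1_250_000, 1_100_000))
# -> Acme Corp reported revenue of $1,250,000 for the quarter, which rose 13.6% ...
```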


Machines take to science

In its predictions about the near future, Gartner notes one trend of particular significance for the further development of AI systems: “machine learning”. One of the most fundamental questions asked by AI researchers is how to create systems that autonomously improve their performance by learning from experience. How can the rules that govern human learning be applied to developing computers? This brings us to what experts believe to be the biggest challenge faced by AI developers, an idea as fascinating as it is controversial: how to create advanced systems that will replace humans in decision making. One can immediately see that this is about more than just automating repetitive processes, which has already been done in business and industry. We are referring to machines whose algorithms recognize patterns, predict future results and apply that knowledge in decision making. A practical answer to this challenge may be to use technology based on artificial neural networks that mimic the way the human brain works. Picture a machine capable of analyzing data and drawing conclusions that is employed in medicine: based on parameter analysis, computers would diagnose health conditions, detect anomalies and predict diseases. To quote Oren Etzioni again: “What if a cure for intractable cancer is hidden within the tedious reports on thousands of clinical studies? AI will be able to read and – more importantly – understand scientific text. These AI readers will be able to connect the dots between disparate studies to identify novel hypotheses and to suggest experiments which would otherwise be missed”.
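
As a minimal sketch of that “recognize patterns, then predict” idea, the snippet below fits a simple classifier to a handful of invented patient parameters and estimates the risk for a new case. The data, the two features and the choice of scikit-learn are illustrative assumptions on my part, not a description of any real diagnostic system.

```python
# Toy pattern recognition: learn from labelled examples, then assess a new case.
# The "patient parameters" and labels below are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [age, resting heart rate]; label 1 = anomaly flagged, 0 = normal.
X = np.array([[34, 62], [45, 70], [58, 88], [63, 95], [29, 58], [71, 102]])
y = np.array([0, 0, 1, 1, 0, 1])

model = LogisticRegression().fit(X, y)          # "learning from experience"
new_patient = np.array([[52, 90]])
risk = model.predict_proba(new_patient)[0, 1]   # estimated probability of anomaly
print(f"Estimated risk of anomaly: {risk:.0%}")
```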


Can you hear me, Mr. Watson?

Another matter that is central for AI developers is natural language processing. Many managers believe that computers which emulate the ability to understand human language may revolutionize the drive to achieve integration and collaboration between human and machine intelligence. Google claims that machines currently handle 20 percent of telephone inquiries from customers. Research in the field aims to develop systems that will hold a dialogue with people rather than merely respond to simple requests. Some add that the breakthrough, still roughly 20 years away, will come when computers learn to fully recognize human facial expressions and read human emotions. What is certain is that major advances in the field can be seen even today, some of them right under our noses, and we are often challenged to assess their scale and significance for the future. The IBM computer Watson (2,880 processor cores, 15 TB of RAM) was built for the express purpose of answering questions asked in natural language, and relies on natural language processing for its standard operations. To make the answers possible, the computer has access to a database of millions of pages of varied content, including dictionaries and encyclopedias, and is programmed to use hundreds of parallel algorithms to find the right answer. With this mechanism, it can analyze huge data sets from areas such as business, economics and medicine. By communicating with humans through voice, Watson “understands” the questions asked and problems presented, gathers successive data and “learns” from them, in keeping with the idea of machine learning.
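
Watson’s actual pipeline is of course far more elaborate, with hundreds of parallel scoring algorithms running over a massive corpus, but the core retrieval step – matching a natural-language question against stored text – can be caricatured in a few lines. The three “documents” below are placeholders invented for the sake of the example.

```python
# A toy retrieval step: score a question against a small document collection
# and return the best-matching passage. The documents are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Aspirin is commonly used to reduce fever and relieve mild pain.",
    "The Dow Jones index tracks thirty large US companies.",
    "Watson was built by IBM to answer questions posed in natural language.",
]

vectorizer = TfidfVectorizer().fit(documents)
doc_vectors = vectorizer.transform(documents)

question = "Who built Watson and what does it do?"
scores = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
print(documents[scores.argmax()])   # prints the sentence about Watson
```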


I am your assistant

As computers “acquire” intelligence, they learn to communicate with people in an increasingly “human-like” manner, and their responses will be a function of their ability to read and process diverse data. The pioneering research that led to the development of IBM Watson will be disseminated and commercialized, and it is nearly certain that autonomous assistants will become very popular in the near future. These applications will help us acquire knowledge and make decisions in our day-to-day personal lives. Even today, it takes little imagination to picture that happening. For a number of years now, iPhone users have enjoyed the company of Siri, which responds by voice to various simple questions about the time of day, the weather and the day’s date, as well as finance, music, e-mail content and smartphone contacts. Similar projects that are already available to individual users and continually improved include Amazon Echo, which also relies on natural language processing.
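
Siri’s speech recognition and language understanding go far beyond this, but the “simple questions, canned answers” behaviour of such assistants can be sketched with a few hand-written rules. The patterns and phrasings handled below are arbitrary examples, not how any shipping assistant actually works.

```python
# A caricature of a rule-based assistant: match a question against a few
# hand-written patterns and answer from local data. The patterns are arbitrary.
import re
from datetime import datetime

def assistant(question: str) -> str:
    q = question.lower()
    if re.search(r"\btime\b", q):
        return datetime.now().strftime("It is %H:%M.")
    if re.search(r"\b(date|day)\b", q):
        return datetime.now().strftime("Today is %A, %d %B %Y.")
    return "Sorry, I did not understand that."

print(assistant("What time is it?"))
print(assistant("What's the date today?"))
```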


The optimists, realists and pessimists

AI continues to evoke mixed feelings. Enthusiasm meets fear, fueled by filmmakers, writers and wary futurologists. Facebook founder Mark Zuckerberg ranks among the technological optimists and business pragmatists. He has said that “AI will reach a point where it will benefit companies large and small. We are working on AI because we believe that more intelligent services will be more useful”. His sober approach is not shared by all technology market players. According to a report prepared for Baker & McKenzie, which surveyed 424 financial specialists, 76 percent believe that financial oversight authorities are ill-prepared to work with new AI software, while 47 percent doubt their own organizations’ ability to understand the risks inherent in using AI. The respondents were also found to believe that dependence on artificial intelligence will bring about cuts in employment. Behind these statements lies a whole spectrum of emotions and views on AI. Sixteen years ago, Bill Joy shared his thoughts on AI in the legendary and much-quoted article “Why the Future Doesn't Need Us”, published in Wired magazine. Many of its reflections can now be read as both extremely pessimistic and incisive. Alongside all the benefits Joy saw in the development of AI, he also shared some fears: “We have yet to come to terms with the fact that the most compelling 21st century technologies – robotics, genetic engineering and nanotechnology – pose a different threat than the technologies that have come before. Specifically, robots, engineered organisms and nanobots share a dangerous amplifying factor: they can self-replicate.”


What does the future hold?

Well, the issue of AI appears to be so complex that both the extreme optimists and the pessimists still stand a good chance of winning the battle of opinions about its role in our lives, and perhaps in the life of our entire species. One thing is certain: we are living in a time in which the notions of progress and of benefit to the human race need to be redefined. The categories developed ages ago may no longer suffice to grasp reality and understand our place in the age of personal assistants, computers that make decisions autonomously, and nanobots that roam within our bodies.


Related articles:

- Artificial Intelligence as a foundation for key technologies

- The lasting marriage of technology and human nature

- Technology putting pressure on business

- The brain – the device that becomes obsolete

- How machines think

- Artificial Intelligence – real threats or groundless fears?

- On TESLA and the human right to make mistakes

- Modern technologies, old fears: will robots take our jobs?

 

Brief history of IBM Watson (source: IBM)

  

  

  

