Can digital technologies be trusted?

With the growing number of cyberattacks, personal data breaches, cases of identity theft, and mounting concern that artificial intelligence might one day take control of our lives, confidence in new technologies is beginning to crack. What can be done to remedy the situation?

A general state of mistrust

In our ultra-connected societies, individuals, businesses and nations are more exposed to digital risks than ever before.

Billions of digital data records are exchanged every day, with data traffic expected to grow by a factor of 50 between 2010 and 2025. And when these exchanges are poorly protected or not encrypted, more data means more vulnerabilities. The number of attacks has also continued to grow, with an estimated 14 billion data records* lost or stolen since 2013. The many data breaches exposed in recent years have also played a role in eroding long-term user confidence.

Added to this fear of hacking or digital identity theft, there is mounting concern over artificial intelligence, whose computing power clearly cannot be matched by the human brain. For example, when an aircraft flies a one-hour reconnaissance mission covering an area of 3,000 km², it takes experienced military personnel an average of 300 hours to analyse the images. With the AI-assisted image recognition systems being tested today, that volume of data can be analysed in real time! Broadly speaking, however, even though artificial intelligence can process enormous volumes of data more efficiently than the human brain, it is still very hard to provide a mathematical explanation of how the results were achieved. This "black box effect" can present real problems if these results are going to influence the human decision-making process.

Can digital technologies be trusted in these conditions?

Restoring confidence

The simple answer is that confidence does not depend on the tools themselves but on how and where we use them and the limits we impose.

"Combining the power of data-driven artificial intelligence with the reliability of model-based artificial intelligence offers the best of both worlds."

The first step that needs to be taken to restore user confidence is to design AI tools that are explainable — in other words tools that not only produce results but can show how they produced them.

As well as data-driven AI based on deep learning, there is a need for more model-based AI. This kind of AI also relies on algorithms, but the underlying models include legal, professional or ethical rules and principles that are established beforehand. That makes the results much easier to explain. Combining the power of data-driven artificial intelligence with the reliability of model-based artificial intelligence offers the best of both worlds!
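The combination described above can be pictured as a learned risk score wrapped in explicit, pre-established rules. The sketch below is purely illustrative: the function names, the fraud-detection scenario, the rules and the thresholds are all invented for this example, not taken from any real system.

```python
# Hypothetical sketch: a data-driven score combined with a model-based
# (rule-based) layer whose decisions are explainable by construction.

def data_driven_score(transaction: dict) -> float:
    """Stand-in for a learned model: returns a risk score in [0, 1]."""
    # A real system would call a trained classifier here.
    return min(1.0, transaction["amount"] / 10_000)

# Model-based layer: explicit legal/business rules established beforehand.
RULES = [
    ("amount exceeds legal reporting threshold",
     lambda t: t["amount"] > 10_000),
    ("destination country is embargoed",
     lambda t: t["country"] in {"XX", "YY"}),
]

def decide(transaction: dict) -> tuple[str, list[str]]:
    """Combine both layers and return a decision plus its explanation."""
    reasons = [name for name, rule in RULES if rule(transaction)]
    score = data_driven_score(transaction)
    if reasons:                      # rules give an explainable hard veto
        return "blocked", reasons
    if score > 0.8:                  # learned score flags for human review
        return "review", [f"risk score {score:.2f} above 0.8"]
    return "approved", [f"risk score {score:.2f} within limits"]

decision, why = decide({"amount": 12_500, "country": "FR"})
print(decision, why)  # → blocked ['amount exceeds legal reporting threshold']
```

The design point is that every decision carries its reasons: the rule layer yields human-readable explanations directly, while the learned score is reported alongside the threshold that triggered it.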

Also for the sake of transparency and explainability, another promising area of exploration is AI's capacity to interact with humans to explain its decisions in real time and in natural language.

So making AI explainable is one important way to restore confidence. The other crucial step is to provide security technologies that protect people's digital identities effectively.

Our take on this is that protection will never be effective if it isn't easy to use. Improving digital security by piling on additional layers of protection is likely to be counterproductive, because users have a tendency to sidestep measures they find too restrictive. We know they often use the same password everywhere, for example, even though it exposes them to a greater risk of being hacked. We need to offer new ways of improving user security without degrading the user experience.

"Protection will never be effective if it isn't easy to use."

These new solutions exist. Biometric technologies like facial recognition and fingerprint authentication are already used to secure ID cards, passports and driving licences, and they have huge potential in consumer applications too. Some mobile banking applications already use fingerprint or facial recognition to authenticate transactions. These biometric technologies have two key advantages in that they provide extremely effective security and are also very easy to use — in fact users become their own passwords. The objective is to secure the entire digital experience, from end to end: fingerprint ID enables access to the service, encryption guarantees data integrity, and accounts can be deleted at the end of the transaction if users don't want their data to be utilised for other purposes.
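The end-to-end flow just described can be sketched as a toy session object. Everything here is hypothetical: the class and method names are invented, a salted hash stands in for real biometric template matching, and an HMAC integrity tag stands in for the authenticated encryption a real deployment would use via a vetted SDK.

```python
import hashlib
import hmac
import secrets

class SecureSession:
    """Toy model of the flow: biometric access, data integrity, deletion."""

    def __init__(self, enrolled_template: bytes):
        # Store only a salted hash of the biometric template, never the raw scan.
        self._salt = secrets.token_bytes(16)
        self._template_hash = hashlib.sha256(self._salt + enrolled_template).digest()
        self._key = secrets.token_bytes(32)   # per-session integrity key
        self._records: dict[str, tuple[bytes, bytes]] = {}

    def authenticate(self, presented_template: bytes) -> bool:
        """Fingerprint/face template grants access: the user is the password."""
        candidate = hashlib.sha256(self._salt + presented_template).digest()
        return hmac.compare_digest(candidate, self._template_hash)

    def store(self, name: str, data: bytes) -> None:
        """Attach an HMAC tag so any tampering with stored data is detectable."""
        tag = hmac.new(self._key, data, hashlib.sha256).digest()
        self._records[name] = (data, tag)

    def verify(self, name: str) -> bool:
        data, tag = self._records[name]
        expected = hmac.new(self._key, data, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

    def delete_account(self) -> None:
        """End of transaction: erase data so it cannot be reused elsewhere."""
        self._records.clear()
        self._key = b""
```

Using constant-time comparison (`hmac.compare_digest`) for both the template check and the integrity check is the one genuinely load-bearing detail in this sketch; everything else is deliberately simplified.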

The third way to restore confidence is to establish ethical uses of technology so that humans always stay in control of the final decisions.

The purpose of AI is to "augment" humans by helping them make the best decisions.

This brings us back to artificial intelligence. The goal isn't to replace humans with AI — which has neither the flexibility to adapt to the unexpected nor the innate ability to multi-task — but rather to get humans and machines to cooperate. The ultimate aim is to harness the tremendous computing power of AI to guide choices that can only be made by humans. The purpose of AI is to "augment" humans by helping them make the best decisions.

Video surveillance systems at an airport are a good example. These systems produce so much video footage that security personnel simply cannot analyse it all in real time. But AI can play that role, processing the information and alerting security staff about abnormal events like an abandoned bag, an unexpected crowd movement or a gunshot. The role of human operators is to step in at that point and set in motion the appropriate response.
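The human-in-the-loop pattern above can be sketched in a few lines: an automated detector triages every frame, but only the human layer sets a response in motion. The event names, confidence threshold and response mappings are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    frame_id: int
    event: str
    confidence: float

def detect(frame_id: int, events: dict[str, float],
           threshold: float = 0.7) -> list[Alert]:
    """Stand-in for the AI layer: flags abnormal events above a confidence bar."""
    return [Alert(frame_id, e, c) for e, c in events.items() if c >= threshold]

def operator_review(alerts: list[Alert]) -> list[str]:
    """The human layer: operators choose the response, not the machine."""
    playbook = {"abandoned_bag": "dispatch security team",
                "crowd_surge": "open extra exits",
                "gunshot": "lockdown and call police"}
    return [f"frame {a.frame_id}: {a.event} -> {playbook.get(a.event, 'observe')}"
            for a in alerts]

alerts = detect(42, {"abandoned_bag": 0.91, "loitering": 0.35})
print(operator_review(alerts))  # → ['frame 42: abandoned_bag -> dispatch security team']
```

The split mirrors the article's point: `detect` filters the unmanageable volume in real time, while `operator_review` keeps the final decision with a person.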

"In artificial intelligence and digital security, confidence is the key to user acceptance."

In artificial intelligence and digital security, confidence is the key to user acceptance. Our job is to establish a framework within which user confidence will flourish.

* The Thales Breach Level Index reflects data records that have been lost or stolen since 2013


Sahal Backer

Master Student in Tourism, Hospitality and Event Management

5y

We can't trust artificial intelligence completely. AI-based machines always work on the basis of preprogramming; when one receives a command other than those it was programmed for, its reliability fails. No data is safe in AI systems: it can easily be manipulated or transferred through chip or network manipulation. Conclusion: human supervision over artificial intelligence is essential.

Didier G.

Glochebulle ZX28X

5y

I would answer: why should we? And another question: under what conditions could we trust them? This implicitly raises a further question: can we, and why should we, trust the companies and manufacturers that use and/or develop these technologies, and under what conditions? These issues arise both externally to the producing or operating company — for users and customers — and internally, for employees, who are themselves affected by these technologies. And, last but not least, for citizens. These technologies upset every compartment of human life, for better or for worse: production, consumption, privacy, with ontological and civilizational implications. Are these questions being addressed, and by whom? The citizen is excluded from any such reflection. The consumer consumes and serves as both laboratory rat and cash cow, exploited and supervised at once. For workers, the implementation of these technologies affects work processes, organization, culture and the structure of power. It should be the object of co-reflection, co-construction and cooperation between the social partners (employers and trade unions) upstream, in association with ergonomists and occupational psychologists. That would not be enough to answer the initial question fully, but it would help.

Jean-Jacques Lagref

Head of Chemical Operations - Brand ROPUR -- Fellow of Society of the Advancement of Science (FR). Associate at Team for the Planet

5y

It depends on whether trust, which is a human concept, means safety plus reliability. The last five years have shown us that a 100% safe digital system does not exist, nor does 100% reliability. So I am skeptical, but will leave the question open for more opinions.
