Artificial Intelligence: Bright Future or Dark Cloud?
The potential of deep learning and AI is almost limitless, certainly well beyond the scope of our current imagination. Complex machines imbued with the characteristics of human intelligence (e.g., the ability to sense the world through sight, sound, and touch; to reason and plan; to communicate in natural language; and to move and manipulate objects) will influence society in untold ways. The discipline, however, polarizes opinion, with evangelists on one side and doomsayers on the other.
This dilemma is not at all new or limited to the field of computing—consider the ethical debates sparked by breakthroughs in gene editing, stem cell research, or genetically modified foods. Like many technologists of my generation, I am a rational optimist by nature. I believe AI can be harnessed in ways that dramatically improve our lives, and that its potential to do good far outweighs its potential to do harm.
However, we can’t presume that progress will automatically translate to benefits for humankind as a whole. We have an obligation as technologists to think through the implications of our design choices before we put software into production.
I last wrote about artificial intelligence three years ago—before Alexa took up residence on our countertops and Google’s AlphaGo beat the world’s best Go player. I wanted to revisit the subject because I believe we are at a critical inflection point in the evolution of AI.
The Convergence at the Heart of Advances in AI
Driving the growth and importance of AI are improvements in computing hardware, access to greater amounts of more valuable data, and breakthroughs in the underlying software, tools, and algorithms that can analyze and make sense of that data. Most of what we do today on connected devices is powered by this intersection. Internet searches and online recommendations, from the movies we want to stream to the gifts we want to buy, are driven by advances in machine learning.
To approach the richest, deepest intelligence we know of, that of a human being, AI needs fast memory and fast data transfers within the underlying hardware. In my days at EA, we were obsessed with making the action in your game look real and authentic on your computer screen. Some of the hardware and software architectures catalyzing AI actually came from advances made in gaming: GPUs, fast memory buses, and high-speed memory management.
Machine Learning in Payments
AI and machine learning bring boundless opportunities to payments and commerce. With behavioral biometrics, authentication will become more seamless and secure; with natural language processing, automated sales associates can make shopping online a richer, more personalized experience; and with computer vision, users will be able to snap pictures to search online, making all visual content instantly shoppable.
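As a rough sketch of the computer-vision piece, a generic image-similarity search using a pretrained model might look like the following. This is not any specific product, and the catalog items and file paths are placeholders invented for illustration.

```python
# Illustrative sketch only: image-similarity search with a pretrained vision model.
# The catalog entries and file paths below are placeholders, not a real system.
import torch
from torchvision import models, transforms
from PIL import Image

# A pretrained ResNet with its classification head removed acts as an image embedder.
weights = models.ResNet18_Weights.DEFAULT
embedder = torch.nn.Sequential(*list(models.resnet18(weights=weights).children())[:-1]).eval()
preprocess = weights.transforms()

def embed(path: str) -> torch.Tensor:
    """Return a unit-length embedding vector for one image file."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        vec = embedder(img).flatten()
    return vec / vec.norm()

# Placeholder catalog: in practice these embeddings would be precomputed and indexed.
catalog = {name: embed(f"catalog/{name}.jpg") for name in ["red_sneaker", "blue_lamp"]}

# Rank catalog items by cosine similarity to the shopper's snapshot.
query = embed("user_photo.jpg")
ranked = sorted(catalog, key=lambda name: float(query @ catalog[name]), reverse=True)
print(ranked)
```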
In the five years I’ve been at Visa, AI and machine learning have become increasingly embedded in our products and infrastructure. We’ve been using machine learning for years to predict and prevent fraud. With neural networks and gradient boosting algorithms, we were able to identify several billion dollars in fraud last year alone.
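To make the idea concrete without describing our production systems, here is a minimal sketch of a gradient-boosting fraud scorer on synthetic transaction data. Every feature name, number, and threshold below is an assumption for demonstration only.

```python
# Illustrative sketch only: a toy gradient-boosting fraud scorer on synthetic data.
# Feature names, labels, and thresholds are invented; they do not reflect any real model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 50_000

# Synthetic transaction features: amount, merchant risk score, distance from
# the cardholder's home, and seconds since the previous transaction.
X = np.column_stack([
    rng.lognormal(3.0, 1.0, n),      # amount
    rng.random(n),                   # merchant_risk
    rng.exponential(10.0, n),        # km_from_home
    rng.exponential(3600.0, n),      # secs_since_last_txn
])

# Synthetic labels: fraud is rare and loosely correlated with the features above.
risk = 0.002 * X[:, 0] + 2.0 * X[:, 1] + 0.05 * X[:, 2] - 0.0002 * X[:, 3]
y = (risk + rng.normal(0, 1.0, n) > 4.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Gradient-boosted decision trees: each tree corrects the errors of the previous ones.
model = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.05)
model.fit(X_train, y_train)

# Score held-out transactions; in production such a score would feed a real-time
# authorization decision rather than an offline metric.
probs = model.predict_proba(X_test)[:, 1]
print("ROC AUC:", round(roc_auc_score(y_test, probs), 3))
```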
AI has also given us formidable new tools for securing and maintaining the Visa network. Our cybersecurity team uses neural networks to categorize and search petabytes of data every day, giving us actionable insights to protect our network from malware, zero-day attacks and insider threats.
Meanwhile, our operations team is using machine learning models to predict disruptions in our hardware and software systems, giving our engineers the insights they need to fix problems in the network before they impact our ability to process payments.
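A minimal sketch of this idea is anomaly detection over host telemetry: learn what healthy systems look like, then flag machines that drift away from that baseline. The metric names and values below are illustrative assumptions, not our actual pipeline.

```python
# Illustrative sketch only: anomaly detection on synthetic system metrics, as one way
# ML can surface a degrading host before it causes an outage.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Historical "healthy" telemetry: CPU utilization, memory utilization, disk latency (ms).
normal = np.column_stack([
    rng.normal(45, 8, 5_000),     # cpu_pct
    rng.normal(60, 5, 5_000),     # mem_pct
    rng.normal(2.0, 0.4, 5_000),  # disk_latency_ms
])

# Fit the detector on the healthy baseline.
detector = IsolationForest(contamination=0.01, random_state=1).fit(normal)

# New observations, including one host whose disk latency is creeping up.
new_points = np.array([
    [47.0, 61.0, 2.1],   # looks healthy
    [50.0, 63.0, 9.5],   # suspicious: latency far outside the learned range
])

# predict() returns +1 for inliers and -1 for anomalies worth an engineer's attention.
print(detector.predict(new_points))
```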
This is just the beginning. We have a team of data scientists in our research group exploring new applications of machine learning for the payment industry and beyond—from recommendation systems to new models for risk and fraud management.
AI: Molding in our (Best) Image?
The breakthroughs in AI that we are harnessing at Visa are manifesting themselves across disparate industries, including energy, consumer electronics, gaming and medicine. So, the questions have evolved from “will AI reach its potential?” or “will it transform our lives?” to “how will we manage that transformation?” and “will AI ultimately help or hurt humankind?”
There is a fierce debate on campuses and in boardrooms about the life-altering effects of AI. Elon Musk has warned of a “fleet of artificial intelligence-enhanced robots capable of destroying mankind”, while Larry Page of Google and Alphabet foresees advancements in human progress.
I believe there is merit in both arguments, and the good news is that we have time to shape AI in a positive direction. In human terms, we are in the toddler stage in the development of AI, a period of rapid neurogenesis. A child’s early years are shaped by external stimuli like pictures, music, language, and of course, human interaction. How that development unfolds helps determine a person’s intelligence, compassion, thoughtfulness and, importantly, capacity for empathy.
Similarly, for AI to evolve in a positive direction, we need to involve the humanities, law, and ethics as well as engineering. We need diversity of thought among the people working on these solutions. I know others share this view. DeepMind’s co-founder Demis Hassabis insisted that Google establish a joint ethics board when it acquired the company in 2014.
As a father of young children, I realize how futile it is to predict what they will be like when they grow up. Similarly, none of us can predict what AI will become 10, 20, 50 years into the future. However, today, we have a responsibility, as parents and technologists, to raise our children to be productive, compassionate and, perhaps most importantly, empathetic members of society.
I am excited to learn your perspective on how we can chart an empathetic course for artificial intelligence in all its manifestations.
LinkedIn "Top Resume Writing Voice" | Expert Resume Writer | Job Search Dream Maker | 1-800-730-3244
4 年i am humbled ;-)
Strategic Account Executive
4 年Insightful article Rajat, nicely done. From Microsoft's perspective, there are some basic principles and goals that should be considered when designing AI to quell concerns in society about the potential for it to harm as follows: 1) AI must assist humanity, 2) AI must be transparent, 3) AI must maximize efficiencies without destroying the dignity of people 4) AI must be designed for intelligent privacy, 5) AI must have algorithmic accountability and lastly, 6) AI must guard against bias. And there are "musts" for us as humans as well when it comes to thinking clearly about the skills future generations must prioritize and cultivate including empathy, education, creativity, judgment, and accountability.
Managing Partner at BigRio, Saviance and Damo
4 年men and machines will come together
Cloud Data Practice Delivery Head
5 年AI brings new tech spin on? abilities to deepening our knowledge on every subject of human life and coexistence with nature. however, at the same time, it does bring lots of uknowns on possible use and abuse to feed? human greed, desire to have control and be at power to running the globe.
Founder & CEO at Proyava Innovations - Cyber Security, 5G Core, Enterprise Solutions, Quantum vulnerability Assessment, IT R&D
5 年Interesting quote "We’ve been using machine learning for years to predict and prevent fraud. With neural networks and gradient boosting algorithms, we were able to identify several billion dollars in fraud last year alone" .. AI/ML limits prediction/detection/prevention within pre-defined process and parameters.? When process itself having flaws AI/ML would not help in detection or prevention. To prevent frauds, re-look at the process itself, would not need AI/ML.