A Glimpse into the Future of AI
**The pursuit of doing more is as old as history can recall: technology has been playing** an important role in it from the very beginning. For example, in the ancient times, when people were just starting to explore the world, they were using their fingers to calculate. It was the first and the simplest way to calculate. However, every time they had to add or subtract, they had to keep track of the numbers. This became an issue when the numbers were getting bigger. So, they used sticks to count. However, this method was not efficient enough. Then, a new method came and it was called the abacus. It is a huge board, used by people to carry out counting operations. In the end, the abacus was replaced with the mechanical calculator. The mechanical calculator was a huge step forward because of the fact that it could calculate more quickly. However, it was not perfect. A new invention came and it was called the computer. It was the best invention ever became because it could do everything automatically. The computer was the best invention ever, but it still had some flaws. The first flaw was that it could not think for itself. The computer could only do what it was programmed to do. This was the end of the line and the beginning of a new one. We have now entered the era of Artificial Intelligence or AI. The use of AI is increasing every day and it has a great impact on the way we live.
You may have noticed a few oddities in the introductory paragraph above. For starters, I intentionally marked the first sixteen words in bold because that is all I had to write as a prompt for my line of thinking; the rest was written by AI. This, in my opinion, epitomizes human-AI collaboration: I had planned to use the evolution of the wheel in my introduction, and AI-assisted writing not only saved me time but also improved on the idea I had originally concocted. A human editorial touch would make the writing more polished; in this instance, however, I wanted to share with you the raw fruit of state-of-the-art AI as is. I think of AI in the enterprise and other substantial applications as Augmented Intelligence: humans are still in the driver's seat with their hands on the wheel, even if they are not always driving.
Looking at what AI has to offer today across a plethora of domains, one can only wonder what the future holds. To make educated predictions about the future of AI, we need to learn about its past, its present, and its rate of acceleration. The past few decades have witnessed astronomical growth and improvement in the three key ingredients that make AI extremely effective: digitization, computational power, and algorithms. This trio has been accelerating faster than ever before thanks to increasingly innovative human capital that produces more efficient, more capable hardware and software by the day. We see this in the ubiquity of internet-connected devices and the IoT; in the abundance of AI accelerators in data centers and personal devices; and in the hyper-growth of per-device and collective computational power. Moreover, we are witnessing a “gold rush” of research, funding, and talent in pursuit of advancing AI, more so than in any other scientific field at present.
Like other sufficiently complex technologies, AI technologies have been going through a hype cycle. The plural in the previous statement is intentional; the many facets and subfields of AI sit at different stages of the cycle. For instance, let's predict how AI will overcome the privacy challenges that stem from data collection and handling. Predominantly, AI scientists teach computers to recognize patterns in mappings (e.g., the mapping of an image to its caption); given tons of examples of such mappings, the machine can then predict the output for an input it has never seen before. This technique is called supervised learning, and it requires human-curated data (the supervision), which entails a myriad of human workers inspecting data collected from various sources and labeling it for the machine. This approach may not raise any eyebrows when the data are images of wild animals, but what about the photos on your phone or your tax return forms? Emergent approaches address such challenges by computing on encrypted data (homomorphic encryption), by learning from noisy, aggregate data (differential privacy), and by requiring far fewer labeled examples (self-supervised learning), teaching machines to solve problems that would typically entail supervised learning with full access to private data.
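To make the differential-privacy idea concrete, here is a minimal sketch in Python. All of the numbers and names (the salary dataset, the sensitivity, the epsilon budget) are hypothetical illustrations, not a prescription: instead of releasing a raw aggregate computed from private records, we add calibrated Laplace noise so that no single record can be inferred from the output.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a noisy aggregate under epsilon-differential privacy.

    sensitivity: the most any single record can change the aggregate.
    epsilon: the privacy budget; smaller means stronger privacy (more noise).
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Hypothetical example: the mean of 1,000 private salaries (values made up).
salaries = np.random.uniform(30_000, 200_000, size=1_000)
true_mean = salaries.mean()

# Each salary is bounded by 200,000, so one record can shift the mean by at
# most 200,000 / 1,000 = 200: the sensitivity of this mean query.
private_mean = laplace_mechanism(true_mean, sensitivity=200.0, epsilon=0.5)

print(f"true mean:    {true_mean:,.0f}")
print(f"private mean: {private_mean:,.0f}")  # close, yet individuals stay hidden
```

The population-level pattern, which is ultimately what a learning system needs, survives the noise, while any individual's contribution is drowned out by it.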
There are many other facets of how AI is developed and applied that will also have to evolve, such as quality management of AI systems (which are typically nondeterministic) and the explainability of AI-made decisions in certain applications. The point is that the more substantial the application, the more maturity is expected of the AI, and the higher the expectations placed on it. That said, businesses cannot afford to wait until AI is fully mature before taking AI solutions beyond a proof of concept, simply because human-AI relationships are cooperative: for them to mature, all involved parties require training. Imagine commanding medieval soldiers onto the battlefield to fight with cannons for the first time, without proper training; like those early cannons, AI technologies need to be battle-tested, in more than just a skirmish, to be honed. Even so, they will help you win against those who aren't equipped for the future of AI.
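As a brief illustration of the quality-management point (a sketch under assumed names, not an established practice; `flaky_model` is a hypothetical stand-in for any nondeterministic model), a test for such a system cannot assert an exact output the way a conventional unit test does. It can only assert a statistical property over many runs, within a tolerance:

```python
import random

def flaky_model(x: float) -> float:
    """Hypothetical stand-in for a nondeterministic model (e.g., sampled inference)."""
    return 2.0 * x + random.gauss(0.0, 0.1)

def test_model_within_tolerance(n_runs: int = 1_000, tolerance: float = 0.02) -> None:
    """Instead of `assert output == expected`, check the mean over many runs."""
    outputs = [flaky_model(1.0) for _ in range(n_runs)]
    mean = sum(outputs) / n_runs
    assert abs(mean - 2.0) < tolerance, f"mean {mean:.4f} drifted beyond tolerance"

test_model_within_tolerance()
print("statistical quality gate passed")
```

The design choice is the point: quality gates for nondeterministic systems trade exact-match assertions for tolerances and run counts, which themselves become parameters the organization must learn to manage.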