AI for the Future: Should We Take the Bad with the Good?
Karyna Naminas
CEO of Label Your Data. Helping AI teams deploy their ML models faster.
While learning about the ever-growing field of artificial intelligence, one simply can’t ignore the fact that it has made our lives better. But at the same time, it’s hard to disregard the many issues we have to work through that stand in the way of the smooth and safe adoption of AI in our lives.
That’s how I came across the TEDxMIT event, where the international AI community discussed the good and the bad sides of this omnipresent advanced technology.
The top tech talents today continue to amaze us with all kinds of new (yet, sometimes bizarre) inventions in the world of AI. However, how might this rapid pace of AI development affect our society? How many important points are we missing in this race for technological breakthroughs? Let’s talk!
AI for the Greater Good (Or Not?)
Artificial intelligence isn’t perfect, because neither are humans. But AI is constantly improving, which cannot fail to delight. We’ve already witnessed how AI can take on the role of an artist or a chef, or even help soldiers on the battlefield! But I’ve never brought up the downside of AI and the challenges it poses to humanity.
Thanks to AI, some things that were formerly thought of as science fiction are now easily accomplished. But even though this field develops fast, is AI fair enough? I think it’s hard to give a definite answer here. On the one hand, AI is already outperforming humans in a variety of tasks, like image recognition and classification or games (e.g., chess, Go, and poker).
Moreover, I can’t help but mention the recent wave of hype around the autoregressive language model GPT-3 (Generative Pre-trained Transformer 3), which has supplied a good set of examples of bizarre inventions in AI. Case in point: GPT-3, created by OpenAI, has been used for code generation and comprehension, content creation (e.g., The Guardian article written by AI), and even for building lifelike chatbots. You’ve probably heard the news about Project December, or how a man created a GPT-3-based chatbot to speak to his dead fiancée!
In this particular case, AI developers raised a number of concerns about the use of this technology, such as algorithmic bias, the risk of encouraging self-harm, and mass misinformation. All this leads to the question: Can everyone be aware of the fact that they are talking to a robot? Or can these conversations go too far?
Nobody’s Perfect, Not Even AI
As we all know, perfect is the enemy of good. So why do we strive toward perfection in artificial intelligence? Because it’s hard for humans to trust inaccurate and questionable AI-based algorithms, some of which directly impact our safety and lives, as in medical settings.
Did you know that the famous physicist, Stephen Hawking, once said that the advent of AI could be “the worst event in the history of our civilization”? When I read this, I immediately thought of Sophia, the AI-powered robot that (or rather who?) was careless enough to say that it (or she?) could “destroy humans.” So really, how far have we come with AI? Let’s look at a few examples!
What Is the Real Nature of AI?
Not so long ago, I was talking about smart HR assistants enabled by AI. There is actually a vast number of intelligent assistants, including personal AI assistants, virtual assistants like Siri or Alexa, and voice-enabled assistants in healthcare, to name a few. Here, and in many other cases with intelligent machines, it’s our prime responsibility to ensure that the technology we create is safe and won’t harm anyone.
Another critical issue is the recent claims that AI is becoming conscious and may even develop feelings. I think many of you have seen the news about “sentient” AI at Google. The controversies and debates around this subject were pretty serious, although AI experts reassure us that these technologies are designed specifically to mimic the way humans interact and communicate with one another. Besides, this type of AI is completely unable to produce its own ideas or opinions, which is a fundamental element of consciousness.
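To see why mimicry is not the same as understanding, consider a deliberately tiny sketch. This is not how Google’s model works internally (it is a toy bigram model, an assumption made purely for illustration), but it shows the underlying principle: the system only recombines statistical patterns from its training text and has no ideas of its own.

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Record, for each word, which words followed it in the training text."""
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=5):
    """Continue from `start` by sampling observed follow-ups; stop at dead ends."""
    out = [start]
    for _ in range(length):
        nexts = model.get(out[-1])
        if not nexts:
            break
        out.append(random.choice(nexts))
    return " ".join(out)

# The "personality" of the output is entirely borrowed from this corpus.
corpus = "i feel happy today and i feel fine today"
model = train_bigram(corpus)
print(generate(model, "i"))
```

Whatever the toy model “says”, every word is a statistical echo of its training data; scale that idea up by many orders of magnitude and you get convincing mimicry, not opinions.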
The next point I’d like to highlight is lazy AI. How is that possible, you ask? Well, AI is not interested in learning anything new unless a human tells it to. This is why there are a lot of errors and bugs before we achieve an accurate and efficient model. For instance, there have been cases when AI critically misidentified human faces, applied the wrong labels, and demonstrated racial bias, as with the PULSE algorithm and Google Vision AI.
While this might seem funny at some point, the lazy nature of AI poses serious concerns in important domains like healthcare. If the task is to identify cancerous tissue in tumors, we want AI to be as accurate as possible and to generate trustworthy results, right? But AI doesn’t try hard; it just guesses, and we rely on its best guess. So it’s not about exceptions or occasional mistakes; it’s the very nature of the AI-powered systems that we plan to entrust ourselves to.
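The point that AI “just guesses” can be made concrete with a minimal sketch. A classifier always returns its highest-probability label, even when that probability is barely better than a coin flip; one common safeguard in high-stakes settings is a confidence threshold that defers to a human instead. The function, labels, and probabilities below are all made up for illustration, not taken from any real medical system.

```python
def classify(probs, threshold=0.9):
    """Return the top label only if the model is confident; else defer.

    `probs` maps labels to predicted probabilities (a hypothetical
    model output); `threshold` is the minimum confidence we accept.
    """
    label = max(probs, key=probs.get)
    if probs[label] < threshold:
        return "defer_to_human"
    return label

# A confident prediction passes the threshold.
confident = {"malignant": 0.97, "benign": 0.03}
# A near-coin-flip "guess" is exactly what we don't want to act on.
uncertain = {"malignant": 0.55, "benign": 0.45}

print(classify(confident))   # acted on automatically
print(classify(uncertain))   # routed to a human reviewer
```

Without the threshold, the second case would still produce a flat answer of “malignant”; the model guesses either way, which is why the decision about when to trust it has to come from us.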
Final Thoughts: So, Should We Be Afraid of AI?
Going back to the questions at the very beginning of this article, my answer is this: the rapid growth and development of AI affect our society in both positive and negative ways. And it’s only up to us to decide which influence is greater, the good or the bad one. Working with ML and data annotation, I’ve encountered only highly useful and exciting applications of AI, but no one is immune to risks.
Plus, there’s a difference between what we think a model does and what it actually does, because, yes, AI can cheat. So I think we miss a lot of crucial points in the pursuit of perfect AI and flashy technological discoveries. Humans must stay mindful when adopting AI and avoid crossing the line. And I’d say yes, we should be afraid of AI, but only if we are too careless with it. In all other cases, we should learn to trust the algorithms we create, or rather focus all our efforts on building algorithms and AI that we can trust.