Ten Things You May Be Overlooking About AI

To say that there is hype over AI would be the understatement of the century. I’ve been through some significant hype cycles over the years, and this one, in my mind, is closest to what unfolded when web browsers were launched and websites became a thing. I was personally involved in those early days and, although people forget, they were full of massive failures punctuated by the dot-com bust. AI is not being approached from a holistic standpoint and, because of that, I believe AI is heading into the same type of failure cycle and that true adoption is further away than most think. Here’s why, as described by a human and not generated by AI:

1. AI is Going to be Expensive

Everyone has heard of the NVIDIA stock rocket. Why has that happened? Because AI needs massive computing power. OK, so new hardware is needed: ka-ching! Powerful hardware uses more power: ka-ching! More power means more heat and the need for additional heat dissipation: ka-ching! Additional heat dissipation means additional cooling and possibly upgrading data centers: ka-ching! Who is paying for all this? Does anyone believe that big tech is just going to eat this expense? Nope. Services running on all this upgraded hardware are going to be more expensive than previous generations, yet most businesses are not taking that reality into account.

2. AI is Software

Most people don’t seem to look at it that way. Everyone talks about AI as if it were something new, when in fact it has been around for decades. If you listen to the hype, people are attributing almost supernatural powers to what is a series of algorithms powered by data sets and built on the same digital technology that has been around since the 1960s. Software is buggy, it crashes, it can be hard to maintain, and it is driven and created by humans, not a higher power. Most people would not trust a “program” to deal with their money automatically. Most people don’t trust their self-driving car to get them from Point A to Point B without being actively involved. But somehow, AI is going to be perfect and do all these amazing things with no errors? It defies logic, and there will be many disappointments to come as people try to use this technology in their everyday lives. It is still very raw, and it will take years for some of the major engines to mature.

3. Humans Need to Remain in the Process: AI Not Being Approached Holistically

As software, AI needs human intervention. In my experience designing AI-based systems that can be relied on to do something important, they must keep humans in the loop. This is true for a variety of reasons. First is the trust issue raised earlier. When was the last time you completely trusted a piece of software to make a critical decision that puts your finances or career on the line without checking what it was about to do? AI is often fed data without context. This is why there are such things as AI hallucinations, where an AI process finds patterns that are not there in truly random data. If a person looks up at the clouds and sees something that looks like a frog, they think, “That cloud looks like a frog.” An AI process may look at the same information and conclude, “There is a frog in the sky,” and feed that conclusion to whatever process it is running. These issues can be thought of as false positives and false negatives, but they can have deadly consequences in medical and other mission-critical scenarios. This is why humans will always need to arbitrate and monitor what AI processes are doing. The idea that AI can get rid of people is the same logic that promised the early computer revolution would deliver the “paperless office.” Some 50 years later, I still see paper in every office I go to. Human intervention will remain; what that suggests is that new types of digital user experiences will be needed to make people effective players in the process.
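The “frog in the sky” problem can be demonstrated without any AI library at all. The minimal Python sketch below (all numbers and names are illustrative, not from any real system) scans purely random, patternless data for correlations against a random target; scan enough candidates and some spurious “pattern” will clear a naive confidence threshold, which is exactly the false-positive failure a human reviewer exists to catch.

```python
import random

# Toy illustration of finding "patterns" in pure noise: with enough
# candidate signals, a naive threshold test will fire on some of them.
# All parameters here are illustrative, not from any real AI system.
random.seed(42)

n_samples = 30        # observations per candidate "signal"
n_candidates = 1000   # how many random features get examined
threshold = 0.4       # naive correlation cutoff for "pattern found"

def correlation(xs, ys):
    """Plain Pearson correlation, with no correction for how many
    candidates we test (the classic multiple-comparisons mistake)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

# The "target" and every candidate feature are independent random noise,
# so every pattern the loop reports is, by construction, not real.
target = [random.random() for _ in range(n_samples)]
false_positives = sum(
    abs(correlation([random.random() for _ in range(n_samples)], target)) > threshold
    for _ in range(n_candidates)
)

print(f"spurious 'patterns' found in pure noise: {false_positives}")
```

The count printed is never zero in practice: randomness alone guarantees hits once enough candidates are scanned, which is why an unsupervised process will confidently report frogs in the sky.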

4. Hyper-Personalization Is Dangerous

One of the touted powers of AI is its ability to produce hyper-personalized consumer and enterprise solutions based on data known about an individual. It sounds great in theory: all the stuff you care about and nothing else. But let’s step back a moment. In other circles, this might be called “living in your own echo chamber.” From a social perspective, it is dangerous to effectively isolate people by only giving them what they already know. From a commercial perspective, you remove the opportunity for insight, innovation, and the very breakthroughs in productivity being sought, because people will be fed only the stuff their behavior dictates. How will they be exposed to new ideas and possibilities that are not part of their current or past experiences? Some might say that AI processes will inject new information and possibilities in a curated fashion. If that’s the case, then it isn’t hyper-personalized anymore, and who are the gatekeepers of that exposure? Will they have the person’s best interests in mind, or will it be used as an opportunity to drive behavior in the form of a new type of intrusive advertising?

5. AI Is Loaded with Emotion

In all the tech revolutions I have witnessed or been part of, I have never seen a technology that generates so much emotion. Mention AI and it immediately creates some level of angst in the conversation. People worry about what AI technology will mean for their jobs, how it will impact their future, how it might impact their children, and even what it means for the future of society. On a more personal level, there are worries about whether the information being seen is real or AI-generated, and whether something provided by AI is complete and accurate. Many people still count their change, many people look over their credit card bills; most fall into the category of “mostly trust but verify.” Trust is an emotion. The spectrum of human emotion that revolves around AI is real and cannot be ignored. If AI can deliver heavily automated processes, how will exceptions be dealt with? What is the nature of manual control over an AI-driven solution down at the user level? The first generation of AI solutions has not dealt with this reality, and AI-driven product and service experiences must be designed with it at their core.

6. ChatGPT is a Usability Win, Not Innovation

AI has been around for decades, and it has had many names and flavors over the years: expert systems, deep learning, machine learning, generative AI, causal AI, and so on. You can go back to the 1950s and Alan Turing and find the same types of thoughts and issues that are still around today. AI has been in use in a variety of industries for a long time. The reality is that what is being done now is not new in a technological sense. It isn’t “innovation” in its truest sense. People seem to be missing the fact that what is different about OpenAI is that it created a huge leap in accessibility to this technology, and it did so by creating simplified user experiences that open up power previously accessible only in university labs and large organizations with deep, specialized technical knowledge. This shows that the market “wins” to come will be based on the quality of the experience, not on nuanced information about the technology behind it. If it were about technology, what we see in the marketplace now would have happened a long time ago. “The proof is in the pudding,” as they say. The future of AI will be driven by its user experience, not by its algorithms and data, yet most organizations have not woken up to the reality staring them straight in the face.

7. AI-Generated Content Is Not Necessarily Yours

If people take the time to read the terms and conditions related to AI-generated content, they might be surprised by what they find. Take OpenAI, for instance. First, use of the model is subject to the terms and conditions set by OpenAI, LLC. That means they control your ability to use the large language model (LLM), and those terms are subject to change. As for the content you generate, OpenAI assigns all of its rights, title, and interest in the output to the user. However, OpenAI can only assign rights that it owns. It leverages copyrighted content that it does not own and that, therefore, you cannot own either. That leaves you possibly liable for copyright infringement. But let’s take it a level deeper. Suppose two people generate something new with ChatGPT using the same query and both publish the content. Who owns it? Let’s go even deeper. How can you differentiate and provide any value if you are using the same content generator your competition is using? All this content generation is cool, and it can provide some productivity enhancements for analysis and search. But when it comes to putting any of that content out into the world, whether for personal use such as a resume or for commercial use, there are significant pitfalls, and the dangers may only increase as new laws are fashioned to protect copyright owners.

8. Bad Data Equals Bad AI

The current state of AI technology is based on algorithms, networks of probability, driven by large quantities of data. Without the right data, the algorithms are useless. So where is all this data going to come from? OpenAI has, in a sense, cheated by using the Internet as its source. For many organizations, that isn’t very useful. They need insights about their own issues using data that cannot be shared publicly. Therein lies a rat’s nest of problems. First, the data at many organizations is a mess, usually brought on by a history of acquisitions or switches in technology over the years. It is siloed in different systems and likely is not clean. If you give AI incomplete data, or data riddled with duplicates and other problems, you get back the garbage you put in. Past data also embeds past practices that you may not want to replicate, whether because of policy changes or regulatory or ethical issues. For example, if you are a financial institution and practice subtle but real discriminatory behavior, the AI will learn from that and apply the same practices. In other words, your data may not be usable, and you may be at square one without knowing it.
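The financial-institution example can be made concrete with a deliberately tiny sketch. The Python below uses hypothetical, hand-made records and the simplest possible “learner” (a per-group approval-rate lookup) rather than any real model or dataset; the point is only that a system trained on discriminatory history replays the discrimination rather than correcting it.

```python
# Minimal sketch of "bad data equals bad AI": hypothetical loan history
# in which identical financial profiles got different outcomes purely by
# neighborhood. The "model" is just a per-group approval rate -- the
# simplest learner there is -- yet it faithfully reproduces the bias.
from collections import defaultdict

history = [
    {"neighborhood": "A", "income": 60, "approved": True},
    {"neighborhood": "A", "income": 60, "approved": True},
    {"neighborhood": "A", "income": 60, "approved": True},
    {"neighborhood": "B", "income": 60, "approved": False},
    {"neighborhood": "B", "income": 60, "approved": False},
    {"neighborhood": "B", "income": 60, "approved": True},
]

def train(records):
    """Learn per-group approval rates from historical decisions."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
    for r in records:
        counts[r["neighborhood"]][0] += r["approved"]
        counts[r["neighborhood"]][1] += 1
    return {group: a / t for group, (a, t) in counts.items()}

model = train(history)
# Same applicant, different neighborhood: the learned policy simply
# replays the historical disparity.
print(model)  # e.g. {'A': 1.0, 'B': 0.333...}
```

No algorithm, however sophisticated, can distinguish “this is what happened” from “this is what should happen”; that judgment has to come from the humans cleaning and curating the data before training.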

9. AI Is Just Two Letters

AI is becoming a marketing slogan rather than a representation of a technology. Just about every company is adding those two letters to their offering simply to say, “We’re keeping up with the times.” My guess? A full 60% to 80% of everything advertised as using “AI” isn’t based on AI technology at all. This is called “AI washing,” and it falls into the same category as words like “healthy,” “organic,” and “easy to use” that effectively don’t mean anything. To know whether something is really powered by AI, you would need to learn what engine it uses and where its data comes from. When was the last time you asked to see the algorithm powering a web app you were using? My guess is never. This brings up a bigger question: who cares if it is AI? I’ve been around technology users almost all of my life and have been part of extensive research in just about every type of industry. I have yet to hear “I wish this solution was driven by AI.” What I have heard is “I wish I could talk to a person about this” or “I’m lost in this automated solution.” Why have companies jumped to the conclusion that labeling their solution as having AI will be a plus in the marketplace? There may be a massive backlash against AI-driven solutions, and many brands will instead opt to hide the fact that they are using AI in the background. However, there have already been several instances of brands being damaged by failing to disclose that something was generated with AI. Relationships with brands are based on trust; when it feels like a brand is keeping things from you or lying, the harm can be irreparable. As a digital technology, AI has a unique potential to create that kind of damage.

10. Why is Big Tech Behind AI?

Perhaps the most obvious question that few are asking is: why now, and why is big tech so firmly behind it? It can’t be said that there was a groundswell of demand from consumers waiting for AI. Does anyone believe that big tech is pursuing this direction for altruistic reasons? That they are pouring all this investment into it because it will make the world better? It goes back to my first point. AI is going to be expensive, and tech companies see it as a new revenue model and a new way to lock in customers for long-term profit. I have no problem with that. Every company has the right to put solutions out there and let the market decide whether they are right or wrong. But big tech has made a lot of bad bets. The Apple Newton, Microsoft’s Clippy Office Assistant, Google Glass? All were “smart” solutions and major market flops. Wasn’t the Metaverse the future? Where are all those VR goggles and AR glasses that were supposed to be the future of computing? The problem is that big tech has now created such a frenzy that people are not asking fundamental questions about whether this technology is right for them. Fear of missing out and a general herding instinct dominate, and organizations are simply reacting rather than thinking strategically. It could be that their winning strategy is to avoid large investments in AI and rely on more efficient and cost-effective solutions to their business problems.

No matter what technology is used, ultimately people will be involved. Rather than talking about AI, a more relevant conversation is how you deal with people and their use of massive automation. Automation is really all that AI is offering, and it would be healthier for organizations to talk about it in those terms and approach the problem from a holistic standpoint. The experience surrounding that automation will decide the winners and losers of the future, and understanding how your customers and users will react to the technology is where the greatest investment should be made.
