My Article About Artificial Intelligence – D M Goldstein, September 2024


Artificial Intelligence. AI. Everybody is talking about it, writing about it, and boasting about it. Automation that we have had for years and years seems to be rebranding itself as AI. Visionaries promise it will change the world for the better, while pessimists and Luddites worry it will change the world for the worse. Here are my thoughts.


The best of times, the worst of times

In earlier articles (ref 1, ref 2) I discussed some of the risks of AI, with examples of mistakes that support that pessimism. I see it used on social media to spew information that is just plain incorrect, on topics as simple as (mis)quoted song lyrics and (mis)attributed performance credits. Come on, that is simple stuff, easily verified online, and I see no benefit in publishing this misinformation. And there are numerous “deep fakes” being used to sway voters, and fictional “evidence” of things that never happened. That is the Dark Side.


But, when AI is properly used, there are also benefits to be had. An associate of mine (ref 3) wrote about ways to add AI to a customer service tech stack, to which I have added some thoughts and caveats here. With a safe and vetted knowledge base, AI can help with customer self-service through a Customer Portal, including executing workflows for repetitive tasks like order status, renewals, refunds, how-to questions, and basic troubleshooting of common issues. But make sure that the content it draws on is correct and was written and reviewed by a knowledgeable person; my earlier referenced articles show what happens when that is not the case. A sketch of what that gating can look like follows.
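
To make that concrete, here is a minimal sketch of “drawing only on vetted content.” Everything in it (the Article and VettedKB names, the keyword search) is a hypothetical illustration I made up, not any vendor’s API; the point is that unreviewed content never enters the store, and the bot escalates to a human rather than guessing.

```python
# Minimal sketch of a gated self-service flow. Article, VettedKB, and
# answer() are invented names for illustration, not a vendor API.
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    body: str
    reviewed_by: str | None   # human reviewer; None means unvetted

class VettedKB:
    def __init__(self, articles: list[Article]):
        # Only admit content a knowledgeable person has reviewed.
        self.articles = [a for a in articles if a.reviewed_by]

    def search(self, query: str) -> list[Article]:
        # Naive keyword match; a real portal would use full-text or
        # embedding search, but the gating logic is the point here.
        terms = query.lower().split()
        return [a for a in self.articles
                if any(t in a.body.lower() for t in terms)]

def answer(kb: VettedKB, question: str) -> str:
    hits = kb.search(question)
    if not hits:
        # Fail closed: escalate to a human instead of guessing.
        return "No vetted answer found; routing to a human agent."
    return f"Based on '{hits[0].title}': {hits[0].body}"

kb = VettedKB([
    Article("Order status", "Check order status under Account > Orders.", "J. Smith"),
    Article("Draft notes", "Unreviewed speculation about refunds.", None),
])
print(answer(kb, "how do I check my order status?"))
```

The “fail closed” branch is the design choice that matters: when the vetted store has no answer, the system hands off to a person rather than improvising one.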


Similarly, that knowledge base can help agents quickly find related articles and previous issues, which may lead to a more rapid resolution of a customer’s problem. It can even help summarize information from multiple sources into a short and cohesive response. AI can allow work teams to spend less time on repetitive tasks and more time working with customers to make them successful. My Luddite instincts say that any such response should still be reviewed by a knowledgeable person before being presented as fact.
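
Here is a sketch of that agent-assist step under the same assumptions: summarize() is a stand-in for whatever model call you use (not a real API), and the draft is held in a pending state so a person signs off before anything reaches the customer.

```python
# Sketch of an agent-assist step: draft a summary from several vetted
# articles, then hold it for human review. summarize() is a placeholder
# for a model call, not a real API.
def summarize(snippets: list[str]) -> str:
    # Placeholder: a real system would call an LLM here.
    return " ".join(s.strip() for s in snippets)

def draft_response(issue: str, sources: dict[str, str]) -> dict:
    """Build a draft from {article_title: text}; nothing is sent until
    a knowledgeable person flips the status."""
    return {
        "issue": issue,
        "draft": summarize(list(sources.values())),
        "sources": sorted(sources),           # cite where the draft came from
        "status": "pending_human_review",     # the gate this article argues for
    }

print(draft_response("login loop", {
    "Clearing cookies": "Clear site cookies and retry.",
    "Known issue 123": "Fixed in version 2.4; ask the customer to update.",
}))
```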


And AI can be used to monitor events, such as a customer’s system usage and performance, or internal statistics like case flows, and then present an analysis. Are there warning signs of an operational problem? Are there red flags that warrant human intervention? Similar monitoring and “call home” features can be built into a plethora of products. These are ways that AI can help improve the customer-vendor relationship and experience.
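
As a toy illustration of that kind of monitoring, the sketch below flags a metric (say, daily case volume) when it drifts sharply from its recent history. The window and threshold are invented values, and a real system would be far more sophisticated; note that the output is a flag for human attention, not an automatic action.

```python
# Toy monitor: flag a metric when it deviates sharply from its trailing
# window. Window size and z-threshold are made-up illustration values.
from statistics import mean, stdev

def red_flags(series: list[float], window: int = 7, z: float = 3.0) -> list[int]:
    """Return indexes where the value deviates sharply from recent history."""
    flags = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma and abs(series[i] - mu) > z * sigma:
            flags.append(i)   # warrants human attention, not automatic action
    return flags

# Example: a sudden spike in open support cases
cases = [20, 22, 19, 21, 20, 23, 21, 20, 58]
print(red_flags(cases))  # -> [8]
```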


All of the above share an initial requirement: a controlled and vetted data source. My examples of “bad behavior” mostly come from AI searching the internet at large, where there is no control over the validity of the information found. And AI currently lacks the ability to understand nuance or context, so humor, parody, and outright lies may be treated the same as investigative facts.


Long-term cost and implications

My concern is that the world is rushing into AI without considering the ecosystem. One risk that is near and dear to me is that AI tools can take the place of knowledgeable staff. There are AI tools for Support right now explicitly advertising themselves as “virtual agents”. It has already been shown how dangerously inaccurate AI-generated results can be when no humans write the knowledge content and vet the output. And AI can be used maliciously to create false content, relying on the fact that many people will not question or challenge it because it came from an “expert system”. Yes, I know that many new technologies over the years have been accompanied by worries about killing jobs; think of coal miners, switchboard operators, buggy-whip manufacturers, and steno pools. Somehow, to me at least, this feels different. The breadth of scope and the potential for harm extend far beyond any single, finite population.


To be fair, however, some current AI uses are exactly what it is best at. Voice-assisted question answering is one such example: Amazon’s Alexa, Apple’s Siri, and Google Assistant all let a person ask a “natural language” question and get results, which is far easier than earlier online search engines. My caveat, as throughout this article, is that the source data must be accurate. Users should apply critical thinking to the results, look for corroboration where possible, and never take them as gospel. One of my favorite slogans from the 1970s was, “Question authority.” Just because you got an official-sounding answer from your electronic device does not mean it is complete or correct.


But even “good uses” can have negative possibilities. Companies that want to reduce time-to-market and costs are now having AI tools write their programs and applications, and that carries extreme risk to quality control. “As generative AI tools have lowered the barrier to entry for code creation and democratized software development, the foundation of our software-dependent world has come under threat. Limited oversight has led to an influx of subpar code, often riddled with bugs and vulnerabilities that enter the system. The increasingly common practice of having non-technical individuals create code exacerbates the issue because they may not understand the intricate nuances and potential downstream consequences of the code they’re creating. The lack of understanding about coding complexities and the necessity of rigorous testing is leading to a degeneration in code quality.” (ref 4) So, as with everything else in this article, you still need experienced software developers to oversee and review these apps, and you still need thorough quality assurance methods to shake them out.
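
A small illustration of why that oversight matters: below is a plausible-looking function of the kind a generator might produce, plus the check a reviewer should insist on. The function, its bug, and the test are all invented for the example.

```python
# A plausible-looking generated function with an unhandled edge case,
# and the check a reviewer should insist on. The bug is invented.
def average_rating(ratings: list[float]) -> float:
    # Generated code can look correct and still miss the empty case.
    return sum(ratings) / len(ratings)    # ZeroDivisionError on []

def review_check() -> str:
    assert average_rating([4, 5]) == 4.5  # happy path passes
    try:
        average_rating([])                # the edge case QA should probe
    except ZeroDivisionError:
        return "bug found: empty input not handled"
    return "ok"

print(review_check())   # -> bug found: empty input not handled
```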


My AI crystal ball

A big question is, “Where is this going over the next five years?” As with any disruptive technology, it is hard to say with certainty. My prediction, based on observing human and corporate behavior, is that companies will over-invest in AI because it is “the next big thing” and they are enamored with the buzz. They will try to get something AI-like into many aspects of the company without fully understanding why, or how to make it successful. You cannot just say, “Let’s use AI for this,” without an understanding of your desired outcome and a knowledge source with which to train it.


Online retailers like Amazon already do things like stating, “You bought this; other people who bought that also bought these…” It seems natural to use newer AI to enhance that experience, for example with an AI-generated paragraph summarizing what others have said in their feedback on the item you purchased. If limited to Amazon’s own data, that may be a good application of AI. But would the same thing work at your grocery store? “You bought broccoli. Others who bought broccoli also bought asparagus.” Personally, I do not see much value in that scenario. These are passive scenarios, though. What about more intrusive ones?
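
For the curious, the “also bought” mechanic needs nothing more exotic than co-occurrence counting over the retailer’s own order history, which is exactly the kind of closed, first-party data I am arguing is the safer ground. A toy sketch, with invented data:

```python
# Minimal "customers who bought X also bought Y" counter, built only
# from a retailer's own order history. The orders are invented.
from collections import Counter
from itertools import permutations

def co_purchases(orders: list[set[str]]) -> dict[str, Counter]:
    also_bought: dict[str, Counter] = {}
    for order in orders:
        for a, b in permutations(order, 2):
            also_bought.setdefault(a, Counter())[b] += 1
    return also_bought

orders = [{"broccoli", "asparagus"}, {"broccoli", "rice"},
          {"broccoli", "asparagus", "lemon"}]
print(co_purchases(orders)["broccoli"].most_common(1))
# -> [('asparagus', 2)]
```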


Newer cars have all sorts of “nanny devices” in them: keeping you in your lane, maintaining a safe following distance, monitoring blind spots, and many other features. Emergency braking is a good thing, but do you want your car to limit your speed or prevent swerving? Those may seem like good ideas, but they could be dangerous. Suppose you are swerving to avoid an accident? The “nanny devices” should be able to see and react to those same potential hazards, but I would not want them preventing me from reacting myself.


The long arm of the Law

As I write this, the California State Senate is considering bills to regulate Artificial Intelligence companies. The concern is that, once Pandora’s Box is opened, bad actors will use AI for terrorist attacks: shutting down power grids, building chemical weapons, and other dangerous acts. AI companies could be required to test their models and disclose their safety protocols (ref 5). A key current proposal “targets systems that require more than $100 million in data to train,” which makes it mostly hypothetical right now. Is this “well-intentioned but ill informed,” as Nancy Pelosi is quoted in the article? Or is it “crucial to prevent catastrophic misuse of powerful AI systems”?


My question is whether such regulations will have any real teeth. It is illegal to shoot another person, yet there are gun murders all the time. While it is nice that people in government recognize the risks of this new technology, is it even possible to stop the storm? Returning to my recurring theme: there needs to be human oversight and intervention with such powerful tools. Other proposed legislation would make AI companies liable for misuse; that may encourage a sense of responsibility and accountability in their senior leadership to identify and stop bad actors. Another bill under consideration would require identifying AI-generated content and labeling it as such; should it pass, readers would know how the content was generated and, I hope, be inspired to verify it.


In that same newspaper (ref 6) is an article about Tesla’s ongoing problems with its “Full Self-Driving” systems. I mention it only because it again emphasizes the need for a human at the controls. Self-driving robotaxis sound very cool, futuristic, and sci-fi, but they are only as good as their inputs, training, and programming make them. For now, they still require an alert driver at the wheel who can react to situations in real time. If self-driving tragedies continue to make the news regularly, I will stick with human drivers (provided they are paying attention to the road and not their phones).


Human Touch

In case my bias was not clear by now, for me it comes down to the “Human Touch.” AI is, and can be, a powerful tool for getting better information more quickly. But it needs real people on both the input and output sides. People need to vet the source data to ensure accuracy. People need to review the answers to ensure appropriateness. AI can do some of the heavy lifting between those two points to generate integrated content. I intentionally avoided the phrase “create content,” because creation belongs at the input point, where human controls are required. And, as always, critical thinking and analysis are required when consuming AI-generated output or actions. Be vigilant, be smart, and be safe.


Where I stand

Heroes, such as firefighters, police, and military personnel, rush toward danger. But so do fools, idiots, and crazy people. The initial release of a software product is often “Version 1.0.0”, but some of us call that “version one dot uh-oh”: brand-new, leading-edge stuff is often plagued with problems while the kinks are worked out. Some people on the very early edge of the adopter scale will take that challenge. When it is that early, I tend to sit on the fence and observe. But I am not a laggard, either: when the building is safe to enter, I will go in and look around. Some current uses of AI appear safe to me; as discussed above, those are typically tools that aid efficiency, are based on well-vetted information, and have controls to prevent bad outcomes. But others, such as self-driving cars and the ability to create malicious content, still keep me from embracing AI without question. Buckle your seatbelt; we may be in for a bumpy ride.


References

1. https://www.dhirubhai.net/pulse/law-unintended-consequences-miles-goldstein-hnsyc

2. https://www.dhirubhai.net/pulse/write-wrong-miles-goldstein-ghgbc

3. https://influx.com/blog/cx-leaders-add-ai-customer-service-stack

4. https://builtin.com/artificial-intelligence/ai-fueled-software-crisis

5. “Landmark AI bill advances,” San Mateo Daily Journal, Vol. XXV, Edition 10, Thursday, Aug 29, 2024

6. “Questions about the safety of Tesla’s ‘Full Self-Driving’ system are growing,” San Mateo Daily Journal, Vol. XXV, Edition 10, Thursday, Aug 29, 2024


Photo credit: Alex Knight - https://www.pexels.com/photo/high-angle-photo-of-robot-2599244/

