Let us get serious about AI Risk Management
AI Risk Management @AiThoughts.Org


I follow Gary Marcus's columns on the rapid advances in AI and his clear focus on possible risks. While his views may sound like "doomsday" warnings, it is important and necessary to listen to them, make your own assessment of the risks, and decide on your level of risk mitigation.

In one of his latest posts, he talked about Air Canada's chatbot, which offered a customer a discount on ticket fares that was not officially available under Air Canada's pricing and discounting policy. Perhaps the chatbot combined some sentences about bereavement travel help and discounts and invented a brand-new bereavement discount policy. The customer saved screenshots, sued the company for the refund, and won a favorable verdict. Of course, one has to look past Gary's sensational headline that "chatbots are dead" and examine this incident carefully.

In my view, a chatbot's main functions are to (a) understand and interpret human voice or text, turn it into business questions, pass those to the business applications via APIs, and get answers; and (b) convert the business application's answer into a human-understandable audio or text form and communicate it back to the customer. Of course, decent language and empathy in responses are required at a minimum. I worry more about the risk of inappropriate or foul language in chatbot responses, which could be a major embarrassment for the enterprise.
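The two functions above can be sketched in a few lines. This is a minimal, hypothetical illustration of the pattern, not Air Canada's system or any real API: the intent keywords, policy table, and function names are all stand-ins. The key point is that every answer comes from the official policy store, and anything unrecognized is escalated rather than improvised.

```python
# Hypothetical sketch of the two-function chatbot pattern:
# (a) interpret customer text into a known business question,
# (b) phrase the official answer back to the customer.
# Policy texts and intent keywords below are invented for illustration.

OFFICIAL_POLICIES = {
    "bereavement_fare": "Bereavement fares must be requested before travel; "
                        "no retroactive discounts are offered.",
    "refund": "Refunds follow the rules of the fare class purchased.",
}

def interpret(user_text: str) -> str:
    """(a) Map free-form customer text to a known business question."""
    text = user_text.lower()
    if "bereavement" in text:
        return "bereavement_fare"
    if "refund" in text:
        return "refund"
    return "unknown"

def answer(user_text: str) -> str:
    """(b) Fetch the official policy and phrase it politely.

    If no policy matches, escalate to a human -- the bot never
    invents a policy of its own.
    """
    topic = interpret(user_text)
    policy = OFFICIAL_POLICIES.get(topic)
    if policy is None:
        return "Let me connect you with a human agent who can help."
    return f"I'm sorry to hear about your situation. Our policy: {policy}"

print(answer("Can I get a bereavement discount after my trip?"))
```

The design choice worth noting is the `None` branch: the safe failure mode for an out-of-scope question is escalation, not generation.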

In non-AI or even RPA-automated customer service centers, the first-level human agent understands the input, whether audio or text, applies the current business policies, and responds to the customer. If the current business policies do not satisfy the customer, the call is usually transferred to a team lead or manager who has additional discretionary powers, within set limits, to provide extra discounts, offer coupons, and so on. In neither case is anyone allowed to "invent" new business policies.
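That tiered-discretion rule is easy to encode as a hard guardrail. The sketch below is a hypothetical illustration (the roles and percentage limits are invented, not from any real call-center system): each role has a ceiling, and no request can exceed it, which is precisely what "no one may invent a new policy" means in code.

```python
# Hypothetical tiered-discretion guardrail: each role has a hard
# ceiling on the extra discount it may grant. Limits are invented
# for illustration.

DISCRETION_LIMITS = {
    "agent": 0.00,    # first-level agent: policy answers only
    "manager": 0.10,  # team lead/manager: up to 10% extra, within set limits
}

def grant_discount(role: str, requested: float) -> float:
    """Return the discount actually granted, capped by the role's limit.

    Unknown roles get zero discretion -- the safe default.
    """
    limit = DISCRETION_LIMITS.get(role, 0.0)
    return min(max(requested, 0.0), limit)

# An agent cannot grant any discretionary discount; a manager caps at 10%.
assert grant_discount("agent", 0.15) == 0.0
assert grant_discount("manager", 0.15) == 0.10
```

Wrapping a chatbot's proposed concession in a check like this, before it reaches the customer, would have caught the invented bereavement discount.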

Hence, chatbots are very much alive, and reports of their death are vastly exaggerated! Customers are happy to text or talk to a chatbot as if they were talking to a human, and happy to get back answers they can understand; if we deliver the audio response in customer-specific accents, even better.

Maybe Air Canada went far beyond the scope of a customer service agent and rolled agent, manager, and revenue officer into a single bot. If a serious AI risk management review or audit had been done, these issues would have been flagged before production. An enterprise can consciously decide to accept the risk of hallucinations and invented business policies and pay a discount to a few customers; then there is no need to go to court, lose the case, and get bad publicity. Perhaps such an "invented" policy might even become an official business policy, if the bot's idea is well thought out!

This and other incidents like it are wake-up calls for enterprises: take AI risk management seriously and establish a policy for it. At @AiThoughts.Org, we offer AI risk management services; do contact Diwakar Menon for any assistance.


Lucius Lobo

Chief Information Security Officer at Tech Mahindra

9 months ago

Love this, an insightful example. Clearly signs of problems to come.

Yeshwant Satam

ICT professional and consultant

9 months ago

A very important dimension you have touched upon. While it's crucial to perform risk assessments before deploying AI solutions like chatbots, achieving completely 'foolproof' status is inherently difficult. As Diwakar Menon once explained to me: "Risk management in AI deployments is not about achieving absolute certainty, but about proactively identifying and minimizing potential risks, and then supporting it with certain strategies (like XAI) and tool sets in AIOps for timely interventions and corrections."
