AI is Power, and with Power comes Responsibility

Growing expansiveness of AI

Artificial Intelligence (AI) has seeped into corporate consciousness over the last decade, to the extent that we can no longer conceive of a world without it.

‘Knowledge is Power’ is an adage we are all familiar with. With AI systems holding the key to a treasure house of data (a.k.a. knowledge), AI has become synonymous with power and pervasiveness.

Many interesting use cases have started emerging from the possibilities demonstrated by AI. In fact, use cases are evolving in tune with the new challenges and realities we face in everyday life. Generative AI (Gen AI), the latest evolution of AI, has only upped the ante.

A recent article highlights how Natural Language Processing (NLP) is employed in Kenya to predict election violence. This method analyzes the sentiment of speeches delivered by influential figures and leaders in the country. The model predicts increases and decreases in average fatalities for look-ahead periods between 50 and 150 days, with overall accuracy approaching 85%.
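The core of such an approach – scoring speech text for sentiment and aggregating the scores into a feature for a violence forecaster – can be sketched in a few lines. This is a simplified, hypothetical illustration, not the actual Kenyan model: the tiny word lists and the scoring rule below are invented for the example, and a real system would use a trained NLP model with far richer features.

```python
# Illustrative lexicon-based sentiment scoring of political speeches.
# The word lists are invented for this sketch; a production system
# would use a trained sentiment model, not a hand-made lexicon.
NEGATIVE = {"enemy", "fight", "destroy", "traitor"}
POSITIVE = {"peace", "unity", "together", "dialogue"}

def sentiment_score(speech: str) -> float:
    """Return a score in [-1, 1]; negative values suggest hostile rhetoric."""
    words = speech.lower().split()
    if not words:
        return 0.0
    neg = sum(w in NEGATIVE for w in words)
    pos = sum(w in POSITIVE for w in words)
    return (pos - neg) / len(words)

def hostility_feature(speeches: list[str]) -> float:
    """Aggregate speech sentiment into one input feature for a forecaster;
    higher values mean more hostile rhetoric on average."""
    scores = [sentiment_score(s) for s in speeches]
    return -sum(scores) / len(scores)
```

A downstream model would then combine such a feature with historical fatality data to predict changes over the 50-to-150-day look-ahead windows mentioned above.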

Elections are a recurring reality, and this use case is a typical example of how AI is being used to tackle a real challenge: violence during elections.

The above example highlights the positive side of AI, demonstrating its ability to forecast violence, thus offering an opportunity for prevention.

Another favorable application of AI across the world these days is in recruitment. For instance, Unilever processes over 1.8 million job applications each year. Partnering with Pymetrics, the company has built an online platform that can assess candidates using video software. In the second stage of interviews, candidates answer questions for 30 minutes while the software analyses their body language, facial expressions, and word choice using natural language processing and body-language analysis technology.


Pitfalls in AI applications to be aware of

While AI can be applied for numerous beneficial purposes, it is essential to remember that, if not modelled responsibly, it has the potential to cause widespread chaos. Here are a few examples where AI models went awry.

  • In February 2024, Air Canada was ordered to pay damages to a passenger after its chatbot gave the passenger false information about the airline's policy.
  • In November 2023, Sports Illustrated made headlines for publishing articles attributed to fake, AI-generated authors, whose biographies and photos were also created by AI.
  • In August 2023, a woman of African descent from Detroit, who was eight months pregnant, was falsely accused and arrested as a suspect in a robbery and carjacking case. The incident, caused by an AI error, resulted in her being detained for 11 hours – a traumatic experience.
  • In July 2023, it was discovered that ChatGPT could create phishing templates that a scammer could easily use to craft convincing scam emails.


A Case Study in point

Let's delve into the Air Canada case to understand precisely what occurred.

In November 2022, Air Canada's chatbot promised a discount that wasn't available to passenger Jake Moffatt, who was assured that he could book a full-fare flight for his grandmother's funeral and then apply for a bereavement fare after the fact.

According to the civil resolution tribunal, when Moffatt applied for the discount, the airline said the chatbot had been wrong – the request needed to be submitted before the flight – and refused to offer the discount.

Instead, the airline said the chatbot was a "separate legal entity that is responsible for its own actions." Air Canada argued that Moffatt should have gone to the link provided by the chatbot, where he would have seen the correct policy.

In February 2024, the British Columbia Civil Resolution Tribunal rejected that argument, ruling that Air Canada had to pay Moffatt $812.02 (£642.64) in damages and tribunal fees.

Not only did this incident lead to a financial penalty for Air Canada, but it also caused reputational damage, which takes time to fade from public memory.


Responsible AI – Need of the hour

These examples drive home the message that AI systems under development need to be closely monitored and regulated. It is imperative to create a governance mechanism around AI models that ensures the examination of all ethical, legal, and safety dimensions. In other words, the need of the hour is a 'Responsible AI' framework for the ethical use of AI technology.

AI works on the data fed to it by humans; AI systems cannot behave responsibly by themselves. Hence, it is the responsibility of the humans who develop and operate AI to ensure fairness and transparency in the predictions AI systems make.

According to recent Accenture research: “Only 35% of global consumers trust how AI technology is being implemented by organizations. And 77% think organizations must be held accountable for their misuse of AI.”

Since the consequences of AI misuse can be heavily damaging, all organizations should resolve to adopt 'responsible AI' practices – ensuring an ethical use of AI technology that guarantees their AI systems are explainable, monitorable, reproducible, secure, human-centered, unbiased, and justifiable.


Call to Action

This, therefore, implies that all organizations engaging AI should prioritize ‘responsible AI’ practices and incorporate them into the AI implementation lifecycle – i.e., from AI strategy to deployment. Here are the practices that organizations should strive to adopt.

  • Use a human-centered design approach: Engage with a diverse set of users and use-case scenarios and incorporate feedback before and during project development.
  • Identify multiple metrics to assess training and monitoring: Ensure that your metrics are appropriately aligned with your system's context and goals.
  • Examine your raw data closely: Analyse it carefully to ensure that you understand it in its entirety, including but not limited to user sensitivity and privacy.
  • Understand the limitations of your dataset and model: It is important to communicate the scope and coverage of the training, thereby clarifying the model's capabilities and limitations. For example, a shoe detector trained on stock photos works best on stock photos and has limited capability on user-generated photos taken with mobile phones.
  • Test, Test, Test: The importance of testing cannot be over-emphasized. Hence, one must conduct iterative user testing to incorporate diverse sets of user needs into the development cycle. It pays to apply the quality engineering principle of poka-yoke and build quality checks into a system so that unintended failures either cannot happen or trigger an immediate response (e.g., if an important feature is unexpectedly missing, the AI system won’t output a prediction).
  • Continue monitoring and updating the system after deployment: Continued monitoring will ensure your model considers real-world performance and user feedback. Before updating a deployed model, analyze how the candidate and deployed models differ and how the update will affect the overall system quality and user experience.
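The poka-yoke idea mentioned in the testing practice above – withholding a prediction when a required input is unexpectedly missing, rather than failing silently – can be sketched as a simple guard around the model call. The feature names and model interface below are hypothetical, chosen only to illustrate the pattern.

```python
from typing import Callable, Optional

# Hypothetical set of features the model must receive to predict safely.
REQUIRED_FEATURES = {"age", "income", "tenure"}

def safe_predict(features: dict, model: Callable[[dict], float]) -> Optional[float]:
    """Poka-yoke guard: return None instead of a prediction when any
    required feature is missing or null, so the failure cannot pass unnoticed."""
    missing = [f for f in REQUIRED_FEATURES
               if f not in features or features[f] is None]
    if missing:
        # In production this would also raise an alert for the monitoring team.
        print(f"Prediction withheld; missing features: {sorted(missing)}")
        return None
    return model(features)
```

The design choice is that a withheld prediction is an explicit, monitorable event, whereas a prediction computed from incomplete inputs is an invisible quality failure.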


Last but not least, in any AI endeavor, especially those with worldwide implications, it is important to remind ourselves often of the adage popularized by Marvel's Spider-Man: "With great power comes great responsibility."


References:

1. https://www.ictworks.org/natural-language-processing-predict-election-violence/
2. https://tech.co/news/list-ai-failures-mistakes-errors
3. https://www.cio.com/article/190888/5-famous-analytics-and-ai-disasters.html
4. https://www.worklife.news/technology/generative-ai-blunders-2023/
5. https://www.ibm.com/topics/responsible-ai
6. https://www.techtarget.com/searchenterpriseai/definition/responsible-AI
7. https://www.forbes.com/sites/marisagarcia/2024/02/19/what-air-canada-lost-in-remarkable-lying-ai-chatbot-case/?sh=38cd8420696f
8. https://www.bbc.com/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-know
9. https://cmr.berkeley.edu/2022/01/the-reputational-risks-of-ai/
10. https://ai.google/responsibility/responsible-ai-practices/
11. https://www.cio.com/article/652775/12-most-popular-ai-use-cases-in-the-enterprise-today.html


#PowerofAI #AIisPower


Sridhar Vellaisamy

IT Service Delivery Manager | Agile | ITIL | Cards Acquiring, Fraud, Dispute, 3D Secure | Power BI, Python, SQL, Azure Data Factory, Databricks | MS Power Automate | ServiceNow | Incidents, Problems, Change & QA |

4 months

Really good insight on AI. My opinion is that organizations must be accountable for any mishaps from their AI solutions. I am fully against unmanned AI solutions; AI still needs maturity to operate on its own. Without regulations and governing bodies, AI certainly will not make our world a better place. Content regulation is still a nightmare, especially in social media; unfortunately, our generation cannot rely on the content published in public forums. Although social media players keep saying they have the best AI solutions to curtail fake content, we will always need human moderators assisted by AI. Thank you Kesavan Hariharasubramanian for showing the true angel and the devil in AI. I think quality and productivity must co-exist and neither can be compromised.

Shanker Ramalingam

Engineering lead for privileged access management solutions. Extensive experience in IAM/PAM solutions delivery and support.

4 months

Informative article on AI giving insight into how it needs to be handled. Good one

Arthi Sairaman

Global Operations Executive | Expertise in Industry 4.0 & IIOT Manufacturing Transformation, Robotics, Autonomous Vehicles, Smart Factory

4 months

Kesavan Hariharasubramanian, a well-written article. It has me thinking about how these AI use cases, tests, and metrics need to be configured in product design, manufacturing, and supply chain environments to mitigate issues.

Raghav Gururaj

Specialist in Core Network at Ericsson

4 months

Very crisply articulated, Kesavan!!

Arun Kumar Rallapalli

Marketing | Strategy | Channel Management | Customer Relations

4 months

A well-written article that is informative and also acts as guidance. I want to add a few points: 1) AI should be considered a productivity-enhancing tool rather than a human-resource equivalent; 2) AI should be a multi-solution recommendation engine that strengthens decision-making; 3) AI should not become a tool of fad or compulsion. Organisations should adhere to regulations and jurisprudence.
