AI is Power, and with Power comes Responsibility
Kesavan Hariharasubramanian
Results-oriented, Data-driven IT Delivery Leader | Data Analytics / ML / AI Evangelist | ex-iNautix, PwC, HCL, Cognizant, Western Union | Alumnus - CET (Trivandrum), IIT Kharagpur (VGSOM), Great Learning
The growing reach of AI
Artificial Intelligence (AI) has seeped into corporate consciousness over the last decade to the point that we can no longer imagine a world without it.
‘Knowledge is Power’ is an adage we are all familiar with. With AI systems holding the key to a treasure-house of data (a.k.a. knowledge), AI has become synonymous with power and pervasiveness.
Many interesting use cases have emerged as AI demonstrates what is possible, and they keep evolving in step with the new challenges and realities we face in everyday life. Generative AI, the latest evolution of the field, has only upped the ante.
A recent article highlights how Natural Language Processing (NLP) is employed in Kenya to predict election violence. This method analyzes the sentiment of speeches delivered by influential figures and leaders in the country. The model predicts increases and decreases in average fatalities for look-ahead periods between 50 and 150 days, with overall accuracy approaching 85%.
Elections are a reality we face from time to time, and this use case is a good example of AI being applied to the very real challenge of election-related violence.
The above example highlights the positive side of AI, demonstrating its ability to forecast violence, thus offering an opportunity for prevention.
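To make the idea concrete, here is a minimal, purely illustrative sketch of such a pipeline: score the hostility of speeches with a small lexicon, then feed the aggregated weekly score into a simple classifier that predicts whether fatalities will rise over a look-ahead window. The lexicons, data, and model below are assumptions for illustration only, not the method used in the cited study.

```python
# Purely illustrative sketch: a toy lexicon and made-up data, not the model
# from the cited study (which uses far richer NLP and conflict datasets).
import re
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical word lists standing in for a real sentiment/hostility model.
HOSTILE = {"enemy", "enemies", "traitors", "crush", "revenge", "purge"}
CALMING = {"peace", "unity", "dialogue", "together", "reconcile"}

def speech_score(text: str) -> float:
    """Crude hostility score: hostile-word share minus calming-word share."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return (sum(w in HOSTILE for w in words) - sum(w in CALMING for w in words)) / len(words)

# Made-up training data: average weekly speech score vs. whether average
# fatalities rose over the following ~90-day look-ahead window (1 = rose).
weekly_scores = np.array([[0.02], [0.00], [0.08], [0.01], [0.11], [0.05], [0.00], [0.09]])
fatalities_rose = np.array([0, 0, 1, 0, 1, 1, 0, 1])

model = LogisticRegression().fit(weekly_scores, fatalities_rose)

# Estimate the risk direction for a new week of speeches.
new_week = np.array([[speech_score("They are traitors and we will crush them")]])
print(model.predict_proba(new_week)[0, 1])  # probability that fatalities increase
```

In a real system, the lexicon would be replaced by a trained NLP model and the classifier would be validated against historical conflict data; the point here is only to show how speech sentiment can become a quantitative early-warning signal.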
Another favorable application of AI these days is in recruitment. Unilever, for instance, processes over 1.8 million job applications each year. Partnering with Pymetrics, the company has built an online platform that assesses candidates through video software. In the second stage of interviews, candidates answer questions for 30 minutes while the software analyses their body language, facial expressions, and word choice using natural language processing and body-language analysis technology.
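As a rough, hedged illustration of just the word-choice part of such screening (the video and body-language analysis are far more involved), here is a toy feature extractor; the trait names and lexicons are hypothetical, not Pymetrics' actual scoring pipeline.

```python
# Illustrative sketch only: a toy word-choice feature extractor, not a real
# candidate-assessment system.
from collections import Counter
import re

# Hypothetical trait lexicons a screening model might use as weak signals.
TRAIT_LEXICONS = {
    "collaboration": {"we", "team", "together", "helped", "shared"},
    "ownership": {"i", "led", "decided", "delivered", "owned"},
}

def word_choice_features(transcript: str) -> dict:
    """Return per-trait word frequencies from a candidate's spoken answers."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return {
        trait: sum(counts[w] for w in lexicon) / total
        for trait, lexicon in TRAIT_LEXICONS.items()
    }

print(word_choice_features("We worked together as a team and I led the delivery"))
```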
Pitfalls in AI applications to be aware of
While AI can be applied for numerous beneficial purposes, it's essential to remember that, if not modelled responsibly, it has the potential to cause widespread chaos. Here is a well-known example of an AI model going awry.
A Case Study in point
Let's delve into the Air Canada case to understand precisely what occurred.
In November 2022, Air Canada's chatbot promised a discount that wasn't available to passenger Jake Moffatt, who was assured that he could book a full-fare flight for his grandmother's funeral and then apply for a bereavement fare after the fact.
According to the civil resolution tribunal, when Moffatt applied for the discount, the airline said the chatbot had been wrong: the request needed to be submitted before the flight, and it refused to offer the discount.
Instead, the airline said the chatbot was a "separate legal entity that is responsible for its own actions." Air Canada argued that Moffatt should have gone to the link provided by the chatbot, where he would have seen the correct policy.
In February 2024, the British Columbia Civil Resolution Tribunal rejected that argument, ruling that Air Canada had to pay Moffatt $812.02 (£642.64) in damages and tribunal fees.
Not only did this incident lead to a financial penalty for Air Canada, but it also caused reputational damage, which takes time to fade from public memory.
Responsible AI – Need of the hour
These examples drive home the message that AI systems need to be closely monitored and regulated as they are developed. It is imperative to create a governance mechanism around AI models that ensures all ethical, legal, and safety dimensions are examined. In other words, the need of the hour is a 'Responsible AI' framework for the ethical use of AI technology.
AI works on the data humans feed it; AI systems cannot behave responsibly by themselves. Hence, it is the responsibility of the humans who develop and operate AI to ensure fairness and transparency in the predictions that AI systems produce.
According to recent Accenture research: “Only 35% of global consumers trust how AI technology is being implemented by organizations. And 77% think organizations must be held accountable for their misuse of AI.”
Since the consequences of AI misuse can be heavily damaging, all organizations should resolve to adopt 'responsible AI' practices: an ethical approach to AI technology that ensures their AI systems are explainable, monitorable, reproducible, secure, human-centered, unbiased, and justifiable.
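To give one small, concrete example of what 'monitorable' and 'unbiased' can mean in practice, the sketch below computes per-group selection rates from hypothetical screening decisions and surfaces the gap between groups; the group names and data are made up, and a real responsible-AI programme would run such checks continuously and escalate large gaps to human review.

```python
# Illustrative sketch only: a minimal bias check on hypothetical screening
# decisions, one small piece of what a responsible-AI review might monitor.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

# Hypothetical outcomes from an automated screening model.
decisions = [("group_a", 1), ("group_a", 1), ("group_a", 0),
             ("group_b", 1), ("group_b", 0), ("group_b", 0)]

rates = selection_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, gap)  # a large gap between groups would trigger a human review
```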
Call to Action
All organizations engaging with AI should therefore prioritize 'responsible AI' practices and incorporate them throughout the AI implementation lifecycle, from AI strategy to deployment. Concretely, that means building in the qualities outlined above: explainability, monitorability, reproducibility, security, human-centeredness, fairness, and justifiability.
Last but not least, in any AI endeavor, especially those with worldwide implications, it is important to remind ourselves often of the adage popularized by Marvel's Spider-Man: "With great power comes great responsibility."
#PowerofAI #AIisPower
IT Service Delivery Manager | Agile | ITIL | Cards Acquiring, Fraud, Dispute, 3D Secure | Power BI, Python, SQL, Azure Data Factory, Databricks | MS Power Automate | ServiceNow | Incidents, Problems, Change & QA |
4 months ago
Really good insight on AI. My opinion is that organizations must be accountable for whatever mishaps arise from their AI solutions. I am fully against unmanned AI solutions; AI still needs maturity to operate on its own. Without regulations and governing bodies, AI certainly would not make our world a better place. Content regulation is still a nightmare, especially in social media. Unfortunately, our generation cannot rely on the content published in public forums, and although social media players keep saying they have the best AI solutions to curtail fake content, we will always need human moderators assisted by AI. Thank you Kesavan Hariharasubramanian for showing both the angel and the devil in AI. I think quality and productivity must co-exist, and neither can be compromised.
Engineering lead for privileged access management solutions. Extensive experience in IAM/PAM solutions delivery and support.
4 months ago
Informative article on AI, giving insight into how it needs to be handled. Good one.
Global Operations Executive | Expertise in Industry 4.0 & IIOT Manufacturing Transformation, Robotics, Autonomous Vehicles, Smart Factory
4 months ago
Kesavan Hariharasubramanian, a well-written article. It has me thinking about how these AI use cases, tests, and metrics need to be configured in product design, manufacturing, and supply chain environments to mitigate issues.
Specialist in Core Network at Ericsson
4 months ago
Very crisply articulated, Kesavan!
Marketing | Strategy | Channel Management | Customer Relations
4 months ago
A well-written article that is informative and also acts as guidance. I want to add a few points: 1) AI should be treated as a productivity-enhancing tool rather than a human-resource equivalent; 2) AI should be a multi-solution recommendation engine that enhances confidence in decision-making; 3) AI cannot become a tool of fads and obsession. Organisations should adhere to regulations and jurisprudence.