Accountable, Explainable, and Unbiased AI: Are we there yet?
Subramanian Murukesan
Digital Transformation Consultant | Cloud Certified Architect | DevOps & SRE Certified | Presales Solution Architect
Many believe the time has come for Artificial Intelligence (AI) to play a major role in shaping our future. There are many success stories to reinforce this belief: AI successfully powers real-world consumer applications, from Apple's Siri and Amazon's Alexa to robot waiters in restaurants. Companies across sectors are also increasingly harnessing AI's power in their operations, as embracing AI promises considerable benefits for businesses and economies through its contributions to productivity growth and innovation. These developments are prompting the service industry and developers to rapidly embrace the new opportunities in the AI space, even while being fully aware of the challenges that unregulated AI is bound to bring, along with unmanageable implications for their end users.
Some of the technical challenges associated with machine learning and deep learning, such as the need for enormous processing power and for training data sets that are sufficiently large and comprehensive, are being addressed through cloud compute platforms and massively parallel processing systems. Challenges associated with building reliable AI models are being addressed by next-generation AI algorithms through supervised learning, unsupervised learning, and reinforcement learning. Yet even as technological progress has gathered pace, many hard problems remain that will require further breakthroughs as well as government intervention and regulation, and hence most experts argue that truly ethical AI applications are still decades away.
Some of the hard problems and social challenges where legal clarity is required relate to data collection and use, and to the accuracy and quality of AI systems. There is also a need to ensure that bias and discrimination are accounted for, along with fairness, responsibility, and liability.
Europe has led the way on data privacy and the use of personal information, critical issues that the General Data Protection Regulation (GDPR) was designed to address. GDPR introduced more stringent consent requirements for data collection, gave users the right to be forgotten and the right to object, and strengthened supervision of organizations that gather, control, and process data, with significant fines for failures to comply. Even so, experts agree that GDPR does not cover the whole AI gamut, and they believe more guidelines and frameworks are in the pipeline.
From a transparency and fairness perspective, people in general do not feel comfortable when they do not understand how a decision was made, especially where predictions have social implications; here the "black box" complexity of deep learning techniques creates a huge challenge. Such systems should therefore be transparent by design and should include redress mechanisms for potential harms that may arise. This can take the form of a human in the loop, since human judgement is still a key component of a balanced AI system, or the existence of a kill switch. These safeguards should be addressed through ethical principles, standards, and regulatory frameworks.
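As an illustrative sketch only (the class name, threshold, and routing labels below are hypothetical, not drawn from any cited framework), a human-in-the-loop safeguard combined with a kill switch might look like this: low-confidence model decisions are routed to a human reviewer, and an operator-controlled kill switch halts all automated decisions outright.

```python
# Hypothetical sketch of a human-in-the-loop gate with a kill switch.
# Names and the 0.9 threshold are illustrative assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str       # what the model recommends
    confidence: float  # model's confidence score, 0.0 to 1.0

class HumanInTheLoopGate:
    def __init__(self, confidence_threshold: float = 0.9):
        self.confidence_threshold = confidence_threshold
        self.kill_switch_engaged = False  # operator can halt all automation

    def route(self, decision: Decision) -> str:
        if self.kill_switch_engaged:
            return "human_review"   # automation halted entirely
        if decision.confidence < self.confidence_threshold:
            return "human_review"   # too uncertain to act without a human
        return "automated"          # confident enough to proceed

gate = HumanInTheLoopGate()
print(gate.route(Decision("approve_application", 0.97)))  # automated
print(gate.route(Decision("deny_application", 0.62)))     # human_review
gate.kill_switch_engaged = True
print(gate.route(Decision("approve_application", 0.99)))  # human_review
```

The design point is that the fallback is always toward human judgement: both uncertainty and the kill switch route to review, so the automated path is the only one that must be earned.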
The European Union has published guidelines (https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai) outlining the essential steps a developer should take to promote the creation of trustworthy and ethical artificial intelligence applications, and the service industry should take them to heart. The guidelines describe seven requirements any AI system should meet to prevent AI from running amok.
The UK's Institute of Business Ethics (IBE) has also issued a briefing urging organisations to examine the risks, impacts, and side effects that AI might have for their business and their stakeholders, as well as for wider society. The report proposes a framework outlining ten core values and principles for the use of AI in business. These are intended to "minimise the risk of ethical lapses due to an improper use of AI technologies".
In India, NITI Aayog released a policy paper, 'National Strategy for Artificial Intelligence' (https://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf), which considers the importance of AI across different sectors. The report addresses India-specific challenges and focus areas for AI implementation and briefly describes the growing ethics, privacy, and security concerns associated with AI deployments. It also discusses the remediation and regulatory measures needed to deal with such issues so that AI applications remain safe and secure and contribute to a better life for mankind.
On the liability front, since AI is considered inanimate, a strict liability scheme that holds the producer or manufacturer of the product liable for harm, regardless of fault, might be an approach for regulatory bodies to adopt.
AI guidelines proposed by various organizations and governments that apply to the service industry and developers are currently only emerging. The German government recently adopted guidelines for self-driving cars that prioritize the value and equality of human life over damage to property, while in the US individual states have begun drafting their own sets of rules, setting the stage for a confusing patchwork of regulation.
In the UK, the government's plan to use AI innovation to "transform the prevention, early diagnosis and treatment of chronic diseases by 2030" led to the publication of the ten principles of the NHS code of conduct for the use of artificial intelligence (AI) and other data-driven technologies; these principles include understanding user need, being fair, being transparent, and ensuring the technology is secure.
These guidelines are scattered across sectors and levels of government, resulting in an inconsistent and unclear landscape of rules concerning the development of AI applications. Proposals for new regulations, whether strengthening consumer data, privacy, and security protections or establishing a generally shared framework and set of principles for the beneficial and safe use of AI, will only add to this fragmentation in the coming years. Nevertheless, it is imperative for the service industry and developers to keep pace with such guidelines, or they risk overlooking potential ethical implications that could produce unwelcome results, as witnessed in recent scenarios: an AI-driven online ad-targeting system undermining elections in the US; a voice-recognition system used to detect immigration fraud cancelling thousands of visas and causing erroneous deportations in the UK; and Amazon's recruiting tool ending up gender-biased.