Accountable AI - Let's talk about that

Is there a need for Responsible AI or Accountable AI?


Since this is a professional network, I would rather focus on the business impact of technology innovation than its social impact. The era we are entering will make us even more dependent on technology. Technology has already taken away our confidence in our own knowledge: even when we know something, we do not feel confident enough to state it without doubt, and we end up validating it with Google.

Our minds have become incapable of doing simple calculations, let alone remembering phone numbers, which we could do just a few years ago. We trust technology so much that we question our own memory and intelligence.

I am sure those of you who have kids have, at some point, had them question your knowledge and suggest that they check with Google first. My daughter does that to me all the time.

Future generations will rely on applications and tools for every menial task that should have been a no-brainer, tasks one's own brain should be able to process and guide using what we call common sense, knowledge, or experience. So, if we rely on technology to perform or guide every small task, we are surely going to turn to technology for the bigger, more important, more complex tasks and decisions.

Let's consider some scenarios:

1. Doctors will make decisions based on AI assistance.

2. Teachers will have AI assistants to drive course lessons for students.

3. Cars will in any case be autonomous, so no more driving skills, alertness, etc.

4. Policy makers will rely on the intelligence of technologies (forecasts, predictions, etc.).

5. Corporate decisions: where to invest, whom to hire or let go, etc.

I invite you all to suggest more scenarios.

No doubt, all the above scenarios will help produce better outcomes in the majority of cases: better results for students, improved accuracy of medical treatment, fewer vehicular accidents, and possibly better policy making. However, keep in mind that an AI will be trained on current-world data and will therefore inherit its biases, missing links, and so on; it may not be able to build a human emotional context around things and situations, and it may make reliance on humans seem less important.

Side note: would we even need teachers in the future? An adaptive AI bot could assess every student's needs and teach them according to individual capability and pace. Possible in the near future, or just my imagination?

If AI is going to impact every intricate detail of our lives, isn't it important to understand the risks, gaps, and threats at the same level at which we have been propagating its benefits, usage, and adoption?

Any time a new technology innovation is announced, there is a race to creatively define its use cases and jump on the bandwagon to be seen as a leading voice or thought leader in the realm of the new technology. We witnessed the same with Blockchain, the Metaverse, NFTs, 3D printing, etc. These technologies did bring a paradigm shift in how technology can be perceived and adopted; however, they lacked the versatility needed for practical, real-life application, or maybe the time was simply not right for mass adoption. While the number of applicable sectors, verticals, and real-life scenarios proposed by the think tanks seemed very high, actual implementations happened in only a handful of cases. The result is that the hype fades, replaced by some new tech innovation, or the technology simply gets tagged as AI these days.


What is typically missing in the race to ride the wave is an assessment of the impact of this initial rush on the various facets of human life.


The industry has made some serious attempts to unbox AI: to make it more transparent and better understood (or should I say, to make it "Explainable AI"), and to move towards frameworks for Responsible AI that can be adopted, promoted, and evolved to ensure AI is put to good use for humans. However, there is no single standard against which the application or usage of AI can be measured and assessed. We witnessed something similar with data privacy: it took time, but we got something called GDPR, and it is becoming increasingly important now.

Every leading think tank organization has defined Responsible AI through its own lens while trying to add an additional element over its rivals, covering aspects such as:

1. Transparency

2. Data protection

3. Explainability

4. System bias

5. Safety

6. Legal & regulation

7. Governance, etc.

However, these only address the limitations of the technology itself; they do not apply a risk view to its application.

Investments are being made to add elements of the above guiding principles. However, given the pace of innovation in the current era and the cut-throat competition to win the AI race, it does not seem reasonable for enterprises to pause and build newer tech with a greater sense of responsibility. A simple approach taken is the EULA before an AI product is launched into production: it runs into pages, no user reads it, and the new tech is pushed into the market with safeguards for the company but not necessarily for the end users.


#GenerativeAI will become mainstream, and accompanied by #hyperautomation it will definitely change our way of doing things, making them faster and more efficient. However, as with any tool, something designed to solve one issue can also give rise to others.

Generative AI can and will be put to use in areas such as:

1. Fake news / misinformation spread – generated text, accompanied by images, can be used to spread misinformation. The internet is already struggling with fake news, and this will only increase by putting such tools in the hands of perpetrators.

2. Bias – generative models will have inherent bias based on the data fed into them. This is similar to conditioning children at an early age to hate a particular race or community; they end up doing so as they grow up, because that is their mental bias.

3. Copyright infringement – generative AI can learn to write in the particular styles of individuals, and that content can be repurposed, or copyrighted material simply leveraged to generate something new.

4. Duplication (voice, image, video generation) – this can be used for fraud or to spread hate and panic. For example, a fake video of a country's leader spreading false propaganda can be generated.

5. Hallucination, misinformation, or inaccuracy of facts – generative AI is only as accurate as the data it is fed or trained on, so if the training data is inaccurate or not contextualized to the scenario, it will generate entirely new outcomes, something called hallucination. We have already started trusting the information presented in the top results of Google, and out of laziness or ease of dependence we may start putting even more trust in the outcomes of generative AI for our personal and business decisions.

Considering the above potential risks (there could be more), there is a need to move beyond #ResponsibleAI to #AccountableAI, wherein enterprises put measures in place to be seen as, and held, more accountable when they roll out new AI-based products to the masses.

The focus has to be on putting in place frameworks that objectively measure the impact of any new product, technology, or use case before it is launched, with a better understanding of its consequences.

For example

Impact on the environment – AI leverages advanced language models, which require processing high volumes of data and complex computation, which in turn results in higher power consumption. Did you know that a single Bitcoin transaction results in CO2 emissions equivalent to those of an average family over three weeks?

A Bitcoin produced in 2021 would have generated 113 metric tonnes of CO2 equivalent, which is 126 times more than one mined in 2016.

How much energy does one Bitcoin transaction use?

Energy consumption in kWh:

1 Bitcoin transaction: 703.25
100,000 VISA transactions: 148.63

Researchers estimated that creating the much larger GPT-3, which has 175 billion parameters, consumed 1,287 megawatt-hours of electricity and generated 552 tons of carbon dioxide equivalent, the equivalent of 123 gasoline-powered passenger vehicles driven for one year.
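To put these figures in perspective, here is a quick back-of-the-envelope calculation using only the numbers quoted above (no external data):

```python
# Back-of-the-envelope comparison using the figures quoted above.
BTC_TX_KWH = 703.25        # energy of 1 Bitcoin transaction, in kWh
VISA_100K_KWH = 148.63     # energy of 100,000 VISA transactions, in kWh

visa_tx_kwh = VISA_100K_KWH / 100_000   # energy of a single VISA transaction
ratio = BTC_TX_KWH / visa_tx_kwh        # one Bitcoin tx vs. one VISA tx

# GPT-3 estimate: 552 tons of CO2 equivalent spread over 123 car-years
co2_per_car_year = 552 / 123            # implied tons of CO2e per car per year

print(f"One Bitcoin transaction uses roughly {ratio:,.0f}x "
      f"the energy of one VISA transaction")
print(f"Implied emissions per car-year: {co2_per_car_year:.1f} tons CO2e")
```

The point of the arithmetic is simply that a single Bitcoin transaction consumes on the order of hundreds of thousands of times the energy of a single card payment, which is why energy impact deserves a line item in any accountability framework.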


Here's my attempt at a light framework for what should be considered as we embark on driving innovation through generative AI.

[Image: Accountable AI framework]


It is imperative that the organizations pioneering this field of innovation also take accountability seriously before they hand such technologies over to common people. In fact, even OpenAI's founder, Mr. Sam Altman, suggested greater regulation of AI in a Congressional hearing.

We need the technology to have inherent gateways that establish continuous monitoring over the usage of democratized generative AI use cases. I believe there are 10 key aspects which must be baked into the lifecycle of development, implementation, and live deployment of code and models:


1. Identification – who is using it?

2. Growth & expansion – how many entities/individuals are using it?

3. Context of use – how is it being used? Possibly capture responses from users through surveys or self-declaration.

4. Evolution – how are the models evolving?

5. Bias monitoring and removal – assessment of training and live data for bias.

6. Innovation – the evolving nature of use cases.

7. Explainability – ensure each iteration, self-learning step, or shift by the AI is explainable.

8. Transparency – maintain transparency on usage and application.

9. Accuracy or relevancy – measurement/scoring of the outcomes on a continuous basis.

10. Referencing – generated information should be traceable to its source, or tagged if generated entirely fresh by generative AI. (Microsoft has announced watermarking of its AI-generated images.)
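To make the idea more concrete, here is a minimal sketch of how a few of these aspects (identification, growth tracking, context of use, relevancy scoring, and referencing) could be baked into a generation pipeline as a structured audit record. All class, field, and function names here are hypothetical illustrations, not part of any real framework:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AccountabilityRecord:
    """One audit entry per generation request (hypothetical schema)."""
    user_id: str            # 1. Identification: who is using it?
    use_context: str        # 3. Context of use: declared purpose
    model_version: str      # 4. Evolution: which model iteration answered
    relevancy_score: float  # 9. Continuous accuracy/relevancy scoring
    sources: list = field(default_factory=list)  # 10. Referencing
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    @property
    def is_fresh_generation(self) -> bool:
        # 10. Tag output that cannot be traced back to any source
        return not self.sources

AUDIT_LOG: list[AccountabilityRecord] = []

def log_generation(user_id: str, use_context: str, model_version: str,
                   relevancy_score: float,
                   sources: Optional[list] = None) -> AccountabilityRecord:
    """Record one generation event in the audit trail."""
    record = AccountabilityRecord(user_id, use_context, model_version,
                                  relevancy_score, sources or [])
    AUDIT_LOG.append(record)
    return record

def unique_users() -> int:
    # 2. Growth & expansion: how many individuals are using it?
    return len({r.user_id for r in AUDIT_LOG})
```

A real implementation would of course need privacy-preserving identification and tamper-proof storage, but the design point is that accountability data is captured at generation time, not reconstructed after an incident.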


My intent in sharing these thoughts around Accountable AI is neither to limit the independence of the individuals and enterprises who are truly pushing the benchmarks of innovation, nor to take away freedom of choice and reasoning, but to highlight the need for more defined processes, guidelines, and policies, so as to ensure that the technology is used only for the betterment of not just the human race but our entire planet.

PS: It also makes me wonder, would we need a "kill switch" if AI goes out of hand or does not perform the way it was designed to? If yes, it will open the debate of who will have control over that kill switch.

Thanks for taking the time to read this article. Do share it further so we get more thoughts on this from the communities working in this space.

References

https://www.microsoft.com/en-us/ai/responsible-ai?activetab=pivot1%3aprimaryr6

https://www.ibm.com/design/ai/ethics/accountability/

https://cloud.google.com/responsible-ai

https://www.responsible.ai/

https://lens.monash.edu/@politics-society/2023/03/29/1385545/so-sue-me-wholl-be-held-liable-when-ai-makes-mistakes#:~:text=Sometimes%2C%20the%20AI%20system%20may,case%2Dby%2Dcase%20basis.
