The polysemic and polymorphic AI
Shyam Singhal (Ph.D.)
Digital & Agile Transformation | Product Innovation and Development | Practice Head | Delivery Head | Excellence Head | Ex-Microsoft, Hewlett-Packard, Accenture
Whether we like it or not, AI is here to stay. Since its emergence the world has seen many forms of it, and many debates about it. The most recent is the “event” that unfolded at OpenAI, where the CEO was ousted for a brief period, only to return with a reinforced following.
Since AI’s emergence we have seen states of euphoria, caution, pessimism, activism…the list goes on. The event at OpenAI was the result of these conflicting states. One faction wants to pursue it without any restriction or inhibition; then there are others who want to pursue it, but with caution.
I don’t understand: what is this “caution”?
We developed nuclear and biological weapons in the name of societal advancement and of creating a deterrent against potential threats. But who were we afraid of? Weren’t we aware that someday their accessibility would extend beyond a few nations, potentially to “irresponsible” entities too? Didn’t nations and governments try to prevent that from happening by forming cartels and regulations? What is the end result?
The Internet revolutionized the world; we too thought of its repercussions and put measures in place to prevent its misuse. Yet we couldn’t prevent the creation and flourishing of the dark web.
We thought cryptocurrency was the best thing, the most secure way to transact. Is it? Anyone with access to data on its use can easily see why it is not the preferred way of transacting in any nation.
There is no poka-yoke available to us that can foresee all usages and outcomes in advance, because poka-yoke itself evolves in gradual steps.
The point I’m trying to make is that we invent or innovate with some use case in mind. At that point our sole focus is on materializing that use case, without delving into its consequences. AI is no exception. That is why “Ethical AI” and “Responsible AI” are afterthoughts, or should I say, just “terms”.
Precisely, that is why, after seeing the power of Generative AI in the form of LLMs and its applications in other functions, people started seeing the potential for its misuse. Did we really foresee that happening at this stage? No, definitely not. Nor was it true for any of the previous examples. It is just that we admitted it publicly at that point, either to hide our gaffes or to showcase our ethical and responsible behavior to the world.
We started AI as a companion to humans, to increase our productivity and free up the workforce for more skilled work. But was that the case all along? Do we have that many jobs at that level? Do we all have access to the same level of resources, to good education, and to the chance to become competent? Certainly not.
Let’s accept that Industry 5.0, or concern for society and the environment, has always been and will always be an afterthought for commercial applications, inventions, and innovations. All of them are focused on creating differentiation, optimizing cost, increasing revenue per unit, and growing overall profits, for that is what determines the payout for the CXOs and the board.
That is why “Ethical AI”, “Responsible AI”, AI regulations, and the like are ploys to cover the fact that they have started something that is already out of their control, and they don’t want to be painted with the “irresponsible” tag, to say the least.
Otherwise, tell me: how do they plan to control the open-source models that are already on the market? They couldn’t control the dark web, which also has access to cutting-edge technology and resources. So do you really think that with these regulations they will be able to control a rogue, even more powerful model and its implementations?
Let me take one more example: an insurance company using AI to decide on premiums or to build its underwriting engine. Will it ignore factors like the demography of the region, the education of its people, or the regional crime rate? If not, wouldn’t it be discriminating against people who get penalized for no fault of their own? But will you prevent an organization from doing so? No, because it also has a business to run, so in this case it is legal to discriminate.
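To make the underwriting point concrete, here is a minimal sketch of how such a pricing model penalizes individuals for where they live. Everything here, the region factors, the weights, and the function names, is invented for illustration; no real insurer’s engine is being described.

```python
# Hypothetical premium model: individual traits (age) are mixed with
# region-level traits (crime rate, average education) that the individual
# cannot control. All numbers below are made up for illustration.

BASE_PREMIUM = 1000.0

# Invented region-level loading factors (multipliers on the base premium).
REGION_FACTORS = {
    "region_a": (1.00, 1.00),  # low crime, high average education
    "region_b": (1.35, 1.20),  # high crime, low average education
}

def quote_premium(region: str, age: int) -> float:
    """Quote a premium from one individual trait plus two region-level traits."""
    crime_loading, education_loading = REGION_FACTORS[region]
    age_loading = 1.0 + max(0, age - 25) * 0.01  # invented individual factor
    return round(BASE_PREMIUM * crime_loading * education_loading * age_loading, 2)

# Two people with identical individual profiles, differing only in address:
quote_a = quote_premium("region_a", age=30)
quote_b = quote_premium("region_b", age=30)
print(quote_a, quote_b)  # the second person pays more purely because of region
```

The individual inputs are identical; only the region changes, yet the quotes differ. That gap is exactly the kind of statistically defensible, individually unfair outcome the paragraph above describes.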
Therefore, the sooner we accept that AI is polysemic and polymorphic, the better it is for us, because we must live with the reality that with the good we will get the bad too. Remember, no one had thought of using basic washing machines to prepare lassi at scale, but they are being used that way in India anyway.
That is why, no matter how much genuine effort we make or how good our intentions are, we will not be able to prevent its leakage into not-so-appealing use cases. Therefore, whether you like it or not does not depend on you; it is determined by its impact on you. You will like it if the impact is favorable, and dislike or abhor it if the impact is negative. Remember, it may not remain static either, as we all live in a dynamic environment.