AI in the fabric of human nature
Whether we allow AI to enter our lives is largely beyond our control. AI, at whatever level of sophistication, has already encroached on our lives in many forms, and more of it arrives every day. The question to ask is how much good or bad AI can do for each of us, and for all of us, going forward.
As a society obsessed with technology, forever hungry for cool new things that envelop us in more comfort, make us feel up to date, or make money for some through us, we approach AI with expectations of benefits in efficiency, productivity, quality of life and so on. The risks we understand far less clearly than the benefits: they either confront us through entertaining sci-fi movies about a human or robot villain abusing an AI system for selfish ends, or turn into "blackout" nightmares in which cloud data becomes inaccessible.
But let's take a step back from the cliché and look at AI through the track record of the human race. If we hold a mirror to our very own nature, the picture shows the human being as an operator running with a massive margin of error. We have built an endless record of lapses in judgement, misunderstandings and failures of reason that have caused great conflicts, financial crises, environmental time bombs, exhaustion of natural resources, overpopulation and so on. Humanity has polluted water, soil and air, and above all it has polluted the human being itself (see the rise in cancer, cardiovascular disease or diabetes, to name a few). Have we not realized that the further we progress in complexity and sophistication, the more things we need to fix? Have we not realized that the nice-to-haves have been overtaking the essentials in our value charts? Can we not see that merit yields to popularity? Now imagine we take this state of affairs as data and pass it through a chain of binary transactions, an algorithm created by an objective AI system, perhaps incomparably more intelligent than us. The most likely objective conclusion would be: "NO HUMAN, NO PROBLEM." And since the AI will be designed to fix problems, what will it do?!
Before we reach the point of inevitable self-destruction through our very own creation, we had better understand where, within the mass of AI and its endless potential, the "line of no return" lies. Such an "LONR" certainly exists, but who can define and identify it? How can we assess, chronologically and technologically, our distance from that "LONR" along its entire length? How can we ensure no one crosses it?
To a complete layman like me, the threat of the "LONR" far outweighs all the prospective benefits that come along with AI.
It is obscure territory that humanity had not sufficiently explored before the headlong rush into AI, in which everyone greedily pursues selfish interests in researching, developing and exploiting the AI space rather than setting safe rules and boundaries for it. Once again, given our historic record of critical mistakes committed in the name of progress and prosperity, AI is a huge gamble if left unleashed. Observed from afar, this stands in striking contrast to the debates about setting limits on cloning and other biotech research, a subject deemed to carry more moral weight than AI, even though AI certainly does not lag behind in destructive potential.
Can the human race pull its act together? Or is it already too late, and will we, as always, rely on our (dis)ability to fix things after the damage becomes apparent?
May the human being NOT appear, in the AI's Wikipedia, on the list of extinct species!