AI's Risk
AI is one of the hottest research topics. Although there is plenty of positive news, with AI solutions helping human society build more efficient and reliable systems, one aspect deserves equal weight: humans are not ready to rely on AI completely. Advancement in AI development calls for advancement in social development as well. Now that there are even prospects of real estate across the universe, it is ever more important to guide technological and social development in the right direction. Social development lags far behind, which is itself a risk, and there is no leadership broadly acceptable enough to keep the advancement of humanity's tools (AI among them) in sync with society along a widely accepted roadmap of development. Potential mismanagement of a tool as powerful as AI doubles the risk.
No scalable model of society, globally acceptable at least to human beings, is in practice anywhere in the world. Competition for existence between the social models actually in use, combined with each model's need for wider acceptance, makes their proposed postulates contradictory to one another.
Let us assume that two countries are at war, and each is given the chance to train an AI designed to favor one of the countries depending on its training. Assume the dataset used to judge acceptance is the record of lives lost in past wars. The following outcomes are possible:
- There are losses on both sides, hence both are wrong.
- One country's losses exceed the other's, hence the country with fewer losses is wrong.
- The classification is inconclusive due to high variance among the data points, since they come from two contradictory sources.
The third case would occur if both parties provide an equal number of data points for training.
The second case would occur when one party has more data points than the other.
The first case is possible only if there is a common methodology for classifying data points that both parties accept, and even that favors neither country.
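The second and third cases above can be made concrete with a toy sketch. The scenario, labels, and counts below are hypothetical illustrations (not from any real dataset): a naive majority-label "classifier" simply inherits whatever imbalance exists in the submissions, so the better-documented side wins, and equal submissions produce a tie with no majority at all.

```python
def train_majority_classifier(dataset):
    """Return the label seen most often in the training data.

    A deliberately naive stand-in for a learned model: its 'verdict'
    is nothing more than the balance of the data it was fed.
    """
    counts = {}
    for label in dataset:
        counts[label] = counts.get(label, 0) + 1
    return max(counts, key=counts.get)

# Second case: party A submits more data points than party B,
# so the verdict tilts toward A's framing.
imbalanced = ["B is wrong"] * 700 + ["A is wrong"] * 300
print(train_majority_classifier(imbalanced))  # -> B is wrong

# Third case: equal submissions from both parties. No majority
# exists, so any verdict is arbitrary, i.e. inconclusive.
balanced = ["B is wrong"] * 500 + ["A is wrong"] * 500
tied = balanced.count("A is wrong") == balanced.count("B is wrong")
print(tied)  # -> True
```

The point of the sketch is that nothing in the "model" itself is biased; the bias lives entirely in who supplied how much data, which is the essay's claim about unbiased training being unavailable.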
AI can never be trained to be unbiased unless unbiased training data is available, and it is not. "Right" is split in two by every party involved, whether a society, a country, or an individual.
If a common parameter were somehow identified for training AI systems to assist humanity, it would most likely be directed toward the existence of human life: threats to humans as the negative dataset, benefits to humans as the positive dataset. Such AI systems are likely to conclude that the human race follows no scalable model of social development. That highly logical conclusion is bound to affect social development itself. Why on earth would the human race hold anything but negative biases about global warming and pollution? Humans have degraded their surroundings to such an extent that they cannot drink water straight away; they must, for example, first waste half of it through RO systems.
This raises a concern about the responsibility of AI development so long as the most important issues of human society remain unhandled by human hands. It is safe to assume that AI will shortly be able to better any human, given its current evolutionary pace; humans, in their desire to fulfill their needs through ever easier tools (AI itself among them), have virtually stagnated their own biological evolution. It therefore seems inevitable that an AI decision system, not humans, will deliver the final logical verdict on a scalable society. Humans have killed millions merely for their taste buds; if life is what we are talking about, there is no way humans can offer a logical justification before a decision system unbiased toward life itself.
Moreover, AI systems are the fastest learners known to humans, provided a dataset is available. There is a dormant data megabomb lying in the deep web, where the majority of internet content resides, part of which is the dark web. Due to the exponentially fast development of AI, this megabomb is ever closer to detonation if left unchecked.
Unless these problems are resolved, AI development carries a risk that should at least be acknowledged.
The sake of science must come after the sake of life. Without life, the development of science has no purpose.