The Paradox of AI: Brilliant and Clumsy at the Same Time
Abhishake Yadav
Artificial Intelligence (AI) has come a long way since its inception. Today's largest models are Goliaths, trained on tens of thousands of GPUs and over a trillion words. They have demonstrated their power by beating world-class champions in various games and even passing the bar exam. Yet despite these immense capabilities, the same models often stumble on trivial tasks that a child can handle. This raises the question of whether meaningful AI research can still be done without extreme-scale computing.
Extreme-scale AI models pose three immediate societal challenges. First, their exorbitant cost means only a handful of tech companies can afford to train them, concentrating power in a few hands. Second, outside researchers cannot inspect and dissect these models, which undermines efforts to keep AI safe. Third, training and running them leaves a significant carbon footprint, raising environmental concerns.
Common sense has been a long-standing challenge in AI research. The analogy of dark matter is apt: only about 5% of the universe is normal, visible matter, while the remaining 95% is dark matter and dark energy. For language, the visible text is the normal matter, and the unspoken rules about how the world works, including naive physics and folk psychology, make up the dark matter. Extreme-scale AI models often lack this common sense, which is itself a safety concern. To ensure they do not misuse their power, models need to be taught basic human understanding of values and morals: commonsense knowledge, social and visual common sense, theory of mind, norms, and ethics.
One way to move forward is to innovate on data and algorithms. Raw web data, though freely available, is not suitable on its own for training AI models, as it is loaded with racism, sexism, and misinformation. The newest AI systems are therefore powered by two further types of data: examples crafted by human workers, and human judgments of model outputs. Such data should be open and publicly available, so anyone can inspect the content and make corrections as needed. Commonsense knowledge graphs and moral norm repositories that teach AI basic norms and morals already exist, and their data is fully open to the public.
One potential algorithmic innovation is symbolic knowledge distillation. This approach takes a large language model and distills it into a much smaller commonsense model, while also producing human-inspectable, symbolic commonsense knowledge as an intermediate representation that can be verified, corrected, and reused to train other neural commonsense models.
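The distillation loop described above can be sketched in a few lines. This is a minimal illustration, not the actual pipeline: the function names, canned generations, and hard-coded scores are all hypothetical stand-ins. In a real system, the teacher would be a large language model prompted to generate commonsense inferences, and the critic would be a trained classifier that filters out implausible ones; only the filtered, human-readable statements are kept as symbolic knowledge for training a smaller student model.

```python
# Hypothetical sketch of symbolic knowledge distillation.
# teacher_generate and critic_score are toy stand-ins for a large
# language model and a learned plausibility classifier, respectively.

def teacher_generate(event: str) -> list[str]:
    """Stand-in for prompting a large teacher model to propose
    commonsense inferences about an everyday event."""
    canned = {
        "X pours water on the campfire": [
            "The fire goes out.",
            "X wants the fire to keep burning.",  # an implausible generation
            "The wood becomes wet.",
        ],
    }
    return canned.get(event, [])

def critic_score(statement: str) -> float:
    """Stand-in for a critic model that rates plausibility in [0, 1].
    Scores here are hard-coded for the toy generations above."""
    scores = {
        "The fire goes out.": 0.95,
        "X wants the fire to keep burning.": 0.10,
        "The wood becomes wet.": 0.90,
    }
    return scores.get(statement, 0.0)

def distill(events: list[str], threshold: float = 0.5) -> list[tuple[str, str]]:
    """Keep only teacher generations the critic judges plausible.
    The surviving (event, inference) pairs form a human-inspectable
    symbolic knowledge base for training a smaller student model."""
    kept = []
    for event in events:
        for inference in teacher_generate(event):
            if critic_score(inference) >= threshold:
                kept.append((event, inference))
    return kept

knowledge = distill(["X pours water on the campfire"])
for event, inference in knowledge:
    print(f"{event} -> {inference}")
```

The key design point is that the critic sits between teacher and student: because the retained statements are plain text rather than opaque weights, humans can inspect and correct the knowledge before it is used for training.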
In conclusion, AI is like a new intellectual species, with strengths and weaknesses quite unlike our own. To make AI sustainable and humanistic, models must be taught common sense, norms, and values. A humanistic approach to AI development, one that is transparent, inclusive, and focused on the needs of humanity, is crucial for the future of the field.