Anything wrong with human-centered AI?

The emphasis on "human-centered" AI arguably lies at the symbolic center of today's hyperbolic fear of (or admiration for) AI technology. It points to a one-sided conception of human existence, perpetuating the age-old dichotomies of human vs. nature and mind vs. body. Aristotle's conclusion that the mind is separate from the other senses was wrong. Our mind is distinct from the other senses, yet it is still a sense, despite its unique ability to reflect upon itself. Thus our mind is deeply embodied, while AI, lacking this biological connection, is neither intelligent nor conscious from a human perspective. Reaching consciousness is not simply a matter of increasing computing capacity or algorithmic complexity. Consciousness is a matter of human experience, intuition, and self-perception. However, the essence of consciousness remains elusive and enters the domain of philosophy and speculation. In the future, can we generate artificial consciousness, or will AI develop its own? Some posit that advanced AI might create a unique form of consciousness, resembling but distinct from human consciousness. Others contend that merely emulating consciousness suffices for AI, deceiving most of us effectively. Still others argue that consciousness remains an exclusive trait of organic entities. And this is not the whole of the problem posed by the body-mind dichotomy.

Advocates of human-centrism, who include most lawmakers, have valid points, yet their strict adherence to the dominant notion of human-centric AI leaves little room for a more substantial public debate and the deeper societal change needed to navigate twenty-first-century digitalisation. As with identity politics, reducing everything to one overarching identity dilutes its meaning. Identity holds no inherent truth. Labeling AI a severe risk has become the prevailing perception. No doubt, "human-centered AI" does mark profound moral progress in society and politics, yet it may present a potential trap: "human-centrism" has already been used as an ideology in the past.

It's time to re-examine human existence. Who are we? To borrow a phrase, we are the animal that denies being one. Whether through belief in an eternal soul or in science, we have all experienced other forms of transcendence, such as the feeling that someone was meant for us. Those who deny any transcendence have a hard time enjoying such experiences. Nothing in nature proves that everything is math. And however much the brain is emphasised, it is still just body. This is the dichotomy underlying today's human-centrism.

Yet there is a key difference (for now): unlike animals, we act on the perception of the image we develop of ourselves. Animals lack the capacity for large-scale change and moral evolution. Only humans can decide to strive for things like eating vegan for nature's sake, which brings me back to morality.

Politics should focus less on defending identities and more on opposing immoral behavior. Racism, being immoral, must be opposed. Equality is not a truth, but actions that restrict our innate drive for autonomy and agency need to be addressed seriously. Cancel culture, though an emancipatory struggle, is moralism: it lacks truth and ignores the moral principle of forbearance, which is crucial for societies to function. It overlooks that certain actions are simply non-moral, or neutral. Yet today's politics moralises non-moral behaviour, which deeply alienates. The white male worker, a global minority, is still part of a powerful system protecting its relative superiority, despite often precarious living conditions. This implies a moral duty of the powerful towards the weak and disadvantaged. It is not about protecting the identity of the working class, as that identity lacks any substance that has ever been successfully and sustainably drawn upon.

Now, what must precede human-centrism is a new realism that is aware of the old dichotomy, seeks to overcome it, and places morality at the core of our existence, something erased by "unfinished capitalism" and "physicalism", the nihilistic view that we are merely a "neuronal storm." Neuroscience's contribution to AI has already peaked. Innovation may now emerge empirically from within AI models themselves, as leading AI engineers suggest. Eventually we may find that the human brain is clumsier than backpropagation. What oil was for the Industrial Revolution, big data has been for AI. So far there has been no significant shortcut through history, and what applies to nature has applied to technology, except that technological advancement has become exponential. Nature did not start with a massive neocortex, and AI engineers needed big data for AI's breakthrough. AI may well find a better process of technological change without imitating the human brain.

Thus, post-big-data AI will surely surpass human capabilities in emulating what makes us human, and it won't stop there. But AI won't provide the "why." While science is universal, there is a reason our minds remain clouded and imprecise. We must remember that subjectivity and self-doubt are the very space where freedom is experienced and where agency, emancipation, and change occur. We are not merely slaves to our neurons and some form of natural determination; we can witness moral progress and collaborate for a better future. To emphasize the algorithm over human agency is to escape responsibility.

AI might offer a false sense of transcendence in a world lacking it, but such emulation would be deceptive and immoral. AI cannot become a "mass opiate," if only because a few will control those models (unless they are socialised and controlled by cooperatives, which would probably be the most radical yet still imaginable form of AI governance). A law does not empower human agency; it distrusts it. Last but not least, the concept of cosmological AI, perhaps born from feelings of exclusion or deep distrust of the dominant view of humanity, raises questions about long-term AI regulation. And yet creating an even larger identity is not the answer, as it ultimately diverts attention from existing and immediate challenges.


Dr. Seth Dobrin

AI Consultant | Globally Recognized Leader | VC | Speaker | Entrepreneur | Formerly IBM's First Ever Global Chief AI Officer | Geneticist | Golden Visa Holder

11 months

There are so many definitions of "human-centric AI" out there. The Stanford Institute for Human-Centered Artificial Intelligence (HAI) defines it as: "Human-centered artificial intelligence is AI that seeks to augment the abilities of, address the societal needs of, and draw inspiration from human beings. It researches and builds effective partners and tools for people, such as a robot helper and companion for the elderly." This is the most widely accepted context for human-centered AI. I don't see how the above has anything to do with that. I'm confused. Fei-Fei Li, your perspective? https://hai.stanford.edu/sites/default/files/2023-03/AI-Key-Terms-Glossary-Definition.pdf

Thomas Vašek

Journalist, Author, Head of Content, "human" Magazin

11 months

Thought-provoking, thank you!
