"The Ideological Struggle Behind AI: More Than Good Versus Evil"
Evelien Verschroeven
Expert in Learning Communities Academy | #EducationalInnovation
This article is a summary of an article in Dutch by Siri Beerends & Emile van Bergen; the original text can be found here: https://www.setup.nl/artikelen/de-existentiele-dreiging-van-ai-de-ideologie-erachter/
Ethical AI: a contradiction
A tale of AI protagonists and AI antagonists. The prevailing notion is, "If the wealthy elite behaves ethically and doesn't rush too quickly, the development of AI will turn out fine." The belief is, "We just need to establish appropriate rules to guide the progress of AI." The appeal of this idea and the hope for 'ethical AI' are understandable, but the notion rests on an inherent contradiction.
AI is not simply a matter of 'good' versus 'bad' development, or of responsible versus irresponsible use. Ethically, AI is inherently problematic and ideologically charged.
AI appears to be advancing rapidly, but this advance has little to do with any increase in intelligence. It is driven instead by the growth of data, computational power, and the labour of feeding AI applications with texts, images, and more.
Data and parameters are not synonymous with insight, understanding, or reasoning. From the data and parameters, the most probable answer is generated through probability calculation.
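To make that point concrete, here is a minimal, purely illustrative sketch of what "the most probable answer" means in practice: a table of continuation probabilities estimated from data, from which the likeliest (or a weighted-random) next word is picked. The toy vocabulary, probabilities, and function names are invented for this example and do not describe any specific model's implementation.

```python
import random

# Toy "language model": nothing more than probabilities estimated from data.
# All values below are invented for illustration only.
next_token_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
}

def most_probable_next(context):
    """Return the single most probable continuation for a two-word context."""
    probs = next_token_probs[context]
    return max(probs, key=probs.get)

def sample_next(context):
    """Sample a continuation in proportion to its estimated probability."""
    probs = next_token_probs[context]
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(most_probable_next(("the", "cat")))  # "sat": the statistically likeliest word
print(sample_next(("the", "cat")))         # a draw weighted by frequency, not by understanding
```

The sketch is deliberately trivial: whether the table has ten entries or billions of parameters, the operation remains a probability calculation over past data, not insight or reasoning.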
AI researchers often work with a simplified understanding of language, human processes, and evolutionary complexity. Existential threat scenarios do not take the chaotic nature of the natural world into account. Humans and computer programs are structured in fundamentally different ways, so they share no experiential world and no common ground of meaning.
Reflection: As humans, we often try to model and simplify everything so we can grasp it more easily, but in doing so we risk losing nuance. How can we truly derive meaning if we ignore these subtleties?
A computer with an AI program is not an independent life form and cannot exist without humans. The actual risk lies with us, as we have the choice to stop, redirect, or proceed. The threat of AI does not lie in sudden disasters but in gradual processes of decline. AI that poses a threat to humans is often viewed as a future risk, but current AI already threatens us existentially—not physically, but by reducing us to economic functions.
If the robot doesn't resemble a human, then we turn a human into a robot.
Humans are increasingly becoming like machines. By reducing language comprehension, consciousness, intelligence, perception, intuition, empathy, and personality to statistical phenomena and data processing, AI companies can assert that they can replicate these aspects with their own AI systems. The overestimation of AGI and existential risk (x-risk) shapes the human self-image and the way we understand and act on intelligence in our daily lives.
The Complexity of Intelligence and Humanity: A Philosophical Exploration
The concept of intelligence and the nature of being human have been subjects of philosophical debate for centuries. Entrusting these immense questions to a group of computer enthusiasts may seem absurd at first glance.
Intelligence is an abstract, versatile concept for which there is fortunately no universal, unambiguous definition. Terms like "at human level" or "beyond human level" suggest a measurable standard of humanity. In reality, intelligence is culturally determined and extends beyond what we can ascertain with Western measurement tools and simplistic IQ and EQ tests.
Let us not forget that, throughout history, entire communities were exterminated physically and mentally in the name of intelligence and the question of whether they counted as human.
The AI industry often models artificial intelligence according to human standards, indirectly determining the definition of a human. However, we must keep this definition open. The ambiguity of intelligence is not a shortcoming but a necessary condition for being human. The fact that we cannot fully understand and describe our human capabilities offers us the opportunity to be part of an open future.
Unfortunately, this open future is currently under threat. As intelligence is narrowly and mechanistically defined, AI companies gain the power to shape our understanding of intelligence according to their own commercial interests. The tragic consequence is that we reduce the meaning of intelligence and being human to what AI developers can or cannot replicate.
The threat of extinction does not come so much from a runaway AGI but rather from our own tendency to act increasingly mechanically. Let us not fear so much the machine that suddenly grows outside of us but especially the machine that gradually grows within ourselves.