As Regards AI, Count Me Among the "Speciesists" (Sort Of)
I am an animal rights/anti-abuse activist. I think that the way we humans treat our sentient animal cousins screams out for cosmic justice to be delivered, should we not change our ways very soon. In that regard, I do not believe our meat-eating and our maiming and killing of sentient animals for human purposes, including for mere cultural enjoyment, are morally justified. This treatment of sentient animals is, among other things, what Australian philosopher Peter Singer called "speciesism" in his now classic work Animal Liberation. "Speciesism" is the arbitrary disregard of the pain or welfare of other animals simply because they are members of another species.
Because of his warnings concerning AI, Elon Musk was, oddly, accused of speciesism (or, errantly, of being a "specist") by a certain tech leader and AI enthusiast. But when it comes to AI, I might join Elon Musk in embracing the "speciesist" label, save for the fact that it is a category error when applied to AI -- one of those words thrown around inappropriately out of a misunderstanding of both what the word means and why Singer coined it. (This is what happens when one spends one's time in computer technology and data science classes while skipping philosophy classes because they are deemed "BS distribution requirements.")
What Musk must mean (I assume) in his embrace of the term is that he is on the side of humanity (human beings, with flesh and blood and aspirations and characters and spiritual yearnings and naturally evolved (or created) capacities for complex reasoning), not on the side of machines, however sophisticated those machines are or might be.
The AI enthusiasts who accuse Musk and others of "speciesism" seem to be putting the cart before the horse. That is, they assume that there already is a "species" to which the criticism of speciesism may apply. But there is no such species at present. For that concern to be even remotely apt, Artificial General Intelligence ("AGI") will have to actually emerge. So far, it has not.
But even with the emergence of AGI, I would not hold that the charge of "arbitrary treatment" (treating it as a lesser "life form," unworthy of our moral regard, with no moral status above that of a tool) would be apt, because I would not consider an AGI machine to be a new "species," nor do I feel compelled to expand the meaning of "species" beyond the biological. AGI would be a human construction, a technology, a tool, and nothing more, however much it might, like Baal or Marduk, seduce us into believing otherwise (and apparently, even in its nonexistence, it seems to be garnering the power to seduce at least some of us into that belief, in a manner of speaking).
Finally, the meat-eating, leather-wearing techno-fascists (as I must call them, for in the (Hannah-)Arendtian sense of "thinking" they are not thinking very well at all), in raising the moral charge of "speciesism," show just why they cannot be trusted with the technologies they are so eager to unleash on the rest of us, the noted salutary uses of that technology notwithstanding: They seem to think more like the machines they wish to create than like the flesh-and-blood human beings who struggle to construct worthy lives, build characters, hope, experience awe and reverence, and love and hate deeply.
If we allow ourselves to get pulled into the powerful currents of "inevitability" that AI developers have churned into existence (a good deal of which is based on the potential for huge financial profits), we will be fools. Now is the time to act to create the controls (ethical, legal, and characterological) that this very dangerous moment demands. It is an all-hands-on-deck moment for philosophers, theologians, policymakers, and thoughtful global citizens.
I, for one, do not welcome our future AI overlords, the latest manufactured idol of human stupidity, any more than I welcome my garden rake or Chevy as my overlord. While I will always understand the moral conundrums and difficulties of unplugging a human being from a ventilator (euthanasia), and the moral outrage of vivisection on non-human animals (the proper application of the concept and charge of speciesism), I would unplug an AGI computer with blithe disregard for its "fate" or "life," since it would have neither. And since AGI, once upon us, will be able to scour the web for "threats" to itself, I shall no doubt be among the first targets for death -- real death, not the fake death of an AGI machine unplugged. To avoid that outcome (and many others that are far more important), I will do all I can to join in the work to constrain the power of this technology while there is still time to do so.