False Equivalence: AI is Smarter Than Humans... NOT.
Raéd Alexander Ayyad
"The most serious mistakes are not being made as a result of wrong answers; the truly dangerous thing is asking the wrong question." —Peter Drucker
Life...
There is no doubt that we have been living through nearly two decades of a buzzword- and hype-filled life, at least, surely, in the US, and that very much includes the subject of Artificial Intelligence, better known as "AI"!
Is anyone else getting sick of hearing the hogwash about "artificial intelligence" (AI) being "smarter" and more capable than human beings, and dominating us—no less? Does everyone even understand what that means, and how?
If there is anything the advancements in AI research & development (R&D) have made me think of often over the past decade, it is the concept of Humanity being a superorganism, or, at least, being part of one.
Here's a bit of scientific trivia: as currently defined by scientists, the largest organism on Earth is actually a fungus! Yes, a fungus!
Personally, I am one of those who believe that Earth is a superorganism: everything on the planet, including us human beings, exists in symbiosis and is critical to the balanced existence of said superorganism. We have also observed that when said equilibrium is disrupted, "mother nature," through 'her' means (which we are still working on understanding), eventually returns it to said state (a healing process).
Are you all aware of the recent reports of several creatures that were considered extinct, due to environmental or human action, being "found" alive again, without human intervention? From what we know so far, and are able to verify, about Mother Nature's long-term strategy: destructive influences and mutations are eventually eliminated, and the opposite is reinstated and amplified... Could it be that the environmental turbulence that is affecting us (humans) negatively is akin to an immune system combating a disease, fever and all? Hmm... that's food for thought! ... Anywho, I digress...
The collective
Since our childhood...
The two words I heard the most, all my life, since I could comprehend social concepts, were "participate" and "collaborate." From our early years at home, then in school, and later during our careers and personal social interactions—as adults, we are monitored and evaluated for these prized developing skills.
Any deficiency in using them triggers a barrage of concerns and remediating actions... if the person is not responsive to the remediation, their lacking participation and collaboration skills are deemed a behavioral 'defect,' often blamed on some genetic glitch or on "upbringing" (nature or nurture)... in some communities, that leads to initiating processes to help "treat" the 'afflicted' individual; in others, they are shunned and incarcerated (punished)!
Perhaps Maurice Hurley, who introduced the concept of the Borg collective to the Star Trek universe, had some interesting perspectives, and other writers took them in interesting directions, such as when they explored the character of "Hugh."
The existence and the thriving of the fictional Borg society is contingent on an extreme credo of sacrificing the individual's needs for the sake of what benefits the collective, the hive, as a whole... there is no privacy... there is no "it's mine" business... every Borg can hear, feel, and perceive every sensory input and output of every other individual Borg. Only by having access to this amount of data are they—relatively speaking—more powerful than most other life-forms of that fictional universe.
How are any conflicts resolved between the Borg? By a "queen" entity... “I am the beginning, the end, the one who is many. I am the Borg...” a God (in the parlance of human beings)? Hmm... we create things in our own image? That said, for those of us Trekkies (yes, I'm one), we know well how these Borg qualities can, eventually, be exploited by foes, and backfire on the almighty Borg themselves!
There is no AI without a "microchip," and there is no microchip without the transistor
Artificial intelligence (AI) is the process of applying advanced analysis and logic-based techniques, including machine learning (ML), to interpret events, support and automate decisions, and take actions, be it via input devices such as voice assistants, facial-recognition cameras, collections of sensors, etc. These don't work via magic, however, and need something to power all of the data processing they do.
For some devices, that processing is done in the cloud, by vast data centers; other devices do all their processing on board, through an "AI chip" on their motherboards.
All these systems are contingent on a singular component: the humble transistor, or "execution units" in the case of neural chips. On that latter point, most people need to understand that when we use the term "neural networks," unlike those in Star Trek: Voyager, we are not describing a technology that is as effective as, or identical to, biological neural cells (e.g., human nerves and the nervous system)... to the contrary, for, as described in an article in Ars Technica, neural networks are only distantly related to the sorts of things you'd find in a brain.
While their organization and the way they transfer data through layers of processing may share some rough similarities to networks of actual neurons, the data and the computations performed on it would look very familiar to a standard CPU.
All that said, and as the same source describes, one of the other big differences between neuromorphic chips and traditional processors is energy efficiency, where neuromorphic chips come out well ahead.
IBM, which introduced its TrueNorth chip in 2014, was able to get useful work out of it even though it was clocked at a leisurely kilohertz, and it used less than 0.0001% of the power that would be required to emulate a spiking neural network on traditional processors... impressive.
Mike Davies, director of Intel's Neuromorphic Computing Lab, said Loihi can beat traditional processors by a factor of 2,000 on some specific workloads. "We're routinely finding 100 times [less energy] for SLAM and other robotic workloads," he added. Nice!
As an article in the IEEE Spectrum journal elaborates: The basic building block of neuromorphic computing is what researchers call a spiking neuron, which plays a role analogous to what a logic gate does in traditional computing. In the central processing unit of your desktop, transistors are assembled into different types of logic gates—AND, OR, XOR, and the like—each of which evaluates two binary inputs. Then, based on those values and the gate's type, each gate outputs either a 1 or a 0 to the next logic gate in line. All of them work in precise synchronization to the drumbeat of the chip's master clock, mirroring the Boolean logic of the software it's running.
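To make the contrast concrete, here is a minimal sketch of the "spiking neuron" idea described above: a simple leaky integrate-and-fire model in Python. The parameters (threshold, leak rate) are illustrative assumptions, not those of any real chip. Unlike a logic gate, which evaluates its inputs on every clock tick, this unit accumulates input over time and does work (fires) only when a threshold is crossed, which is the intuition behind the energy savings.

```python
# Minimal leaky integrate-and-fire (LIF) neuron; parameters are illustrative.
def lif_run(inputs, threshold=1.0, leak=0.9):
    """Accumulate input over time; 'leak' decays the membrane potential each
    step; emit a spike (1) and reset when the threshold is crossed."""
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = potential * leak + x   # leaky integration
        if potential >= threshold:
            spikes.append(1)               # fire...
            potential = 0.0                # ...and reset
        else:
            spikes.append(0)               # stay silent: no clocked busywork
    return spikes

# A weak input stream fires only occasionally; a strong one fires every step.
print(lif_run([0.3] * 10))
print(lif_run([1.2] * 3))
```

Most of the time the neuron does nothing at all, which is precisely where event-driven neuromorphic hardware gets its efficiency over a processor that must clock every gate on every cycle.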
Basically said...
All these technologies are still information processors that, at the most basic level of explanation, arrive at decisions via binary switches... True = on, and False = off.
At the core of these processors are, still, logic gates, which are constructed from transistors... so, for the sake of simplification, again, it is fitting to stick with the more tangible transistor model of processing as we continue forward.
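To keep that simplification concrete, here is a short, purely illustrative Python sketch of how the classic logic gates can be composed from a single primitive, NAND, which is exactly what a handful of transistors implements physically. Everything a chip "decides" bottoms out in rows of a truth table like the one printed at the end.

```python
# At this level of abstraction, a transistor is a voltage-controlled switch.
# A few of them in series/parallel make a NAND gate, and NAND alone is
# "functionally complete": every other gate can be built from it.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))

def xor_(a: bool, b: bool) -> bool:
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# The full truth table: True = on, False = off, and nothing more.
for a in (False, True):
    for b in (False, True):
        print(a, b, "->", and_(a, b), or_(a, b), xor_(a, b))
```

However sophisticated the software running on top, every "decision" is ultimately some enormous cascade of these on/off evaluations.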
Individuals as transistors...
Here, I arrive at the argument that I'd like to propose... is it a more accurate depiction to compare each individual human being to a transistor inside a specialized CPU or GPU, where a group of said CPUs or GPUs serve specific tasks... and the human collective is what the AI chips are to the "AI brain"?
I think that this perception can explain millennia of human philosophical thought and cultural norms, especially among those that overly prize conformity. Moreover, the thought aligns with the case I made for the invalidity of the nonsensical "big lie of the self-made man" concept, where I argue that no "man," as in human individual, makes it on their own... we all rely, heavily, on each other and our environments, past, present, and future... we are products of all those, and can't be removed from them while maintaining our life. What do you think?
Improper (shorted) connectivity is where we find the "Achilles' heel" in our circuitry...
The power of humanity relies on, is dependent on, win-win collaboration... when we do not, we short-circuit!
We observe this being taught to humanity since the oldest surviving literary work we have found, the roughly 4,000-year-old Epic of Gilgamesh, which, in a nutshell, describes how a "king" can't survive without the "savage," and vice versa, and that for both to grow, they must work together through the least judgemental relationship possible: forging a friendship (a non-dominating, unconditional form of acceptance and collaboration).
We can sift through all the commonly known theologies that mobilize humanity en masse, and their thought processes, and we can summarize all their teachings into a central point: work together ("through love," another non-dominating, unconditional form of acceptance, and "camaraderie," another word founded on friendship).
The threat...
Most humans cherish the concept of "personal privacy," and it is expressed, if not codified, in the most intimate of guiding thought systems, such as religions; it is also perceived by many to be a basic human right that must be protected.
For AI to grow, and be truly useful, it demands a constant stream of massive volumes of new and up-to-date information... all information, if it is indeed to be a central repository of profitable solutions.
We have coined a term that describes that prerequisite: surveillance. A society that expects to exploit AI to its fullest potential will have to accept becoming a "mass surveillance society," which is not a new concept for humans... one of the better-known labels for such a society, since the 20th century, is "Orwellian."
The mass-surveillance industry
Many people perceive commercial mass surveillance as more benign than government surveillance, with which I very much disagree. The mass-surveillance industry is as big a threat as governments, if not a much bigger one.
Why? Where not well regulated, commercial entities can sell your most intimate data to anyone in the world who will meet their asking price. While debatable, government surveillance is geared more towards how your 'existence' impacts their existence (power) and their image, and not much beyond that.
While public data may be readily accessible, it is not a common thing for the government to sell "personal data" they collect, about their citizens, to foreign entities, or businesses, either. Please note that I am not saying that government surveillance is benign, for, it seldom is... I am just setting the degrees of contrast and context involved.
When there are abuses, the government representatives may use touchy-feely descriptors to justify them, such as "national security," and "homeland security," but, at least, in Western societies, the citizen has more rights, and reasonable recourse, when dealing with their governments, than they would doing the same with a private business entity that is extremely profit-centric, and extremely liability averse... in other words, the latter will fight you tooth and nail, and can even be very malicious in their retaliation, too!
Government vs. business threats...
A friend of mine, who is no longer with us in the land of the living, used to differentiate the threat and argue that the government has the "power of the gun" behind it; but I've demonstrated hundreds of examples, including in articles I've published, that prove that the "pen is mightier than the sword..." in other words, the real power resides with those who have the most access to useful information, not the ones who have the most access to "guns."
Governments may waste time collecting "useless data," as was demonstrated by many NSA programs, while commercial entities focus, from the onset, on optimized and targeted exploitation; a good example is how Facebook, its support systems, and those of its partners operate. In fewer words, I think that Mark Zuckerberg's collective is much more of a threat to humanity than that of Mr. Vladimir Putin of Russia, who has no beef with me, and can't benefit from me, either!
Truly, the ideology to which Zuckerberg, and his like, subscribe is that "resistance is futile"; and the business world will always have more wealth, collectively, than any taxpayer-funded government, hence it can afford to invest, heavily, in controlling the masses for maximum exploitation. If you doubt that, just rewind to a very recent period of human history, that of one of the most powerful modern empires: the British.
It is, truly, a mistaken perception to think that the kings and queens of England were the centers of power... the power resided almost entirely in multinational entities such as The East India Company: the original corporate raiders! The king or queen was little more than a notary public, paid a fee for their services while being given a fancy title.
Choices... choices...
"Causality" must never be underestimated. As I see it, there aren't really many positive "choices"; ensuring that entities do not choose to exploit the technology negatively, without prohibiting its use, requires us to build systems of processes and well-enforced regulations.
Without such programs being set up, and ensured loophole-free, as some of my Italian cousins would say (with the appropriate accompanying hand and body motions): "fo'get it!"
No, generally, you are NOT "stupider" than a computer!
"Garbage in, garbage out!" There is no conscious, or ethical, choice involved.
So, whether it is at the level of consciousness or that of logical processing, comparing a human being and an AI entity is like comparing a bunch of grapes to Rubik's Cubes! More accurately, if we must use a similitude, a human being, as part of the human collective, is more akin to a transistor in a CPU that makes up an "AI brain."
Artificial Intelligence is not more advanced than that of human intelligence, and is certainly less adaptable, and more rigid, for the AI is a product of our human intelligence—another one of our attempts at biomimicry, but we've never known of something inferior designing and creating something superior, even if the latter seems to have superior qualities.
Until we have a full understanding of the mechanics of the universe which we are part of, the latter will not be happening... if you desire to understand my perspective on that, study a book called Flatland, by Edwin A. Abbott, published in 1884... the first time I read it, it twisted my brain till I developed a headache! If you attempt to visualize the "novel" you will see what I mean!
All that said, as demonstrated earlier in the article, AI not being "superior" to humans doesn't mean that human malice cannot use it, as we do any tool, to inflict tangible damage on ourselves and our environment, as a virus may do to its host.
In conclusion...
There is no effective and efficient (optimized for profitability) AI without a well-established, ultra-intrusive surveillance state. The question the masses must answer, while thinking long-term: Is it worth it?
There is much that we must continue to consider as we continue to develop artificial intelligence solutions. As we do so, we must remember that while the individual human may not be that relatively powerful, as a collective, we are very much so... so much so, that through our willful ignorance, or maliciousness, we can throw the equilibrium on this planet, this superorganism which we are part of, into a form of self-destructive chaos...
... I find it most amusing that, as I type this final paragraph in the still quietness of my study, I can clearly keep hearing the word "Dave," in a low monotone voice, inside my head...
... Yes, Hal?