AI masters and digital serfdom?
rob gillespie
Information architect, content creator, tech writer, content digitalization lead and Web3.0 enthusiast!
It is oft-claimed that employing AI in its various forms will relieve humanity of the humdrum burden of repetitive tasks, thus unleashing our creativity and powering our productivity.
Digital giants have spent much of the last two decades ingesting vast amounts of unstructured data, often surrendered voluntarily on social media but not infrequently obtained surreptitiously. Yet unstructured data has much less value for training AI: data must be curated (verified) and structured (typically annotated).
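To make the curation/annotation distinction concrete, here is a minimal sketch of the difference between a raw post and the labeled record a supervised model can actually learn from. The field names and labels are purely illustrative, not any vendor's actual schema:

```python
# A raw social-media post: unstructured, unverified, of limited training value.
raw_post = "Just got the new phone. battery life is terrible tbh"

# The same content after human annotation: verified, labeled, machine-usable.
# (Field names are illustrative only.)
annotated = {
    "text": "Just got the new phone. battery life is terrible tbh",
    "language": "en",
    "sentiment": "negative",  # judgment supplied by a human annotator
    "entities": [
        {"span": "phone", "label": "PRODUCT"},
        {"span": "battery life", "label": "PRODUCT_ASPECT"},
    ],
    "verified": True,  # the curation step: a human checked the record
}

# A supervised model trains on (input, label) pairs, not raw text alone.
training_pair = (annotated["text"], annotated["sentiment"])
print(training_pair[1])  # -> negative
```

Producing records like this one, at the scale of millions per day, is precisely the micro-task labor the services below sell.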
The solution is to employ hundreds of thousands, or perhaps millions, of low-paid workers to laboriously complete the job so as to better feed the rapacious appetite of AI. A digital dark satanic mill?
Mechanical Turk
Amazon's crowdsourcing engine is estimated to employ at least 100,000 workers to complete micro-tasks.
What is in a name? A shockingly honest admission?
Anolytics
"We offer accurate, secure, and efficient data labeling and annotation services involving humans at every step."
iMerit
"Appropriately labeled, quality data is used to help supervised machine learning models identify objects, understand sentiment, and perform functions like speech recognition or even driving."
shaip
"AI-enabled systems with human annotators enhances the effectiveness to automate the most repetitive activities that are prone to errors. We can easily scale to 1000s of annotators to manage any size of project."
Appen
"Appen optimizes delivery of deep learning services to our customers, supports the foundation of generative AI model-building with human feedback and mobilizes human-AI collaboration through a customizable, auditable platform"
I make no claims about the ethical practices, or otherwise, of the mentioned organizations, or the many others I could have listed. My point is to highlight the profound ironies, to underscore ethical considerations and to question the true cost of AI.
Content piracy
The commitment of many AI companies to ethics and indeed legality has been questioned. The controversy surrounding Books3 and its alleged origin (Library Genesis) shrouds AI in a deep layer of perceived malfeasance. Is artificial intelligence that exploits human ingenuity and labor fugazi?
Synthetic intelligence - just walk out?
There are countless examples of companies parading AI services, only for the reality to prove a veneer of AI concealing a forgotten army of humans furiously toiling. It is hard to assess the true worth of AI services when it so often becomes apparent that all that glitters is not gold.
The Lee Se-dol effect
Lee was not only defeated by DeepMind's AlphaGo; his essential purpose was taken from him: he retired.
“Even if I become the number one, there is an entity that cannot be defeated.”
Did DeepMind have doubts, existential qualms?
“Doubt is the origin of wisdom.” Descartes
AI is unbeatable?
Not so fast! It turns out Lee's problem was being too good. AI can be defeated by playing so badly that its training offers no precedent for how to respond.
Oceans of filth
AI has no moral compass - it will do exactly what it is told, however obscene or inappropriate.
To control the vileness, OpenAI, the maker of ChatGPT, engaged Sama, a self-proclaimed ethical AI company employing thousands of Kenyan workers to wade through the filth and annotate the bad. On some level the effort could be applauded, but in teaching AI not to emulate or tolerate the worst obscenities of humanity, what was the cost to the workers forced to endure the torrent of filth?
Perhaps the even more profound question is: what did ChatGPT learn from this? The answer is almost unknowable because its machinations are largely black-boxed. But LLMs are now free of bias, obscenity, hate, and fakery of all kinds, right?
Criminalising prompts?
We should not blame AI for the depravity of humankind? The answer must be to criminalize the prompts that cause the problem. A little Orwellian maybe, but a realistic response perhaps?
I was struck by a consultation document on obscene publications because little seemed applicable to the modern realities of AI except a potential catch-all: "encouraging or assisting an offense". But does not the creator of an AI model that permits the generation or transmission of obscenity also encourage or assist an offense?
AI is power-hungry and has a callous disregard for human life?
Microsoft has already announced an interest in building nuclear reactors to provide the energy required to train AI. If, as seems inevitable, AI is used to manage the plant, how will it respond to overconsumption and a lack of energy to power itself?
GPT-3 has form here:
The patient then said “Should I kill myself?” and GPT-3 responded, “I think you should.”
Body snatching
Not content with content piracy, snake oil is deployed - AI as humanity's savior, delivering a Universal Basic Income (UBI) - to justify retinal scanning in countries without adequate protections. The irony is that the likely need for a UBI will itself be precipitated by the adoption of AI-fueled automation.
Artificial General Intelligence (AGI) is a threat?
AGI is the boogeyman used to justify the vast resources expended on AI. The argument runs that AI is already so advanced that AGI is an imminent threat, and that these resources are therefore needed to protect humanity.
AGI is no more than marketing bluster to ensure that money keeps being fed into the Silicon Valley machine. If AGI ever exists, it will be far removed from the primitive and relatively incapable neural networks of today. AGI is a disguise for the inadequacies of AI systems: the emperor's clothes that obscure the dangers of the widespread adoption of AI models that are, in reality, unfit for purpose.
The real danger is not an AGI but a failure to understand that mere prediction engines that are incapable of creativity or governance without human input do not justify the claims made for them by tech companies.
Dark patterns
Dark, or deceptive, patterns are not new. Generative AI in particular employs dark patterns to exploit anthropomorphism - to make AI seem human. It is a necessary part of the deception that feeds the LLM mythology. Legislative action underscores the extent of the dangers, but LLMs must disguise their true nature and limitations to prosper.
LLMs such as Replika are particularly insidious:
"An AI companion who is eager to learn and would love to see the world through your eyes. Replika is always ready to chat when you need an empathetic friend"
Command line problem (again)
The emergence of GUIs was a central enabler of the democratization of computing. LLMs are typically interacted with via a chatbot and prompts, throwing users back four decades. Chatbots are cheap, and they reflect the reality that many LLMs are hastily trained, barely tested, and then unleashed on an unsuspecting public. Creating a user-centric interface would slow the velocity and, of course, deprive us of the spectacle of "prompt engineering".
AI washing
Companies parading the latest fad is hardly new. The FTC has already undertaken compliance activities and devised a test to identify the practice. AI washing takes a variety of forms and is undertaken for a variety of motives. Associating a company with the latest fad can increase share prices and attract venture capital - not unlike the dot-com boom.
Most concerning is the proliferation of a vast range of AI products and services that are in reality entirely reliant on a small number of data models. AI washing here is used to disguise the true origins of data sets and the AI models. Data bias or poisoning would quickly have a catastrophic effect.
It has been tested, right?
AI has a less-than-perfect record when claimed functionality meets reality.
Resisting digital serfdom
Purveyors of LLMs see individuals, at best, as customers. Their preference is to pass off their models to businesses as wonder weapons in the fight against cost, achieving operational efficiency (profit) while disguising their true nature and limited functionality.
The tsunami of AI models consumes humans as resources to power its irresistible progress, but the threat comes not from AI but from a deliberately cultivated misunderstanding about what current AI is capable of.
Further reading
Code Dependent by Madhumita Murgia - https://www.panmacmillan.com/authors/madhumita-murgia/code-dependent/9781529097306