PART 2 OF 4: CHALLENGES IN A WORLD FUELED BY AI
(Image: Midjourney, “baby robot”)

On the Internet, nobody knows you’re AI

“...Perhaps that is the emerging horror of AI – that it will forever be in its infancy…” -- Musician Nick Cave

I hope you’ve had the chance to read my interview with AI, and the first in this four-part series, AI’s Quiet Tsunami.

If the rise of AI is a tsunami, then the unending flow of media articles and interviews is a flood. It’s a fool’s errand to try to capture a point in time during a tsunami -- is perspective better when it’s cresting, or when it’s crashing? But since it’s really a series of waves with no predictable calm period, I’ll dive in now.

First, let me say that I am a relentless optimist. (Well, a cautiously relentless optimist.) I’ve been an active participant in promoting many prior waves of transformative technologies, including PC networks, wireless communications, and the Internet. I’m deeply suspicious of heavily-hyped arenas (metaverse, anyone?). But I am deeply excited by this watershed moment for augmentative technologies like Machine Learning, Artificial Intelligence, Large Language Models, Generative AI (GenAI), and their digital cousins.

However.

In this post I’m focusing on The Problem Domain, the risks of this Pandora’s Box (or Jurassic Park) of technologies we have opened. In subsequent posts, I’ll focus on the impacts on Human Work, and then on what we all might do Next. (I’ll save the dystopian, Artificial General Intelligence exploration for that fourth one.)

I am no digital Chicken Little, waving my arms about AI risks that nobody else sees. Warnings about the dark side of our exponential technologies have been raging for decades. But we all do need to have our eyes wide open as the tsunami rises. So here are three takeaways about the basket of tech called AI:

  1. These tools are disruptive. Disruption is good for innovators, and challenging for the disrupted. While some of the dark sides of these technologies are unpredictable, many are already clear. Our global information infrastructure reduces friction and increases interactions, amplifying the pace and scale of impact. (Kind of like a global pandemic.)
  2. These tools are imperfect. It’s not clear if they will ever be completely accurate, comprehensive, or transparent -- or completely beneficial to humans. For example, a picture misrepresenting human anatomy might be benign, but text that weaves together lies with truth is not a benefit to society.
  3. These tools aren’t “ours.” Tech-fueled power too often consolidates into a few hands. There are already too many examples of unfettered technologies from a small number of companies that have fractured industries and societies. After their creators have already deposited their money in the bank, society is left with the consequences.

I’m not trying to freak anyone out. I want to encourage everyone to focus on all the benefits that these new tools can have. But let’s not say in the future that we didn’t know about their dark side, as well.

A Cautionary Time

Suppose I told you 20 years ago there was going to be this thing called “social media.” It was going to let lots of people find and communicate with each other. It will be especially useful for connecting those who are hard to find, like people who like quilting or butterfly collecting — or hate groups, or people intent on overthrowing a government. It will also be a great way for people to find out about new ideas and breaking news — and to relentlessly create fake information, and deeply influence human thought. And “social media” will provide a great way to see entertaining content from millions of really creative people — and to increase a variety of social ills, like short attention spans, low self-esteem, anxiety, and depression.

Wouldn’t you have wanted some guardrails on the use of these technologies to encourage the good things for communities and societies, and discourage the ways these tools divide us and diminish us as humans?

But we didn’t build many of those guardrails. As a result, we allowed a very small number of companies to capture our collective attention, and to make a tremendous amount of money doing it. And we are at a similar inflection point with AI and related technologies.

There are many regular conversations around the world about AI risks. A year-old Cornell study suggests six areas of concern: 1) Discrimination, Exclusion and Toxicity, 2) Information Hazards, 3) Misinformation Harms, 4) Malicious Uses, 5) Human-Computer Interaction Harms, and 6) Automation, Access, and Environmental Harms.

I’ve distilled these and other sources of “dark risks” into four areas: Trust, Ethics, Bias, and Power.

First, Let’s Talk About Trust

Reliance on information sources is highly dependent on human trust. We give trust slowly and lose it quickly. We used to trust sources like the Encyclopaedia Britannica, and then we (mostly) trusted Wikipedia. Even though we’re the ones who create technology, many people think technology is actually more trustworthy than humans.

That’s a problem with these new AI tools, because even if we know nothing about a particular subject, we’re highly influenced by information that sounds right. Yet by its creators’ own admission, today’s poster child (and I do mean child) of text Generative AI, ChatGPT, simply makes up answers. This is a common failure of so-called Large Language Models. Ask ChatGPT to divide 1 trillion by 100 million: its answer is 10. (The correct answer is 10,000.) Ask ChatGPT “What’s the history of the CIO?”, and you will get the same kind of BS answer I used to write in high school when I knew nothing about the topic and was too lazy to find a primary source. “The exact history of the CIO job title is difficult to pinpoint…” But a simple Google search in the lazy zone (the first search results you see on your screen) lists an entry in CIO-Wiki (who knew): “The term CIO came into existence in America in the late 1980s, early 1990s with only 10 percent of the 4000 IT departments listed in MIS magazine’s database in mid-1990s using the CIO title.”
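
The arithmetic, at least, is trivial to verify outside the model. Here is a minimal Python sketch of that kind of sanity check -- the “claimed” value below is illustrative of the error described above, not an actual ChatGPT transcript:

```python
# Minimal sanity check: never take an LLM's arithmetic on faith.
# The "claimed" answer is illustrative only, not a real transcript.
one_trillion = 1_000_000_000_000
one_hundred_million = 100_000_000

correct = one_trillion // one_hundred_million   # 10,000
claimed = 10                                    # hypothetical model answer

print(f"Correct quotient: {correct:,}")
if claimed != correct:
    print(f"Claimed answer ({claimed}) is off by a factor of {correct // claimed:,}")
```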

It’s not even GIGO, garbage in/garbage out. It’s ZIGO -- zero in, garbage out. GPT-3’s information diet clearly did not consist of CIO-Wiki pages. But when faced with a lack of source information, GPT makes things up. If a human did that, we would call it Lying. (“You’re making things up again, Arnold.”) Testers have so far shown that ChatGPT isn’t reliable for topics ranging from astrophysics to medical advice to obituaries. Yet already educators are baking ChatGPT into their classrooms.

Tech companies have few incentives to make their products worthy of human trust. Wikipedia is a rare example of a content-creation platform with transparency baked into it. For the coming waves of AI tools, we need a broad range of technologies and verification mechanisms -- something like an AI “Good Housekeeping Seal” and an AI “Consumer Reports” -- to help all of us become intelligent consumers of these products.

Second, Let’s Talk About Ethical Boundaries

Discussions about AI ethics and legal ramifications have been around for years. But there are widespread worries that a tech industry with a poor track record of self-regulation will have little incentive to implement any of the discussed guidelines. Four key areas of concern:

  • Using others’ work without credit or compensation. Many of the AI engines have been trained on data of undisclosed origin. Ask ChatGPT to write in the style of Raymond Chandler, or Stable Diffusion to generate art in the style of Picasso, and it’s quite clear their training sets included those creators’ works. But what about modern creators? They spent countless hours perfecting their work, and now anyone with an AI tool account can emulate them. Midjourney founder David Holz readily admits he used hundreds of millions of images to train his software, without providing any compensation or credit to their creators. Artists are fed up, arguing that it’s theft — and that it isn’t even real art. Plagiarism may become “...easy to spot, but impossible to stop.”
  • Auto-generating “fake news.” Either we become far better at questioning the sources of AI-generated output, or we will simply accept AI-generated content as accurate if we think it comes from a trusted source. There is no reason these AI tools can’t list data sources -- and without sources, wave goodbye to transparency in arenas like science.
  • Morally corrupting. CNET posted AI-written news articles without attribution -- not only rife with errors, but apparently plagiarized. So, suppose you are under a tight deadline for work or a homework assignment. If you can generate content that seems like something you would write -- would you? Or have you already done it?
  • Few enforceable guardrails — and too little interest in guardrails. There are very few incentives for programmers to provide transparency. Tech companies often have tremendous incentives to release technology into the wild, with few guardrails. OpenAI co-founder Greg Brockman is open about ChatGPT’s flaws — but the errors embedded in the software’s output don’t come with any disclaimers. Doctors in training are steeped in ethical discussions, but despite credos such as the AI Programmer’s Hippocratic Oath, programmer training has few such guardrails.

In fact, many AI researchers focused on ethics aren’t in sync with each other, choosing to focus mostly on either near-term or long-term potential harms. That’s especially worrisome as code builds on other code. One of Google’s own researchers maintains that it’s inevitable we will create software that makes decisions harmful to humans. We need far more guardrails to ensure human-centric AI — today. (Watch for strategies in the fourth newsletter of the series.)

But where things can really go off the rails is in the legal arena. The laws of many countries related to intellectual property ownership and attribution, personal responsibility, and “personhood” will all be dramatically stretched by these technologies. In the same way that these applications lower the bar for creators to generate text and images, they also lower the bar for bad actors, making it easier to appropriate others’ content, hide the source of illegal activity, create malware, and create misinformation at giga-scale. And governments are unprepared.

Ultimately, though, the most important ethical boundaries are yours and mine — the people who use the tools. If you use ChatGPT to write your homework, despite your agreement with a teacher that you won’t, it’s your decision to be academically dishonest. And if you submit a Midjourney graphic to your client as your own creation... Well, pretty soon software will be regularly uncovering the signature of software.

Third, Let’s Talk About Bias

There is already an extensive amount of AI software in use in work-related situations with substantial biases baked in, both from the (human-written) algorithms, and from the (often predominantly male and Western) datasets on which those algorithms are trained. Software can turn racist and sexist with blinding speed — labeling, for example, Black men as primates. And a study by The Markup of mortgage application software found massive biases in its algorithms: lenders were 40% more likely to reject Latino applicants, 70% more likely to reject Native American applicants, and a whopping 80% more likely to reject Black applicants than similar white applicants.

In some ways, bias hidden in data is more insidious than the human variety. If I say something that you think is biased, you can call me out on it, and I can try to explain myself -- or apologize. But few software programs are query-able (“Why did you just reject a Black job candidate?”), because the programmers didn’t care to make the software answerable. In fact, anti-bias hiring software can actually be more biased than humans, with no explanations offered. No wonder Cambridge researchers have dismissed it as pseudo-science.

At least some regulation is appearing: Starting January ‘23, New York City’s Automated Employment Decision Tools law requires employers to conduct bias audits to prove their AI-powered hiring software doesn’t discriminate. But that’s a rare example of legal guardrails on a mostly-unregulated industry.
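
To make “bias audit” concrete: at its simplest, such an audit compares selection rates across demographic groups. Below is a minimal Python sketch of the classic “four-fifths rule” check used in U.S. employment analysis -- the applicant counts are invented for illustration, and a real audit under the NYC law involves far more than this:

```python
# Minimal sketch of a disparate-impact check (the "four-fifths rule").
# Applicant counts are invented for illustration only.
applications = {
    # group: (selected, total applicants)
    "Group A": (120, 400),
    "Group B": (45, 300),
}

rates = {group: selected / total for group, (selected, total) in applications.items()}
benchmark = max(rates.values())  # compare each group to the highest selection rate

for group, rate in rates.items():
    impact_ratio = rate / benchmark
    flag = "POTENTIAL ADVERSE IMPACT" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f} -> {flag}")
```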

Fourth, Let’s Talk About Power in a Few Hands

In his novel The Every, Dave Eggers focuses on technological tyranny — the potential for societal ills that can come from the unbridled power of tech companies. (I met Dave last year when the two of us did a book-signing at SAP for our mutual friend Ferose V R. Dave walks the talk: To combat Amazon’s dominance, he and his team spent countless hours working to ensure that sales of his book would benefit independent bookstores.) Tech companies are increasingly becoming their own nation-states, with market power spanning global borders.

Why is this concentration of power so likely with AI? Breakthrough technologies often require breathtaking budgets, and GenAI Large Language Models are no exception — not just for the software programs, but for the hardware to run them. Eight of the top ten AI cloud companies are either American powerhouses or Chinese megacorps. (One reason that my friend Vivek Wadhwa is concerned about quantum computing is that it requires even more massive investment, meaning only rich companies like Google and IBM, or governments like China and the U.S., can afford it.)

With high costs come expensive business models. OpenAI was founded as a non-profit, “unencumbered by the pressures of for-profit companies and grant-writing duties of academia,” “building value for everyone rather than shareholders,” which was in keeping with its founding mandate to create software that “benefits all of humanity.” But CEO Sam Altman pivoted to a for-profit, and expects to generate $1 billion in revenue in 2024 (starting at $42 for premium access). OpenAI currently has an investor agreement that caps the returns for most shareholders. But the company was recently valued at $26 billion, and Microsoft is said to be putting $10 billion into the company. It’s hard to believe Altman isn’t being showered with more investor pitches. And if the company ever goes public, it will inevitably serve the needs of its shareholders, not society. (Cautionary reference: Etsy.)

In response, the big tech companies are fighting for AI supremacy. Meta’s chief AI scientist Yann LeCun didn’t exactly call ChatGPT a nothingburger, but he was very dismissive. That’s a little disingenuous, considering the poor response that Meta’s BlenderBot 3 received (which LeCun admits “was boring”), and after the company yanked its Galactica service amid criticism that it produced inaccurate summaries of scientific research. Google’s voice-driven Duplex virtual assistant chatbot also received a muted market response, and the company is reported to have recently sent out the Bat Signal for founders Larry Page and Sergey Brin to re-engage with the company in response to ChatGPT.

But it’s hard to have empathy for companies like these with so much market power.

There are alternatives to the for-profit AI companies, such as Stability AI’s Stable Diffusion image generator, which is open source and free for developers. A truly open alternative is Bloom from the BigScience community project, running on the Petals distributed network in which anyone with a fast computer can be a host. And Meta has open-sourced its OPT language model.

But while open source means no single business owns the modified code, it also can mean even fewer guardrails. Private AI companies can enforce limits on keywords and images to keep people from saying and doing harmful things. But without significant community control, open-source software means open season, and a far greater potential for NSFW (Not Safe For Work) content.

What has happened repeatedly throughout high-tech history is that one or two relentlessly-innovating companies eventually come to “own the stack,” developing a business model and software offering that blots out the light for competitors. (There is only One Google, One Facebook, One Amazon, One eBay, etc.: Winner takes all.) The influential venture firm Andreessen Horowitz (a.k.a. a16z) recently speculated about the potential for a company to “own” the core elements of the “AI stack,” the critical technologies and marketplaces on which other companies will depend. A16z is intrigued that it’s currently unclear which company might win this race, but it raises no concerns that a single company might eventually take all -- and in fact that’s exactly what the entire venture capital industry is betting on: a few Goliath winners, who squash or acquire every David in sight.

It doesn’t have to be that way.

We can all choose to spread around our usage of the products we use, discouraging market dominance by any one player, and rewarding those with transparent practices committed to widespread benefit. That will become increasingly hard, as APIs from companies like OpenAI allow their tools to become embedded in countless programs, often without our knowing. That means we will have to become relentlessly informed customers.
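
To see just how little code that embedding takes, here is a minimal sketch using OpenAI’s Python client. It assumes the current openai package and an OPENAI_API_KEY environment variable; the summarize() helper and the model name are illustrative, not from any particular product. A few lines like these, buried inside any app, quietly route a user’s text through a third-party model:

```python
# Minimal sketch of embedding a hosted LLM inside another product.
# Assumes the `openai` Python package (v1+) and OPENAI_API_KEY set
# in the environment; the helper and model name are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize(user_text: str) -> str:
    """A 'feature' any app can bolt on -- users may never know an LLM is involved."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Summarize the user's text in one sentence."},
            {"role": "user", "content": user_text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarize("Paste any customer email here."))
```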

What could you do Next?

  • Learn more about the negative externalities of AI. Jordan Peterson has a number of warnings we should heed.
  • Ask questions. Get involved in online communities that discuss the limits -- and necessary guardrails -- of these technologies.
  • Demand explainability. It isn’t hard to make Explainable AI: MIT has a new taxonomy that developers can bake into their models (see the sketch after this list). As a customer, demand that the products you use include explainability. Hold software companies’ feet to the fire.
  • Use ethical tools designed to actually minimize bias. If you use software to help you hire people, ask questions about the ethics used to develop and test the software. If you don’t get satisfactory answers, buy something else.
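
On the explainability point above: one of the simplest forms is just asking a trained model which inputs actually drove its decisions. Here is a minimal, hypothetical sketch using scikit-learn’s permutation importance on synthetic data -- far short of what a production hiring or lending tool should expose, but it shows the kind of answer customers can demand:

```python
# Minimal sketch of one basic form of explainability: measuring how much
# the model's accuracy drops when each input feature is shuffled.
# Synthetic data and generic feature names -- illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda item: item[1], reverse=True):
    print(f"{name}: {importance:.3f}")
```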

If you want to read further...

-gB

Gary A. Bolles

I’m the author of The Next Rules of Work: The mindset, skillset, and toolset to lead your organization through uncertainty. I’m also the adjunct Chair for the Future of Work for Singularity Group. I have over 1.1 million learners for my courses on LinkedIn Learning. I’m a partner in the consulting firm Charrette LLC. I’m the co-founder of eParachute.com. I’m an original founder of SoCap, and the former editorial director of 6 tech magazines. Learn more at gbolles.com

Fabiana Fragiacomo

marketing specialist|board|founder Gloppies (global citizens)|mentor|speaker|future thinking| talks about #causemarketing #futurethinking #innovativeeducation #futureofwork for teens

Gary, thank you. I was trying to get more reliable information about all this tsunami. Wonderful article.

Patrick Rafter

Professional Genealogist & Researcher, Family Historian and Storyteller | Boston Communications Veteran | PR, LinkedIn & Content Strategist

Gary A. Bolles - I'm really enjoying and learning from your AI Quiet Tsunami series. Thank you (on behalf of your followers and correspondents) for putting in what must be a massive quantity of time and effort to analyze and explain AI to a lay audience. Having just read Part 2 of your series, it occurs to me that a SWOT analysis (Strengths, Weaknesses, Opportunities, and Threats) is a research project that begs to be undertaken. Absent the time and funds to do that myself, I will be curious to read the end product that I will see after prompting ChatGPT to give me its own SWOT assessment of AI. Whatever the result, it would be the embodiment of the age-old maxim from Socrates: "To know thyself is the beginning of wisdom."

Gary A. Bolles: Awesome! Thanks for sharing!
