AI Should Challenge, Not Obey

Increasingly, in fields such as creative writing, visual art, and programming, humans are letting AI handle material production, reserving for themselves the tasks of integrating and curating that material. Now more than ever, users face the task of thinking critically about AI output. How should we think of our relationship with AI?

In this edition of "Advances in Computing," we are spotlighting Advait S.'s Opinion piece, "AI Should Challenge, Not Obey," in which he proposes that we "transform our robot secretaries into Socratic gadflies."

Also featured in this issue are a research article on the Semantic Reader Project and two stories from Interactions, a bimonthly ACM magazine that publishes work related to human-computer interaction (HCI), interaction design (IxD), and all related disciplines.

Enjoy!


AI Should Challenge, Not Obey

Now more than ever before, users face the task of thinking critically about AI output. Recent studies show a fundamental change across knowledge work, spanning activities as diverse as communication, creative writing, visual art, and programming. Instead of producing material, such as text or code, people focus on “critical integration.” AI handles the material production, while humans integrate and curate that material. Critical integration involves deciding when and how to use AI, properly framing the task, and assessing the output for accuracy and usefulness. It involves editorial decisions that demand creativity, expertise, intent, and critical thinking.

However, our approach to building and using AI tools envisions AI as an assistant, whose job is to progress the task in the direction set by the user. This vision pervades AI interaction metaphors, such as Cypher’s Watch What I Do and Lieberman’s Your Wish Is My Command. Science fiction tropes subvert this vision in the form of robot uprisings, or AI that begins to feel emotions or develops goals and desires of its own. While entertaining, these tropes unfortunately pigeonhole alternatives to the AI assistance paradigm in the public imagination: AI is either a compliant servant or a rebellious threat, either a cool and unsympathetic intellect or a pitiable and tragic romantic.


AI as Provocateur

Between the two extreme visions of AI as a servant and AI as a sentient fighter-lover resides an important and practical alternative: AI as a provocateur.

A provocateur does not complete your report. It does not draft your email. It does not write your code. It does not generate slides. Rather, it critiques your work. Where are your arguments thin? What are your assumptions and biases? What are the alternative perspectives? Is what you are doing worth doing in the first place? Rather than optimize speed and efficiency, a provocateur engages in discussions, offers counterarguments, and asks questions to stimulate our thinking.
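As a rough illustration (not from the article itself), a provocateur can be approximated with today's chat models by prompting one to critique rather than produce. Below is a minimal Python sketch assuming the openai client library; the system prompt, model name, and provoke() helper are illustrative choices, not anything the author prescribes.

from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment.
client = OpenAI()

# Illustrative system prompt: the model critiques instead of completing work.
PROVOCATEUR_PROMPT = (
    "You are a provocateur, not an assistant. Do not draft, complete, or fix "
    "the user's work. Instead, point out thin arguments, unstated assumptions "
    "and biases, and alternative perspectives, and ask whether the work is "
    "worth doing at all. Respond only with questions and counterarguments."
)

def provoke(draft: str) -> str:
    """Return critical questions and counterarguments about a draft."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": PROVOCATEUR_PROMPT},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

print(provoke("We should double our ad spend because revenue grew 5% last quarter."))

Nothing in this sketch optimizes for speed or task completion; its only output is friction, which is the point.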

The idea of AI as provocateur complements, yet challenges, current frameworks of “human-AI collaboration” (notwithstanding objections to the term), which situate AI within knowledge workflows. Human-AI collaborations can be categorized by how often the human (versus the AI) initiates an action, or by whether the human or the AI takes on a supervisory role. AI can play roles such as “coordinator,” “creator,” “perfectionist,” “doer,” “friend,” “collaborator,” “student,” or “manager.” Researchers have called for metacognitive support in AI tools, and for efforts to “educate people to be critical information seekers and users.” Yet the role of AI as provocateur, which improves the critical thinking of the human in the loop, has not been explicitly identified.

The “collaboration” metaphor easily accommodates the role of provocateur; challenging collaborators and presenting alternative perspectives are features of successful collaborations. How else might AI help? Edward de Bono’s influential Six Thinking Hats framework distinguishes roles in critical-thinking conversations, such as information gathering (white hat), evaluation and caution (black hat), and so forth. “Black hat” conversational agents, for example, have been shown to lead to higher-quality ideas in design thinking. Even within the remit of “provocateur,” there are many possibilities not well distinguished by existing theories of human-AI collaboration.

A constant barrage of criticism would frustrate users. This presents a design challenge, and a reason to look beyond the predominant interaction metaphor of “chat.” The AI provocateur is not primarily a tool of work, but a tool of thought. As Iverson notes, notations function as tools of thought by compressing complex ideas and offloading cognitive burdens. Earlier generations of knowledge tools, like maps, grids, writing, lists, place value numerals, and algebraic notation, each amplified how we naturally perceive and process information.

How should we build AI as provocateur, with interfaces less like chat and more like notations? For nearly a century, educators have been preoccupied with a strikingly similar question: How do we teach critical thinking?

Unlock the rest of the article here to learn how AI can be designed to better support critical thinking.


More like this:

The Semantic Reader Project

Digesting technical research papers in their conventional formats is difficult. This is why Kyle Lo, Joseph Chee Chang, and their colleagues are releasing an open platform with a public reader tool and software components for the community to experiment with their own AI reading interfaces.


From ACM Interactions:

AI through Gen Z: Partnerships toward New Research Agendas

For Gen Z, there is no before AI. It is crucial that researchers study the individual and societal impacts of AI systems on their young users. Nora McDonald, PhD, Karla Badillo-Urquiola, PhD, and their team share what they discovered here.

Making Voting Easier for Disabled and Overseas Voters

In this story, researchers Ted Selker and Justin Pelletier propose a secure, accessible virtual voting infrastructure for disabled and overseas voters. Learn more here.


Unlock our past editions:

The Myth of the Coder

Summer Special Issue: Research and Opinions from Latin America

New Metrics for a Changing World

Do Know Harm: Considering the Ethics of Online Community Research

Now, Later, and Lasting

The Science of Detecting LLM-Generated Text

Can Machines Be in Language?


Enjoyed this newsletter?

Subscribe now to keep up to speed with what's happening in computing. For more thought-provoking content from CACM, visit cacm.acm.org. If you are new to ACM, consider following us on X | IG | FB | YT | Mastodon.



Ruchi Honale

Global Winner of Allianz Tech Championship | SWE India Scholar | Vice-Chairperson ACM-W CCEW | Student of Computer Engineering

1 month ago

At CCEW ACM-W Student Chapter, we conducted an ACM Confluence based on this newsletter and received an amazing response! It was indeed interesting.

Interesting. You know, some of my best colleagues challenge respectfully. And constructive tension is a balancing act and an art. If AI can play that role, that's useful.
