Neither artificial, nor intelligent
Among the gifts I received for my bar mitzvah in 1985 were two, maybe three, really nice atlases. They were enormous, coffee-table-sized affairs. One might have been leather-bound. Thirteen-year-old Dan loved them. It’s a pretty straight line between those intricate, data-rich books and my professional interest today in how people find, consume, and interact with information.
If there are any actual maps in Atlas of AI by Kate Crawford, they are few and far between. Crawford, in the introduction, justifies calling the book an atlas, saying that atlases “focus the observer’s attention on particular telling details.” That is, through this intersection of art and science, of aesthetics and knowledge, atlases reveal layers of complexity and connection in much larger, seemingly monolithic and opaque objects. Artificial intelligence is one of those concepts where it is easy to gloss over telling details.
This is the main message of Crawford’s introduction. Artificial intelligence is a term that has come to mean many things — or mean nothing, really. And through that ambiguity, it appears both magical (look what the algorithms can do!) and mysterious (ignore the geopolitical and environmental devastation behind the curtain!). The lack of meaningful metrics for what intelligence is means that anything can appear to be intelligent. (The book starts with a story about a horse that seemed capable of basic arithmetic.)
But it is that very ambiguity that “gives us license to consider all” the ways in which the technology and industry of artificial intelligence exploit, extract, and manipulate the world around us. Artificial intelligence isn’t a single thing to expose, nor a hub at the center of a vast political and economic network. Instead, artificial intelligence is a means to enhance and enforce existing systems of power.
I’m anxious that Atlas of AI will be nothing but gloom and doom. As a naturally optimistic person, I know the future hasn’t been written yet, even if the paths we’re on seem immutable. Crawford offers a glimmer of hope at the end of the introduction, saying:
But neither is this an irresolvable situation or a point of no return – dystopian forms of thinking can paralyze us from taking action and prevent urgently needed interventions.
Going on to quote Ursula Franklin, Crawford reminds us that “the practice of justice and the enforcement of limits to power” are crucial to political, social, environmental, and economic sustainability.
What does this have to do with information architecture? Information architects have long confronted, ignored, or reinforced systems of power. When the main material of our work is “structure,” we cannot avoid bumping up against them. Categorization and classification, wayfinding and organization are all instruments of power and revolution. Artificial intelligence, whatever that might mean, depends on systems of labeling, categorization, and structure, just like any other technology. From what we call things to how we lump them together to how we model systems, power is everywhere.
I don’t know what happened to those atlases I got 40 years ago. Lost in some transition. They would be barely accurate enough to be meaningful, missing crucial layers of information about how the world works today.
Social Information Architect. Talks about #ArtificialIntelligence #Culture #Africa #Indigenous #Social #UX #InformationArchitecture #KnowledgeManagement #Education
11 months ago: And when you are done, I suggest you read the short paper “Making Kin with the Machines” (2018). https://jods.mitpress.mit.edu/pub/lewis-arista-pechawis-kite/release/1
Associate Professor of Experience Design and Information Architecture
11 months ago: 90% of that book (minus the impressum, the back cover, etc. ;) is about stuff that information architecture has a lot to say about.
Principal Information Architect at Zillow
11 months ago: Crawford's description of AI as having a “central colonizing impulse - to know the world, determine how it is measured and defined, while simultaneously denying this is an inherently political activity” reminds me of the lesson I try to teach beginner IAs about classifications and taxonomies. It's easy to forget, or perhaps never realize at all, that choices to classify the world in certain ways are choices being made in the context of a system of beliefs. Crawford also proposes we ask three questions about the political dimension of AI: what is being optimized, for whom, and who gets to decide? These three questions sound a lot like some important IA lenses that we've talked about before...