Pig earrings
Jennifer Pahlka
Author, Recoding America: Why Government is Failing in the Digital Age and How We Can Do Better
I get a Google Alert for the name of my book. I click the link. It’s a Medium post by a guy named Alejandro G. Rangel. The post is called Shaping Tomorrow: Insights into Technology, Society, and the Future, subtitled Sunday Sagas: Wisdom and Insights from Literary Masterpieces. Id response: Ooh, I’m in a post about literary masterpieces! Then: To Kill a Mockingbird is a literary masterpiece. Recoding America is good general non-fiction. Hmmm.
The featured image is a guy wearing a backpack looking into a cheesy futuristic landscape. “Image created by author using Midjourney.” Right, of course. But Recoding America is not exactly futuristic. It points to a future, I hope, but we’re in a totally different genre here.
Being as vain as the next guy, I scroll past the half dozen other books reviewed to find mine. It starts out strong.
Then the first red flag.
I mean, sure, but I specifically say in the book that I am arguing for more digital competence and digital thinking (aka more digital talent) and not necessarily the adoption of tech, certainly not to continue buying tech the way we have been. (Actual line, p. 244: "It's not the tech, it's the tech people.") And transparent? The word doesn’t appear in the book. Anywhere. I checked.
[Aside: I just checked and the word innovation also doesn't appear in the book, except in reference to the Defense Innovation Board, and my role as Deputy Chief Technology Officer for Government Innovation. I'm suddenly hugely proud of this.]
Then we get to the notable quotes. They’re not from the book. They’re exactly the kinds of things people think I’ve said, but haven’t. Duh. The whole post is written by generative AI.
The first two quotes are reasonably plausible. I could nitpick why I would never have said them, but they’re still impressive as simulacra. This one though:
The first part checks out, but the last part is very 2010. Transparency and participation are good values, but for better or worse (and there are definitely those who see it for the worse and give me a hard time about it) they haven’t been a focus of my work for over a decade. They are sometimes in tension with effective, equitable service delivery – chapters 7 and 8 are about how public input is largely captured by interests, and how the active practices of user research and product management deserve a greater share of time, attention, and capacity than the passive notice-and-comment practices that dominate today (and are, of course, required by law). My feelings about this are summarized by Marina Nitze’s comment on Alexander Hertel-Fernandez’s guest post on Don Moynihan’s "Can We Still Govern?" blog, about “how to use the public comment process to reduce administrative burdens.” Marina writes:
I don't think it's a remotely appropriate expectation that members of the public spend their own time suggesting random improvements to the USDA school lunch assistance program. It is explicitly the responsibility of USDA to user test these forms with real users and make improvements well before ever publishing or using the form. This should be the actual role of OIRA in today's world, I believe: holding agencies accountable to reducing data collection and making forms as simple to use as possible, not expecting the public to do free random labor (which is not even allowed under the Anti Deficiency Act) making random suggestions to improve a form that may or may not be correct.
Here, in agreement, is an actual quote from my book on this topic (p. 201):
In their interactions with government, we’ve been overburdening people, and product management thinks it’s time we did some of the work for them. The mechanisms we’ve built for public input have inadvertently allowed for colossal capture by government needs, special interests, and even well-meaning but often misguided advocates. Product management done right is an anti-capture mechanism.
So this AI-authored blog post gets me a little wrong. TINY VIOLIN, you say! Sure. And let’s be clear, people get me wrong too. Here’s a description for a talk I was scheduled to give that was sent out without first running it by me:
I hesitate to emphasize the offending words in text here, for fear of further mistraining the algorithms, but I have honestly never really been about the “adoption of open data,” and for the reasons described above, I object to the second half of that sentence as well. Whenever I get characterized as a champion for open data or for the adoption of technology in government (the second one somewhat more forgivable than the first), I feel a bit like my sister, who once, when we were kids, said something about liking pigs, and then for like a dozen years, everyone gave her pig things at Christmas: pig fridge magnets, pig tea towels, even pig earrings. These presents quickly became unwelcome.
We’ve all been there, right? That’s how I feel about open data. There’s nothing wrong with pigs, and there’s nothing wrong with open data. I just want to be seen for who I am.
We change over time, and AI models fail to update just as our aunts and uncles fail to update. The marketing staffer who wrote my talk description just searched the internet for something on me, and “the adoption of open data” is what came up. No AI needed to make that mistake. Several friends of mine are plaintiffs in the class action suit against OpenAI for feeding their books into GPT without permission, and I support them in that, but I found myself wondering if I should ask to have GPT ingest Recoding America so I could stop getting pig earrings for Christmas, so to speak. 100,000 words, and not one of them innovation or open data! Wouldn't that fix it all?
But like so much with AI, this isn’t a problem AI created, just one it reflects, and could amplify – will my book eventually be “reviewed” by more AIs than humans? Humans at least read the book before they review it. Well, usually.
If nothing else, AI is a good opportunity to complain about the same things we complained about before, but with something new to blame. In the meantime, I will keep playing my tiny violin.