A Timely Read: Deliberate Intervention

With the current race to AI utopia, we're seeing a real-life case study in the differences between how and why companies develop new technology and how and why governments implement policies to regulate it.

And I thought I'd share more about "Deliberate Intervention: Using Policy and Design to Blunt the Harms of New Technology" by Alex Schmidt, which skillfully frames the differences between business and government around new technology and regulation through discussions of design and policy basics, value tensions, harm considerations, and examples. As a former Political Science major who started my career in state government regulatory affairs and the state legislature, I know it's not easy to explain how they differ so concisely.

For example, companies typically:

  • Value innovation
  • Don't have to ask permission
  • Target specific customer segments and use cases for profitability
  • Are incentivized by VC funding models to promote profit over people
  • Can be influenced by internal factors (e.g., design practice standards, internal policies, worker protests)
  • Limit transparency (e.g., algorithms)

While government typically:

  • Has to consider public interest
  • Takes a measured approach (policy is typically reactive, not proactive)
  • Has limited resources and limited specialized knowledge internally
  • Participates in collaborative policymaking through the regulatory rulemaking process
  • Can be influenced by public attention on harms and public whistleblowing
  • Has transparency requirements

Policymaking that influences design to reduce harm isn't a new concept.

  • Schmidt provides clear examples of how tangible and intangible harms played a role in regulating automotive safety, dangerous toys, and railroads.
  • And the growing field of Civic Tech provides examples of practitioners both inside and outside of government borrowing tech industry methods to improve government and policy.

One refrain throughout the book is that we often don't know what harm will occur, or that the consequences are difficult to predict. I think it's more nuanced than that. We have enough historical and present-day examples of how businesses, governments, society, and technology have caused harm to know the important areas we should focus on, even if we don't know them all or the final form the harm will take. Those areas include: safety, privacy, discrimination, sexism, harassment, misinformation, financial exploitation, exploitation of children, seniors, and other vulnerable groups, and environmental concerns.

And when I read a well-placed quote in the book from Lawrence Lessig's "Code is Law," it was clear that private companies already regulate through their design of platforms, algorithms, etc., without the associated constraints.

“Our choice is not between "regulation" and "no regulation." The code regulates. It implements values, or not. It enables freedoms, or disables them. It protects privacy, or promotes monitoring. People choose how the code does these things. People write the code. Thus the choice is not whether people will decide how cyberspace regulates. People--coders--will. The only choice is whether we collectively will have a role in their choice--and thus in determining how these values regulate--or whether collectively we will allow the coders to select our values for us.” - Lawrence Lessig (Code is Law)

While the last chapter, "Wicked Problems and Baby Steps," doesn't offer a specific solution to this immense challenge, it does offer a list of eleven areas for bringing the design of technology and policy together. As someone who advocates for teams to proactively identify and mitigate harms before a product, service, or technology launches, the first item on the list was music to my ears: "Consider the distinction between 'pain points' and 'harms.' Designers look for the former, but usually not the latter. Might we widen the lens?"

Nancy Chourasia

Intern at Scry AI

1y

That's a great thought. A common characteristic of the previous industrial revolutions is that governments played an active role in them. They incentivized inventors through patents, protected their commercial interests, defended against foreign competition, and provided funding for research and development (either directly or via their militaries). Also, in the Second Industrial Revolution, the U.S. government dismantled monopolies. And, in the third, it protected inventors' commercial interests, lowered tariffs to defend against communism, increased military spending to create new markets, and increased overall spending in research and development (via DARPA). During the Fourth Industrial Revolution, governments of various countries are currently adopting different approaches. Some are espousing a laissez-faire attitude whereas others are actively enacting statutes. The use of data and AI systems is also being approached differently, with some governments emphasizing individual privacy but others allowing the use of data for the collective good. Similarly, governments and non-governmental organizations worldwide are approaching ethics and fairness regarding AI systems differently. More about this topic: https://lnkd.in/gPjFMgy7


Incredibly thoughtful review, Lisa, THANK YOU

Kathryn Campbell

Global Research & Insights Leader

2y

Thanks for sharing and providing such a thoughtful summary & review, Lisa! I'm adding it to my list.

Marcela Musgrove

UX Researcher | Human Factors Engineer | Data Analyst | Native Spanish-English Bilingual

2y

Ooh, that's on my wishlist. I think I'm going to ask my public library to buy it since it's a bit pricey.
