(De)Regulation: the Why, What, and How of Technology Policy Making

Let me start with a thought experiment. Imagine you are designing a self-driving train and you need to (explicitly or implicitly) help the driving algorithm decide what it should do in the following situation:

  • The train perceives through its cameras that there are five workers on the track ahead who can’t see or hear anything around them.
  • The algorithm is certain, given the speed and distance, that it can’t brake in time and the impact will kill the five workers.
  • The algorithm knows there is a branch off the main track that it can choose to take before reaching the five workers.
  • On that side track, there is one worker who can’t see or hear anything, and he would also be killed by the impact if the algorithm chooses to take the alternate track.

This is a variation of a classic philosophy puzzle often referred to as the trolley problem, and it ends with a question: what is the right thing for the driver (in our case, the algorithm) to do?
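Part of what makes this more than a philosophical exercise is that someone has to write the tie-breaking rule into software. The snippet below is a deliberately naive, hypothetical sketch, not anything from a real autonomous-driving system; the names TrackOption, expected_fatalities, and choose_track are invented for illustration. Its only purpose is to show that whichever rule the engineer picks, a moral judgment gets encoded.

```python
# Hypothetical sketch only: the names below are invented for illustration
# and do not come from any real autonomous-driving system.
from dataclasses import dataclass


@dataclass
class TrackOption:
    name: str
    expected_fatalities: int  # what the braking model predicts for this track


def choose_track(options: list[TrackOption]) -> TrackOption:
    # Even this "obvious" rule is a value judgment: it assumes the only
    # thing that matters is minimizing the predicted number of deaths.
    return min(options, key=lambda o: o.expected_fatalities)


main_track = TrackOption("main", expected_fatalities=5)
side_track = TrackOption("branch", expected_fatalities=1)

print(choose_track([main_track, side_track]).name)  # -> "branch"
```

Swap the key function for a rule that refuses ever to divert onto the branch, and you have encoded a different moral theory entirely. That choice, made quietly inside code, is exactly where policy questions begin.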

Why (de)regulate technology?

Regulations are not always a direct translation of our moral thinking, but we always want them to promote healthy behaviors—especially when those behaviors don’t happen naturally—and deter unhealthy ones. This is not specific to technology regulation; it is how liberal democracy itself has evolved: good governance practices help prevent conflicts of interest, taxation laws (try to) bridge the growing income inequality gap, and election financing rules control undue influence. Some of these regulations work well, some are insufficient, and others are either completely useless or lead to adverse unintended consequences.

For example, I remember that while working with an Asian government on its innovation policy, we uncovered an old law that would have virtually prevented any digital health innovation. It stated that no hospital is allowed to let a health record leave its premises. When the rule was instituted a few decades ago, it was intended to protect patient information, but today it prevents even the sharing of medical files between physicians working together in different locations.

The point is, even when regulation is necessary, it is not always useful. Technology is no exception. Does it need some form of regulation? Yes, because we know that many technological applications, such as driving algorithms (referenced in the puzzle above) and gene editing, could be disproportionately harmful and/or unethical. Are there instances where we would be better off without any regulation? Probably. Would any amount of regulation be enough to cover all imaginable cases involving a technological medium? I doubt it. We should think of technology regulation as no different from regulation in general. It is a dynamic exercise where changes in context will force policymakers to address the same issues in different ways.

Take for example the telephony business. Until the mid-1950s in the US, it was illegal to attach your own phone handset to the network of your service provider. It took a major court decision to change that on the basis that such a choice by some customers was “privately beneficial without being publicly detrimental.” (United States Court of Appeals, District of Columbia Circuit, 1956)

Would this decision have made sense years earlier, when only a handful of people had access to a telephone line? No, and for two reasons. First, the immaturity of the technology meant providers needed end-to-end control to guarantee service delivery. Second, there were multiple service providers, each with its own handsets, and each with its own switchboards, wires, and poles. What was needed at that time was regulation to ensure that this jungle would be consolidated into a single network, a natural monopoly of sorts, with certain universal service obligations.

What parts of technology should be regulated?

The story of telecommunications illustrates how a single industry goes through cycles: early innovation where no (or minimal) regulation is possible, a growth phase where regulation can be beneficial, and a mature stage where keeping the same old regulations might generate adverse effects.

Social media, e-commerce platforms, and many over-the-top (OTT) services are living today through the same dilemmas seen in the 1930s, when telephone service providers agreed to regulation in exchange for their consolidation into a single company. We’ve all seen US technology giants calling for regulation in congressional hearings. The more difficult question is: what should such regulation look like?

If the past is any indication, we should first change the question to account for the fact that any technology company runs many activities at once, each with its own constraints and objectives. While natural monopolies, for example, might make sense for building network infrastructure, a policy that encourages competition is more beneficial for end-user devices and applications. This is harder to do. It requires policymakers to build a deep understanding of how technology works (separation between technology layers, roles of different protocols, etc.), and technical experts to understand how laws are constructed! Having worked with brilliant people on both sides, I can tell you it takes many late-night talks.

How can technology be regulated?

I have found that the first step is always to build a common understanding, and to have those late-night talks. Unfortunately, I don’t believe the best way to do this is through occasional congressional hearings divided into five-minute sprints, each often resembling a monologue, rarely a conversation. I find it more productive when, as a second step, the two sides initiate small, targeted actions so they don’t end up lost in an endless legal drafting process. This can be done, for example, by transparently making data available to monitor the volume of alleged threats and benefits, or by agreeing on targeted emergency interventions to help the groups most vulnerable to a technology’s misuse (depending on the case, this could be children, minorities, or threatened industries). Building a comprehensive policy or regulation is the last (and important) step in the journey.


Getting these three steps right is time-consuming, and we need to collectively realize that it is going to be an iterative process. We are repeating the patterns of the 1930s and the 1950s, and we will likely repeat them for each new type of problem: monopoly vs. inefficiency, free speech vs. safe speech, innovation vs. security, and so on. I don’t believe there are simple answers to any of these problems today. Each requires tradeoffs that will likely differ across societies and periods of time.

The first and hardest step is to “understand”.

Whether you are a technology evangelist or skeptic, here is the question I want to leave you with: do you understand why smart and caring people on the other side think what they think?
