What We Heard from the Experts at AI2's AI + Policy Workshop
Perhaps Congress was listening in on AI2's recent AI + Policy Workshop. Last week, lawmakers introduced a new bill, the Algorithmic Accountability Act, which would require major companies, under the supervision of the Federal Trade Commission, to evaluate their algorithms for bias and discrimination. The idea of an "AI watchdog" was a key discussion point when Moshe Vardi, Ben Shneiderman, Ryan Calo, and Tracy Kosa joined us at our Seattle office to share their perspectives on a range of topics, including ethics, privacy, trust, regulation, and policy. While their recommendations and approaches varied, our guests agreed that regulation and policy need a prominent place in the future of AI.
Moshe Vardi focused his talk on trust and the need to emphasize policy, not just ethics. He described a “crisis of trust” in the age of security breaches, a lack of transparency by technology vendors, and the exploitation of privacy. Vardi also touched on machine bias and the challenges that arise when machine learning is applied to important decisions in our justice system, like parole decisions. When data reflects a historical bias, and the AI is trained on that data, bias inevitably becomes part of the machine’s decision. “I want to be the ethics skeptic,” he said. “Ethics should inform public policy.” Vardi used the automobile as an example of why policy, not ethics, should be the focus: automobile safety and the reduction in motor vehicle deaths are attributed to public policy, not “ethics training for drivers.” He concluded by discussing the ‘opaque tax’ we pay in the form of personal data for ‘free’ services, like Google, and pondered why there is no IT public policy. “Technology is driving the future, but who is doing the steering?”
Ben Shneiderman offered an answer to that question in his talk, recommending independent external oversight for the field, modeled on the National Transportation Safety Board (NTSB). The NTSB is a respected group that operates independently from the parties it investigates and whose reports are publicly available. Why not create a National Algorithm Safety Board that establishes rules and conducts reviews? Then, when there is an AI or algorithm failure, this group can come in, investigate, and produce a report with recommendations. “It’s time to grow up and start saying who does what by when. That’s how things happen.” Shneiderman also talked about ensuring human control while increasing automation: “humans remain in control even as computers become more powerful, not more intelligent. People are not computers and computers are not people.”
“It could be that AI changes everything. And if it changes everything, then law and legal institutions have to change too,” said Ryan Calo in a talk centered on the policy levers we have and recommendations for what to do and when. Calo highlighted the need for greater expertise and awareness in government for dealing with technologies officials simply don’t understand. Consider the irony of detractors who fear AI while also calling for its use in our most sensitive social institutions, like social justice and healthcare. Calo called for an immediate change in the current rhetoric, noting that language like “the AI race” and “American AI” leads to bad policy choices. “If something is a race, then you have to win it: innovating at any cost.” He also suggested AI regulation be considered for the military, privacy, and due process. He concluded by contemplating a point at which we think of AI as people: “That’s gonna break everything because there are so many biological assumptions in the law. Imagine there’s some AI that says, ‘I want to run for President.’ Does the fact that they were built last year mean they have to wait 35 years to run?”
Tracy Kosa began her presentation by considering knowability. “We need to accept that at least 25% of the time, something bad is going to happen.” So, what can we do about the rest? Kosa proposed an embedded ethics model, distributing responsibilities throughout an organization rather than designating a “Chief Ethics Officer” whose job it is to bear the burden alone. Kosa also shared a framework for ethical review and demonstrated its application using a case study on facial recognition software as a tool for locating a kidnapping victim. Finally, she shared two concluding thoughts: 1) recommending a professional regulating body for engineers, like those for doctors, lawyers, and architects, and 2) calling for breaking up the “monopolistic creation of technology companies,” where data sharing is happening at a level “way worse than we can imagine.”
Whether you call it ethics or policy or trust or privacy, all of these topics will be part of shaping what’s next in the field of AI.
Leadership & Workplace Strategist | Advising Executives on Emotional Cadence & Sustainable People Strategies | “Emotions are data for better decisions”
Valuable conversation, and I expect these discussions will continue until we are able to come together and agree on how to apply policy to the growing use of AI in all areas of business. I agree with Moshe: ethics should inform public policy. Now to see who will lead this charge, because nothing gets done without someone leading it.
Coding Bootcamp Graduate, History and Recreation Major
Yes! Just ask Joy Buolamwini.
Senior Test Consultant at Judo Bank
Addressing the lack of transparency and the exploitation of privacy is very important; however, the proposed solution does not look appropriate. AI algorithms created in one country will have to compete economically against similar products from another country that has no legislation like the Algorithmic Accountability Act. On the other hand, any external body attempting to test AI will face a huge problem. For example, how do you validate an algorithm that solves some optimization problem? Still, it is a step forward: only a step, not the solution.
Technology Leader
Add GDPR and CCPA to the list of controls converging with this article. Privacy, privacy, privacy: harder and harder to comply with, and more fines coming with GRC regulation globally.
No, but Boeing product engineers should be encouraged to do the samurai thing with the sword... I have much more to say from a legal-tech perspective, but there are so many loud voices out there at the moment.