It Takes a Village to Raise an AI: Regulation Alone Won’t Tame the Beast, But It Would be a Good Start
Original mosaic design created by Peter Maxwell Barlow

Thought Experiment No. 1: It’s the year 2030. The earth passes through a radioactive cloud rendering all humans sterile. The only way to avoid extinction is by creating a new generation of humans. But who gets to choose the genetic material that will be used to manufacture these new humans? Would you be okay if the choices were made primarily by government agencies, mega-cap tech companies, venture capital firms, private equity investors and a handful of academics at elite universities?

My guess is that you wouldn’t be happy with that scenario. And yet we’re remarkably submissive about allowing AI developers and their financial backers to do whatever they want – even as it becomes increasingly clear that nobody really knows what kind of harm AI is capable of doing.

Who speaks for us at this critical moment? Who looks out for our safety and well-being as AI morphs from a science experiment into a force of nature?

We have an alphabet soup of regulatory agencies monitoring and enforcing safety standards for planes, trains, boats, automobiles, toasters, medical devices, microwave ovens and nuclear power plants. Why are we so nonchalant about AI?

Part of our collective nonchalance is undoubtedly rooted in a sense of helplessness. And there’s reason for feeling helpless in the face of the gathering AI storm. It’s all well and good to talk about regulations, but it’s hard to regulate a phenomenon that we don’t fully understand.

“At this point, I don’t think there’s enough understanding of the nuance and complexity of generative AI technology to create robust regulation,” says Caryn Lusinchi, an AI governance and auditing consultant. “For example, EU AI Act Article 13 (1) says, ‘AI systems shall be designed and developed to ensure their operation is sufficiently transparent to enable users to interpret a system's output,’ yet AI like ChatGPT and Midjourney are entirely based on a neural network architecture design, which means they have high performance yet very poor interpretability.”

Lusinchi makes a fair point. Yet our inability to fully grasp the mechanics of AI shouldn’t stop us from imagining what might go wrong. The fact that we don’t 100-percent understand how nuclear energy works didn’t stop us from creating the Nuclear Regulatory Commission. Aeronautical engineers still argue about what enables planes to fly, but that didn’t stop us from regulating aviation. We’ve never actually “seen” an electron, but we have safety standards for electric heaters. Why does AI merit special treatment?

It's not like there’s a shortage of recommended rules and guidelines. In January 2023, the National Institute of Standards and Technology (NIST) published the AI Risk Management Framework, a comprehensive set of guidelines, recommendations and advice that could easily form the basis of a formal rulebook for AI and ML development. The International Organization for Standardization (ISO) has published a set of recommended standards for AI development, and the EU is moving closer to adopting formal rules “to ensure that AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory, and environmentally friendly” and that would “strictly” prohibit “AI systems with an unacceptable level of risk to people’s safety,” according to the European Parliament.

There are also common-sense “seat-of-the-pants” rules and practices that could be adopted without creating a blizzard of new laws from scratch. “Think before you code,” says Patrick Hall, a coauthor of Machine Learning for High-Risk Applications: Approaches to Responsible AI. “Make sure to allocate time and resources for risk management. Do not connect a bunch of error-prone systems. Think about whether you're doing experiments on humans and consider the repercussions of those experiments.”

Even if you’re not programming high-risk AI or ML applications, the book is worth reading for its refreshingly human-centered perspective. Here’s a snippet that should give pause to any reasonable person: “Much of AI and ML is still evolving sociotechnical science, and not yet ready to be productized using only software engineering techniques … Generally speaking, we as data scientists seem to have forgotten that we need to apply the scientific method carefully … because we are often conducting implicit experiments.”

Hall and his coauthors, James Curtis and Parul Pandey, wrote their book before the unexpectedly rapid uptake of ChatGPT rattled the world and introduced the term “generative AI” to non-technical audiences. Machine Learning for High-Risk Applications is essentially a manual for doing AI the right way – but it left me wondering how organizations that prize speed above everything else would incentivize their software engineers to slow down and apply scientific methods to AI projects.

At minimum, Lusinchi suggests, there should be some form of Hippocratic Oath for AI developers that clearly states: Do no harm. Way back in 2018, Oren Etzioni, founding CEO of the Allen Institute for AI, drafted exactly such an oath. It’s a bit wordier than what Lusinchi is suggesting, but it’s a good start.

Ethics vs. Efficiency: A False Dichotomy?

The problem of balancing speed and ethical behavior in software development isn’t new. But that doesn’t mean it has to be the status quo; there’s no law of nature stating that working fast and writing good code are mutually exclusive. The relatively new field of application security operations (AppSecOps) addresses the issue by including security at the beginning of the software development process, rather than waiting for a potentially harmful vulnerability to emerge after the product has been released. Is it too much to ask that safety, security, privacy and a reasonable standard of fairness should be coded into every AI solution from Day One?

The software bill of materials (SBOM) is another good idea from the cyber security world that could be adopted by AI developers. Essentially, the SBOM serves the same purpose as the list of ingredients on a can of soup or the names of the grapes used in a bottle of wine. Adopting the SBOM concept for AI solutions would greatly increase their transparency, security and long-term effectiveness, essentially by making it more difficult to use code or training data from unreliable or inappropriate sources. If an Agile methodology is used for development, it’s important to include controls for safety, security and privacy in each iteration of the process.
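
To make the idea concrete, here is a rough sketch of what an “AI bill of materials” might look like if it were expressed as a simple Python data structure. The field names (model_name, base_model, training_data, risk_notes) are illustrative assumptions rather than any formal standard; the intent simply mirrors an SBOM, listing every ingredient so it can be audited.

```python
import json

# A hypothetical "AI bill of materials" -- analogous to an SBOM, listing the
# "ingredients" of a model so auditors can trace where the code and training
# data came from. Field names are illustrative, not part of a formal standard.
ai_bom = {
    "model_name": "example-classifier",
    "version": "1.2.0",
    "base_model": {"name": "open-source-transformer", "license": "Apache-2.0"},
    "training_data": [
        {"source": "internal-support-tickets", "license": "proprietary", "pii_reviewed": True},
        {"source": "public-web-crawl-2022", "license": "mixed", "pii_reviewed": False},
    ],
    "dependencies": [
        {"package": "numpy", "version": "1.26.4"},
        {"package": "scikit-learn", "version": "1.4.2"},
    ],
    "risk_notes": "Not evaluated for use in medical or legal decision-making.",
}

if __name__ == "__main__":
    # Emit the manifest so it can be shipped alongside the model artifact.
    print(json.dumps(ai_bom, indent=2))
```

In practice, a manifest like this would presumably be generated automatically by the build pipeline at each iteration and published with the model, rather than written by hand.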

A non-technical way of adding transparency to the process would be simply using the power of government regulatory agencies to hold public hearings on AI. It’s easy to imagine agencies such as the Food and Drug Administration (FDA), Federal Trade Commission (FTC), Federal Communications Commission (FCC), Securities and Exchange Commission (SEC), Department of Labor (DOL), Federal Aviation Administration (FAA) and Federal Reserve System, aka “the Fed,” weighing in on the debate over AI development.

Closer scrutiny from a host of existing government agencies and bureaus wouldn’t stop AI development in its tracks, but it might slow it down by forcing AI developers to think harder about the downstream impact of their products. Government entities don’t merely enforce regulations, they also create transparency and encourage open discussions – which is more than the tech companies are doing.

Top Down and Bottom Up

But the bureaucracy can only do so much. AI needs to become a community effort. This broader community would include social scientists, educators, historians, linguists, cultural anthropologists, physicians, psychotherapists, artists, lawyers and – dare I say it – politicians. The community would have a shared sense of purpose and common standards of behavior. It would look more like the crew of a Federation starship and less like the happy hour crowd at a bar in Silicon Valley. The community would be diverse and polyglot, because there are a lot of people in the world who want to use AI and who don’t speak English.

The linguistic component isn’t trivial. It costs money to develop AIs that work in multiple languages; translating an AI solution from English into Hindi or Arabic can be a major undertaking. Does the AI industry sincerely want to limit its products to Anglophones? “There definitely needs to be more seats at the table for non-English speakers,” says Lusinchi.

And speaking of linguistics, has anyone else noticed that the sudden popularity of ChatGPT has shifted the AI conversation from mathematics to language? Generative AI – the form of AI that’s all the rage – isn’t exactly a science. It’s more of an elaborate parlor trick based on a pre-trained AI’s ability to pick the next most likely word or sentence. As a feat of technology, it’s amazing. But it’s not science. If it were science, it would be interpretable, reproducible and refutable. The generative AI products on today’s market are black boxes, so there’s no way to know what’s going on inside them or what kinds of data were used to train them. The lack of transparency – some might call it secrecy – makes it hard to think of generative AI as a science.
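
For readers who want to see that “next most likely word” mechanic in miniature, here is a toy Python sketch of next-word prediction. The probability table is invented purely for illustration; a real generative model learns its probabilities from enormous training corpora and billions of parameters, and it typically samples with some randomness rather than always taking the top choice.

```python
# Toy illustration of next-word prediction. For each context word, a made-up
# probability distribution over possible next words stands in for what a real
# model would learn from training data.
NEXT_WORD_PROBS = {
    "the": {"model": 0.5, "regulator": 0.3, "market": 0.2},
    "model": {"predicts": 0.7, "fails": 0.3},
    "predicts": {"words": 0.6, "outcomes": 0.4},
}

def generate(start: str, max_words: int = 4) -> str:
    """Repeatedly append the most likely next word until no continuation is known."""
    words = [start]
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:
            break  # the toy table has no continuation for this word
        # Real systems usually sample with a "temperature"; here we simply
        # take the single most probable next word.
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate("the"))  # prints: the model predicts words
```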

And if it’s not a science, why are we treating it with kid gloves? With each passing day, generative AI looks less like a miracle of modern science and more like a consumer product. In our society, we expect consumer products to meet minimum standards of safety and reliability. When they don’t meet those standards, we begin talking about negligence and liability. We identify the responsible parties, we hold them accountable and, if necessary, we file lawsuits. That’s the way the system works.

Setting an Example for Future Generations

For the past couple of years, AI has been hiding behind a curtain of misguided notions and lazy assumptions. Sure, AI began as a science. But now it’s a product. The companies racing to develop ever more powerful forms of AI aren’t doing it for the advancement of humanity – they’re chasing the almighty dollar. Not that there’s anything wrong with chasing a few dollars – it’s the heart and soul of capitalism. Which brings us to Thought Experiment No. 2:

Imagine if the National Hockey League decided that faster, bloodier games would bring more fans into hockey arenas. The easiest way to increase the sport’s brutality would be to strip the regulations to bare minimums and fire most of the referees. As a hockey fan, my hunch is that a strategy like that might work for a season or two before fans and players grew tired of the violence and the league was forced to reinstate its thick and meticulously detailed rulebook.

Capitalism, like ice hockey, works better when there are rules to follow and officials to enforce them. Pretty soon, we’ll apply that logic to AI and begin regulating it. Before we begin writing the AI rulebook, however, let’s make sure that all of the stakeholders – which means all of us – have a seat at the table and a voice in the rulemaking process. Let’s democratize the process of regulating AI and set a good example that future generations can follow.

Author’s notes: The Washington Post has reported that two lawmakers, Reps. Ted Lieu (D-Calif.) and Ken Buck (R-Colo.), have proposed a “blue-ribbon commission” that would help develop strategies for governing AI development. Also, the European Commission has adopted its adequacy decision for the EU-U.S. Data Privacy Framework. Thanks to Dr. Sina Wulfmeyer for sharing the news. Here's a link to the press release:

https://ec.europa.eu/commission/presscorner/detail/en/ip_23_3721

On July 21, 2023, the New York Times reported that seven AI companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — have agreed to "voluntary safeguards." The companies "formally announced their commitment to new standards in the areas of safety, security and trust at a meeting with President Biden at the White House," according to the Times.

Hopefully, this is the beginning of a good trend. Stay tuned and keep your fingers crossed!

#AI #ML #appsec #ethicalai #responsibleai #generativeai

Add Philosophers to the group - Applied Ethicists can be contributors to the community too.

Dolly B

PhD candidate in Organizational Leadership | Visionary Leader | Legal & Multicultural Expert | Public Speaker | Debut Author of "The Dolly Effect" (Coming Soon) | Founder & CEO at Lexentrix Solution

1y

This is a very thoughtful article. Thank you, Mike, for this awareness and these alarming assessments. Keep doing your great work and sharing your perceptions and thoughts, as it brings a knowledge transfer for people like me to understand more about AI from a different standpoint.

Joe Apfelbaum

CEO, evyAI - AI LinkedIn Trainer, Business Development Training B2B Marketing via Ajax Union // Networking Connector, Author, Speaker, Entrepreneur, AI Expert, Single Father

1y

Mike Barlow I completely agree with the importance of building a diverse and inclusive AI community. By involving professionals from various fields like social sciences, education, and law, we can ensure a more holistic approach to AI development. This collaboration will not only lead to more responsible and ethical AI practices but also foster innovation and creativity. It's encouraging to see the recognition of the need for a multidisciplinary approach in shaping the future of AI.

Julie Alterio

Creative, resourceful communicator with a passion for stories that inform, inspire, and connect.

1y

Thanks for sharing your latest exploration into how emerging technology will affect us in deeper ways than we ponder if all we're doing is scanning the headlines and social media blurbs. It emphasized for me that without true engagement from a wider circle (and who isn't a stakeholder, really?), the result will almost certainly be a loss of privacy with the potential of much deeper harm (even without "Terminator"-level catastrophizing).

Dr. Marcia Layton Turner

Bestselling Business Book Ghostwriter | Thought leadership, entrepreneurship, corporate histories

1y

Thanks for ending on a hopeful note, Mike Barlow!
