AI Regulation and Risk for St. Louis Businesses
I recently had a thought-provoking conversation about artificial intelligence regulations with David Nicklaus, the St. Louis Post-Dispatch columnist covering business and tech. The resulting article is here, which also has input from others in the St. Louis AI community like Dr. Stephen Thaler and Marc Bernstein. Being interviewed was a good opportunity to organize my broader thoughts here on how AI regulations stand to affect regional and local businesses.
AI is everywhere in the news cycle these days.
Tools like ChatGPT from OpenAI, which is led by St. Louis native Sam Altman, have captured the public’s imagination and sparked discussions about how these radical new capabilities should be responsibly regulated.
Debates at the level of governments and Big Tech firms will shape this landscape - and everyday businesses on Main Street will need to be thoughtfully prepared for the resulting cascade of implications.
Around the World
The European Union has already become the first to propose a comprehensive set of AI regulations in its “AI Act,” outlining rules to constrain the potential risks posed by AI systems. It is draft language that still needs to be debated and negotiated by the 27 member states before being enacted, but the framework is instructive nonetheless, with four tiers of regulatory concern:
Unacceptable risk — AI systems that are considered a threat to people and should be banned. They include practices such as cognitive-behavioral manipulation, social scoring, and real-time remote biometric identification.
High risk — AI systems that could negatively affect safety or fundamental rights, split into two sub-groups: those used in products covered by existing EU product-safety legislation, and those in specific sensitive areas that must be registered in an EU database.
Generative AI — In its own unique category, these models that create text, images, video, etc., must comply with transparency requirements, including clearly indicating when content is AI-generated and implementing safeguards to prevent the generation of illegal content.
Limited risk — These AI systems should comply with minimal transparency requirements that would allow users to make informed decisions.
China is also drafting its own rules for AI regulation, but for brevity I’ll just provide the link if you want to dig into more of those details.
The bottom line is that world governments are recognizing the growing importance of these issues, including our own government in the US:
United States
In the United States, Senator Chuck Schumer introduced the “SAFE Innovation Framework for AI Policy” on June 21, 2023 (SAFE = Security, Accountability, Foundations, Explainability).
This initiative seeks to create a process for navigating toward a regulatory framework; it does not yet propose the regulations themselves. The primary vehicle is a 20-member commission to explore the issues, which includes: educating themselves, understanding existing capacity to regulate, distributing the regulatory responsibilities across existing agencies as appropriate, and ensuring alignment of potential enforcement actions.
Schumer says: “I will invite the top AI experts to come to Congress and convene a series of first-ever AI Insight Forums, for a new and unique approach to developing AI legislation.”
My understanding is that these AI Insight Forums are essentially breakout groups to deep-dive on various topics, which I see as a welcome change from the less-focused, overly general Congressional hearings of the recent past.
The main issues these Insight Forums plan to explore are similar to the concerns raised in the EU discussions, particularly around privacy and public safety, but also look beyond them to broader economic and social issues.
Voluntary Compliance
Meanwhile, the White House is also pursuing these issues, announcing on July 21, 2023, that it had secured voluntary commitments on several AI safety measures from seven companies in the industry: Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.
These commitments are for actions such as safety and security testing of AI systems before release, investment in cybersecurity and vulnerability reporting, and enhancement of trust through implementation of tracking/watermarking for AI-generated content.
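To give a flavor of the simplest end of that last commitment, here is an illustrative sketch of disclosure-style labeling. Real watermarking of the kind these companies discuss is statistical and embedded in the generation process itself; everything in this snippet, from the function name to the metadata fields, is my own hypothetical.

```python
import datetime
import json

def label_ai_content(text: str, model_name: str) -> dict:
    """Hypothetical: bundle generated text with a machine-readable provenance record."""
    return {
        "content": text,
        "provenance": {
            "generated_by": model_name,  # which model produced this content
            "generated_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "ai_generated": True,  # the disclosure flag itself
        },
    }

print(json.dumps(label_ai_content("Draft press release...", "example-model-v1"), indent=2))
```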
These commitments all sound promising, but they have no legal enforcement mechanism. I suspect the companies agreed to most, if not all, of the included actions because they are perceived as manageable and already in the companies’ roadmaps due to market forces and consumer preferences. Indeed, the commercial marketplace is often the best watchdog for complex issues like AI, and making good on these commitments would help the companies stay on the right side of consumers and investors. It will be interesting to see what happens, however, if the other issues outside of the agreement rise in importance; or if some of these voluntary commitments turn out to be more onerous than anticipated.
Perspectives on the Ground
Overall, I am encouraged that thoughtful people are mobilizing to discuss and debate these issues. For Congress to propose studying the issues and seeking input from experts is a refreshingly humble opening stance. Those involved in the process will do well, however, to keep an open and flexible mind as it unfolds, so that the end result genuinely reflects the process and is not just a sideshow to justify positions held coming in.
On an issue-by-issue basis, it may be that markets will self-regulate… or that government regulation will be required. It may also be that the issues can be addressed entirely by existing regulatory bodies… or that a new entity is needed. There are indeed a number of high-impact, low-probability risks unique to AI that warrant very careful thought and consideration at the policy level.
As I suggested in the St. Louis Post-Dispatch article, I hope that regulation will be conducted within existing regulatory bodies wherever possible. Spinning up another three-letter agency to administer a new layer of regulations comes with considerable cost and friction, felt by the existing regulatory regime, the business community, and ultimately the consumer and taxpayer.
It is also worth noting that AI is a very powerful “horizontal” technology, meaning that it cuts across multiple industry verticals. This will make it tricky to define and divide jurisdiction appropriately. However, this is not the first time a wave of horizontal technology has rippled through Capitol Hill. Lessons, pitfalls, and analogies should readily be drawn from the regulation of prior, similarly cross-cutting issues like data privacy, mobile, cloud, or even the internet itself.
Builders
For the subset of businesses directly building AI tools, products, and services, ill-considered regulatory constraints pose a risk to innovation. For example, one way to regulate AI models may be to add tracking requirements at various points in the technology stack.
Different versions of such tracking could range from light-touch to onerous, potentially adding a layer of cost and administration that would bog down startups and small businesses. This would hand a comparative advantage to the well-capitalized, lawyer-gilded Big Tech players that have already achieved scale with AI models. Such an ecosystem would eventually consolidate the field, leaving fewer companies taking important shots on goal and fewer diverse perspectives thinking about AI.
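To make this concrete, here is a minimal sketch of what a light-touch version of such tracking might look like: a simple audit log wrapped around each model call. Every name and field here is my own illustration; nothing like this is specified in any current proposal.

```python
import hashlib
import json
import time

AUDIT_LOG_PATH = "model_audit.jsonl"  # hypothetical local audit trail

def log_model_call(model_name: str, prompt: str, response: str) -> None:
    """Append one audit record per model invocation."""
    record = {
        "timestamp": time.time(),
        "model": model_name,
        # Store hashes rather than raw text, so the log holds no sensitive data.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }
    with open(AUDIT_LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")
```

Even something this thin implies process, storage, and review obligations, which is exactly the kind of overhead that scales poorly for a five-person startup.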
Adding friction to innovation can also decrease the benefit consumers derive from AI technologies, as well as slow national competitiveness in the space and in related use cases.
Hope
On a hopeful note, it is absolutely possible for regulation to be a highly productive endeavor.
First and foremost, it should protect against real risks. But by establishing standard guard rails for innovation in a given space, regulation also has the corollary effect of decreasing uncertainty, thereby expanding planning visibility and investment appetite.
Regulation can even be creative itself, and a net driver of innovation! Thoughtful regulation can be iterative, adaptive, and scientific. Thoughtful regulation can — and should — include provisions such as: revisiting policies when certain milestones or metrics are attained, incrementally adjusting numeric targets over a phase-in period, supporting a flexible appeals process, etc. There is such a large space of possible solutions that I am confident multiple pathways exist to thread the needle of the myriad considerations.
In fact, Large Language Models (LLMs) are the perfect tool to act as a copilot for policy makers to draw analogies from past regulation, brainstorm new solutions, and identify appropriate sets of targets, incentives, penalties, and strategies.
Users
For most individuals and businesses, the risks and issues surrounding AI will be debated far afield and trickle down to us in a thinly dispersed film. It will be critical for policy makers to address the macro concerns…but most of us won’t be directly affected in our day to day.
In reality, the biggest risk I see for St. Louis knowledge workers and businesses is falling behind on integrating these new AI tools and not benefiting from their new capabilities and use cases.
Employee productivity has been shown to increase by an average of 66% across three case studies (see study and figure below). Your mileage may vary based on the nuances of the task, but gains like this cannot be ignored in a competitive landscape. Every business and employee should be contemplating how to leverage AI capabilities, both internally, to improve productivity, and externally, to enhance products and services for customers.
Let’s walk through some of the common objections I hear from friends and clients...
Objections
Objection: I’m Afraid of the Cost
For several use cases at the businesses I talk with, the ROI from AI usually justifies the investment. If you are worried about getting started, or about the absolute level of cost, let me provide some data points. Most of the services on the market today have generous freemium tiers and/or very low pricing, courtesy of the competitive land grab presently under way for early market share.
I signed up for an account with OpenAI to get API keys, and set myself a limit of $100 per month so I wouldn’t have runaway costs. I’ve put the language models through the wringer with thousands of API calls in various Python scripts and loops, generating documents, proposals, analyses, chatbots, and more… The result? June has been my heaviest month so far, with an overall spend of only $4.14 (see screenshot below).
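For a flavor of that kind of scripted usage, here is a minimal sketch assuming the current openai Python SDK. The model name and prompts are placeholders rather than my actual scripts, and note that the hard monthly spending cap itself is configured in OpenAI’s billing dashboard, not in code.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

topics = ["project proposal", "meeting summary", "customer FAQ"]

for topic in topics:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; use any chat model you have access to
        messages=[{"role": "user", "content": f"Draft a one-page {topic}."}],
        max_tokens=500,  # capping output length keeps per-call cost predictable
    )
    print(f"--- {topic} ---")
    print(response.choices[0].message.content)
    print(f"(used {response.usage.total_tokens} tokens)")
```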
In addition to commercial competition, relentless activity in the open-source space is further democratizing and lowering the cost of incremental components on a daily basis.
Objection: I Want to Protect My Data
While there are already plenty of safe and secure ways to use commercial tools and APIs, the rapid development of open-source LLMs like Llama 2 portends a future where you will easily be able to run your own models and “Intelligence as a Service” within your organization’s firewalls. Tooling and documentation around this use case are still maturing, but I can easily see large enterprises in highly regulated industries like healthcare or financial services choosing to stand up this internal service over the next few years.
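As a rough sketch of what that could look like with today’s open-source tooling, assuming the Hugging Face transformers and accelerate packages and approved access to the gated Llama 2 weights:

```python
from transformers import pipeline

# The model weights download to, and run on, your own hardware; no data
# leaves your network. Requires accepting Meta's license on Hugging Face.
generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",
    device_map="auto",  # spread the model across available GPUs/CPU
)

result = generator(
    "Summarize the key risks in our Q2 claims report:",  # placeholder prompt
    max_new_tokens=200,
)
print(result[0]["generated_text"])
```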
Objection: My Customers Don’t Like Dealing with AIs
Well…it depends.
For quick, transactional use cases, my observation is that the world is already rapidly acclimating to text messages and bots. Accelerator to the floor on automation and AI here.
For more sophisticated use cases, absolutely. You don’t want an AI giving erroneous information to a customer, or to an employee for that matter. You should almost always consider how best to incorporate a Human in the Loop (HITL); the essence of this is creating the right routing, escalation, and review logic, which can itself often involve AI models making predictions. There are well-defined best practices for this based on the given use case, and the core logic can be surprisingly compact (see the sketch below).
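For illustration, here is a tiny sketch of confidence-based routing. The threshold, the Prediction shape, and the queue names are all hypothetical assumptions; in practice you would tune them per use case.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    answer: str
    confidence: float  # model's estimated probability that the answer is correct

REVIEW_THRESHOLD = 0.85  # hypothetical; tune per use case and risk tolerance

def route(prediction: Prediction) -> str:
    """Send confident answers straight out; escalate the rest to a human."""
    if prediction.confidence >= REVIEW_THRESHOLD:
        return "auto_respond"       # quick, transactional: automate it
    return "human_review_queue"     # uncertain or sensitive: a person reviews first

print(route(Prediction("Your order ships Tuesday.", 0.97)))  # -> auto_respond
print(route(Prediction("Your claim is denied.", 0.60)))      # -> human_review_queue
```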
Objection: I Don’t Want to Get Sued
As discussed above, there are already myriad ways to implement AI solutions that are safe, secure, and measured. Broader government regulations are still very much taking shape, but unless your business model straddles some gray area where the marketplace of consumers and investors has not yet demonstrated its preference (hint: there are not many such areas), my opinion is that you are unlikely to be surprised in the near future by some new “gotcha” law.
Now, if you are just starting out, I would recommend going after the low-hanging fruit first. There are plenty of straightforward, low-risk AI solutions in this category, like customer chatbots, data-entry automation, recommender systems, or supply-chain optimization. Some of the more sophisticated use cases that deal with highly sensitive decisions, such as loan approvals or medical diagnoses, can absolutely be implemented with the right guard rails and processes, but they are probably not the easiest ones to cut your teeth on.
What Can You Do Right Now?
First off, take a breath! There are a lot of smart people with the right skills and knowledge who can help you, wherever you are in your AI journey. I know many such people and companies in St. Louis where I can make connections and introductions, so please don’t hesitate to reach out!
Second, brace for change. Feedback loops and new capabilities are coming faster and faster. Each new generation of AI is able to up-level and accelerate the creation of training data to build subsequent models. This means we need to learn how to incorporate continuous change as part of our culture, and manage a diversified portfolio of innovation and experiments.
Third, we have compiled resources to help. Please reach out if you think any of them may be valuable to you.
The rise of #AI and the growing call for its regulation present new opportunities and challenges. Businesses in #StLouis, and elsewhere, must stay informed and be ready to adapt in this new era. Don’t hesitate to reach out to me about any of the topics and resources above.
Thanks, and Happy Innovating!
Dave