FTC: The new AI Watchdog?

The United States Federal Trade Commission's ("FTC") primary mission is to protect the public from unfair, deceptive, and anti-competitive business practices. If the agency believes you violated the FTC Act, it may bring an official charge asserting how it thinks you engaged in a prohibited practice.

While complaints can be challenged and appealed, overwhelmingly, they are settled through a "Consent Decree," or settlement order detailing an agreement between the FTC and the charged business. Historically, punishments agreed to in consent decrees included injunctive actions and restitution payments. However, in a series of rattling settlement orders in 2019, the FTC deployed a new punishment for organizations deemed to have unfairly used data to train machine learning models. Colloquially referred to as "algorithmic disgorgement," the FTC raised the stakes for businesses that invested in training large AI models. Algorithmic disgorgement is scary because it forces companies to delete machine learning models entirely, not just the data used in training them.

Given that the FTC has signaled its intent to continue using algorithmic disgorgement, and the catastrophic consequences of being compelled to agree to it, it's prudent to try to avoid it.

We could fill an entire book with the FTC's signals of its intention to turn discerning eyes toward enforcing what it believes are unfair and deceptive creations or uses of algorithms. To its credit, the FTC isn't obfuscating its desire to become the czar of AI regulation. Samuel Levine, director of the FTC's Bureau of Consumer Protection, bluntly stated, "The FTC welcomes innovation, but being innovative is not a license to be reckless. We are prepared to use all our tools, including enforcement, to challenge harmful practices in [AI]." Additionally, Commissioner Alvaro Bedoya of the FTC believes AI is already regulated in the U.S. by unfair and deceptive trade practice laws and that companies that make, sell, or use AI will be accountable for violations:

We have frequently brought actions against companies for the failure to take reasonable measures to prevent reasonably foreseeable risks. And the commission has historically not responded well to the idea that a company is not responsible for their product because that product is a black box that was unintelligible or difficult to test[.]

Bedoya further urged the business representatives in attendance at the conference where he made these remarks that if companies are developing or deploying AI for important eligibility decisions, they should "…closely consider the ability to explain your product and predict how the risks it will generate may be critical to your ability to comply with the law."

Related to the idea of saying what you do and doing what you say, in back-to-back months (February and March 2023), the FTC released two AI-themed Business Blogs in which it calls upon businesses to immediately address the "fake AI" and "AI fake" problems. These two sides of the same aggressively regulated coin are worth paying attention to. The "fake AI" problem concerns marketing false or deceptive claims about using AI or about your AI's capabilities, whereas "AI fake" deals with AI operating "behind the screen" to create deception. Some examples of "AI fake" are tools used for nefarious cybersecurity attacks (such as the language used for spear-phishing attacks), fake testimonials, and artificial media designed to trick consumers into believing specific content is authentic. In addition, the FTC clarifies that the AI fake problem implicates both the creators of AI and those who offer (or use) it, and holds each responsible for "reasonably foreseeable" misuses.

The aggressive "talk" from the FTC is striking and worth tracking to see where its regulatory spotlight lands. For example, returning to the fake AI problem, the FTC said, "Marketers should know that — for FTC enforcement purposes — false or unsubstantiated claims about a product's efficacy are our bread and butter."

While it may be their bread and butter, our objective is to help keep your generative AI out of Cafe FTC. Clearly, the FTC has teeth and is looking to satiate its hunger for those it believes are unfairly and deceptively utilizing AI. In fact, just last week, the Washington Post reported that the FTC opened an investigation into OpenAI's ChatGPT regarding potential data and security violations. This will certainly be worth following. If you rely upon a risk-based approach to decision-making, hopefully you're convinced to take some action — but what? In our recently published book, Generative Artificial Intelligence: More Than You Asked For, co-author Rich Heimann and I propose best practices for using generative technologies responsibly. Additionally, the FTC has provided some best practices that are worth following. Summaries of some of those practices are listed below.

Action 1: Take an inventory and review your internal and public policies regarding personal data usage, privacy, and machine learning, and ensure you can evidence compliance with your stated commitments.


Action 2: Ask yourself the Jurassic Park question: "Just because you can build a generative model, should you?"


Action 3: Be transparent and honest about using generative technology and do not mislead consumers.


Action 4: Take extra precautions to protect vulnerable classes, such as children.


Action 5: Do not market the use of AI, or assert that you use AI, if you cannot support the veracity of your claims.


Action 6: The FTC has made clear that it will not accept the "black box" defense for an inability to explain how your solution makes decisions that impact consumers. Therefore, document how your solutions work and make decisions.


Action 7: Have an outside auditor ensure your model has safeguards to prevent biases, discrimination, and inequalities.


Action 8: Have a system in place to regularly monitor your model to ensure it's working as intended, isn't violating any of the other action items, and is secure; that a human is in the loop; and that users have a way to communicate any concerns or inquiries to a person.


Action 9: Be transparent about the data your model was trained on, how your model works, how well it works, and what you do with consumer information.


Action 10: Honor the Golden Rule: do more good than harm. Advocating for the Golden Rule isn't too idealistic, as the FTC considers an act that does more harm than good "unfair," so keep this top of mind when designing, deploying, and operating your AI.

