Deploy at Your Peril: Why Ignoring the FTC’s Stance on AI Could Sink Your Business

I, probably like you, have been receiving more than my fair share of AI-generated advertisements. It seems everyone is racing to implement the new technology. To a degree, that is great: new technology can bring convenience and drive efficiency, productivity, and innovation. But a company would be wise to make sure it understands its continuing legal obligations, because the FTC has made clear that ignoring those obligations when deploying AI can cost a business dearly.

For some, when a new technology arrives that captures the public imagination, there is an almost reflexive assumption that it must require an entirely new legal framework. Artificial intelligence is no exception. Vendors, investors, and some policymakers have suggested that AI's novelty places it in a regulatory category of its own, one where legacy rules do not quite fit and new ones have not yet been written. The Federal Trade Commission has firmly rejected that premise. Through a series of policy statements, enforcement actions, and public guidance, the agency has made clear that Section 5 of the FTC Act, the foundational prohibition on unfair or deceptive acts or practices, applies to AI just as it applies to any other commercial product or service. This "no AI exemption" position is neither reactionary nor arbitrary; it reflects a principled reading of the statute and a coherent theory of consumer harm that remains as relevant in the age of large language models as it was in the age of patent medicine.

The Architecture of Section 5

Section 5 of the FTC Act declares unlawful “unfair or deceptive acts or practices in or affecting commerce.” The language is deliberately broad. Congress designed the provision to be adaptive, granting the Commission authority to respond to commercial misconduct that legislators could not fully anticipate. Deception, under the Commission’s longstanding framework, occurs when a representation, omission, or practice is likely to mislead a reasonable consumer in a material way. Unfairness, codified in the 1994 amendments to the Act, covers acts that cause or are likely to cause substantial injury to consumers, which are not reasonably avoidable by consumers themselves, and which are not outweighed by countervailing benefits to consumers or competition.

Neither prong of Section 5 requires the offending practice to be technologically primitive. The statute speaks to commercial conduct, not to delivery mechanisms. A company that uses an algorithm to conceal material terms from a borrower is engaged in deception regardless of how sophisticated the underlying model may be. A firm that deploys a recommendation engine to steer vulnerable consumers toward harmful products causes unfair injury regardless of whether the engine runs on a rules-based system or a neural network with hundreds of billions of parameters. The FTC’s insistence that AI triggers no special exemption is, at its core, a refusal to let technical complexity serve as legal camouflage.

The Commission’s Articulated Position

The FTC began signaling its intent to apply traditional consumer protection principles to AI well before the current wave of generative AI products reached consumers. Its 2020 business guidance on using artificial intelligence and algorithms, and its subsequent "Luring Test" guidance, established that AI-generated personas capable of deceiving consumers about their non-human nature raise straightforward deception concerns under existing law. In 2023, as chatbots and AI-generated content proliferated, the Commission reiterated and sharpened that position. Its joint statement with other enforcement agencies warned that AI-enabled deception, whether in the form of fabricated endorsements, misleading capability claims, or synthetic media designed to manipulate, falls squarely within the agencies' existing mandates.

The Commission has been equally direct about the supply side of the AI ecosystem. Developers who make unsubstantiated performance claims about their models—asserting, for instance, that a medical diagnosis tool achieves clinical-grade accuracy without adequate evidence—risk Section 5 liability for deceptive advertising. The “novelty defense,” the implicit argument that regulators should hold back because the technology is new and its risks are still being understood, has found no purchase at the Commission. If anything, the FTC has suggested that novelty heightens rather than diminishes its interest, because consumers are least equipped to evaluate claims about technologies they do not yet understand.

Why the "No Exemption" Stance Is Correct

Critics of the FTC's approach sometimes argue that Section 5 was not designed with AI in mind and that applying it to machine learning systems stretches the statute beyond its intended scope. In my opinion, the argument proves too much. Section 5 was not designed with internet advertising, mobile apps, or algorithmic credit scoring in mind either, yet courts and the Commission have applied it to all of these technologies without serious objection. The deception framework does not ask what technology produced a misleading statement; it asks whether a reasonable consumer was likely to be misled and whether the misrepresentation was material. Those questions translate directly to AI contexts.

A contrary rule would produce perverse incentives. If AI systems enjoyed a de facto exemption from consumer protection law during some indeterminate “development phase,” companies would have every reason to route their most commercially aggressive practices through AI-enabled interfaces. Deceptive subscription traps, manipulative pricing, fabricated testimonials, and discriminatory targeting would all become safer if conducted by an algorithm than by a human agent. The “no exemption” principle prevents that from happening by holding that the legal obligation to deal honestly with consumers does not dissolve when a company outsources customer-facing conduct to a machine.

There is also a structural argument rooted in the purpose of consumer protection law itself. The core insight of Section 5 is that markets function well only when consumers can make informed, voluntary decisions. Deception and unfairness corrupt the informational substrate on which market choices depend. AI systems, precisely because of their scale, personalization capabilities, and persuasive sophistication, have the potential to distort that substrate far more effectively than earlier technologies. Applying existing consumer protection principles to AI is thus not a case of old law chasing new technology; it is a case of enduring principles becoming more, not less, important as technology advances.

Practical Implications for AI Developers and Deployers

The FTC’s position has concrete implications for companies operating in the AI space. Marketing claims about AI products must be substantiated, just as claims about any other product must be. A company asserting that its AI-powered hiring tool eliminates bias, or that its AI financial advisor outperforms human advisors, bears the burden of having a reasonable basis for those claims before they are made. The burden does not shift because the product is technically complex or because the industry is young.

Disclosures must be clear and conspicuous. If an AI system makes personalized recommendations based on data that consumers would find surprising or objectionable, that data use must be disclosed in a manner consumers can actually find and understand—not buried in a terms-of-service document that no reasonable person reads. If a consumer is interacting with an AI-generated persona rather than a human being, and that distinction is material to the consumer’s decision-making, concealing the AI’s nature is deceptive.

Design choices are not insulated from scrutiny either. A user interface engineered to make cancellation difficult, a chatbot scripted to deflect complaints rather than resolve them, or an AI recommendation system optimized to maximize engagement at the expense of consumer well-being can all give rise to claims of unfairness. The Commission has made it clear that intent does not determine liability; the question is the practice’s effect on consumers, not the subjective motivations of those who designed it.

Conclusion

The FTC’s “no AI exemption” policy is, at its simplest, a statement that consumer protection law means what it says. Section 5 reaches unfair or deceptive commercial conduct. AI-enabled conduct is commercial conduct. The logical conclusion follows without any need to break new legal ground. What the Commission has done is refuse to treat technological novelty as a reason for regulatory forbearance when consumers are at risk of real harm. That refusal reflects a serious and defensible reading of the agency’s statutory mandate, and it sends a message to the AI industry that is both necessary and overdue: building something unprecedented does not exempt you from the obligation to build it honestly.

I love AI, and it will have a key place in the growing world economy. Businesses just need to make sure they do not use the technology to take unfair advantage of or abuse consumers, because the FTC has warned that it is watching and has already brought enforcement actions.

If you want to read the FTC's press release addressing its enforcement actions on AI, here is the link: https://www.ftc.gov/news-events/news/press-releases/2024/09/ftc-announces-crackdown-deceptive-ai-claims-schemes

If you need legal counsel to review your data privacy and AI governance policies and practices, consider the attorneys at Troutman Amin LLP.
