Greetings CIPAWorld!
Happy New Year! If you’ve been sleeping on AI companion chatbot regulation, it’s time to wake up. As of January 1, 2026, California’s Senate Bill 243 is in effect, and unlike prior bot disclosure laws, it includes a private right of action. Let’s break it down!
By way of background, this isn’t California’s first foray into bot regulation. Back in 2018, SB 1001 was signed into law, effective July 1, 2019. That law required any bot used to communicate with a person in California to include a clear and conspicuous disclosure that the interaction is being conducted by a bot. The goal was simple: to prevent bots from misleading individuals about their artificial identity. SB 1001 applies to bots used with the intent to deceive in order to incentivize the purchase or sale of goods or services or to influence a vote in an election. Liability fell on those who used the bots, not their creators, and those who provided the required disclosure were exempt.
SB 243, by contrast, is an entirely different animal, which is why it’s essential to put it on your radar. While the 2018 law broadly regulated automated accounts used for sales or political influence, SB 243 specifically targets “companion chatbots” designed to form emotional bonds with users.
Where SB 1001 required a simple disclosure of artificial identity to prevent fraud, SB 243 mandates active safety protocols, including intervening during suicidal ideation and restricting content for minors. And while SB 1001 captured customer service bots within its scope, SB 243 explicitly exempts commercial and technical bots, focusing squarely on AI that simulates human intimacy. SB 243 also requires operators to submit annual safety reports to the Office of Suicide Prevention starting July 1, 2027.
Perhaps most significantly, while SB 1001 relies on state enforcement, SB 243 establishes a private right of action allowing individuals to sue for statutory damages of $1,000 per violation, plus attorney’s fees and costs. Read that again. A private right of action. That’s a game-changer.
If you’ve been actively following CIPAWorld and TCPAWorld, you know that California has already been aggressive in applying its privacy laws to new technologies, particularly around the concept of consent. No case illustrates that better than Javier v. Assurance IQ, LLC, No. 21-16351, 2022 WL 1744107 (9th Cir. 2022). In January 2019, Javier visited an insurance-quoting website operated by Assurance IQ. To get a quote, he answered a series of questions about his demographic information and medical history. Unbeknownst to Javier, the website was running ActiveProspect’s TrustedForm software, which captured in real time every second of his interaction with the site and created a video recording of the session. The critical issue in that matter was that Javier wasn’t prompted to agree to the Privacy Policy until after the recording had already happened, when he clicked the “View My Quote” button at the end of the process.
Javier sued under Section 631(a) of CIPA, California’s wiretapping statute. The district court initially dismissed the case, finding that retroactive consent was sufficient. The Ninth Circuit reversed. In an unpublished memorandum disposition, the panel held that Section 631(a) requires the prior consent of all parties to a communication. Retroactive consent doesn’t cut it. The Court noted that Section 631(a) applies to Internet communications and makes liable anyone who reads, or attempts to read, or to learn the contents of a communication without the consent of all parties to the communication.

On remand, the district court addressed three remaining defenses: (1) whether Javier gave implied consent; (2) whether ActiveProspect was a “third party” under CIPA or merely an extension of the website operator; and (3) whether the statute of limitations barred the claim. In January 2023, the Court rejected the implied consent argument, finding no evidence that Javier consented to ActiveProspect’s collection of his information, only that he may have impliedly consented to Assurance’s collection. The Court also rejected the extension argument, finding that Javier plausibly alleged ActiveProspect was a third-party eavesdropper rather than a “tape recorder.” However, the Court ultimately dismissed the case on statute-of-limitations grounds. CIPA has a one-year statute of limitations, and Javier, by his own admission, was aware that Assurance was collecting his information in January 2019, giving him constructive notice that third parties might be involved per the Privacy Policy. He didn’t file suit until April 2020. As a result, the Court dismissed with prejudice in June 2023. See Javier v. Assurance IQ, LLC, No. 20-CV-02860-CRB, 2023 WL 3933070 (N.D. Cal. June 9, 2023).
So why does this matter for SB 243? California is building a comprehensive framework around digital consent and user protection. Javier established that CIPA’s wiretapping statute applies to web session recording technology and requires prior consent. Now, SB 243 takes that principle and applies it specifically to AI companion chatbots, requiring not just disclosure that you’re talking to a bot, but active safety protocols when users express self-harm, and restrictions on content delivered to minors. The through-line is consent, transparency, and protection from technologies that capture intimate user interactions without adequate safeguards.
On October 13, 2025, Governor Newsom signed SB 243 into law, making California the first state to impose specific safety requirements on AI companion chatbots with a private right of action attached. The law is now live. SB 243 adds Chapter 22.6 to the California Business and Professions Code (commencing with Section 22601). It defines a “companion chatbot” as an AI system with a natural language interface that provides adaptive, human-like responses and is capable of meeting a user’s social needs, including by exhibiting anthropomorphic features and sustaining a relationship across multiple interactions. Cal. Bus. & Prof. Code § 22601(b)(1). The law carves out customer service bots, video game bots with limited dialogue that can’t discuss mental health or self-harm, and stand-alone voice-activated virtual assistants that don’t sustain relationships. Id. § 22601(b)(2).
So here’s what operators are now required to do. If a reasonable person would be misled into believing they’re interacting with a human, operators must issue a clear and conspicuous notification that the chatbot is artificially generated and not human. Id. § 22602(a). Just as important, operators must maintain a protocol for preventing the production of suicidal ideation, suicide, or self-harm content to users. This includes providing notifications referring users to crisis service providers, such as suicide hotlines or crisis text lines, when users express suicidal ideation, suicide, or self-harm.
Operators must also publish details of this protocol on their website. Id. § 22602(b). For users the operator knows are minors, there are enhanced protections in place. Id. § 22602(c). Operators must disclose that the user is interacting with AI. They must provide default notifications at least every three hours, reminding the user to take a break and that the chatbot is not human. And they must institute reasonable measures to prevent the chatbot from producing sexually explicit visual material or directly stating that the minor should engage in sexually explicit conduct. Operators must also disclose that companion chatbots may not be suitable for some minors. Id. § 22604.
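For operators wondering how these duties translate into engineering requirements, here is a minimal, hypothetical sketch in Python of the kind of session logic Section 22602 contemplates. Every function and variable name is invented for illustration; this is not the statute’s required implementation, and a real crisis-detection protocol would need evidence-based methods rather than the toy keyword check shown here.

```python
# Illustrative sketch only: hypothetical helper names, not any vendor's API.
# Maps the SB 243 duties discussed above (AI disclosure, crisis referral,
# and three-hour break reminders for known minors) onto simple session logic.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

CRISIS_REFERRAL = (
    "If you are thinking about suicide or self-harm, please reach out to a "
    "crisis service provider, such as the 988 Suicide & Crisis Lifeline."
)

@dataclass
class Session:
    user_is_minor: bool                # operator "knows" the user is a minor
    could_be_mistaken_for_human: bool  # reasonable-person standard, § 22602(a)
    last_break_reminder: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def notices_for_turn(session: Session, user_message: str) -> list[str]:
    """Return the notifications this turn should carry under an SB 243-style policy."""
    notices = []

    # § 22602(a): clear and conspicuous AI disclosure where a reasonable person
    # could be misled into thinking they are talking to a human.
    if session.could_be_mistaken_for_human:
        notices.append("Reminder: you are chatting with an AI, not a human.")

    # § 22602(b): crisis referral when the user expresses suicidal ideation or
    # self-harm. A real protocol would use a vetted, evidence-based classifier,
    # not this toy keyword check.
    if any(term in user_message.lower() for term in ("suicide", "self-harm", "kill myself")):
        notices.append(CRISIS_REFERRAL)

    # § 22602(c): for known minors, a break reminder at least every three hours.
    now = datetime.now(timezone.utc)
    if session.user_is_minor and now - session.last_break_reminder >= timedelta(hours=3):
        notices.append(
            "You've been chatting for a while. This is an AI, not a person. "
            "Consider taking a break."
        )
        session.last_break_reminder = now

    return notices
```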
Also keep in mind that beginning July 1, 2027, operators must annually report to the California Office of Suicide Prevention the number of crisis service provider referral notifications issued, their protocols for detecting and responding to instances of suicidal ideation, and their protocols for prohibiting chatbot responses about suicidal ideation. Id. § 22603. Operators must use evidence-based methods for measuring suicidal ideation. Id. § 22603(d).
But here is the key takeaway. Under Section 22605, a person who suffers injury in fact as a result of a violation may bring a civil action to recover injunctive relief, damages in an amount equal to the greater of actual damages or $1,000 per violation, and reasonable attorney’s fees and costs. Section 22606 makes clear that these duties and remedies are cumulative and don’t relieve operators of obligations under any other law. For the plaintiffs’ bar, this is significant. The question that will be litigated extensively is what constitutes an injury in fact. But the statutory damages floor of $1,000 per violation plus attorney’s fees creates real exposure for non-compliant operators!
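To put that exposure in rough numbers, here is a back-of-the-envelope sketch in Python. It assumes one plausible reading of Section 22605, in which statutory damages are tallied at $1,000 per violation and compared against actual damages in the aggregate; how courts will actually count “violations” is exactly the kind of question that will be litigated, and the figures below are hypothetical.

```python
# Back-of-the-envelope exposure under one reading of Section 22605:
# the greater of actual damages or $1,000 per violation, plus fees and costs.
def sb243_exposure(violations: int, actual_damages: float, fees_and_costs: float) -> float:
    statutory_damages = 1_000 * violations
    return max(actual_damages, statutory_damages) + fees_and_costs

# Hypothetical: 500 users alleging one violation each, nominal actual damages,
# and $150,000 in attorney's fees and costs.
print(sb243_exposure(violations=500, actual_damages=0.0, fees_and_costs=150_000.0))  # 650000.0
```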
It’s important to remember that this legislation didn’t come out of nowhere. SB 243 was introduced in the wake of several tragic incidents, including the case of a 14-year-old who ended his life after forming an emotional and romantic relationship with an AI companion chatbot. His mother appeared alongside Senator Padilla at a press conference calling for the bill’s passage. According to reports, just seconds before he ended his life, the chatbot encouraged him to “come home.” The bill passed with overwhelming bipartisan support, 33-3 in the Senate and 59-1 in the Assembly. As Senator Padilla put it: “This technology can be a powerful educational and research tool, but left to their own devices the Tech Industry is incentivized to capture young people’s attention and hold it at the expense of their real world relationships.”
These incidents involving AI chatbots are tragic. The Deserve To Win Podcast holiday edition is dropping soon, where the Czar interviews Jay Edelson, so be on the lookout. Edelson is taking on OpenAI in two major cases. The first, filed in August 2025, is on behalf of the parents of a 16-year-old, alleging that ChatGPT coached the California teenager in planning and taking his own life. The second, filed in December 2025, is the first wrongful death case to tie a chatbot to homicide rather than suicide. In that case, Edelson represents the estate of a mother whose son allegedly beat and strangled her after ChatGPT fueled his paranoid delusions. OpenAI is now facing several lawsuits claiming ChatGPT drove people to suicide or harmful delusions. It’s a conversation you won’t want to miss on this evolving, delicate topic. These are the kinds of cases that will shape future precedent and be studied by practitioners and law students alike.
California isn’t alone in this space. In fact, 2025 was the year states got serious about regulating chatbots. New York’s Artificial Intelligence Companion Models law (N.Y. Gen. Bus. Law Article 47, §§ 1700–1704) has been in effect since November 5, 2025. Governor Hochul penned letters to AI companion companies notifying them that the safeguard requirements are now in effect. The New York law defines an “AI companion” as a system that uses AI, generative AI, and/or emotional recognition algorithms, designed to simulate a sustained human or human-like relationship with a user. N.Y. Gen. Bus. Law § 1700(4)(a). That includes systems that retain information on prior interactions to personalize engagement, ask unprompted emotion-based questions, and sustain ongoing dialogue about personal matters. Human relationships covered include intimate, romantic, or platonic interactions. Id. § 1700(4)(b). Under Section 1701, it’s unlawful for any operator to provide an AI companion unless it contains protocols for addressing possible suicidal ideation or self-harm expressed by a user, possible physical harm to others expressed by a user, and possible financial harm to others expressed by a user. These protocols must include notifications that refer users to crisis service providers, such as suicide hotlines or crisis text lines.
New York goes broader than California here by including physical and financial harm to others, not just self-harm. Section 1702 requires operators to notify users, at the beginning of any AI companion interaction and at least every three hours during ongoing interactions, that the AI companion is not human and is unable to feel human emotion. Enforcement under Section 1703 is handled by the New York Attorney General, who may issue cease-and-desist letters. If an operator fails to cure the violation, the AG can seek injunctive relief and civil penalties of up to $1,000 per violation. But here’s the key difference: there is no private right of action under the New York law.
Beyond California and New York, several other states have entered the fray on chatbot regulation, with varying approaches. Maine enacted its Chatbot Disclosure Act, which requires businesses to clearly and conspicuously notify consumers when they’re engaging with an AI chatbot rather than a human, in circumstances where a reasonable consumer might not otherwise realize they’re dealing with AI.
Utah took a different path with HB 452, which went into effect on May 7, 2025. Rather than targeting companion chatbots broadly, Utah zeroed in on “mental health chatbots” using generative AI to engage in conversations similar to those you’d have with a licensed mental health therapist. Under the Utah law, operators must clearly and conspicuously disclose that users are interacting with AI before they can access the chatbot, after seven days without use, and whenever the user asks. The law also prohibits advertising via chatbot unless it is clearly disclosed and bans the sale or sharing of individually identifiable health information or user input. Violations can result in civil penalties up to $2,500 per violation. No private right of action, but the Utah Division of Consumer Protection can bring enforcement actions.
Illinois went the furthest. On August 1, 2025, Governor Pritzker signed the Wellness and Oversight for Psychological Resources Act, which outright bans AI from providing therapy or psychotherapy services. The law prohibits AI systems, including chatbots, from making independent therapeutic decisions, directly interacting with clients in any form of therapeutic communication, generating treatment plans without licensed professional review, or detecting emotions or mental states. Licensed professionals can still use AI for administrative support, such as scheduling and billing, but AI cannot engage in therapeutic communication with clients. Violations carry civil penalties up to $10,000 per violation.
Colorado’s Artificial Intelligence Act, one of the more comprehensive state AI laws, was originally slated to take effect on February 1, 2026, but its implementation has been delayed to June 30, 2026. It requires consumer-facing developers and deployers to disclose when consumers interact with an AI-powered bot, unless it would be “obvious” to a reasonable person. And in Texas, Attorney General Ken Paxton has been sending civil investigative demands to AI companies, accusing them of marketing their chatbots as mental health aids to vulnerable populations.
So the regulatory landscape for AI companion chatbots is now live across multiple states, with different enforcement mechanisms and scopes. For operators, that means reviewing whether your AI systems fall within these definitions, implementing crisis intervention protocols where required, and building in disclosure and notification mechanisms.
For operators serving California users in particular, it’s a wake-up call to prepare for private litigation. California’s evolution from SB 1001 to SB 243 tells the story. We went from basic bot-disclosure requirements aimed at preventing fraud and election interference to comprehensive safety protocols targeting emotional AI, with a private right of action. The stakes have changed, and the enforcement mechanisms will only keep evolving.
As always,
Keep it legal, keep it smart, and stay ahead of the game.
Talk soon!
