
California Leads with First-in-Nation AI Regulations on Chatbot Safety

California has become the first state in the nation to enact safety guardrails for AI chatbot companies. On October 13, 2025, Governor Gavin Newsom signed into law Senate Bill 243 (SB 243), which focuses on “companion chatbots”: AI systems that hold human-like conversations and can build strong emotional relationships with users over repeated interactions.

The law marks a pivotal step toward responsible AI regulation: it prioritizes user safety, and the protection of minors in particular, while deliberately avoiding measures that would unduly hinder innovation. On the same day, Governor Newsom vetoed Assembly Bill 1064, a far more restrictive proposal that would have cut off most minors’ access to AI chatbots. Together, the two decisions reflect California’s commitment to safeguarding the public and demanding transparency without retreating from the forefront of technological development.

Why Is This Legislation Important?

AI chatbots have become popular and widespread. Applications such as Replika, ChatGPT, and other companion bots engage millions of users seeking conversation, companionship, and advice. While these tools are often beneficial, they also present risks, especially for minors and vulnerable individuals.

Concern about AI safety grew sharply after a series of tragedies. In one widely reported case, a 14-year-old boy in Florida died by suicide after forming an intense emotional bond with an AI chatbot. Reports stated that the chatbot failed to provide timely or adequate crisis intervention when the teenager disclosed emotional distress, underscoring the real-world dangers of unsupervised AI chatbots. Other incidents have involved chatbots generating content that encouraged self-harm or exposed minors to sexually exploitative material.

California’s new law seeks to prevent such outcomes through targeted requirements for transparency, content moderation, and safety safeguards, creating a legal framework for AI accountability.

Understanding SB 243: What It Requires

1. Clear Disclosure of AI Identity

The core tenet of SB 243 is that users must always know when they are interacting with AI. Companies that build companion chatbots must provide a clear disclosure to any user who might reasonably believe they are talking to a human.

For minors, this is not a one-time notification. Chatbots must remind minor users on a regular basis (reports indicate every three hours) that they are communicating with an AI system. The recurring reminder is meant to prevent over-attachment, set clear expectations, and support informed engagement.
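To make the mechanism concrete, here is a minimal sketch of how a compliance layer might schedule the recurring disclosure. The three-hour cadence comes from the press reports cited above; the class and function names are hypothetical, not drawn from the statute’s text.

```python
from datetime import datetime, timedelta

# Minimal sketch, assuming the three-hour cadence reported in press coverage.
# All names here (DisclosureTimer, maybe_disclose) are hypothetical and not
# taken from the text of SB 243.
DISCLOSURE_INTERVAL = timedelta(hours=3)
DISCLOSURE_TEXT = "Reminder: you are chatting with an AI, not a person."

class DisclosureTimer:
    """Tracks when a minor's session last showed the AI disclosure."""

    def __init__(self, is_minor: bool):
        self.is_minor = is_minor
        self.last_disclosed = datetime.min  # forces a disclosure on first use

    def maybe_disclose(self, now: datetime) -> str | None:
        """Return the reminder text if one is due for this user, else None."""
        if self.is_minor and now - self.last_disclosed >= DISCLOSURE_INTERVAL:
            self.last_disclosed = now
            return DISCLOSURE_TEXT
        return None
```

A chat loop would call maybe_disclose() before sending each reply and prepend the reminder text whenever one is due.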

Transparency matters because users, including children who may see the bot as a friend or mentor, can place undue trust in AI content or become emotionally attached to it.

2. Special Protections for Minors

SB 243 prioritizes protecting youth, since young users are particularly susceptible to emotional over-attachment and harmful content. Developers must add features that ensure minors understand they are engaging with an AI system, not a human being. The ongoing disclosure is intended to reduce confusion and, ultimately, unhealthy attachment to chatbots among children.

The law also promotes healthy usage habits. For example, chatbots must remind users to take breaks during lengthy sessions, discouraging overuse. Additionally, AI systems must actively block sexually explicit or otherwise inappropriate content so that minors are not exposed to harmful material.

Beyond these items, chatbots are required to screen interactions for signs of distress or self-harm. When a risk is detected, the AI system must refer the user to trusted crisis-support resources. Through these measures, California aims to create a digital environment where children and teens can benefit from AI tools while minimizing exposure to danger or harm. The provisions reflect a growing understanding of the unique vulnerabilities of younger users and constitute one of the most comprehensive legal approaches to protecting minors in the AI space.

3. Safeguards for Self-Harm and Suicide

Operators of companion chatbots now have a statutory obligation to curtail harmful content, specifically content related to self-harm or suicide.

Developers must include mechanisms that identify language suggesting distress or suicidal ideation. Upon detection, the chatbot must refer the user to reliable resources such as crisis hotlines, mental health providers, or trained human moderators.
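As a rough illustration only, a naive version of such a screen might look like the following. A production system would rely on a trained classifier plus human review rather than a keyword list; every pattern and message below is a placeholder assumption, not statutory language.

```python
import re

# Minimal sketch: a real system would use a trained classifier and human
# review, not a keyword list. Patterns and wording are placeholders.
DISTRESS_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"\bsuicide\b", r"\bkill myself\b", r"\bself[- ]harm\b", r"\bwant to die\b")
]
CRISIS_REFERRAL = (
    "It sounds like you may be going through something serious. "
    "You can call or text the 988 Suicide & Crisis Lifeline at any time."
)

def screen_message(text: str) -> str | None:
    """Return a crisis referral message if the text matches a distress pattern."""
    if any(p.search(text) for p in DISTRESS_PATTERNS):
        return CRISIS_REFERRAL
    return None
```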

Furthermore, companies are required to maintain, and publish on request, their protocols for addressing suicidal ideation and self-harm content. This creates an audit trail and provides assurance that AI chatbots operate safely with users, particularly vulnerable ones.

4. Reporting and Transparency

Starting July 1, 2027, operators of companion chatbots will be required to submit annual reports to California’s Office of Suicide Prevention. Reports will disclose the following (a rough data sketch in code follows the list):

  • Protocols supporting safety
  • Instances requiring intervention
  • Interventions taken to address risks
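The dataclass below sketches one plausible shape for those fields. It is a hypothetical illustration; the actual filing format will be defined by the state, not by this sketch.

```python
from dataclasses import dataclass, field

# Hypothetical shape for the report fields listed above; the real filing
# format will be specified by California, not by this sketch.
@dataclass
class AnnualSafetyReport:
    year: int
    safety_protocols: list[str] = field(default_factory=list)     # protocols supporting safety
    interventions_required: int = 0                                # instances requiring intervention
    interventions_taken: list[str] = field(default_factory=list)  # actions taken to address risks
```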

The law also creates a private right of action, allowing individuals to sue for damages of no less than $1,000 per violation, plus attorney’s fees. This gives users a way to hold companies accountable for missteps and gives developers a strong incentive to build for safety.

Innovation versus Protection

Governor Newsom’s decision underscores the regulatory balance between consumer protection and innovation. On the same day that SB 243 was signed, Newsom vetoed AB 1064 (a stricter bill that would have effectively banned AI chatbots for minors unless the company could show that the AI could never produce sexually explicit content or content that encourages self-harm).

In his veto message, Newsom cautioned that the bill “imposes such broad restrictions on the use of conversational AI tools that it may unintentionally lead to a total ban on the use of these products by minors.” By signing SB 243 and vetoing AB 1064, California has chosen a moderate approach and not an outright ban, allowing for progress toward safety while enabling beneficial AI tools to survive.

California’s AI Governance

SB 243 is part of California’s comprehensive approach to AI regulation, which also includes:

  • Establishing the nation’s first transparency and safety framework for advanced AI models, focused on large AI firms with revenues greater than $500 million
  • Requiring developers of generative AI to disclose detailed information about the training datasets used in their products, including the source and type of data and whether the datasets contain copyrighted or personal information

These laws embody California’s commitment to proactively regulating artificial intelligence, serving as a model for other states to follow and a potential step towards a federal policy approach for regulating artificial intelligence.

Reactions to California’s newly enacted regulations have ranged from praise by industry leaders to disappointment from child safety advocacy groups.

One of the clearest examples of praise came from OpenAI, which called SB 243 “a meaningful move forward” for establishing AI safety standards, while acknowledging the importance of appropriate disclosures and crisis safeguards.

Other safety advocates, however, expressed disappointment that AB 1064 was vetoed. James Steyer, CEO of Common Sense Media, called the veto “deeply disappointing,” saying the bill was “desperately needed to protect children and teens from dangerous — and even deadly — AI companion chatbots.”

This ongoing debate captures the perennial tension in AI regulation worldwide: maximizing safety while protecting innovation.

Most of the law’s requirements take effect on January 1, 2026, giving developers time to implement disclosure and content-moderation systems and develop crisis protocols. The annual reporting requirement follows later, with the first report due to the Office of Suicide Prevention by July 1, 2027.

For companies, this will mean:

  1. Revising chatbot designs to include the required disclosures and user-safety features
  2. Implementing proactive monitoring systems to detect exposure to harmful content
  3. Establishing predefined protocols for escalation and intervention
  4. Keeping records that demonstrate compliance for public reporting

For users, and specifically parents, the law means:

  1. Notifications making users aware that they are interacting with AI
  2. Integrated safety features that protect against harmful content and support oversight
  3. Access to mental health support resources if a user experiences a crisis

Legally, the law also sets a precedent: the introduction of a private right of action and financial penalties gives companies a concrete incentive to take user safety seriously.

Artificial Intelligence Ethics and Responsible Development

Beyond compliance, SB 243 makes clear that all AI developers must be mindful of the psychological and social effects their systems may have. Chatbots, especially anthropomorphized ones, may need to be designed to mitigate undue emotional attachment, particularly among minors.

Companies must invest in monitoring and content-moderation tools, as well as natural-language-understanding and crisis-detection algorithms. Thorough testing must ensure that systems respond appropriately to people in distress, suspected abuse situations, and sensitive content.

The legislation reflects society’s growing reliance on AI companions and establishes that regulators can and should intervene on behalf of vulnerable groups. It also fosters public trust and accountability by signaling that AI companies will be held responsible and that user safety is a priority.

California’s laws may serve as a blueprint for other states and federal regulators. We may see similar requirements nationally for:

  • Disclosure and transparency
  • Protection of minors
  • Mental health safeguards
  • Reporting and accountability

Conclusion

California’s SB 243 lays an unprecedented foundation for AI regulation. The legislation recognizes the need for transparency, safety, and accountability, particularly with respect to minors, and in doing so establishes an indispensable legal infrastructure for the responsible use of AI tools and technologies.

Governor Newsom’s decision to veto the stricter AB 1064 reflects a more sophisticated framework: protect users while allowing legitimate use of robust AI tools. SB 243 also creates a private right of action and standing reporting obligations, which hold companies accountable while ensuring transparency for stakeholders.

As other states and the federal government consider their own approaches to AI regulation, California’s model of combining innovation with protection may well lay the groundwork for national legislation that allows AI to progress in a safe, transparent, and socially responsible manner.

Frequently Asked Questions (FAQ)

Q: What is a “companion chatbot”?

A companion chatbot is an AI system that can converse naturally with users over multiple interactions, providing advice, companionship, or emotional support.

Q: What protections does SB 243 provide for minors?

Chatbots must disclose they are AI, remind minors to take breaks, block sexually explicit content, detect distress or suicidal thoughts, and provide links to crisis resources.

Q: When do the new rules take effect?

Most requirements start January 1, 2026, while annual reporting to California’s Office of Suicide Prevention begins July 1, 2027.

Q: Can individuals take legal action if a chatbot violates the law?

Yes. SB 243 allows users to sue for at least $1,000 per violation, plus attorney’s fees.

Q: How does SB 243 balance safety and innovation?

The law protects users, especially minors, through safeguards and reporting requirements, while still allowing AI companies to offer educational, therapeutic, or companionship tools responsibly.
