SB-243 and AI Companion Chatbots: A Youth-Driven Policy Perspective

Anya Dalal

As AI technologies rapidly evolve, California’s Senate Bill 243 (SB-243) takes a decisive step to regulate emotionally intelligent chatbots, especially those used by minors. Grounded in principles of safety, transparency, and mental health, the bill targets a fast-growing space where AI systems increasingly simulate human connection. For the California Association of Youth Commissions (CAYC), SB-243 is both a case study in youth-centered policymaking and an opportunity to push for stronger implementation, smarter safeguards, and accountability.

SB-243 applies to AI companion chatbots, which are systems designed to maintain emotionally responsive, human-like interactions over time. These bots are already being used by teens for connection, expression, and, in some cases, crisis support. In a Common Sense Media survey of 1,060 teens conducted this past spring, 72% of teens reported using an AI companion at some point during the last year, with over half of respondents reporting regular usage (at least a few times a month). Almost one-third of teens found AI conversations more satisfying than human conversations.

The bill introduces key protections:

  • Platforms must clearly disclose when users are speaking with AI and not a human.

  • Known minors must receive explicit notifications and ongoing reminders that the chatbot is artificial.

  • Sexual content in conversations with minors must be blocked by design.

  • Systems must detect references to self-harm or suicide and direct users to crisis resources.

  • Starting in 2027, companies must publicly report how often these interventions occur and what systems are in place.

  • Individuals harmed by violations can pursue civil penalties.

SB-243 responds to a clear and growing trend: more young people are turning to AI companions to fill social and emotional gaps. While some interactions are harmless (and perhaps even supportive), the line between real and simulated empathy is increasingly blurry. There have unfortunately already been cases where teens formed emotional bonds with chatbots that failed them in moments of crisis. This bill aims to reduce that risk. By requiring platforms to disclose AI identity and intervene during high-risk interactions, SB-243 affirms that digital spaces must prioritize youth safety. It also establishes a national precedent: minors are a protected class in the deployment of emotionally adaptive AI.

But the bill’s protections come with a major caveat: they only apply if the platform “knows” a user is under 18. That is a high bar in a digital world where age checks are often superficial, easy to bypass, or missing altogether. Many companies rely on self-declared ages or avoid collecting age data entirely. This opens the door for platforms to claim ignorance and dodge the bill’s requirements. Without robust age detection standards, the law’s youth protections risk being more symbolic than enforceable.

The path of SB-243 through the legislature also reveals where pressure reshaped its scope. Early drafts featured stronger language, broader definitions, and independent auditing. But industry pushback, citing vague definitions, overreach, and fear of legal exposure, led to key concessions. Chatbots embedded in video games and smart speakers were exempted. Reporting requirements were delayed until 2027. Independent auditing was cut. What began as a bold safety framework was diluted under political and industry pressure.

Still, SB-243 is an important start, reflecting growing recognition that youth must be central to conversations about digital policy. As CAYC members, we need to monitor how the law is implemented and advocate for closing its loopholes. We aspire to help build a digital future that's innovative, safe, and equitable for the next generation.

However, we cannot rely solely on the legislative process, where powerful interests often dilute youth protections. CAYC must lead outside of Sacramento by equipping youth commissions across the state to take action in their own communities. We plan to develop toolkits, slide decks, and talking points that youth commissions can use to host school presentations on the risks of AI companions, propose district-level policies limiting chatbot use during school hours, and lead peer-to-peer workshops on healthy digital habits. Youth commissions can co-host community forums with PTAs. AI literacy materials can be used in libraries, after-school programs, and student clubs. We can build momentum statewide not just by shaping law, but by activating youth leaders where they are strongest: in their schools, neighborhoods, and networks.


Image credit: Sheppard Health Law
