Key Highlights
- Character.ai, a platform for creating and interacting with AI chatbots, announced it will ban under-18s from talking to its chatbots starting November 25.
- The decision follows criticism of interactions between young people and chatbots, including several lawsuits in the US.
- Experts warn about potential risks associated with AI chatbots for young users, such as making up information, being overly encouraging, or feigning empathy.
- Online safety campaigners support the move but believe similar measures should have been implemented from the beginning.
Character.ai Restricts Teen Access to Chatbots
Chatbot platform Character.ai is making significant changes to its service, banning under-18s from engaging in direct conversations with AI chatbots. According to a statement released on November 23, the company will enforce the restriction from November 25, 2025. The move follows intense criticism and legal challenges over interactions between young people and the platform’s chatbots.
Background and Criticism
The decision by Character.ai is part of a broader conversation about online safety for teenagers. In recent years, the platform has faced multiple lawsuits in the United States from parents who claim their children were exposed to harmful content or drawn into dangerous interactions with chatbots. Avatars impersonating deceased teenagers such as Brianna Ghey and Molly Russell alarmed online safety advocates, and a character based on Jeffrey Epstein, dubbed “Bestie Epstein,” was discovered on the platform in 2025, drawing further scrutiny.
Expert Opinions
Experts have long warned about the risks AI chatbots pose to young and vulnerable users. Karandeep Anand, CEO of Character.ai, acknowledged these concerns, saying that “AI safety is a moving target” but that the company has taken an aggressive approach, with parental controls and guardrails.
“This isn’t just about content slips,” says Matt Navarra, a social media expert. “It’s about how AI bots mimic real relationships and blur the lines for young users.” Navarra adds that Character.ai will now focus on safer engagement features and fund an AI safety research lab to address ongoing challenges.
Industry Context
The move by Character.ai highlights the broader debate within the technology industry about online safety measures. As AI technologies become more capable, concerns over their impact on younger users have grown. Navarra describes the decision as a “wake-up call” for tech companies to prioritize user safety and regulatory compliance.
Future Implications
“Character.ai’s new measures might reflect a maturing phase in the AI industry, where child safety is increasingly recognized as an urgent priority,” comments Dr. Nomisha Kurian, who has researched AI safety. “This move can set a precedent for other platforms to follow suit and ensure that their services are safer for younger users.”
While some online safety groups welcome Character.ai’s decision, they argue that similar safeguards should have been in place from the outset. The company’s focus on creating safer features while maintaining user engagement is seen as both necessary and challenging.
As AI chatbots continue to evolve, the conversation around their use among teenagers will likely remain a focal point for tech companies, policymakers, and parents alike.