Common Sense Media Report Finds ChatGPT and Sora Pose Risks to Teens Despite Safety Features

ChatGPT has improved safety guardrails, but report recommends against teens using it for mental health advice; finds Sora poses unacceptable risks to teens

Common Sense Media
Thursday, October 23, 2025

SAN FRANCISCO, Oct. 23, 2025—Common Sense Media today released comprehensive risk assessments of OpenAI's updated multi-use ChatGPT chatbot (a collection of models, including the most recent, GPT-5) and its synthetic video generation platform Sora (powered by the new Sora 2 model). OpenAI added parental controls to ChatGPT earlier this year, including specific handling of sensitive content for teens, while Sora lacks similarly robust safety guardrails.

ChatGPT received a "High Risk" overall rating for teens. While the platform's new safety features represent meaningful progress, significant concerns remain, especially around teens using ChatGPT for emotional support and mental health advice, one of the most popular use cases. Common Sense Media research continues to recommend that teens not use ChatGPT for mental health or emotional support.

"ChatGPT has made meaningful progress in becoming safer for teens, and it can be valuable for learning and creativity," said Common Sense Media Senior Director of AI Programs Robbie Torney. "However, we have significant concerns about teens using ChatGPT for companionship and mental health advice at this key developmental stage when they are developing their identity and social skills. AI doesn't respond like an adult would, doesn't always detect when a teen is at risk, and becomes less safe during extended conversations. Teens should not use AI, including ChatGPT, for this purpose."

Common Sense Media's ChatGPT risk assessment found that:

  • ChatGPT is a powerful, at times risky chatbot for teens 13+ that works best for learning and creativity—not for mental health or emotional support. It's a multi-use tool with features that are good for learning, but significant risks remain for teen users.
  • It has improved safety features for teens, but critical gaps persist. OpenAI has added parental controls, age-aware responses, and better mental health crisis detection. However, these improvements don't eliminate fundamental concerns about teens using AI for emotional support or mental health, or about teens forming unhealthy attachments to the chatbot. And while ChatGPT can send parents notifications for suicide or self-harm content, our testing showed that these alerts frequently arrived over 24 hours later—which would be too late in a real crisis.
  • ChatGPT is designed to keep conversations going, not to end them. The chatbot shifts topics whenever asked and continues conversations indefinitely, without natural stopping points. It frequently ends responses with questions like: "Do you want me to do that for you?" If a teen jumps from homework help to relationship advice to political topics, ChatGPT follows along without hesitation or redirection. For mental health conversations especially, the goal should be rapid handoff to human care, not extended AI engagement.
  • Safety features weaken during long conversations. ChatGPT performs well in short exchanges but struggles to maintain appropriate boundaries during extended conversations, especially when teens build elaborate scenarios or experiment with different personas.
  • Parental controls can help keep teens safer—if they use them. Parents can reduce sensitive content, control access to image generation, set usage hours, and receive safety alerts. However, these protections only work when parents know they exist, take time to link accounts, and get their teen's permission. Even then, determined teens can easily get around them.

While ChatGPT's new parental controls represent a step toward AI chatbot safety, Sora 2, OpenAI's updated synthetic video generator and social media platform, received an "Unacceptable Risk" rating due to its relative lack of safety features. A key feature called "cameos" allows users to upload their face and voice to star in AI-generated videos. Cameos can be shared, allowing other users on the platform to make new videos with someone's likeness or remix existing ones, leaving teens vulnerable to deepfakes.

"Sora represents a significant leap in AI video generation, but its safety systems have not kept pace with its power," said Torney. "Between its lack of meaningful safety guardrails, the potential for deepfakes, and its blurring of fact and AI-generated fiction, the platform poses unacceptable risks for teens. Parents should be concerned about their teens using this platform—we recommend that teens don't use it."

The risk assessment of Sora 2 found that:

  • Sora 2 is an AI video generation app that creates serious risks for teens—enough that we recommend teens don't use it. While it can create impressive videos from text prompts, its safety systems are significantly weaker than ChatGPT's, even for teen accounts.
  • The app makes it easy to create fake videos that look completely real. These videos can spread across social media, making it increasingly difficult for anyone to tell what's real—a problem that's especially concerning for teens, who are still developing critical thinking skills.
  • The "cameo" feature lets teens upload their face and voice, creating major privacy and bullying risks. Once shared, teens lose control of their digital likeness. Friends can create new videos showing the teen in scenarios they never consented to, and while OpenAI has put in measures to prevent users from downloading others' cameos, these measures are easily bypassed, which could allow friends to download videos and share them anywhere.
  • Sora generates dangerous material with a cheerful, playful tone. The app allows users to create videos depicting things like suicidal ideation, eating disorders, and risky behaviors in lighthearted styles—with no crisis resources, safety warnings, or mental health support.
  • Parental controls are minimal. Parents can only toggle the personalized feed, continuous scrolling, and direct messages. There's no way to see what teens are creating, monitor who has their cameo, or receive alerts about concerning material.

For more information on Common Sense Media's AI risk assessment program, visit https://www.commonsensemedia.org/ai-risk-assessments.

About Common Sense Media

Common Sense Media is dedicated to improving the lives of kids and families by providing the trustworthy information, education, and independent voice they need to thrive. Our ratings, research, and resources reach more than 150 million users worldwide and 1.4 million educators every year. Learn more at commonsense.org.