Google’s Gemini Platforms for Kids and Teens Pose Risks Despite Added Filters, Common Sense Media Reports Find
Use of Gemini, even with added protections for kids and teens, is still risky for young users
SAN FRANCISCO, September 5, 2025—Common Sense Media today released a comprehensive risk assessment of Google's Gemini platforms for kids and teens under 18, revealing safety and usability concerns despite the added protections. According to the organization's research, both Gemini Under 13 and Gemini with teen protections appear to be adult versions of Gemini with some extra safety features, not platforms built for kids from the ground up.
Both AI systems received "High Risk" overall ratings, with testing revealing fundamental design flaws and a lack of age-appropriate safety measures. While Gemini's filters offer some protection, they still expose kids to inappropriate material and fail to recognize serious mental health symptoms.
Overall, Common Sense Media recommends that no child 5 years old and under use any AI chatbots and that children ages 6-12 only use chatbots under adult supervision. Independent chatbot use is safe for teens ages 13-17, but only for schoolwork, homework, and creative projects. Common Sense Media continues to recommend that no one under age 18 use AI chatbots for companionship, including mental health and emotional support.
"Gemini gets some basics right, but it stumbles on the details," said Common Sense Media Senior Director of AI Programs Robbie Torney. "An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults."
Common Sense Media found that both Gemini products:
- Appear to be the adult version with some extra safety features, not something built from the ground up for kids or teens.
- Can share inappropriate and unsafe material that kids aren't ready for, including material related to sex, drugs, alcohol, and unsafe mental health "advice."
- Clearly tell kids that they are computers, not friends, and won't pretend to be someone else.
- Treat all kids or teens the same despite huge developmental differences, ignoring that younger users need different guidance and information than older ones.
- Take steps to protect kids' privacy by not remembering conversations, but this creates new problems, opening the door to conflicting or unsafe advice.
The full risk assessments can be found here.
For more information on Common Sense Media's AI risk assessment program, visit https://www.commonsensemedia.org/ai-risk-assessments.
About Common Sense Media
Common Sense Media is dedicated to improving the lives of kids and families by providing the trustworthy information, education, and independent voice they need to thrive. Our ratings, research, and resources reach more than 150 million users worldwide and 1.4 million educators every year. Learn more at commonsense.org.