Common Sense Media Launches Youth AI Safety Institute

The first-of-its-kind AI safety lab focused on children will independently test AI products, broadly publish the results, and set clear standards to protect the safety, health, and development of a generation growing up with AI

Common Sense Media
Tuesday, May 5, 2026

SAN FRANCISCO, May 5, 2026—Today, Common Sense Media launched the Youth AI Safety Institute, an independent research and testing organization dedicated to ensuring the AI that children use is safe and developmentally appropriate.

More than half of American teenagers now regularly chat with AI companions. Nearly a third say conversations with AI are as satisfying as—or more satisfying than—talking with real-life friends. Over half are turning to AI tools for homework help.

The Institute will bring significant new resources, technical expertise, and global reach to close the growing gap between AI use and youth safety. It will establish safety standards, build open-source evaluations that AI developers can run against their models, independently test AI products, and publish the results to provide transparency and accountability.

"AI is reshaping childhood and adolescence, yet we are making critical decisions about children's futures without the evidence we need to ensure it's safe and in their interest," said Common Sense Media Founder and CEO James P. Steyer. "The need for transparent AI safety standards and independent testing is more urgent than ever."

The Institute's approach is modeled on independent crash-test ratings that show consumers whether cars are safe, set a clear bar for automakers to meet, and contribute to improvements in vehicle design. The Youth AI Safety Institute will apply the same crash-test model to AI: testing the products children use most, showing parents the results, and holding industry accountable to meet a high standard of youth safety.

The Institute's work will extend beyond testing. It will research youth behavior and lead public education campaigns to help families navigate AI in their lives. It will also study the impact of AI on youth well-being and social, emotional, and cognitive development.

"Making the AI that kids use safer is a collective challenge," said Ellen Pack, Co-CEO of Common Sense Media. "It will take researchers, policymakers, and industry all pulling in the same direction. The Youth AI Safety Institute's role is to set a high bar: rigorous standards, independent testing, and transparent results that raise the bar for everyone."

The Institute will operate under Common Sense Media, the nation's leading kids and tech nonprofit with a 23-year track record of protecting and preparing families for the digital age. Philanthropic funders include Lee Ainslie of Maverick Capital, Jim Coulter of TPG, John H. N. Fisher of Draper Fisher Jurvetson, Paul Tudor Jones of Tudor Investment Corp., Gene Sykes of Goldman Sachs, and the Walton Family Foundation. Industry-related funders include Anthropic, the OpenAI Foundation, and Pinterest. Additional funders will be announced in the future.

The Institute is solely responsible for its standards, research, and evaluations, and maintains complete editorial independence over published results. Common Sense Media has previously published rigorous assessments that identified risks for teens with leading AI chatbots, including ChatGPT, Claude, Gemini, and Meta AI.

"Building safe AI for the next generation requires thoughtful collaboration, careful research, and safeguards grounded in real-world expertise," said Daniela Amodei, President and Co-founder of Anthropic. "Like so many parents, I think about the impact this technology will have on young people and how important it is that we get it right. Anthropic is committed to working with independent experts, educators, and others across the industry to help make sure AI is built and deployed responsibly."

"AI holds enormous promise for young people, opening up new ways to learn, create, and explore their interests," said Wojciech Zaremba, Head of AI Resilience at the OpenAI Foundation. "As these tools become part of everyday life, it's important that they're designed to be safe, trustworthy, and appropriate for different stages of development. That's why independent evaluation and public accountability matter."

Said Pinterest CEO Bill Ready: "Technology must be built with youth safety and well-being at its core, and that includes AI. At Pinterest, our commitment goes beyond responsible AI innovation—we're creating age-appropriate experiences designed with young people's safety front and center. As AI becomes a bigger part of everyday life, sound research and clear standards will be essential."

The Institute is working alongside a growing network of strategy, research, and technical evaluators, including established partnerships with Transluce, Humane Intelligence, and Stanford Medicine's Brainstorm Lab for Mental Health Innovation. It welcomes collaboration with other leading experts and AI safety evaluators across the globe.

"We're in the deep end of the pool with AI now," said Jonathan L. Zittrain, Director and Co-Founder of the Berkman Klein Center for Internet & Society at Harvard University. "Some, like the frontier labs and early adopters, have jumped in; others have felt tugged in or pushed—or simply felt water rising around them. This is a vital and urgent initiative to help all of us get an independent and more thorough sense not only of how models work in a beaker, but also how they are impacting the young people who use them."

The Institute will be guided by a Board of Advisors composed of distinguished experts in AI, youth development, child safety, mental health, and education, with a conflict-of-interest policy that excludes current employees or affiliates of funders or partner organizations. Advisors include:

  • Dr. Nadine Burke Harris, Pediatrician and Public Health Advocate; former Surgeon General of California
  • John Giannandrea, Technology Executive; former SVP of Machine Learning and AI Strategy, Apple, and Chief of Search and AI, Google
  • John King Jr., Chancellor, State University of New York (SUNY); former U.S. Secretary of Education
  • Dr. Jenny Radesky, Associate Professor of Pediatrics, University of Michigan Medical School; Co-Medical Director, AAP Center of Excellence on Social Media and Youth Mental Health
  • Mehran Sahami, Tencent Chair of the Computer Science Department and the James and Ellenor Chesebrough Professor in the School of Engineering at Stanford University

Dr. Vivek Murthy, former Surgeon General of the United States and a member of Common Sense Media's Board of Directors, will be the Board's liaison to the Institute's Board of Advisors. "We are at great risk of making the same mistakes with AI that we made with social media: subjecting children to new technologies without adequate safety guardrails and thereby causing harm to countless lives," said Murthy. "For all its potential uses, AI—and AI chatbots in particular—has the potential to damage the mental health, social development, and well-being of young people, too often with tragic outcomes. We urgently need policies and institutions that will demand transparency, allow for independent safety evaluations, and enforce accountability. The well-being of the next generation is at stake."

The Institute is conducting a search for its first executive director. Its standards and evaluations are led by Robbie Torney, Head of AI & Digital Assessments. Geoffrey A. Fowler, former technology columnist at the Washington Post and the Wall Street Journal, has joined as Head of Public Engagement.

Common Sense Media will share more about the Institute's plans on May 12 at the inaugural Copenhagen Summit: Keeping Our Children and Families Safe in the AI Era, co-hosted with Save the Children Denmark and Margrethe Vestager at the Danish Parliament.

Learn more about the Youth AI Safety Institute here: http://institute.commonsensemedia.org

About Common Sense Media

Common Sense Media is the leading nonprofit organization dedicated to improving the lives of kids and families by providing the research-backed information, education, and independent voice they need to thrive in the age of apps, algorithms, and AI. We rate, educate, and advocate to protect and prepare kids online. Our ratings, research, and resources reach more than 150 million users globally, over 1.5 million educators, and more than 100,000 schools worldwide every year. Learn more at commonsense.org.