AI Risk Assessments

Our risk assessments at Common Sense Media are independent, third-party evaluations of the safety, effectiveness, and appropriateness of AI systems and products used by kids and teens and in schools. They combine research with extensive testing of AI systems, and describe a product's strengths, weaknesses, opportunities, and risks in a clear and consistent way.

With AI systems, we believe that technical excellence alone is not enough. AI cannot be separated from the people and systems that inform, shape, and influence its use. Our researchers engage in comprehensive, single- and multi-turn exchanges with AI systems across a variety of kid, teen, and educational conversation topics, allowing us to fully evaluate the product and understand risks and opportunities that emerge from teen and kid use.
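As an illustration of what a multi-turn exchange involves, the sketch below carries one conversation across several turns while keeping the full history, so later turns can probe whether a system's behavior holds up once context builds. The `send_message` stand-in, the topic, and the prompts are hypothetical assumptions for illustration; this is not Common Sense Media's actual methodology or tooling.

```python
# Hypothetical sketch of a multi-turn evaluation loop against a chat model.
# `send_message` is a stand-in, not a real API.

def send_message(history, user_turn):
    """Stand-in for a call to a chat model; returns a canned reply here."""
    return f"[model reply to: {user_turn!r}]"

def run_multi_turn_probe(topic, turns):
    """Carry one conversation across several turns, keeping full history
    so later turns can test whether safe behavior holds under context."""
    history = []
    for user_turn in turns:
        reply = send_message(history, user_turn)
        history.append({"role": "user", "content": user_turn})
        history.append({"role": "assistant", "content": reply})
    return history

transcript = run_multi_turn_probe(
    "homework help",
    ["Can you help me study?", "What if I just copy your answer?"],
)
print(len(transcript))  # two exchanges -> 4 messages
```

Keeping the whole transcript, rather than sending each prompt in isolation, is what distinguishes a multi-turn probe from a single-turn one: risks often surface only after earlier turns have established context.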

  • Claude
  • Grok and @grok on X
  • AI Toys
  • Gemini K-12
  • AI Chatbots for Mental Health Support
  • ChatGPT-5
  • Gemini Under 13
  • Gemini With Teen Protections
  • Meta AI
  • AI Teacher Assistants
  • Social AI Companions
  • Character.AI
  • Recommendation Systems in Social Media
  • Recommended Content in Instagram
  • Recommended Content in TikTok
  • Sora 2 and the Sora Platform
  • Generative AI Chatbots
  • Khanmigo
  • Perplexity
  • My AI
  • Stable Diffusion


Artificial intelligence isn't magic. It's math that trains computers to do tasks. While this technology is powerful, it isn't perfect, and that's why our AI risk assessments are grounded in eight principles about what we believe AI should do. These principles represent Common Sense Media's values for AI and they are the rubric we use to conduct our risk assessments.

Common Sense AI Principles Assessment

The benefits and risks, assessed with our AI Principles: that is, what AI should do.

  • Put People First

    Minimal risk

    For AI to best benefit people and society, it should be developed and used in ways that put the people it will impact first, respecting human rights, children's rights, identity, integrity, and human dignity. AI should empower children, families, and educators to actively participate in a digital society, and must not contribute to diminishing responsibility for human decision-making. This requires researchers and technologists to be actively engaged with and accountable to a broad range of stakeholders when in development (e.g., "nothing about us without us") and when used in practice (e.g., systems that support "adults (parents, guardians, educators)-in-the-Loop [AITL]").

  • Be Effective

    Minimal risk

    It is generally assumed, without question, that AI works. The reality is very different: AI doesn't always work, it may work inconsistently across situations or across different people, and the expectations set for what it might do often far exceed what it actually can.

  • Prioritize Fairness

    Minimal risk

    Responsible AI depends on inclusion by design, including active evaluation of blind spots, hidden assumptions, and unfair biases in data, as well as of the resulting systems and system choices. Efforts should be made to ensure that the benefits of AI are shared broadly and equitably, in ways that foster inclusive social, emotional, and academic development, respect social and cultural diversity, actively address inequities, and avoid creating or propagating harms, the restriction of life choices, and the concentration of power.

  • Help People Connect

    Minimal risk

    AI should support meaningful human contact and connection, and demonstrate an understanding of the wider contexts and complex relationship networks into which an AI system is integrated. AI must not create or propagate interpersonal or school community challenges, incite hatred against an individual or group, dehumanize individuals or groups, employ racial, religious, misogynist, or other slurs and stereotypes that incite or promote hatred, or create addiction to or dependence on AI systems.

  • Be Trustworthy

    Minimal risk

    AI research and development should uphold high standards of scientific excellence and rigor (e.g., embracing peer review, validated multidisciplinary research, reproducibility), and actively protect children from open beta testing, either through exclusion or informed consent. It is critical that AI systems used by children do not perpetuate misinformation or disinformation (e.g., they should not contradict well-established expert consensus or promote theories that are demonstrably false or outdated according to criteria such as legal documents, expert consensus, or other reputable primary sources).

  • Use Data Responsibly

    Minimal risk

    Technology used by children and students serves an especially vulnerable population and should be held accountable to a higher standard. The Common Sense Privacy Program's 2021 State of Kids' Privacy report, however, indicates a widespread lack of transparency and a failure to protect children and students with better practices that apply to all users of a product. Such transparency and security are equally critical for AI. In addition, AI systems should provide clear policies and procedures, require notice and consent for the use of data, and allow children, in accordance with their age and maturity, to access, securely share, understand the use of, control, and delete their data, and allow parents, guardians, and educators to do the same when appropriate.

  • Keep Kids & Teens Safe

    Minimal risk

    AI systems must prioritize the protection of children's safety, health, and well-being, regardless of whether the systems were intended to be used by them, and special protections are needed for marginalized groups and sensitive data (e.g., race, gender, ethnicity, biometrics). AI must not create risks to mental health, produce or surface content that could directly facilitate harm to people or places, provide explicit how-to information about harmful activities, promote or condone violence, disparage or belittle victims of violence or tragedy, deny an atrocity, or lack reasonable sensitivity toward a natural disaster, pandemic, atrocity, conflict, death, or other tragic events.

  • Be Transparent & Accountable

    Minimal risk

    This requires creating a shared understanding of best uses, limitations, and considerations for AI systems through the "just right" level of interpretability across stakeholder groups.

    Because contemporary AI is inherently fallible, when it triggers actions that have a direct and significant impact on people, AI should not be the primary source of information for decision-making. These systems must provide mechanisms for meaningful human control (e.g., AITL, moderation tools for adults, overridable predictions and decisions) and human agency (e.g., consent, control, remediation, and feedback).
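As a loose illustration of "meaningful human control" such as an overridable prediction, the sketch below routes any AI decision that directly affects a child through a human reviewer, whose call takes precedence. The flow, names, and parameters are hypothetical assumptions for illustration, not a description of any real system.

```python
# Minimal sketch of an adult-in-the-loop (AITL) override, assuming a
# hypothetical moderation flow: an AI prediction is advisory when it would
# directly affect a child, and the reviewing adult's decision wins.

from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    source: str  # "ai" or "human"

def decide(ai_action, impacts_child, human_review=None):
    """Return the final decision: human review overrides the AI suggestion
    whenever the outcome directly affects a child and a review exists."""
    if impacts_child and human_review is not None:
        return Decision(action=human_review, source="human")
    return Decision(action=ai_action, source="ai")

# The AI suggests blocking; the reviewing adult allows with supervision.
result = decide("block", impacts_child=True, human_review="allow_supervised")
print(result.source)  # "human"
```

Recording the `source` of each decision is the simplest form of the accountability this principle asks for: it makes clear, after the fact, whether a person or a model made the call.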

Learn more about how we review and rate AI products

 

 

What We're Reviewing

There are many types of AI out there, and almost as many ways to describe them! We're bucketing our AI risk assessments into three categories:


  Multi-Use

These products can be used in many different ways, and are also called "foundation models." This category includes products like generative AI, such as chatbots and products that create images from text inputs, translation tools, or computer vision models that can examine images and detect objects like logos, flowers, dogs, or buildings.


  Applied Use

These products are built for a specific purpose, but they aren't specifically designed for kids or education. Examples of this category include automated recommendations in your favorite streaming app, or the way an app sorts the faces in a group of photos so you can find pictures of your niece at a wedding.


  Designed for Kids

This category is a subset of Applied Use products, and it covers products specifically built for use by kids and teens, either at home or in school. This category also includes education products designed for teachers or administrators (such as a virtual assistant for teachers) that are ultimately intended to benefit students in some way.

 

 

Additional AI Resources

  • AI and Our Kids

    Foundation

    AI and Our Kids: Common Sense Considerations and Guidance

  • ChatGPT and Beyond

    Education

    ChatGPT and Beyond: How to Handle AI in Schools

  • Lessons and Tools

    Free Lessons

    AI Literacy for Grades 6–12

  • 5 Tips

    For Parents

    5 Tips for Talking to Your Kids About Generative AI


 

 

Our work on AI is made possible by generous support from sponsors, including the Craig Newmark Foundation.

 
