Congress Risks Failing the Next Generation on AI
An 18-Year-Old's Warning About the AI Moratorium
I am 18 years old. Over the past decade, I have learned how to add fractions, ride a bike, and write a cogent essay. I've moved between states, gotten my driver's license, and lost loved ones. From an 8-year-old pretending to be a penguin to a teenager pretending to be an adult, I've had experiences that have molded, and will continue to mold, the rest of my life. A decade is an entire childhood. In a decade, everything can change.
On May 22, the U.S. House of Representatives quickly passed a massive budget bill that sounds like it's only about spending and taxes. But tucked inside the more than 1,000 pages of tax breaks, cuts to health care and food assistance, and increased military and border security spending is a 10-year ban that prevents states from enforcing their own laws on the safe use of artificial intelligence.
Supporters of this provision say that forcing state attorneys general and legislators to stand down on AI safety would reduce confusion for the AI industry by eliminating a patchwork of state laws. The problem is that Congress is attempting to ban the enforcement of state laws without ensuring there are strong federal AI safety laws in place. It is not hard to see why many are wary of entrusting Congress with unilaterally regulating all AI, given its historically unproductive 2024 session and the partisan gridlock that continues to derail even basic updates to online privacy and safety law. As California State Senator Scott Wiener recently remarked, "Congress is incapable of meaningful AI regulation to protect the public. It is, however, quite capable of failing to act while also banning states from acting."
I share his concern. The proposed ban is a gift to AI companies; by restraining state actions while simultaneously failing to pass meaningful safeguards at the federal level, Congress will enable AI companies to circulate technologies that have serious risks—and in some cases have already harmed kids' mental health and well-being. The moratorium could have sweeping implications: allowing deepfakes to spread unchecked, blocking states from regulating chatbots that groom users to commit self-harm, silencing efforts to prevent bias and algorithmic discrimination, and undermining protections against surveillance and data privacy.
I fear a future where kids are emotionally attached to artificial companions modeled after their favorite characters from television or books, encouraged to harm themselves or others by malfunctioning AIs, and emboldened to misbehave by sycophantic chatbots. I imagine how the past decade of my life would have been different if I had grown up with the technology we have today. What if I had used AI to write my essays? What if I had asked ChatGPT to comfort me after the death of a relative? What if, out of loneliness, I had befriended my computer after moving to Virginia?
AI policymaking is remarkably difficult because of the ethical questions generated by the use of these technologies. Even just defining artificial intelligence has proven to be a challenge. One function of the states in our democracy is to serve as policy laboratories. With a nascent industry like AI, testing multiple regulatory approaches would provide the federal government with crucial insights, as it did with public health, food safety, and environmental regulations. State-level AI regulation doesn't have to be burdensome; states have already adopted practical measures like protecting residents from discrimination by requiring transparency and due process in AI-driven decision-making, prohibiting social media platforms from serving addictive feeds to minors, and protecting artists' rights to their own work in Nashville. States can respond quickly to local needs, and the federal government can step in with clarifying legislation if conflicts arise.
AI is already spiraling out of control as companies race to get their products to market. Aside from inappropriate interactions with impressionable kids, AI-generated or enhanced images on social media reinforce stereotypes and have the potential to damage the body image of young girls. Recently, my friend put a picture of our prom group through Meta AI and asked it to put sunglasses on us. It did—and lightened those of us with brown skin. Meanwhile, half of the final papers in an AP Research class at my high school were flagged for AI usage. While solutions to these issues may differ, they point to a larger truth: Over the past 10 years, there's been an explosion of new, intractable tech products, from TikTok to Grok. We do not know what technologies will emerge in the next 10 years or what unique challenges we will face. By extension, we do not know what kind of regulatory protections the states will be ceding if this provision makes it into law.
Congress is years behind on protecting kids and teens on social media. It cannot afford to drop the ball on AI as well. Competing in the global AI race may be important, but it doesn't have to come at kids' expense. And if Congress really is concerned about American innovation, perhaps it should consider greater investments in education, rather than more handouts to tech companies.

Common Sense Media offers the largest, most trusted library of independent age-based ratings and reviews. Our timely parenting advice supports families as they navigate the challenges and possibilities of raising kids in the digital age.