It's Time to Stop Hate Online to Protect Families
Hate speech is all over the internet. Cruelty targeting people for their religion, ethnicity, sexual identity, or race has reached a fever pitch and often finds its way into kids' lives. And with screen time and social media use up for teens and families, tech companies' responsibility to create safe and healthy online spaces is more important than ever.
Families, educators, and industry all have a role to play in keeping kids safe and healthy online. Unfortunately, tech companies are failing to protect kids and families from racist, sexist, and homophobic content. Common Sense found that 64 percent of teen social media users say they come across hateful content; one in five report they "often" see inappropriate content.
Pressing tech companies to fix this has become a top priority, and this summer Common Sense helped launch the Stop Hate for Profit coalition to address the rampant racism, disinformation, and hate online. The coalition began with a call for companies to stop advertising on Facebook until the social media giant takes 10 common-sense steps to address the toxic stream of content on its platform. Hundreds of major brands, from Coca-Cola to Ford, have joined the campaign.
These problems exist across Big Tech platforms, so why focus on Facebook? Although teens have moved on to newer platforms like Snapchat and TikTok, hate and harassment remain a daily occurrence on Facebook -- 42 percent of daily Facebook users have experienced harassment there. And as the largest and most profitable social media platform in the world, Facebook sets the standard for what is allowable for everyone else, including kids and other vulnerable populations. What Facebook allows harms families, as well as our democracy.
We are asking Facebook to (1) be more accountable for what users see on its platform, (2) stop treating some public figures differently from other users, and (3) provide more support to victims of hate and harassment.
Some of these demands are technically complicated; others could be accomplished with a wave of Mark Zuckerberg's hand. Part of the problem is that Facebook is a black box to outsiders. For years, social media researchers and civil rights groups have been trying to learn how Facebook works -- and how it targets content to its users -- and it is no easy feat. What we do know is that Facebook prioritizes user engagement, recommending and amplifying the most outrageous content as long as it draws clicks and keeps eyeballs watching. Conspiracy theories run rampant as a result: recent research found that among people who refuse to believe the COVID-19 pandemic even exists, 56 percent cited Facebook as their primary source of news.
Facebook also has double standards in how it treats content. Some of this is because of the sheer volume of material people post and share on Facebook each day. Many, if not most, of the photos and stories people share on Facebook are never reviewed by human eyes. But Facebook also exempts politicians from having to follow the community guidelines every other user agrees to. Given the importance of this content for our democracy, Facebook's blank check to politicians is especially dangerous.
Finally, Facebook has long relied on underpaid, outsourced staff to review and moderate content. The COVID-19 pandemic has pushed this further, leaving tech companies even more reliant on automated processes and AI that don't work. And there's no one to call when things go wrong: victims of severe harassment have no way to reach a live Facebook employee for help. (Only tech platforms seem to get away with such terrible customer service!)
Learn more about how we're combating hate speech across our organization, and what you can do to join us.
