2020 Social Media Voter Scorecard
This election year -- and every election year -- it's best to get information straight from nonpartisan news outlets (they can be sued for reporting false information) and to check primary sources. But since social media is going to be filled with election info these next couple of months, here's a look at how different platforms are handling the flood of videos, memes, and hashtags and what you can do to get the most out of social media this election season. Check back, as platform policies may change.
Check out the overview below, and see details by platform.

*We reviewed platform policies to assess how well platforms were addressing the integrity of election-related posts. We also consulted technologists working to improve social media, including Dr. Hany Farid of the UC Berkeley School of Information and the TikTok Content Advisory Council; Leslie Miley, former Twitter engineer and Common Sense tech adviser; and Thomas Dimson, former director of engineering at Instagram.
Select a platform to see how it is handling this election season and what you can do:
Facebook
Platform policies:
- Facebook allows political speech and advertising, and posts by politicians are exempt from fact-checking.
- They have an ad library users can view; they are working on a plan to allow people to opt out of seeing political ads.
- People being paid by campaigns are supposed to disclose those endorsements. However, this is difficult to enforce, and violations are often removed long after the ad has already spread.
- Facebook's policies don't require all false content to be removed; instead, they demote it so it doesn't get amplified by their algorithms. The platform has limited ability to moderate private Facebook groups, where misinformation and hate run rampant.
- They say they will combat voter suppression by banning anything related to misrepresentation of how and where to vote, who can vote, and whether you should vote.
- Facebook recently said they are increasing efforts to monitor election-related posts and removing misinformation related to voting, though it's unclear whether their efforts will extend to private groups.
- Facebook policy says threats of violence will be removed from the platform if they are determined by Facebook to be credible threats to personal safety.
- Facebook has an oversight board tasked with addressing misinformation and hate speech, but it will not be functional before Nov. 3.
What experts say:
- Politics is popular on Facebook, and the algorithms know it. The more you engage with political content, the more of it you'll be likely to see without searching for it.
- Facebook claims to not allow hate speech on the platform, but the enforcement is uneven and inconsistent.
- Facebook groups are likely to have more misinformation, because they are subject to less moderation and enforcement.
What you can do:
- Check out Facebook's Voting Information Center to learn more about election resources in your area.
- You can change your timeline settings to "Most Recent" instead of "Top Stories" to prevent Facebook algorithms from deciding what you see.
- Report misinformation by clicking the three dots in the upper right corner of a post and selecting "Find support or report post." Much of how content is moderated on Facebook starts with user reports.
- If you think something should be labeled as a political ad, flag it for review.
- Try not to get too much information from private Facebook groups; when false, this content is almost never removed.
- Ask Facebook not to sell your data: donotsell.org/Facebook.
Instagram
Platform policies:
- Instagram follows Facebook's lead of allowing political speech and advertising, and posts by politicians are exempt from fact-checking.
- False content isn't removed, only demoted.
- They say they will ban misrepresentation of how and where to vote, who can vote, and whether you should vote and "any content containing statements of intent, calls for action, or advocating for high or mid-severity violence due to voting, voter registration, or the outcome of an election."
- Threats of violence will be removed from the platform if they are determined to be credible threats to personal safety.
- They don't allow hate speech, but it still finds a way onto the platform.
What experts say:
- Instagram is owned by Facebook, so the same cautions apply.
- Instagram says they fact-check posts, but they have yet to prove they can effectively catch misinformation.
What you can do:
- Pay attention to the warnings on flagged content.
- If you see influencers or accounts posting political content that looks like it should be labeled as ads or partnerships, report it!
- Influencers are paid to deliver messages, and they will be endorsing candidates and issues as part of their businesses and brands. Ask yourself (and maybe them) why they are speaking out in support of or against a candidate or cause.
- Ask Instagram not to sell your data: donotsell.org/instagram.
Reddit
Platform policies:
- Reddit allows political ads at the federal level but not for state or local elections.
- Reddit's advertising policy forbids deceptive, untrue, or misleading advertising (political advertisers included).
- Moderators on Reddit do not fact-check political content other than ads.
- Content moderators do ban content that violates Reddit's guidelines when they see it.
- Since 2019, Reddit has listed all ads run by political campaigns.
- Reddit labels its ads and has a straightforward advertising policy around political information.
- Reddit asks users to not engage in "vote manipulation."
What experts say:
- Reddit is a hotbed for bullying and hate speech, as inflammatory comments are often the ones "upvoted" and thereby highlighted for people to see.
- Reddit says they will ban "[c]ommunities and users that incite violence or that promote hate based on identity or vulnerability," but until recently this has barely happened and it is not yet an effective policy.
What you can do:
- If you see something that is factually incorrect, you can comment on it to give people a heads up that the post isn't true.
- Subscribe to r/RedditPoliticalAds for more information and to see all the political ad campaigns running on Reddit dating back to January 2019.
Snapchat
Platform policies:
- Snap says they fact-check political ads and remove false content.
- They do not allow discriminatory posts or the deliberate spreading of false information that "causes harm," and political misinformation can fall under this. When content like this is found, they say it will be removed.
- When users violate user policies, Snap "may remove the offending content, terminate your account, and/or notify law enforcement."
What experts say:
- You choose the accounts you follow, and only content those accounts post directly shows up in the main feed, which means you, rather than the platform, are in charge of what you see.
What you can do:
- Check out Snapchat's voter resources when they become available.
- Snap has a downloadable political ads library by year that tells people who paid for political ads.
- Be careful of believing everything people have in their stories. If it doesn't say where it's from, do your research and don't share unless you can verify.
- Ask Snapchat not to sell your data: donotsell.org/snapchat.
TikTok
Platform policies:
- TikTok does not allow political advertising.
- Political content is allowed so long as it's "creative and joyful."
- TikTok "does not permit misinformation that will cause harm to our community or the larger public."
- A committee of outside experts advises on and reviews content-moderation policies.
- Moderators will demote or remove false content when they find it; users also can report it.
- TikTok does not allow hate speech or anything that promotes violence; when they are made aware of it, they say they remove it from their platform.
What experts say:
- Political TikToks thus far have been big traffic-getters, so expect a lot of them. Hopefully their popularity doesn't slow TikTok's review and takedown process for false posts.
- Without a bio or a way to verify users, you won't know much about the source of videos or memes.
- TikTok should add a requirement that any paid "ambassadors" disclose who they are endorsing.
- Misinformation is a big problem on the platform, and TikTok is working to educate users about it.
What you can do:
- Flag misleading videos and/or an account if you notice misleading information.
- Consider following primary news sources on TikTok. Here's a list of 100 outlets with channels where you can catch original reporting on the elections and more.
- Switch to "following" in your settings to see videos from the accounts you follow and those they recommend, rather than what TikTok's algorithms suggest.
- Watch out for altered videos or deepfakes. Even short videos can be doctored!
- Ask TikTok not to sell your data: donotsell.org/tiktok.
Twitter
Platform policies:
- Twitter will not host political ads; they say, "Political message reach should be earned, not bought." Even so, the platform is very popular for political content. You can view previous years' political ads in Twitter's Ad Transparency Center.
- Influencers who have relationships with campaigns may be able to help candidates sidestep these rules, because individuals can give personal endorsements.
- Currently, labels appear on the Twitter accounts of candidates running for office during the 2020 U.S. election cycle.
- Twitter doesn't allow misinformation about voting and elections, and they say they will step up their efforts in combating election misinformation. However, they haven't said how.
- According to Twitter, "Not all false or untrue information about politics or civic processes constitutes manipulation or interference," so there is a lot of potentially unchecked political content on Twitter.
- Twitter is testing features to label false content. It also will label videos that have been altered as "manipulated media" but won't necessarily remove them.
- Twitter does not permit "behavior that targets individuals with abuse based on [a] protected category" but relies on users reporting it.
What experts say:
- Timelines default to showing you what Twitter's algorithms decide is most popular; even if you change the timeline to show the most recent tweets, Twitter will switch back after a few days if you aren't active.
- Viral content can be false, but it can spread fast before it gets flagged, removed, or debunked.
What you can do:
- Report voter suppression, hate speech, and harassment. Twitter has moderators and AI tracking violations but relies on users to report.
- Set your timeline to "Latest Tweets" to see the most recent (and not the algorithm's recommended) posts.
- Follow reputable sources, including voter registration organizations, news outlets, verified journalists, and experts (the verified symbol can help you know who is legit, but it doesn't necessarily mean an individual is a good actor).
- Ask Twitter not to sell your data: donotsell.org/twitter.
YouTube
Platform policies:
- YouTube limits how election ads can target people, allowing targeting only by age, gender, and ZIP code.
- They will remove videos that spread lies about a political candidate's citizenship status or eligibility for public office and other election-related misinformation that contributes to voter suppression.
- YouTube is launching fact-checking panels in the U.S. for COVID-19-related misinformation.
What experts say:
- Algorithms work to keep you watching, which may mean feeding you videos that are more and more radical (for example, you start with a vitamin how-to and end up at an anti-vax video).
- When you open YouTube, you are not in control of what you see on the homepage.
- YouTube will remove deepfakes only if they are created with malicious intent, so beware of deepfakes that are meant to be funny.
- YouTube does not allow hate speech, and they say they remove it from their platform, but it's hard to keep up without legions of moderators and AI that prioritizes incendiary videos.
What you can do:
- You can turn off "recommended" and decide for yourself what to do next: In settings, select notifications and uncheck the box for recommended videos.
- Report videos that violate community guidelines. YouTube staff reviews the content and will remove it if it breaks the rules.
- Make sure you know what you're watching. If it seems unrealistic, it might be -- and you should do some more research.
- Ask YouTube not to sell your data: donotsell.org/youtube.
Want more resources? See our Young Voter's Guide to Social Media and the News.