Facebook Inc. Chief Executive Officer Mark Zuckerberg, like many Americans, expects election night to be confusing. He’s worried it could also be violent.

“There is, unfortunately, I think, a heightened risk of civil unrest in the period between voting and a result being called,” Zuckerberg told Axios last month. “I think we need to be doing everything that we can to reduce the chances of violence or civil unrest in the wake of this election.”

Social media companies are facing heightened scrutiny and pressure to do more to keep their platforms from becoming vectors of misinformation, election meddling and all-around disorder. Over the past two months, Facebook and rival network Twitter Inc. have rolled out new policies as they brace for a complicated and possibly chaotic election night, in which results may be unclear or delayed. Both companies updated their rules in recent days, a sign that they’re still fine-tuning their strategies less than a month before Election Day.

Any delay in the official declaration of winners on the night of Nov. 3 is likely to cause confusion for voters — and could provide an opportunity for people, including the candidates themselves, to spread incomplete or inaccurate information online. Some watchers worry that a lack of clear answers or conflicting information about voting outcomes could lead to riots or protests. Others have warned about possible intimidation at polling places, or that misinformation about how or when to vote could mean people don’t vote at all.

Here are some of the things Facebook and Twitter are doing in preparation for election night:

Facebook

Facebook has added restrictions to political advertising, including blocking new campaign ads starting a week before Election Day. This week, the company said it would suspend political ads entirely after the polls close, a policy Google also announced. Facebook’s goal is to prevent candidates from promoting misleading claims via ads in the hours before or after the vote — a possibility because Facebook doesn’t fact-check political ads.

The social network will label posts from candidates that claim victory prematurely, and forbid ads that do the same, linking users to a Voting Information Center that will include updated results compiled by the news agency Reuters. It will also add links to posts when a candidate tries to undermine election results, suggesting they are fraudulent or rigged, for example. U.S. President Donald Trump routinely suggests that the results will be rigged.

Facebook will put an alert atop every user’s Facebook and Instagram feed once the polls close on Nov. 3 directing people to the Voting Information Center. If candidates claim victory early, the language for this mass alert will specify that vote counting is still in progress.

The company says it will take a firmer stance on voting intimidation, and will remove posts that encourage people to “engage in poll watching when those calls use militarized language or suggest that the goal is to intimidate, exert control, or display power over election officials or voters,” according to a company blog. Trump and his son Donald Trump Jr. have both encouraged their supporters to proactively watch polling locations.

Facebook will utilize a virtual election “war room” this year — an online space where employees from different teams will work together to spot and respond to election-related incidents quickly. The company used a similar setup during the 2018 midterms, and also uses them around other important political events, like debates and conventions.

Twitter

Twitter says posts that misrepresent election-night results are also a violation of its policies, and will be labeled. The company will assume votes are still being counted until there is a public projection from at least two national news outlets.

Posts that “undermine faith” in the voting process, like suggesting results are rigged, are also against the rules and will be labeled or removed. But the type of label Twitter will apply — for example, a link to more information versus a warning screen that covers the original post — will depend on the user and the language posted, a spokesperson said.

Twitter, like Facebook, has a voting information hub that will be updated throughout election night with tweets from mainstream news organizations.

A handful of product changes were unveiled Friday to discourage users from sharing misinformation more widely. Twitter will show users links to more information when they try to retweet a post that has been flagged as untrue. The company will also prompt users to add their own comments to retweets instead of just sharing them directly, an effort to get people to provide more context for the content they share.

A feature called Birdwatch, which the company is considering as a tool to let users help fact-check the service, won’t be ready or used for the election, according to a person familiar with the company’s plans. Twitter is also considering more visible labels for fact-checked posts, though it’s unclear if those will be used before Election Day.

In many cases, misleading information posted on both services will be labeled, but not removed, leading some election watchers to question how effective this strategy will be. “It’s just as problematic” to leave misleading posts up even if they have a label, said Michael Serazio, an associate professor of communication at Boston College. “In 2020 there is a real fear and anxiety about what will circulate on social media in the absence of a declared winner.”

Labels linking to additional information put the onus on Facebook users to do the actual fact-checking, as the tags themselves don’t necessarily say outright that a post is wrong, said Gautam Hans, a First Amendment law professor at Vanderbilt Law School.

“It’s just more information for people to parse through, and people are notoriously bad at doing that,” Hans said.

Google-owned YouTube is dealing with similar issues. The video platform draws millions of viewers to its political and news videos, which have seen increased traffic since the pandemic began. The company has taken several measures to suppress political misinformation on its site. Google itself plans to shut off political ads on its properties once polls close on Nov. 3. YouTube also bans doctored footage and videos that incite violence, mislead people about voting or “interfere with democratic processes.”

However, the company declined to say how it would treat videos from a candidate or others that declare an outcome that doesn’t match official counts. “We continue to stay vigilant and are working to have the right protections in place leading up to, on and after Election Day, globally,” said Ivy Choi, a YouTube spokeswoman.

One challenge for all the online services will be speed. Even if these companies do label posts from politicians who violate their rules, those messages can achieve massive reach in minutes. After Trump tweeted earlier this week suggesting that COVID-19 was no worse than the flu, Twitter hid the post behind a warning label — but not before it garnered more than 180,000 likes and more than 43,000 retweets. Some of Twitter’s labels, like the one applied to Trump’s COVID tweet, block users from liking or sharing a tweet further. But by the time companies act to halt the spread of misinformation, much of the damage may already be done.

Some lawmakers expressed worry that Facebook and Google’s advertising policies are short-sighted. Blocking political ads on Facebook and Google after polls close on Election Day will “do little to stop bad actors from pushing dangerous disinformation organically,” wrote U.S. Rep. Cheri Bustos of Illinois, the chairwoman of the Democratic Congressional Campaign Committee, and Sen. Catherine Cortez Masto of Nevada, the chairwoman of the Democratic Senatorial Campaign Committee, in a statement. The two lawmakers also questioned whether technology companies were prepared for “potential run-off scenarios or urgent announcements like protocol around recounts.”

“I think that it is good that the companies are aware of this,” Hans said. “November 3rd is going to be election night in some ways, but I just think this is going to play out a lot longer than that for obvious reasons, and I hope that the companies are ready for that saga.”

Kurt Wagner, Bloomberg News