Four years ago, in the run-up to the 2016 election, a lot of people were afraid that malicious foreign actors would use social media disinformation to interfere with American politics. And, as it turned out, they had good reason. But increasingly during President Donald Trump’s term, the threat of disinformation came not from outside the country, but from inside our own house — the White House, to be exact.
In 2020, tech platforms have shifted from being gatekeepers against foreign disinformation campaigns to navigating a whole new type of content-moderation challenge: What happens when disinformation comes from the top?
During this election cycle, Twitter has been leading the fight against election disinformation. The platform took a decisive stance on the issue, flagging many of President Donald Trump's own tweets as false or misleading, a step the company had taken only rarely since it first began fact-checking his tweets earlier this year. Twitter has added interstitial disclaimer screens over the tweets, so users have to click past the warnings in order to view the underlying content. Facebook continued to show similarly problematic posts without requiring users to click through, but added warnings to the bottom edge of the posts.
Facebook reports it is now prepared to take a stronger role in taking down disinformation, including taking steps to shut down groups dedicated to spreading disinformation or advocating for political violence. For example, on the Thursday after Election Day, Facebook shut down a viral group, "Stop the Steal," which had gained more than 350,000 members in a single day while promoting false information about the election. As Trump continues to insist he will not concede the presidency to President-elect Joe Biden, Facebook has also announced a prolonged pause on political ads, a policy that may prove to have a more substantial impact on crucial political events, like the Georgia Senate runoffs, than first realized.
Misinformation from Trump is no new issue. In 2019, Vice President-elect Kamala Harris launched a campaign urging Twitter to suspend Trump's account, citing his history of false, offensive, and inflammatory tweets. The big question, of course, is: Why did it take so long for tech platforms and media outlets to wise up to Trump's antics? One explanation is that these companies, and the American people, were simply caught unawares the first time around. Before Trump, we had been conditioned to expect that the president of the United States would generally say things that could, under some charitable reading, at least be argued to be factually true.
However, that doesn't explain why platforms have been so slow to deal with disinformation coming from the top, despite countless examples of foreign and domestic election interference dating back to before 2016. One possible explanation is simply that power dynamics have shifted. With the Trump presidency waning, tech companies may now feel less concerned about repercussions for checking the president. Given the decisive results of the election, tech companies can freely rein in disinformation coming even from the White House, confident that America (and the world) will support them.
Just as the independent role of the free press is critical for checking the power of the government, so too, we are finding, is the independent role of companies and private individuals. Tech companies have finally been able to take decisive action against disinformation, standing strong against the power of the president, and they could not have done so without the ability to independently control their platforms and the speech they host on their sites and services. Some policymakers want to restrict that ability, but we must be careful: To be truly beneficial, any regulation of speech on private platforms must leave companies like Twitter and Facebook enough latitude to continue acting independently, supporting free speech and ensuring users have access to verified information.
Tech platforms like Twitter, Facebook, and YouTube have been dealing with misinformation for years now. But throughout four long years of Trump's abysmal record of misrepresenting nearly everything, tech companies have never taken actions as strong as the ones they are taking now. Each has developed content moderation policies and workflows to flag, take down, and stop the spread of false information, with mixed results. Policymakers have accused platforms of political bias in content moderation, though there is no evidence of this. Some have criticized tech companies for not taking down enough content, while others have criticized them for taking down too much.
What is clear in all of this is that tech companies do have the capacity to stop the spread of disinformation and protect the security of our elections, even when it comes from the president. Much of the success (if we can call it a success at this early stage) comes down to years of research, consultation with experts, and many, many hours spent working through how to effectively design and implement protocols to stop the spread of disinformation. From designing disinformation warning labels to creating policies on election integrity and on when to take down misleading posts, tech companies have essentially been preparing for years for this election cycle, the greatest challenge any of them have faced so far, and one with global consequences.
Trump and his associates want you to believe that he has won the election. In the days immediately after the election, Trump's campaign wanted you to believe him when he said the counting should be stopped (and, confusingly, he also wanted you to believe him when he said the counting should continue). Even today, after it has become abundantly clear to most Americans, and most world leaders, that the election has been settled and Biden is the president-elect, Trump still wants you to believe that the mail-in ballots were fake, the neutral vote counting was rigged, and the election itself was a fraud. He wants you to believe so many lies. But this time, tech platforms are pushing back.
Traditional media won't let him get away with spreading false information, either. On the evening of Nov. 5, Trump attempted to give a press conference from the White House, spouting more false information about the election. In the past, networks have wrestled with whether and how to carry such misleading content. This time, MSNBC, NBC, ABC, CBS, and NPR all cut away from the broadcast, citing the falseness of his claims. CNN and Fox carried the broadcast but fact-checked the misinformation live and immediately after. Just yesterday, Fox News cut off a stream of false information from White House Press Secretary Kayleigh McEnany, with host Neil Cavuto stating he could not "in good countenance" continue to show the false claims from the White House representative.
What we’re seeing is a sea change in the way both tech and media treat Trump and his propensity for making claims that are, at best, without strong evidence, and at worst, intentionally false and misleading. This strong united front from both new and traditional media is critical, as Trump and his associates continue their futile, embarrassing fight against the lawful election results and obstruct the peaceful transition of power.
Regardless of whether they should have acted sooner, or acted more boldly, both tech and the media are now taking strong stances on combating misinformation, even when it comes from the highest office in the land. It's certainly too much to say that Twitter and TV news are saving our democracy, but they are at least finally doing their part to better uphold it.