Twitter, Facebook and Instagram once had a decent tool to fight misinformation. It’s gone.

Blue checks have changed from a trust-building tool to a new battleground for culture wars.

Meta announced this week that it’s overhauling its system for giving out verification badges, the blue check marks pinned to the Instagram and Facebook profiles of public figures such as politicians, celebrities and journalists. In the past, the badges were granted to people and organizations based on their status as influential figures who are particularly vulnerable to impersonation; because their identities were officially confirmed, their statements could be reliably attributed and reported on. Now anyone can buy a badge for $11.99 a month on the web or $14.99 on mobile.

It’s likely that Meta CEO Mark Zuckerberg was inspired to unveil this plan after watching Twitter CEO Elon Musk make a similar move a few months ago, when he rolled out Twitter Blue subscriptions. A blue verification badge on Twitter used to be tied to qualifying as a public figure; now it goes to anybody willing to pay 8 bucks a month, and their identity does not need to be authenticated.

This is not just a minor tweak of the system: Making verification badges available to anyone for a monthly rate transforms their entire meaning. Twitter and Meta appear to be banking on it as a way to monetize users seeking clout and influence. But the public is losing one of the most useful ways to mitigate the spread of misinformation and disinformation.

My general perspective on the issue of regulating misinformation is that we mostly can’t trust corporations to do it. On a philosophical level, I reject the claim that there is such a thing as objectivity when it comes to assessing complex truths. On a political level, I have little trust in unregulated, profit-seeking, engagement-maximizing entities to serve the public interest in their evaluations of what’s true or not. And on an empirical level, social media companies, with their wayward algorithms and opaque decision-making processes, have not yet demonstrated the operational capacity to regulate information with the precision and transparency the task requires.

This is why I’ve always been fond of the verification badge system: It allows social media companies to deal with the question of bad actors peddling false information with a lighter touch. Social media companies help the public sniff out impostors by confirming that influential figures who shape public discourse are who they say they are. In the context of a political climate in which many actors weaponize misleading and false information, it is valuable to have clear identifiers of how people are affiliated with specific institutions and to ensure their biographical backgrounds are more easily searchable. A viral tweet about Covid means something different coming from an epidemiologist than it does coming from a lawyer, and it can be taken more seriously when we know that the self-identified epidemiologist is who they say they are.

Verification of identity isn’t verification of truth. Public figures like journalists or scientists are capable of pushing out misleading information. But they generally have an interest in maintaining a reputation as honest brokers, and it is professionally costly for them to behave in bad faith. More importantly, verification allows users to assess the credibility of an influential speaker on their own.

Old-school verification had its flaws, in part because the process for determining who could be verified should have been much more transparent. But overall it mitigated some of the problems of living in a sea of shoddy and false information without being intrusive and without requiring companies to weigh in on what’s true.

That looks like it's changing. Under the new rules, verification will no longer reliably serve as a tool that can help us understand the identity or credibility of any given individual. In fact, it could be weaponized to do the opposite. That's because the new verification systems are not primarily about confirming identity, but about social media companies aiming to cash in on the social status associated with blue check marks.

While blue checks were not originally designed to confer clout on users, over time some have come to perceive them primarily as a marker of power online. Now Meta and Twitter want to capitalize on that aura of clout. They're luring people to subscribe by promising not just a verification badge but also better placement in the algorithm, giving them more visibility and reach online. That's a draw for all kinds of aspiring cultural and political influencers, as well as various kinds of content creators. Some estimate Meta could make billions from new verified subscriptions.

Simultaneously, identity confirmation is either being discarded or has become questionable. Twitter Blue does not confirm people's identities. When Musk first rolled it out, it led to a massive spree of impersonations, including one of a pharmaceutical company that wrecked its valuation. Twitter suspended the service amid a firestorm of controversy (and mockery for being so shortsighted), then relaunched it with additional requirements that slow down the application process. But notably it still does not authenticate people's identities, and it does not protect against impersonation of others, including major public figures. Meta’s new mass verification program, which has begun to roll out in Australia and New Zealand, does at least require a government ID to authenticate a profile. But it remains to be seen whether it has adequate safeguards against more sophisticated impersonation attempts; tech security experts say these systems can be duped. It's safe to guess that, at least for some time, many social media users will be confused about what blue checks signify and place undue trust in them.

The changes in the verification system reflect broader problems with our social media ecosystem. Meta and Twitter have created highly influential civic spaces that have a tremendous impact on society. But they mainly look at these spaces as opportunities to engineer new ways to exploit people's attention spans and desire for affirmation and influence. In the case of Twitter, it also seems that Musk, who has increasingly identified as right-wing, is partially motivated by the idea of waging war against establishment media. He has boasted about Twitter Blue as a "great leveler," promised to scrap the old blue check marks entirely, and implied that he views winding down the old system as a way to disrupt the influence of establishment media because he doesn't trust it.

To be fair, we're entering a transitional phase with verification, and it's possible that solutions will emerge in the coming months and years. Perhaps identity verification and anti-impersonation rules will grow more robust. Maybe a new taxonomy of badges can be used to distinguish between different kinds of users. (Twitter is trying to do this to some extent with affiliate badges, but they have not been widely adopted, and there are questions about their value given that Twitter Blue doesn't do proper identity confirmation.) But overall the meaning of the blue check has shifted from a way to help build trust and clarify where information is coming from to a battleground for culture wars and influence. Hopefully, social media companies will think more carefully about ways to help people develop trust in the sources of their information. But it's evidently not their top priority.