The Supreme Court could fundamentally reshape the law that determines whether and when tech companies can face legal liability for content posted on their sites. If you’re reading this column, chances are these changes could affect you.
Under our current legal framework, established over a quarter century ago (roughly 200 years in internet time), huge technology companies such as Twitter, Facebook and YouTube are largely immune from being sued for content that users post on those sites. That means if you want to pop over to Facebook and post some defamatory statements about me, I can, with very few exceptions, sue you — but not Facebook.
But the Supreme Court should not be the body to balance those interests. This week, the court is hearing oral arguments in two cases that ask when, under federal law, tech companies can face legal consequences for content on their sites. But, as a number of the justices repeatedly noted during oral arguments in one of the cases, the court shouldn't be the institution fashioning new rules for a new internet age: rules that balance the desire to protect tech companies, and prevent a floodgate of litigation, against the potential need to impose some liability on these giants. The court is interpreting federal statutes, not the Constitution, so it is well within Congress' purview to enact these changes. In fact, it is our representatives' responsibility to update our laws to reflect changing technology.
But because Congress appears mired in indecision, the court may be the one to reshape the rules for social media giants.
The issue in the first case, Gonzalez v. Google LLC, is whether a federal law protects social media companies from liability when those companies make recommendations to users about other users’ content or amplify certain content. Imagine that based on previous videos you have watched, YouTube, owned by Google, recommends additional videos. If, for instance, you’ve watched four YouTube videos on knitting, chances are YouTube’s algorithm might conclude that you’d like to watch a fifth.
Currently, that federal law, Section 230(c)(1) of the Communications Decency Act, immunizes social media giants from liability when they do things such as decide whether to post or take down certain content. The question is how far that legal protection stretches and whether it includes decisions to recommend and amplify other users’ content. In this case, the family of an American killed in an Islamic State terrorist group attack in 2015 in Paris sued YouTube (again, owned by Google) claiming that by directing users to certain content, YouTube was helping ISIS’ recruitment activities.
The second case deals with similar questions but comes at them through a different legal lens. In Twitter v. Taamneh, the general legal issue is whether social media companies can be liable under a different federal law, Section 2333 of the Anti-Terrorism Act, for aiding and abetting terrorism due to a terrorist group's posting on the site. The specific question asks whether Twitter could be viewed as providing substantial assistance to terrorists by failing to take more aggressive steps to prevent them from using Twitter. This case was also brought by a family whose relative was killed in an ISIS attack abroad, this one in Istanbul in 2017.
We know that at least four justices voted to consider these cases (it takes four justices to agree to hear a case). This means four justices were interested in revisiting the laws that largely immunize social media companies from content that others post on their sites. The increasingly influential Justice Clarence Thomas has written about wanting to limit the protections given to social media companies under our current legal framework.
We always need to be careful about reading too much into the justices’ questions at oral arguments. Having said that, after oral arguments in the Google case, it appears increasingly unlikely that the court will use these cases to upend our current legal framework. There seemed to be interest in either kicking the case back to lower courts to apply new judicial standards or kicking this to our legislative branch to strike the proper balance on these questions.
If the Supreme Court does, in fact, shrink protections for social media companies, users can expect more aggressive content moderation, which would be the only way for companies to avoid the threat of constant litigation.
In a social media world awash in misinformation and disinformation, more content moderation may sound like good news. There is, however, an important distinction here between less disinformation and less speech. One is a goal; the other is a sign of an unraveling democracy.
Ultimately, as some of the justices themselves seemed to acknowledge, we want the people closest to “we the people,” our representatives, to craft new laws for a new world. As Justice Elena Kagan jokingly pointed out in oral arguments in the Google case, “We’re a Court. We really don’t know about these things. These are not, like, the nine greatest experts on the internet.”
It’s time to update rules written before we gave new meaning to words such as tweets, reels and grams. But that should be done after input from various stakeholders. Yes, this means holding congressional hearings, drafting legislation and voting. It shouldn’t mean oral arguments and opinion drafting. Congress, take the wheel.