Last week, a bipartisan group of lawmakers in the Senate introduced legislation to prevent social media companies from allowing kids under 13 on their platforms.
In recent years, experts and whistleblowers have come forward with evidence of social media’s potential dangers to users, particularly its impact on the minds of young people. And recent efforts to ban TikTok have rightly spurred calls to regulate all social media platforms.
In theory, the Protecting Kids on Social Media Act is one step in that direction.
However, even if we assume that all the lawmakers co-sponsoring the legislation are acting in good faith, two things seem clear at the outset: They’re going to have a hard time passing this bill, and an even harder time enforcing it if it’s enacted.
First, let’s see what’s in the bill.
As NBC News reports, the bill “would set a minimum age of 13 to use social media apps, such as Instagram, Facebook and TikTok, and would require parental consent for 13- to 17-year-olds.”
A little later in NBC’s report, we get a bit more clarity on what this ban would look like in practice:
The bill would ban social media companies from recommending content using algorithms for users under the age of 18. It would also require the companies to employ age verification measures, and instructs them to create a pilot project for a government-provided age verification system that platforms could use.
Under the measure, the Federal Trade Commission and state attorneys general would be given authority to enforce the bill’s provisions.
On its face, the legislation doesn’t sound bad. But coming from a legislative body that has taken virtually no measures to curb social media use until now, this has the feel of a last-minute school project haphazardly thrown together.
Writing for Wired last week, Matt Laslo laid out some of the barriers to passing this legislation. Specifically, he mentions skepticism among Democrats and Republicans over the government’s ability — and even its authority — to enforce the bill, and its ability to establish an age verification system that actually works. (How, for example, would such a system guarantee a child gets permission from their guardian rather than any adult?)
There’s also confusion as to what constitutes a social media platform. The Protecting Kids on Social Media Act may be designed to prohibit children from using traditional, algorithm-based social media sites, but nowadays, platforms such as YouTube and Spotify — not typically known as social media — are implementing algorithmic recommendations just like the ones you find on Instagram and Facebook. There’s little reason to believe the government is currently prepared to handle the burden of regulating this rapidly expanding universe of platforms with manipulative elements.
Social media manipulation is undoubtedly an issue. But lawmakers should understand that regulation needs to happen in steps. In that vein, I agree with the conservative writer Adam Thierer, who wrote in 2022 that an outright social media ban for young people would “not be effective,” in part because we as a society “fail to fully grasp the nature of each new medium that youth embrace.”
Lawmakers need to educate themselves about the exact platforms they intend to ban, why they intend to ban them and how they wish to do so. And then they need to launch a public information campaign to persuade the public that this is a worthy endeavor.
Absent these steps, any social media ban seems doomed to fail.