Like ancient civilizations lost to time, the internet is full of crumbling ruins of social networks such as Myspace, Friendster, and Google Plus. It's a sobering reminder that every once-great empire eventually declines and falls. So far, Facebook has managed to avoid this fate, remaining relevant for over 17 years. But Facebook has learned that longevity comes at a cost: namely, public criticism and government scrutiny of its content moderation policies. Facebook has earned an ignominious reputation for helping to spread harmful misinformation and conspiracy theories to an audience of 2.85 billion monthly active users around the world. To combat this, and to placate disgruntled world governments, Facebook has imposed increasingly draconian restrictions on speech on its platform, and there is every reason to believe that this is the dystopian future of Facebook and of social media at large.
This week, Facebook debuted a widely mocked feature to combat "extremist content" on the platform. Pop-up messages ask users, "Are you concerned that someone you know is becoming an extremist?" Another pop-up informs the user, "You may have been exposed to harmful extremist content recently," and "You can take action now to protect yourself and others." The pop-ups direct users to a support page. The feature has not been well received, to say the least. But we should expect to see more of this, not less, in the future.
Already, algorithms comb through your Facebook posts, searching for problematic keywords or phrases. Any post on the subject of Covid-19 vaccination, for instance, includes a disclaimer about the safety and efficacy of vaccines. This isn't exclusive to Facebook, either; Twitter labels certain tweets as "disputed," and YouTube provides "context" on commonly misrepresented subjects, such as the Holocaust. In isolation, it's hard to take issue with any one of these measures: vaccines are safe, and the Holocaust did happen. The problem is how loosely Facebook defines a nebulous concept like "extremism," and who gets to decide what is or isn't "extreme."
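To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of keyword-based labeling described above. Every detail in it (the keywords, the disclaimer text, the function name label_post) is a hypothetical stand-in of my own; Facebook's actual systems are proprietary and considerably more sophisticated than simple string matching.

```python
# A deliberately simplified, hypothetical sketch of keyword-based post labeling.
# The keywords and disclaimer text are invented for illustration only; real
# moderation pipelines rely on machine learning, not bare substring checks.
DISCLAIMERS = {
    "vaccine": "See official guidance on the safety and efficacy of Covid-19 vaccines.",
    "holocaust": "See authoritative historical context on the Holocaust.",
}

def label_post(text: str) -> list[str]:
    """Return the disclaimer labels triggered by keywords found in a post."""
    lowered = text.lower()
    return [note for keyword, note in DISCLAIMERS.items() if keyword in lowered]

if __name__ == "__main__":
    print(label_post("Just booked my vaccine appointment!"))
    # ['See official guidance on the safety and efficacy of Covid-19 vaccines.']
```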
Facebook has partnered with third-party fact-checkers to identify misinformation on its platform, but a fact-checking website like Snopes or PolitiFact isn't infallible. For instance, there is a very real possibility that Covid-19 escaped from the Wuhan Institute of Virology, yet many fact-checkers dismissed the story as a "debunked conspiracy theory," and discussion of the topic was taboo on Facebook for most of the pandemic. And Facebook still hasn't accounted for why it restricted the sharing of a New York Post article about Hunter Biden's wayward laptop in the weeks before the 2020 election. It was a frivolous tabloid story, to be sure, but certainly not one deserving of censorship.
In an appearance last week on the Lex Fridman Podcast, evolutionary theorist Dr. Bret Weinstein, an old friend of The Standard, once again ventured far beyond his field of expertise to suggest that social media should only regulate speech that is illegal. (Never mind that Dr. Weinstein, an atheist, would be censored in the 71 countries that have laws against blasphemy. Facebook is an international platform, after all.) This may be all well and good in a country where the people can be trusted to discern for themselves what is or isn't true. But the United States has never been that country; even the founding fathers harbored a deep distrust of the American body politic. The events of January 6th showed that misinformation spread on social media has real-world consequences; indeed, the Capitol insurrection was organized, in part, on Facebook.
Under the much-maligned Section 230 of the Communications Decency Act, Facebook cannot be held legally responsible for user-generated content posted on its platform. Nonetheless, Facebook does moderate such content, fearful that if it doesn't, the government will. (Of course, it also wants to remain advertiser-friendly.) But contrary to the boneheaded rants and empty legal threats of former President Trump, who was banned from Facebook, Twitter, and YouTube for his role in fomenting the January 6 insurrection, repealing Section 230 would result in more censorship, not less. An internet where websites are held responsible for user-generated content would be a dystopia in which a user's every post and comment is scrutinized even more closely than it already is. Repealing Section 230 is not the solution to this problem.
Why not declare Facebook a "public forum" and thereby subject it to the First Amendment's protection of free speech? The problem with this solution is that the courts have consistently ruled against it. The First Amendment protects speech from the government; it does not apply to a private enterprise such as Facebook, which is free to moderate its content as it sees fit. If you don't like it, according to capitalism, you can take your business elsewhere. Never mind that there isn't really anywhere else to go; Facebook and Twitter dominate the social media space, and competing platforms with looser restrictions on speech, such as Parler, inevitably become fetid cesspools of hate speech and conspiracism.
Can Facebook be stopped? Well, sort of. The platform is falling out of favor with younger users, as Gen Z flocks to TikTok and Snapchat instead. In perhaps five to ten years, Facebook may be supplanted by another platform the way Facebook itself supplanted Myspace. But the problem of how to regulate speech isn't particular to Facebook; future social media platforms will have to contend with the same dilemma. What's needed is a change in the law (or, I would suggest, an international convention) to establish a framework for content moderation that strikes an appropriate balance between curbing "extremist content" and avoiding heavy-handed censorship.
Absent such a framework, Facebook won't be stopped anytime soon.