Image generated with Midjourney using the prompt, “people yelling at one another in front of the Supreme Court.”
I struggled to write this newsletter this week. I was **this** close to sending an old newsletter, slapping a “From the Archives” label on it, and being done with it. The imposter syndrome has been strong this past week, and I’ve struggled to find creative inspiration.
However, I’m taping two interviews this week, and in prepping for them, I’ve wanted to step back and refresh my values around freedom of speech and content moderation. Both interviews are about the online information environment and what to expect as we head into 2024.
Please support the curation and analysis I’m doing with this newsletter. As a paid subscriber, you make it possible for me to bring you in-depth analyses of the most pressing issues in tech and politics.
The questions I got from the producers won’t surprise you. They focused a lot on whether the social media platforms are doing enough, how worried I am about disinformation, and how much we should all panic.
I found myself pushing back quite a bit, pointing out that things aren’t as simple as their questions would make them seem. (I plan to work in my “panic responsibly” phrase as much as possible. Free stickers are still available here if you want one!)
One of the reasons I started this newsletter was to help me think through some of these issues, so I figured I'd make this one an exploration of my values and how I try to balance the challenges that come with freedom of expression.
My standard approach has generally focused on the tension between protecting freedom of speech and preventing harm. However, the Knight First Amendment Institute’s Jameel Jaffer has an excellent op-ed in the New York Times looking at the cases in front of the Supreme Court, and he makes an important point:
“One striking feature of these cases is that they involve conflicts internal to free speech — not conflicts between free speech and other values, like equality or national security, but conflicts between the competing free speech claims of government, platforms and ordinary citizens.”
Yesterday morning, I quibbled with this point on Threads, but our points are not mutually exclusive. While he is correct that the cases before the court are about competing free speech claims, resolving those claims will require weighing other values.
Let’s first look at the role of government. This features prominently in all three cases:
- Can the government block people from their social media accounts?
- Can the government dictate to private companies how to moderate content?
- Can the government go too far in pressuring companies to moderate content when they can’t legally require them to do so?
The First Amendment is pretty clear that it’s all about preventing the government from censoring speech. It does not apply to private companies (including online platforms), researchers, civil society, individuals, etc.
I’m a little bummed I haven’t had a chance to start reading Jeff Kosseff’s new book, “Liar in a Crowded Theater: Freedom of Speech in a World of Misinformation,” because in it he focuses on how much the law protects falsehoods and why we don’t want the government in the business of determining what content we can or can’t see.
Given this, it would be easy to assume the answers above would be no, no, and no. However, let’s flip this around.
In his piece, Jaffer says that he and Knight believe government officials shouldn’t be able to block people from engaging with their accounts. I had an interesting conversation with Alex Howard about this on Threads, too. Both feel strongly that this shouldn’t be allowed.
I agree that a government official shouldn’t be able to block someone from seeing their content at all. But where it gets trickier is when it comes to preventing them from commenting. I’m not fond of a government official stopping someone from commenting just because they are saying something negative. I have more sympathy when that person is spamming the comments section over and over, harassing the elected official or harassing other commenters.
Just in that paragraph, I moved across the entire spectrum of whose speech rights should win out, and my answer shifted as other values came in, like harm to another speaker. Drawing these lines is hard, and enforcing them at scale is even harder - assuming you want the social media companies to help enforce whatever the Supreme Court decides.
That takes us to issues two and three. Can the government dictate to companies, whether by law or coercion (aka jawboning), how they should moderate content?
Some would say the straightforward answer is no; the companies have their own First Amendment right to moderate content however they want.
That’s fair, but it doesn’t mean we want the government and social media companies to avoid talking at all. Remember, after the 2016 election, the complaint was that the government and social media hadn’t worked together enough to fight foreign interference. And, last I looked, the threats from Russia and China are still very real. There’s that national security angle.
This brings us to the responsibility platforms have - especially since many have said that protecting freedom of expression is one of their highest values.
Jeff puts this well in the one section of the book I skimmed while trying to find inspiration for this piece:
“Providing platforms with the discretion to moderate harmful but constitutionally protected false speech is far from a panacea, but it is better than either an entirely unmoderated Internet, or an Internet in which the government can determine what content is blocked from users. While some large platforms have substantial power over speech, they are not the government. They cannot issue fines. They cannot send police to your door. They cannot throw you in prison. They are subject to competitive pressures, though it might not seem like it given their size. Facebook was once seen as an upstart competitor to MySpace. And now TikTok is emerging as a serious challenger to the US market dominance of Facebook, Instagram and Twitter. While it is harder to dethrone massive social media platforms than it was even a decade ago, it is possible, and content moderation practices help differentiate the companies.”
Regarding the platforms, I agree with having a strong value around freedom of expression. I will continue to defend Meta’s decisions not to penalize politicians for being fact-checked and to let former President Trump back on the platform. As I’ve written before, I think people have a right to hear what those who want to represent us have to say, and I don’t like what it says to other countries when an American platform doesn’t allow the leading GOP presidential candidate on it.
Moreover, censorship can have real, damaging consequences. One of my dad’s friends (Hi Pete!) often writes to me when I publish posts like this to say that I’m too comfortable with removing speech and don’t talk enough about the dangers of censorship. I always appreciate it when he writes because I need that kind of pushback to make sure I’m not straying too far from my values. He’s also right that censoring too much speech can damage our democracy, and I don’t like the idea of those decisions being made by platforms that have so much control over online discourse.
That said, we need trust and safety teams, and we need to moderate some content online. Most can agree that child sexual abuse material (CSAM) and terrorist content should be removed. Most aren’t going to want spam. You don’t want people feeling so harassed that they feel unsafe speaking up on a platform. You don’t want wrong information about where, when, and how to vote.
We also want researchers to be able to monitor what is happening online. Monitoring is not censorship. It’s understanding the information environment so people can more effectively counter-message, understand how information flows, and know what influences people to make certain decisions.
Do platforms potentially take down content because someone found it while doing this monitoring? Yes, but they make that call independently. Do we need more transparency in the companies' decision-making process? Yes.
Right now is a tech policy and First Amendment nerd’s dream. As we saw this week with the first Supreme Court arguments, justices struggled to “define which accounts and pages should be deemed official and open on equal terms to all readers and commenters.”
The Supreme Court rulings won’t be the final say on these issues, but they will have a huge impact on how content is handled online. I increasingly think platforms will continue to move away from a leave-it-up-or-take-it-down approach toward more nuanced ones, such as reducing reach or removing engagement options. Some will try to reduce reach for all controversial content - like politics - and others won’t have the same virality problems as the big platforms, so they can make different choices. As the Discord head of trust and safety told Semafor, “We need more scalpels and less hammers as an industry.”
I’m also encouraged to see people shifting more to counter-messaging rather than trying to take down what others are saying. That, too, is not a panacea, but it’s a start.
It’s all very nuanced, to be sure, but whatever the courts decide, we’ll enter a new set of challenges. For instance, what should platforms do with behavior or content that might violate the rulings while those questions work their way through the courts? That might be our next round of cases …