It’s Complicated: Free Speech and Trust & Safety - Part 1
An exploration of the many facets that go into holding free speech and safety values at the same time, and when one should be prioritized over the other
I don’t think a day has gone by since maybe mid-2015 when I haven’t thought at least once about the tradeoffs between free speech and what’s appropriate to say online. Frankly, my experiences with this topic - albeit in the analog world - go back to 2001. That year, David Horowitz ran an ad in a series of campus newspapers, including where I worked, the Badger Herald at UW-Madison, that caused massive protests and a national debate about free speech on campuses.
Recently, this tradeoff has been causing me some sleepless nights after I talked to the Wall Street Journal and others about Meta’s policy around 2020 election denialism in ads. I hate the rhetoric, but candidates are allowed to lie in broadcast ads, and I think there is a difference between past and future elections. But I’m getting a little ahead of myself. It’s not just Meta that’s been under scrutiny; The Atlantic recently ran an article about Substack hosting white nationalist content.
Moreover, there’s been discussion of advertisers leaving X, and Musk and CEO Linda Yaccarino are trying to make it a free speech issue. Last week, Ben Thompson did a whole episode on Sharp Tech about free speech. My former Facebook colleague Brandon Silverman had emailed me about it before I even listened, as he was frustrated with Ben’s arguments. When I listened, much of what Ben said resonated with me (full disclosure: Ben and I both worked at the Herald when the Horowitz ad ran), but I also got where Brandon was coming from. These debates are about more than just supporting free speech; advertisers have every right to leave a platform if it doesn’t fit their business objectives.
A few weeks ago, I was trying to think of ideas for a panel discussion, and I thought about how much I’d love a deeper discussion on whether the value of freedom of speech can coexist with trust and safety work. We all dance around the topic, but rarely do we tackle it head-on. My gut says they can coexist, but I had difficulty squaring it all. So, I decided to write a newsletter on it.
My goal in this was not to come up with a grand solution. I think we as a society are a long way away from settling on new norms. I wanted to highlight all the facets we need to consider when thinking about these issues and how they manifest themselves. My original list was 30, thanks to some crowdsourcing I did on Threads and LinkedIn. I’ve narrowed that down to 20. I’m doing ten of them today and ten of them next week, so I don’t saddle you with a really long newsletter.
What follows is a brief description of those and where I land on some of them. Do know that I am only scratching the surface. All of this could be the length of a book if I really dug into it.
Please support the curation and analysis I’m doing with this newsletter. As a paid subscriber, you make it possible for me to bring you in-depth analyses of the most pressing issues in tech and politics.
Policy vs execution: The first thing I always remind people when we’re talking about content moderation is that there is a difference between the policy a platform might have and how it executes against it. There are a ton of facets even within this topic: Do they enforce proactively or reactively? Should content moderation happen at the time of posting or after the fact? When you are evaluating a platform, think about whether you disagree with its policy, how it is executing that policy, or both. My two cents on the questions I posed: I think it’s better for a platform to have a policy even if it only enforces it reactively, because being able to do something is better than nothing, and I don’t think we should have an approval process before someone posts. That would suck all the usefulness out of these platforms.
Spirit of the policy vs letter of the policy: In an ideal world, we all just want to know what the rules are. That gets complicated with content moderation because you can’t always anticipate every way someone might use your platform. This is why some platforms employ a spirit-of-the-policy approach: even if something isn’t explicitly spelled out as violating, it can still be actioned because the policy was intended to cover incidents like it. Another approach is to hold strictly to just what the policy says. Both of these have pros and cons. Yoel Roth and I talked about this one when he came on my podcast if you want to dig a little deeper.
Content vs behavior: Most public discussions on this topic tend to focus on what a piece of content says. However, that’s not the only thing platforms look at. They can also take action on the behavior of the actor. Are they spamming? Pretending to be someone they aren’t? In 2019, Camille Francois put out a paper about the Actor, Behavior, and Content framework that is worth reading. I don’t think we talk enough about what behavior is or is not acceptable.
Speech vs reach: Ever since Renee DiResta wrote this piece in 2018, it’s been quoted all over the place. Suddenly, people had a new way of parsing out someone’s right to say something versus their right to have it amplified. It’s a worthwhile debate, and one where I tend to like the elegance of letting someone still post but not amplifying it. That said, Daphne Keller, whom I also respect, has warned that from a regulatory standpoint, laws against reach might violate the First Amendment, too.
Paid vs organic: Most platforms have stricter rules for what can be said in paid advertisements versus organic, or free, content. Applying that to political speech is tricky. Broadcast stations regulated by the FCC have to accept candidates’ ads even if those candidates lie. Facebook has decided not to penalize politicians over fact-checks, which means that even though other entities can’t run ads on claims rated false by fact-checkers, a politician can. Now, online platforms aren’t beholden to the same rules as broadcast, and many have taken different approaches, including banning political ads altogether. I still need to do a deep dive into online political ads, but when you are thinking about your policies in this space, you need to decide whether you want different rules for paid versus organic content and what those are.
Speech vs brand safety: The flip side of number five is what your advertisers want. Many of them do not want their ads appearing next to objectionable or controversial content. They will insist that platforms build ways to ensure that doesn’t happen, and failure to do so can cause some to stop spending on ads. But that’s not the only reason advertisers might stop. Maybe their ads aren’t getting the return on investment they are looking for. Maybe the platform owner is saying and amplifying horrible viewpoints. Advertisers do not need to stick around to support free speech. They have a right, just like everyone else, to go somewhere else if they want.
Under 18 vs over: Protecting kids online has rightfully been at the forefront of many conversations in Congress. Platforms have rules about kids under 13 having accounts, but those are very hard to enforce. They don’t necessarily have different policies for users under 18, though some do have specific products for kids. Recent attempts to ban under-18s from certain apps have been blocked because the courts have determined kids have First Amendment rights, too.
Truly public space vs private platform: In the early 2010s, some platforms liked to declare themselves the online town square. While it made for a nice soundbite, the truth is that they are a bit more like the stores and restaurants around the town square. I don’t have the space to get into the details of what businesses can and can’t do, but in general, they have their own First Amendment rights as well. A further challenge for these platforms in determining what is or isn’t allowed is whether there should be different rules for posting on a page or account with a lot of followers (we’ll discuss how much is a lot another time) versus sending a message to one or a few folks. It’s similar to there being different expectations for saying something loudly in a restaurant versus in your own backyard.
Newsworthiness vs fairness: This is another one of those that could have an entire book written about it. When thinking about speech versus harm, a platform must weigh the public’s interest in knowing what an elected official has said against treating that official like any other user. I understand the desire for fairness, but it’s just not how reality works. Political speech has long gotten many exceptions. Maybe that needs to change, but that’s a much larger discussion. On the flip side, the Supreme Court will be ruling on a case about whether politicians should be barred from doing things other users can do, like blocking people from engaging with their accounts. In Europe, regulators are considering different rules for media organizations under which platforms would have to give 24-hour notice before removing their content. Having the exact same rules for everyone online when that doesn’t exist offline is difficult.
Speaker rights vs listener rights: When people and organizations use an online platform, they take on different roles at different times. Sometimes they are content creators. Other times they are just consumers of content, or they are engaging in a back-and-forth with others. They might even be the target of content without actively engaging with its creator. All of those roles carry rights that can be at odds with one another. Many people don’t post on social media because they don’t want to be attacked; that fear has silenced them. Others don’t want to consume inflammatory or spammy content, and they should have the right to avoid it. We’ll get into whether the answer here is to give users more control over what they do or don’t see, but everyone’s perspective needs to be taken into account.
Phew, ok. This is a lot. As I mentioned, my goal here wasn’t necessarily to have answers but to demonstrate the many facets that go into these conversations. Part two will come next week. To give you a sense, here’s what the other ten are:
Foreign vs domestic
Past and present vs future events
Praising vs condemning
Imminent harm vs other harmful content
Taking content down vs labeling or demoting
Platform decisions vs user control
Due process vs scalability and speed
Bias in outcome vs bias in enforcement
Telling people the rules vs giving them a roadmap for going right up to the line
Want to do vs resources to do
Please share your feedback with me. What do you think I got right, got wrong, or missed?