It’s Complicated: Free Speech and Trust & Safety - Part 2
An exploration of the many facets of holding speech and safety values at the same time, and of when one should be prioritized over the other
Ack! I’m sorry this is late. I set the wrong time when scheduling this earlier this week. Apologies.
Not surprisingly, many of you are also grappling with these questions about speech and safety. A huge thank you to those of you who shared your thoughts after the first part of this series. I’ve added two bonus dilemmas at the end of the list based on what some of you shared. If you are wondering what the heck I’m talking about, read part one first and then come back.
Let’s jump in.
Please support the curation and analysis I’m doing with this newsletter. As a paid subscriber, you make it possible for me to bring you in-depth analyses of the most pressing issues in tech and politics.
Foreign vs domestic: The 2016 U.S. election and the discovery of Macedonian teenagers running fake news sites and Russian interference changed modern-day content moderation. Fact-checking programs, political ad transparency tools, and terms like “coordinated inauthentic behavior” all started with the goal of preventing unwanted foreign activity on online platforms. These inevitably started to catch domestic activity as well - which is where things got a lot trickier. Sometimes, knowing what is foreign-based versus domestic is impossible at first. Most content from foreign entities is perfectly fine. Is it enough if a foreign entity is transparent about who is behind the messages? So many nuances to work out.
Past and present vs future events: One of the questions at the heart of the platforms’ decisions to stop enforcing rules against content about the 2020 election is whether past events should be treated differently than future ones - especially elections. After all, both parties have sown doubt about high-profile elections. However, people have been sworn in, and the government has continued to operate despite those doubts. On the flip side, we must help ensure future election processes hold. So perhaps it makes more sense to focus attention on the future than the past.
In a non-election-related case, a recent Oversight Board ruling tries to thread the needle on how Meta should handle content in the moment something is happening versus weeks later, when the context has changed. The decision is so convoluted I have no idea how anyone would operationalize it, but here’s what happened. There was a post showing people entering a police station in Haiti, attempting to break into a cell holding an alleged gang member, and threatening them with violence. Meta took the post down three weeks later, after it was flagged to them. The Board said that while they agree the video should have been removed while the incident was happening, by the time Meta actually acted three weeks later the threat of violence had passed, so Meta should have applied the newsworthiness allowance instead. 🧐 So, does this mean that platforms are expected to restore content after threats of violence have passed? Who decides that, and how do you do that at scale?
Imminent harm versus other harmful content: This is perhaps the number one nuance that most people either don’t realize exists or don’t agree on when it comes to handling harmful content. Many times, when platforms talk about their policies around violence, they’ll be sure to put the word imminent in front of it, which means “about to happen.” Meta has it in the Crisis Policy Protocol they created after January 6. It can be nearly impossible for a platform to determine if content might lead to violence. They can make their best guess, but acting on guesses will likely lead to a lot of legitimate speech being taken down, just in case. You can see how quickly this could turn into Minority Report territory.
Praising vs condemning: It’s sad to say, but when something bad, such as a terrorist attack or shooting, happens, there will be people who praise it in addition to those condemning it or raising awareness about it. Most platforms do not allow praise but will allow condemnation and awareness-raising. What gets challenging is trying to determine the intent of the person posting when they aren’t clear about what their goal is - and doing this in moments when facts are murky and things are moving fast.
Taking content down vs labeling or demoting: I don’t think we talk enough about the next step of content moderation once it’s been decided whether something violates - that is, what to do about it. Much of content moderation started with the binary choice of leaving something up or taking it down. However, over the years, platforms have created other levers they can pull, from labeling content to reducing the number of people who see it. This gets at the speech vs reach dilemma and is one we should explore more.
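To make the “more levers than up-or-down” idea concrete, here’s a minimal sketch - in Python, with entirely hypothetical names and rules I made up for illustration, not any platform’s actual logic - of how an enforcement pipeline might map a review outcome to one of several actions instead of a yes/no:

```python
from enum import Enum
from dataclasses import dataclass

class Action(Enum):
    LEAVE_UP = "leave up"
    LABEL = "label"      # add context or a warning; content stays visible
    DEMOTE = "demote"    # reduce distribution ("reach"); content stays up
    REMOVE = "remove"    # take the content down entirely

@dataclass
class Assessment:
    violates_policy: bool            # clearly breaks a written rule
    borderline: bool                 # approaches the line without crossing it
    disputed_by_fact_checkers: bool  # flagged as false or misleading

def choose_action(a: Assessment) -> Action:
    """Hypothetical mapping from a review outcome to one of several enforcement levers."""
    if a.violates_policy:
        return Action.REMOVE
    if a.disputed_by_fact_checkers:
        return Action.LABEL
    if a.borderline:
        return Action.DEMOTE
    return Action.LEAVE_UP

# Example: borderline-but-not-violating content gets reduced reach, not removal.
print(choose_action(Assessment(False, True, False)))  # Action.DEMOTE
```

The point isn’t the specific rules, which I invented for the example; it’s that the output is a spectrum of actions (leave up, label, demote, remove) rather than a single binary call.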
Platform decisions vs user control: Because it is impossible to design a system that makes everyone happy, platforms like Meta have been building more tools to give users control over their experience. These range from choosing a chronological versus curated feed, to whether or not you want to see political ads, to how much potentially violating content you are willing to see. On one hand, putting this in the hands of users makes sense. On the other hand, there is the worry that this could exacerbate the problem of people staying in their bubbles and not seeing various perspectives.
Due process vs scalability and speed: OK, admit it. How many of you roll your eyes or get sick and tired when I and others keep asking, “But how does it scale?!” 😆 I get why it’s annoying, but it’s also a real challenge. This is one of the biggest challenges with the structure of the Oversight Board. They get the luxury of taking weeks and months to look into an individual piece of content. In the real world, moderators need to make these calls in seconds. I think it’s valuable for some people to take the slower route to help platforms make these decisions better in the future. However, we do need to recognize that there are tradeoffs platforms have to make when moving fast, with billions of pieces of content posted every day.
Bias in outcome vs bias in enforcement: In an ideal world, we’d be able to make unbiased decisions. The reality is that we all have biases. The trick is to be conscious of them and do your best to take multiple perspectives into account. When it comes to content moderation, one also needs to determine which matters more: bias in the results of your decisions or bias in how you enforce your rules. For example: when deciding on a policy, do you take into account whether it will impact one political party more than another? Or do you treat different entities differently based on a characteristic they have? Others in trust and safety might see this dilemma differently, so if you do, please leave your perspective in the comments!
Telling people the rules vs giving them a roadmap to go right up to the line: It’s a phrase you hear often: “Just tell us what the rules are.” People rightfully get frustrated that platforms aren’t clear about what they can and cannot do. There’s a flip side to that: get too detailed about the rules, and you tell people exactly how to get right up to the line. You might say, “OK, so what?” The problem is that people tend to engage more with borderline content. Facebook has a whole theory on this that’s worth reading.
Want to do vs resources to do: This is another one that tends to garner eye rolls. People have trouble wrapping their heads around the fact that giant companies have to make calls about where they spend their time and resources. However, it is true - and is even more true for smaller platforms. No one can do everything all at once. Priorities and hard choices must be made. Facebook executive Andrew Bosworth (known as Boz) has a brutally honest post on this.
Speech vs reach, part 2: Back to one of my favorite tradeoffs. I’m adding this one as the first bonus because content is being distributed online in a lot of different ways today. Not everything is an algorithmic feed. Substack focuses on each writer building their own audience. People find content on Google and YouTube by searching for it rather than scrolling a feed, though both also make recommendations. Thinking about how a platform’s design itself can reduce trust and safety problems is called “Safety by Design.”
Going alone vs working with others (especially the government): Bonus topic number two comes from my readers who don’t like content moderation very much at all and really don’t like the government playing ANY role. At least in the United States, the decision of what to do ultimately rests with each platform. They can take in all the advice from governments, civil society, academia, and others, but they make the call. That’s different in countries that have passed laws requiring platforms to take certain actions. I think you need some government engagement with platforms - especially around foreign interference - but we’ll see where the Supreme Court comes down on this when it rules on Murthy v. Missouri.
These are not easy tradeoffs. Everything depends on the context in which it is happening, and people stack rank things differently (look at this Pew chart showing how more people now favor restricting false information). I think we’ll continue to see platforms taking different approaches to these questions as they look for the right balance for them, their users, advertisers, and shareholders. Ultimately, it will be up to us to decide which platforms we use, individually and collectively.
Please support the curation and analysis I’m doing with this newsletter. As a paid subscriber, you make it possible for me to bring you in-depth analyses of the most pressing issues in tech and politics.