Content Moderation Through the Looking Glass
Reframing how to look at some of the debates about online content
Greetings from Scottsdale, Arizona - well, technically the Phoenix airport Delta Lounge. It’s been an exhausting but very fun two weeks as I celebrated my friends’ wedding in Greece and then met some really cool and smart people at a networking retreat for mid-career people who are trying to figure out their next thing.
A LOT has happened since my last newsletter. After a brief break when the Roe decision was leaked, Musk is back in the news tweeting away and driving integrity professionals up a wall with his seeming lack of understanding about how content moderation online works. In fact, at the Integrity Institute, 22 of us signed an open letter to him about how we think about protecting free expression and the “public square.” The Washington Post covered the letter and talked to my colleague Sagnik Ghosh in their Friday Technology 202 newsletter.
Writing this letter with my fellow Integrity Institute members got me thinking about how there are a handful of debates around content moderation where I think we are arguing about the wrong thing. So I thought for this next newsletter I would go through some of those and how I would reframe the debate.
Claims of censorship: When the Right complains about censorship online, they tend to focus on being mad that something was taken down and to demand that the platforms allow more speech. What this conversation fails to address is that what they really disagree with are the platforms' content policies and where those policies draw the line on what is allowed. I’d love to see a debate about the actual policies and where people think the lines should be drawn, rather than a blanket demand to allow more content, because the line does have to be drawn somewhere (and it can’t just be about what is or isn’t legal).
Claims of censorship, part 2: Last week, Politico reported on a meeting between some GOP senators and Google. The senators wanted answers on whether Google was purposely marking emails from Republicans as spam. I hate it when the Right tries to make a platform prove a negative. I hate it even more when they don’t even acknowledge that senders have to follow good email practices to keep their messages out of spam. It’s not censorship; it’s being penalized for bad email hygiene. Everyone is held to those same rules, and it isn’t political when they break them.
We’ll just follow the law: This is one of Musk’s favorite lines, and it drives me up a wall. There is no single law; laws differ across states, countries, and regions. Not all laws are good - some will actually suppress speech. Not all government requests for information are made with good intent. Not all countries are democracies. Many laws contradict one another. Most laws also don’t go into the level of detail needed to make fast decisions on every single piece of content. While I agree regulation is needed in some areas, content moderation cannot be totally outsourced to policymakers.
Platforms are purposely taking down my content: I don’t think a week has gone by over the last eleven years when I haven’t heard someone say they are convinced the platforms have people actively searching for content they don’t agree with in order to take it down. While platforms do have systems, tools, and people to find violating content, they do not have staff hunting for viewpoints or people they don’t like just to mess with them. This work is being done at such a scale that no one has time to be vindictive in that way.
Should politicians be fact-checked: When people debate whether politicians should be fact-checked on the platforms, that is not the question we should be asking. Politicians are fact-checked every day. The questions we need to debate are who gets to pick the fact-checkers AND, more importantly, what - if anything - the penalty should be when a politician fails a fact-check. Should the content be removed? Demoted? Labeled? This is what bothers people: no one seems to be holding politicians accountable for what they say online, and they want some sort of punishment that would hopefully change their behavior.
Content moderation is more than just what a post says: A few weeks ago I relistened to Vijaya Gadde and Jack Dorsey’s 2019 interview on the Joe Rogan podcast. Many times you can see them talk past one another in the conversation about Twitter’s rules. Joe and his guest host keep pressing Gadde and Dorsey about why people can’t say certain things. Gadde keeps explaining that it’s often not about what a post says but about the behavior around how it is said - and when that behavior crosses into what Twitter considers bullying or harassment, the company will take action. This is a point I don’t see debated much publicly. Someone has the right to free expression, but what are the rules about how they can express that opinion? In the offline world, you need a permit to hold a protest, and you can’t, for instance, just park yourself in someone’s office yelling at them all day. Where are those lines online? When is something spirited debate, and when is it harassment? People will have different viewpoints and tolerance levels here, but that’s a question I would love to hear more debate on.
Relying on user reports: Most platforms give people the ability to report posts they think violate the rules. And while these reports are used as one signal, they really can’t be the only way platforms look for violations, because many reports are worthless - the content doesn’t actually violate the rules, or people report it under the wrong rule. There’s also a lot of concern that reporting can be gamed, with content taken down simply because enough people flag it, but that isn’t how it works. Platforms have guardrails in place to catch that type of coordinated activity.
Spam and bots are easy to spot: The problem with fighting bad actors is that they’re really smart. As soon as a platform figures out one way to stop spammy or bad activity, the bad actors pretty quickly find a way around it. They get smarter, looking more like real humans or using code words and images to get their points across. That’s why you’ll hear people in this space always say there’s no finish line: once you plug one hole, a new one emerges.
These issues are going to remain the same: I worry that while we’re having all these current debates we are not paying attention to how these problems are going to evolve and become even more complicated. As more and more people move to messenger platforms as well as ephemeral and live content the questions around content moderation will change. Nick Clegg covers some of this in his Medium post about the metaverse. There’s first the philosophical question of when people’s live conversations should be monitored for potential violations and then there are the technical challenges of doing that sort of live surveillance. We need to make sure we’re talking about where these problems are going and not just how they have manifested in the past.
The scale of the problem: A lot of times you’ll see stories about how the tech platforms are failing to take down content like the Buffalo shooter video because a simple search can still find it. However, that’s playing whack-a-mole when the platforms need a way to scale this work across billions of pieces of content. It’s not just about doing simple searches. Charlotte Willner covers this challenge well in this Lawfare interview.
These are just a few of the things about the content moderation debate that I think we really need to reframe. The answers aren’t easy, nor are they black and white, which makes these nuanced conversations hard to have - but they are happening within the platforms. So, if others want to influence that thinking, they need to know how they’d answer these challenges too.
Finally, a few housekeeping items.
First, in case you missed it, you can order more free COVID tests from the govt https://special.usps.com/testkits.
Second, we’re looking for a full-time director of tech and democracy at the International Republican Institute so I can move into a senior advisor role. More info here. Feel free to reach out to me if you are interested as well.
Also, for IRI, we are doing a panel at RightsCon and are looking for folks who have experience navigating internet shutdowns and want to share their recommendations for environments where shutdowns are anticipated or taking place. (No tech expertise required!) Join this private session at RightsCon by IRI and Jigsaw.
I was on an Atlantic Council panel this week called: Wartime content moderation and the Russian invasion of Ukraine. I was also on an Australian podcast called The Foil: Episode 18: Disinformation, Democracy & Elections.
Tech Policy Press: Internet Trolls Should Not Dictate the Terms of Public Exposure to Hate
Tech Policy Press: What Will “Amplification” Mean in Court?
Knight First Amendment Institute: Rereading the First Amendment
Washington Post: Elon Musk's free-speech agenda poses safety risks on global stage
Tech Policy Press: Report Reveals Facebook Provided Warning to FBI Before January 6
BBC Trending: Confessions of an election troll in the Philippines
Atlantic Council DFRLab: Op-Ed: Making the Americas resilient to information operations against democracy
University of Cambridge: Value of News to Digital Platforms in the U.K.