The next generation of platforms grapples with content moderation
An open letter to those trying to navigate these complicated waters
Good morning from overcast DC, where the pollen is so thick it’s covering cars and blowing through the air in huge, yellow clouds. As the graphic below indicates, we are indeed in DC’s pollening season. My sympathies to those of you who, like me, suffer from allergies; last night I felt so miserable I had to go to bed early.
But I woke up this morning feeling better and thinking about what I wanted to write for this weekend’s newsletter. I debated writing about this poll, which found that women are exhausted and disappointed but not giving up; they have simply given up on any illusion of external support. That deeply resonated with me, and I wrote a short note about it. I may return to it later, but it wasn’t the topic that stuck with me the most.
Instead, I was thinking about how so many newer platforms are now grappling with the content moderation decisions that the bigger platforms faced years ago. Unlike those companies, however, they are being asked to confront these problems sooner and scale faster, with less money and fewer resources.
As I was bouncing this around in my head, three things happened.
First, I finished listening to Substack CEO Chris Best’s interview with Nilay Patel, where he struggles to answer some basic questions on content moderation.
Second, Discord released its response to the Pentagon leaks, summarizing what it could about the recent incident, the actions it took, and more details about its practices.
Third, Facebook memories reminded me that this time last year, Jonathan Haidt wrote his wildly popular piece about how social media has eroded democracy and trust over the past 10 years.
In my newsletter about that piece, I thought about what might happen if Musk got Twitter (wow, that was only a year ago) and why I didn’t think he was ready for the challenges of running a free speech policy.
Around that time, a graphic about the seemingly never-ending cycle of platforms grappling with content moderation was making its way around Twitter.
You'll recognize a similar pattern if you read/listen to Best’s interview with Patel.
So, I want to write an open letter, from someone who has been through a version of this, to all the people working at platforms big and small, new and old, who are trying to figure out the new rules of the road as they build the next generation of platforms.
Dear Tech Employee,
I know you are probably in the middle of juggling about 500 balls, making five presentations for senior executives, and explaining to your friends for the 100th time why you can’t hang out this weekend. But I ask you to stand up, go grab some coffee, and take five minutes to read this note. I hope I can help.
Once upon a time, not too long ago, I was in your shoes. From 2011 to 2021, I spent ten years at Meta (then Facebook) working on elections. I started out helping politicians and governments use the platform and ended up helping the company figure out all sorts of thorny issues, from what to do when politicians violate our terms to providing more transparency around political ads. I was one of the people who not only had to answer employees’ questions about what we were doing but also defended it publicly.
It’s a hard job, and it’s a lonely one. Only those who have had to do it truly get what you are going through.
And yet, you are also pioneers. Your platforms differ from Facebook, YouTube, Twitter, and the other legacy companies, but the press, civil society, and governments expect you to do just as much as those companies to grapple with these problems, with little regard for the fact that you don’t have their resources and that scaling doesn’t happen overnight.
Even if you are at one of the legacy companies, you are facing layoffs, new management, and new challenges with tools such as artificial intelligence. Your life isn’t easy either.
They want you to slow down, but if you do that and others don’t, your company might not survive to do anything. It would be easier if people agreed on what you should do, but they don’t. It doesn’t help that there’s always the threat of being dragged in front of a Congressional hearing, or of your employees being doxxed because people assume you are making decisions for political reasons.
As you navigate these waters, I have some tips for you to consider:
Don’t stick your head in the sand. It can seem easier to say that you’ll ban all politics, or that you won’t answer hypotheticals about content moderation, and hope you can kick the can down the road until things slow down a little. They never will. Things will only speed up, and the longer you wait, the harder it will be. Have someone start doing risk assessments of your vulnerabilities and how you might handle them.
Don’t assume problem X won’t affect you. Every platform is different, and some problems might not manifest on yours the way they do on others. That said, I see so many platforms claim that no politicians use them, so they don’t need to worry about political content. If you believe that, I have a bridge to sell you. 2024 is going to be a HUGE year for elections. Start planning now.
Content moderation isn’t the only solution. Reactively dealing with bad content is a never-ending cycle. It has to be done, but it’s not the only way to think about the problem. Integrity Institute co-founder Sahar Massachi has a good piece on how design choices can help with these problems.
Learn how to navigate impossible tradeoffs. These are not easy problems. They will force you to make hard choices between values you want to uphold. Free speech is an important value, but so is preventing hate and violence. Develop a set of values and frameworks for how you think through those tradeoffs. I like the decision-by-traffic-light model (long-time readers know I like a good grid), but find what works for you.
Transparency and engagement are important. You can start telling the story of how you are approaching these problems early. I like how OpenAI brought together a team of experts to red-team GPT-4 and released a paper on their findings. You may not get credit right away, but talking to others about how you approach these problems goes a long way toward building trust.
You can’t do it all at once. I just gave five tips that require a lot of resources. You might be sitting there feeling MORE stressed than before. I get it. I’m here to say you can’t do it all at once, and that’s OK. Some people think everything on the internet can be done with the flip of a switch, but it’s not true. All you can do is make a plan, ruthlessly prioritize, and take steps forward bit by bit.
Take time to recognize your accomplishments. With that never-ending to-do list, it can be easy to cross things off and move on to the next without properly recognizing the work you and your team have done. At least once a quarter, take time to tally up everything you’ve accomplished. It will be a lot more than you think. Make sure to thank those around you who helped make it happen.
You might get things wrong. I’ve lost track of how many versions of fact-checking labels Facebook has gone through. Companies have adapted their community standards, improved their enforcement mechanisms, and changed course numerous times while trying to figure out these problems. Make sure you are measuring and evaluating the impact of your work. Be OK saying you might have gotten something wrong, and fix it. Also, be OK holding your ground where you think it’s important. No one agrees on the solutions, but you can keep experimenting to find new ways to approach the problems.
Take care of yourself. There’s something I’ve noticed over the years about trust and safety professionals: they really, really, really care about this work. They feel the weight of the world on their shoulders and are keenly aware there’s so much more they should be doing. You want to do more. You want to fix it all before expanding into new countries or before the next election happens. That’s just not practical. So be sure to take breaks, exercise, see friends, sleep, or do whatever your relaxation method is. This is a marathon, not a sprint, and we need you in top form.
You aren’t alone. It might feel at times like no one else quite knows what you are going through. While everyone’s situation is different, many, many people have been in your shoes. Groups such as the Integrity Institute, the Trust & Safety Professional Association, and the Digital Trust & Safety Partnership can give you community and support. Companies such as Duco, Trust Lab, Cinder, and Cove exist to help platforms scale their trust and safety efforts. (Disclosure: I’m an expert with Duco and friends/former colleagues with many of the founders of these companies.) If you are feeling stuck or need someone to talk to, reach out to me or any of these orgs. So many people are willing to help. For instance, on April 27th, the Prosocial Design Network is hosting an event about how small companies can work with researchers.
Mine is but one perspective among the hundreds, if not thousands, of people who have worked in trust and safety over the years. We made mistakes, and we built a lot of things I’m really proud of. I hope we can all learn from each other as we move into this next phase.
When I was at Facebook, I had cards made up with Teddy Roosevelt’s “Man in the Arena” passage. People would keep them at their desks; it was a good reminder and motivation on the tough days. I hope it does the same for you:
It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who does actually strive to do the deeds; who knows the great enthusiasms, the great devotions; who spends himself in a worthy cause; who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly.
Have tips to add to this list? Join the conversation on this piece at Substack Notes or reply with yours! I’ll include them in a future newsletter.
Also, this newsletter is the weekly edition I do for free subscribers. I also write one during the week for paid subscribers. For $50 a year or $5 a month (less than the cost of a cup of coffee each month), you’ll get access to a second newsletter each week and the archives. You’ll also be supporting me in spending more time researching and writing. Please consider upgrading today; I would really appreciate it.
What I’m Reading
🚨 Breaking News
The New York Times: Montana Legislature Passes Ban on TikTok for State Employees
🌍 Around the World
Nieman Lab: This citizen-run organization is teaching thousands of people to fact-check in Indonesia
Latam Journalism Review: There should be government funding to pay for journalism: The Digital Journalism Association’s proposal for social media platform regulation in Brazil
UOL (Brazil): Government wants to tax platforms and create an agency to supervise social networks
European Parliament: Social media platforms and challenges for democracy, rule of law and fundamental rights
Financial Times: India’s opposition calls for probe into fall in Gandhi YouTube views
SMEX: Iraq’s new draft law threatens freedoms and violates constitution
TechCrunch: Meta’s content review in Africa in limbo as moderation partners decry court restrictions
Al Jazeera: Obi voters in Nigeria cry fraud, struggle to keep hope alive
Coda Story: As elections near, Turkey weaponizes the law to suppress speech
Financial Times: ‘They’ve screwed the economy’: Turkey’s heartland voters tire of Erdoğan
Politico Pro: The total eclipse of Margrethe Vestager
Politico Pro: Verheyen’s media freedom report aims for strong national role
💻 The Platforms
Financial Times: OpenAI’s red team: the experts hired to ‘break’ ChatGPT
Engadget: Meta has open-sourced an AI project that turns your doodles into animations
Amazon: 2022 Shareholder Letter
The Verge: Substack Notes is a Twitter feed without the trolls — and that’s a problem
The Wall Street Journal: Amazon Joins Microsoft, Google in AI Race Spurred by ChatGPT
Politico Pro: Twitter releases more political ad data after POLITICO report
👷🏻‍♀️ Workplace Advice
New York Times: Your Email Does Not Constitute My Emergency
Fast Company: 7 ways to be a manager that people don’t want to quit
🏫 From our friends in academia
University of North Carolina: A review and provocation: On polarization and platforms
Cornell Chronicle: ‘One-size-fits-all’ content moderation fails Global South
Hickey, Schmitz, Fessler, Smaldino, Muric, Burghardt: Auditing Elon Musk’s Impact on Hate Speech and Bots
Calendar
This email was too long for me to include the full calendar, so I’m just sharing some new dates for upcoming tech company earnings calls.
🚨🚨 NEW 🚨🚨
April 18, 2023 - Netflix Q1 Earnings Report
April 25, 2023 - Alphabet Q1 Earnings Report
April 26, 2023 - Meta Q1 Earnings Report
April 27, 2023 - Amazon Q1 Earnings Report
April 27, 2023 - Pinterest Q1 Earnings Report
April 27, 2023 - Snap Q1 Earnings Report
May 4, 2023 - Apple Earnings Report