It’s weird seeing the last few years of your work life come out to the public. The latest wave started this summer when Sheera Frenkel and Cecilia Kang’s book The Ugly Truth came out, followed by the Wall Street Journal stories, Frances Haugen’s 60 Minutes interview and testimony, the massive drop of stories over the last few days, and now, bit by bit, the docs themselves being made public.
I find myself on an emotional rollercoaster as feelings and memories come back to me about those times. I remember the intense discussions and debates we had: the excitement when a leadership decision went the way we all hoped (I’ll never forget yelling a little too loudly in an airport bar, “Yes! They picked option 3!”) and the frustration when we faced really hard choices and it felt like we were going in circles. Through it all, we created a support system as we tried to do as much work as we could with the resources given to us, knowing there was so much more to do.
Decision-making at Facebook is unlike anything I’ve ever seen. The amount of information, the number of people and teams involved, the number of options, the impossible tradeoffs, and sometimes the speed at which they have to be made is overwhelming.
And don’t get me started on how many meetings, workplace chats, tasks, emails, comment threads, and more you have to keep track of.
If you feel overwhelmed by these docs alone, that’s just a taste of what it’s like inside the company. Think of the information in the docs as the equivalent of the information-gathering stage.
At this stage researchers, product managers, policy specialists, operations teams, and others will each share their perspectives about the topic at hand. It could be about writing a new policy, building a new product, addressing a problem, or reacting to some external inquiry such as from the press or civil society organizations.
The teams will start to pull together what criteria they want to weigh their options against, and then start parsing out all the various scenarios and options. I really like how the product teams do this through a traffic-light-style grid. You can see a version of that here, from a decision I made this summer about which coffeemaker to get.
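The traffic-light grid described above can be sketched as a simple scoring exercise. This is a hypothetical illustration, not an actual Facebook tool; the criteria, option names, and point values here are all made up for the example:

```python
# Hypothetical "traffic light" decision grid: each option is rated
# green (good), yellow (caution), or red (problem) against each criterion.
# All names and scores below are illustrative assumptions.

CRITERIA = ["speed_to_ship", "transparency", "false_positive_risk"]
SCORES = {"green": 2, "yellow": 1, "red": 0}

options = {
    "Option 1: ship now, refine later":       ["green", "red", "red"],
    "Option 2: wait for a full policy":       ["red", "green", "green"],
    "Option 3: interim policy, fast follow":  ["yellow", "yellow", "green"],
}

def total(lights):
    # Sum the point value of each traffic light.
    return sum(SCORES[light] for light in lights)

# Print options from highest to lowest total score.
for name, lights in sorted(options.items(), key=lambda kv: -total(kv[1])):
    grid = "  ".join(f"{c}={l}" for c, l in zip(CRITERIA, lights))
    print(f"{total(lights):>2}  {name}: {grid}")
```

The point of the grid isn’t the arithmetic; it’s forcing every option to be rated against every criterion so the tradeoffs are visible in one place.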
The content and product policy team publishes a lot of their docs publicly, showing their process. This is one from a recent decision about defining who a public figure is. You’ll see it’s 19 pages going through what the status quo is, the various options, feedback from external and internal teams, and then a slide presenting the options that would be used for discussion. This Vanity Fair story from a few years ago, about the debate over whether the phrase “Men are scum” should be a violation, goes in depth into the process too.
Ok, you have options. Now you get internal feedback. Usually that involves some meetings, but also potentially posting to Workplace. Workplace is basically an employees-only, internal version of Facebook. People have profiles, there are groups, and employees post and comment a lot. Not too long ago, almost everything was open to any employee who knew where to look. As my former colleague Adam Conner tweeted this week, “The one thing you should understand about working at Facebook is truly how much of your time can be spent just reading stuff on the internal workplace. It can be critical to doing your job and what some people do instead of their jobs sometimes.”
The other thing about working at a place like Facebook, with tens of thousands of really smart people, is that they have opinions. Lots of them. And they are not shy about sharing them. Casey Newton went into this a little bit in his newsletter Tuesday night when he talked to a former integrity worker who pointed out that many of these docs are posts and comments from Workplace.
Are these posts and insights helpful when framing up decisions? Absolutely. Does that mean they’re the path the company should take? Maybe.
This is why I worry a bit about these documents and stories. The public is getting an important and informative glimpse into the research on what is happening on the platform, but it’s not the full picture. The docs also span years’ worth of work, and they aren’t coming out in any sort of chronological or organized fashion. The world of 2018 was VERY different from the world at the end of 2020.
Ok, the feedback has been gathered and now discussions need to start about weighing the options and looking at the tradeoffs to be made based on the criteria. This is where the hard work starts.
First, it is important to think about how policy, product, and operations will work together when going through options. For context: the policy team writes the rules about what is allowed on the platforms, the product teams build the systems to detect violating content or behavior, and the operations teams enforce those rules against content and accounts.
Making these teams work in harmony can be very difficult when you consider all the things Facebook is being asked to do. For instance, I’ve noticed that Facebook is being asked to move fast, be transparent, have policies to point users to when content is removed, and make as few mistakes as possible. When the company has lots of time to roll out new products and policies, you can probably achieve all of that. When time is short? That’s another story.
You can see a great example of this on page seven of this doc, which lays out both the question of whether something could be done before there was a comprehensive policy and the risks of overenforcement and limiting voice.
What do you prioritize? Moving fast, but not being as transparent and having more false positives? Waiting until you can get more things into place but potentially letting the harm happen longer? These are impossible tradeoffs folks at Facebook and other tech companies are making multiple times a day.
There are many permutations of this decision-making scenario. There are also many questions being raised about the structure of this decision-making, who should be in the room, and what considerations - such as political ones - should even be taken into account. Something to know about Facebook decision-making is that they look at ALL the options and potential fallouts. I think it’s totally fair to ask and discuss how political fallout from decisions should be taken into account, but a little unfair to blame the public policy team for doing its job of sharing what that fallout might be. That’s likely to be the topic of a future newsletter.
After what can be hours, months, or even years of debate, a decision is made. It’ll be communicated internally and plans will be made to enact it. A good example of what this sort of communication looks like is Samidh Chakrabarti’s post about political ads in the fall of 2019. The Washington Post reported this week that he wrote, “[T]hat Facebook CEO Mark Zuckerberg ‘made these decisions’ on political ads and ‘concluded that further targeting restrictions would result in far too great collateral damage to get-out-the-vote campaigns and other advocacy campaigns who use these features for vital political mobilization.’” And when it comes to something this controversial, there will again be a lot of opinions from employees, as well as from external stakeholders who are not shy about sharing if they disagree with the decision. In fact, it could send the company right back to the drawing board.
My goal in going through all of this is to try to give a bit of framework around understanding these documents, why they were created, and how they fit into an overall decision-making process. These are not silver bullets that were just sitting there waiting to be implemented, but they are really important insights into what is and was happening on the platform. It’s opened up many new lines of questioning and debate that I hope will be helpful in the overall discussion.
The Integrity Institute
I’m really excited that this week some of my former Facebook colleagues from the civic integrity team as well as other integrity professionals have announced the launch of the Integrity Institute. The Institute is a nonprofit community for, and of, integrity professionals who want to bring open-source solutions to protect the social internet. We want to help establish a set of best practices and bring some clarity to complex digital challenges. Our goal is to work with policymakers, academics, companies, and other organizations to solve the major integrity challenges of our time.
I’ve been helping the organization get set up and will be participating as a fellow. You can learn more about our mission and work at IntegrityInstitute.org. I’d love to share more about it if you are interested.
What I’m Reading
All the Facebook stories. I started keeping track of them in this doc, which has sort of taken off on the internet. Who knew that all those years of doing clips as an intern and junior staffer would still come in handy? I’m doing my best to keep up, but if you see something I’m missing, just let me know.
Vox Podcast: How Big Tech benefits from the disinformation panic. Especially the part where the guests “discuss the role of tech giants in the spread of propaganda, why it's been impossible for researchers to agree on what disinformation even is.” Hat tip to Olga Belogolova.