Delivering Effective Recommendations to Tech Companies
How to ensure your ideas have an impact on those writing the policies and building the products
PSA: This morning, the Senate released its AI roadmap. I’ve not read all of it yet, but the section on elections—a whole two paragraphs—is an example of how not to construct a good recommendation.
Last week, I went to my old stomping grounds at the Meta Facebook DC offices to discuss the various projects their governance team is working on.
For those unfamiliar, the governance team is the internal team that primarily liaises with the Meta Oversight Board. During the presentation, they discussed how recommendations are the primary tool for scaling the Oversight Board’s influence on the company, and how the Board’s recommendations have significantly improved, leading to more of them being successfully implemented. To give you a sense, in 2021 and 2022, the Board issued roughly 90 recommendations each year, but only 22 were implemented in 2021 and 30 in 2022. In 2023, however, there were 68 recommendations, and 61 were implemented.
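To put that improvement in concrete terms, here’s a quick back-of-the-envelope calculation of the implementation rates. The only assumption is treating the “roughly 90” recommendations in 2021 and 2022 as exactly 90:

```python
# Back-of-the-envelope implementation rates from the figures above.
# Assumption: the "roughly 90" recommendations in 2021 and 2022 are treated as 90.
recommendations = {2021: (90, 22), 2022: (90, 30), 2023: (68, 61)}

for year, (issued, implemented) in recommendations.items():
    print(f"{year}: {implemented}/{issued} implemented (~{implemented / issued:.0%})")

# 2021: 22/90 implemented (~24%)
# 2022: 30/90 implemented (~33%)
# 2023: 61/68 implemented (~90%)
```

In other words, the implementation rate went from roughly a quarter of recommendations to about nine in ten.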
This led me to ask, “What makes a good, implementable recommendation to a tech company?”
After all, there is no shortage of governments, organizations, people (like me), and others who have all sorts of ideas for what tech companies should be doing. But, all too often, I see asks that are so broad and generic, like “less hate speech” or “just hire more moderators,” that they aren’t at all implementable by companies and will be ignored.
Given the number of you who develop and/or try to implement recommendations, I thought it might be good to outline what I’ve learned from folks about how to do this well.
Before I get into that, though, I want to acknowledge how I used AI to help me write this newsletter. The tools are getting better all the time, and they really helped me be more efficient by:
Summarizing feedback. I posted in a few groups I’m in asking for people’s thoughts on this question. Rather than trying to distill the responses myself, I threw them all into ChatGPT 4o and asked it to pull out the key points. I used the resulting bullet points as a starting point and added some of my own color commentary.
Headline ideas. I needed help brainstorming a good headline and asked ChatGPT 4o for ideas. I took one and adapted it slightly.
Text from a PDF. I wanted to include some text from a PDF, but the formatting was poor when I copied and pasted it. I took a screenshot and threw it into ChatGPT, which spit out text I could easily add to the Google Doc I’m writing in. This saved me a few minutes of typing it out myself.
Copyediting. I’ve used Grammarly to do basic copy editing for years now. So helpful.
General feedback. I asked Gemini to share some thoughts on improving this piece. This was less helpful, as I would have liked it to give me an edited version with its suggestions incorporated.
Thumbnail image. I asked Gemini to create an image of a pile of reports.
I could have done all of this in Gemini, but I wanted to try various tools to see how they worked. Plus, I know folks at multiple companies and don’t want to play favorites. 😀 Regardless, having all these tools at my fingertips to shave off a few minutes here and there will save me a ton of time in the long run. I also wanted to be transparent about where I am using AI.
OK, back to our regularly scheduled programming.
When I asked Meta about what changed in the Board’s recommendations, they pointed me to this piece, “Burden of Proof: Lessons Learned for Regulators from the Oversight Board’s Implementation Work,” which Oversight Board staffers Naomi Shiffman, Carly Miller, Manuel Parra Yagnam, and Claudia Flores-Saviaga wrote for the Journal of Online Trust and Safety.
In it, they document what they learned about making recommendations and about evaluating Meta’s implementation of them. We will focus on the recommendation component today, but I highly recommend you read the whole thing. Here’s what they found:
To ensure accurate interpretation moving forward, the Board incorporated the following best practices:
Create opportunities to clarify recommendation intent in writing or in conversation with Meta following the publication of a case decision.
Include an expected measure of implementation alongside each recommendation to serve as a benchmark of what criteria would need to be fulfilled for a recommendation to be considered implemented.
Ensure any given recommendation only asks for one specific system change, rather than compounding multiple requests into one recommendation.
These learnings fit with my experiences and what I heard from others. Those include:
Specific yet Flexible Recommendations: Every company is different, from how its products work to the resources it has available to its leadership’s preferences. When we developed the Integrity Institute Election Best Practices, we took great care to provide frameworks rather than prescriptive recommendations. This allowed us to be specific about the types of things companies might need while giving them the flexibility and guidance to think through what is needed based on what works best for their organizations. We also heard a reminder that companies often have to accommodate global policies (though I know this is a point of tension).
Actionable Recommendations: This is a tricky one because the Oversight Board has more access to Meta than most external organizations do. Still, I heard a common suggestion that recommendations would be easier to implement if they took into account how different companies are structured and operate. This is where places like the Integrity Institute, the Trust and Safety Professionals Association, and tech folks who are now consultants can help. Understanding all the hoops a team has to jump through to implement something, and addressing those up front, can help.
Here’s an example of the various teams that would need to be convinced to work on implementing something:
Back-end and front-end engineering support.
User research to test any messages people might see and make sure they understand them.
Partnerships to work with any outside organizations or tools that would need to be utilized (and would likely get a lot of traffic), and to get feedback from external orgs on the change.
Product policy to write any new rules needed, and public policy to communicate anything to policymakers.
Legal to ensure everything follows all necessary regulations.
Multiple product orgs if the change is something that affects numerous surfaces.
Company leadership to agree to allocate the resources.
Here’s an example from an external org that told me how it successfully helped companies think through labeling state-sponsored media:
We wrote op-eds publicly arguing for it
We gave specific use cases in which it should apply
We discussed nuance and edge cases, as well as design and implementation on product surfaces, with the companies as they started prioritizing it
Product and Tooling Focus: Suggestions should be specific to products or applications, recognizing the differences between platforms and services. A search engine differs from a social media platform, a video platform, or a generative AI tool. There are hundreds, if not thousands, of different types of products where harm can surface. Even identifying which of these should be prioritized can help.
Systematic Approaches: Recommendations should favor systematic solutions over case-by-case approaches, aligning with the company’s broader policies and structure. The Oversight Board must bridge this gap in every case: while ruling on an individual piece of content, its recommendations must address broader solutions. It’s simply not practical to make nuanced decisions about every piece of content posted on the Internet.
Scalable and Repeatable Solutions: Recommendations should be grounded in technical reality and aim for scalable, repeatable solutions. I often find myself reading a report and wondering how in the world they expect a company to scale that solution. I know an organization isn’t serious when it thinks the simple answer is just throwing money at the problem. One person I greatly respect said that people often think advanced technology equals magic and get upset when the company can’t deliver. The best suggestions instead come from those who think more logistically about the problems.
Acceptable Error Rates: Perfection is a fool’s errand; no company will achieve it. We need to discuss acceptable error rates and how to judge whether companies are improving (see the sketch after this list for one way to frame that). The Oversight Board’s work on evaluation can be helpful here.
Specify Your Audience: People inside and outside companies often conflate issues that are technically hard or impossible to implement with issues where leadership just doesn’t want to spend the money. Another thing we did with the Integrity Institute guide was acknowledge that those on the front lines have to work with what they have. We wanted to help them do their jobs while advocating for leadership to provide more resources. When making recommendations, acknowledge who you are trying to influence at the companies. Both audiences are important. Also, as much as I love my fellow external engagement folks, we are not the ones actually making product decisions. We advocate like hell for you on the inside, but when something is really important, try to get a product person in the room, too. Just don’t abuse that; product managers still need to focus mainly on building things.
Public Acknowledgement: Publicly acknowledging successful implementations can motivate teams and validate their efforts. When a team puts in all the work I mentioned above to get something built and deployed, and then hears crickets or, worse, gets yelled at, it incentivizes them to stop trying. A thanks now and then will go a long way. The same goes in the other direction: if you’re at a company and are utilizing an organization’s recommendations, make sure to let them know.
Conflict Levers: This somewhat contradicts the previous point, but conflict and public pressure can also help move the ball internally at a company. While conflict is challenging in the short run for many teams, the attention can drive progress on Trust & Safety initiatives. Just use it strategically.
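On the acceptable-error-rates point above, here is a minimal, hypothetical sketch of what putting numbers behind that conversation might look like: audit a random sample of enforcement decisions, compute over- and under-enforcement rates, and compare them against targets the recommendation names up front. Nothing here reflects how any company actually measures this; the data structure, function, and thresholds are all made up for illustration.

```python
# A hypothetical sketch (not any company's actual process): audit a random
# sample of enforcement decisions, then compute over- and under-enforcement
# rates to compare against targets a recommendation could name up front.
from dataclasses import dataclass

@dataclass
class AuditedDecision:
    action_taken: bool       # did the platform remove or label the content?
    should_have_acted: bool  # what expert reviewers say the right call was

def enforcement_error_rates(sample: list[AuditedDecision]) -> dict[str, float]:
    """Over-enforcement: acted when it shouldn't have. Under-enforcement: the reverse."""
    acted = [d for d in sample if d.action_taken]
    skipped = [d for d in sample if not d.action_taken]
    over = sum(not d.should_have_acted for d in acted) / max(len(acted), 1)
    under = sum(d.should_have_acted for d in skipped) / max(len(skipped), 1)
    return {"over_enforcement": over, "under_enforcement": under}

# Hypothetical targets; the point is that both sides agree on them in advance.
TARGETS = {"over_enforcement": 0.05, "under_enforcement": 0.10}
```

The specifics matter far less than the framing: a recommendation that says “keep over-enforcement under a stated threshold on an audited sample, measured quarterly” gives the company and the recommender a shared definition of improvement.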
Overall, the best recommendations are thoughtful, structured, and technically grounded. They consider tech companies’ internal workings and constraints and distinguish technical hurdles from resourcing ones.
Saying all of this is far easier than implementing any of it. I’m asking external people and organizations to understand the inner workings of tech companies when those companies aren’t exactly opening their doors.
I also did not address the various incentives external organizations might have in making recommendations, as some are constrained by their funders, the number of people involved, or a desire to simply get press headlines. This topic should also be explored.
That said, I hope this is helpful. A huge thank you to everyone who offered me their perspectives. More discussions about how we can improve the conversations between external organizations, regulators, and tech companies will be crucial if we’re going to move the ball forward, especially as things continue to speed up with AI.
Please support the curation and analysis I’m doing with this newsletter. As a paid subscriber, you make it possible for me to bring you in-depth analyses of the most pressing issues in tech and politics.