When does something become political?
The debate about how Meta handles political content flares up again
Here we go again.
Last Friday, Threads - whose parent company is Meta - announced that it wouldn’t be recommending political content. This isn’t a new announcement. Meta has been very clear since early 2021 that it would reduce the amount of news and politics in its feeds, and Instagram head Adam Mosseri confirmed in July that the same would extend to Threads. (They did, however, say that politically themed content will be eligible for a new trends feature.)
That didn’t stop the announcement from stirring up the hornets’ nest that forms anytime the company utters the words politics or news. And, as you all know, I am one of those hornets.
I think my post in July remains correct - that platforms can run, but they can’t hide from politics.
Many others, such as Taylor Lorenz and Judd Legum, complained that this decision would likely hurt marginalized voices.
Others made the point that Meta is in a no-win situation: people said it was too involved in politics, and now they’re mad that it’s pulling back.
Luca Rossi, a European professor, makes a few good points in his thread. He notes that although the amount of political content might be small, it has high engagement rates. Political content might therefore be disproportionately recommended in a system where things get recommended more when more people engage with them - so it could make sense for a platform to try to mitigate that. However, Rossi then points out that, “When you aim at creating a space for public discourse, and you decide that some topics will have less visibility than others, you should at least offer full transparency about how your decisions are made.”
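To make the mechanism concrete, here is a minimal sketch - my own illustration with made-up numbers, not anything Meta has published - of how a purely engagement-weighted ranker can over-represent a small but highly engaging category:

```python
import random

# Toy feed: 5% of posts are political, but political posts get roughly
# 3x the engagement of everything else (illustrative numbers only).
random.seed(42)
posts = []
for i in range(10_000):
    is_political = random.random() < 0.05
    base_engagement = random.gauss(100, 20)
    engagement = base_engagement * (3.0 if is_political else 1.0)
    posts.append({"id": i, "political": is_political, "engagement": engagement})

# A naive ranker that recommends whatever people engage with most.
top_feed = sorted(posts, key=lambda p: p["engagement"], reverse=True)[:500]

share_of_posts = sum(p["political"] for p in posts) / len(posts)
share_of_feed = sum(p["political"] for p in top_feed) / len(top_feed)
print(f"Political share of all posts: {share_of_posts:.1%}")
print(f"Political share of recommended feed: {share_of_feed:.1%}")
```

The point is just the dynamic: if the only ranking signal is engagement, a category that makes up 5% of posts can end up dominating the recommended feed - which is exactly what Rossi argues a platform might reasonably want to dampen, as long as it is transparent about how.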
In other words, show your work.
Please support the curation and analysis I’m doing with this newsletter. As a paid subscriber, you make it possible for me to bring you in-depth analyses of the most pressing issues in tech and politics.
For me, this all goes back to the reworking of societal norms about speech, online and offline, that we’ve been in for years: what’s allowed, how it should be amplified, what the consequences should be for what one says, and so on. It won’t be settled anytime soon - but we are living through a time that will determine decades of precedent in this area.
Since 2017, I’ve had a first-hand look multiple times at how hard it is to create these definitions. I thought it might be helpful to outline how we got to this point and all the various aspects one needs to consider when determining if something is “political.”
The conversation on this has shifted in a nuanced but important way over the years. Traditionally - even before the Internet - defining what is political focused on paid advertising. This is where the FEC’s definitions came in for the United States. That approach usually starts by defining who a political person or entity is, and then what transparency is required for the ads they run.
The 2016 election and the ads run by the Russian Internet Research Agency started to change all that.
Part of the goal of building the political ad transparency tools in 2017/2018 was to prevent the Russian ads from running. We had to solve for the fact that they paid for their ads in a foreign currency, that they weren’t transparent about who they were, and that most of the ads they ran were about social issues rather than specific candidates.
This meant we had to flip things around. We could no longer start with whether the entity was political, because we didn’t know who the entity was.
We had to start by identifying whether the ad’s content was political and then put the advertiser through a flow that forced them to verify their identity and where they were located. They would then be required to add a disclaimer.
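As a rough sketch of that flipped order of operations - my simplification for illustration, not Meta’s actual system or classifier:

```python
from dataclasses import dataclass

@dataclass
class Ad:
    advertiser_id: str
    text: str

def content_looks_political(ad: Ad) -> bool:
    """Stand-in for whatever classifier or review process flags political/social-issue content."""
    political_terms = {"election", "vote", "candidate", "immigration", "guns"}  # illustrative only
    return any(term in ad.text.lower() for term in political_terms)

def can_run(ad: Ad, verified_advertisers: set[str], has_disclaimer: bool) -> bool:
    # Old model: start from who the advertiser is. New model: start from the content.
    if not content_looks_political(ad):
        return True
    # Political content -> the advertiser must verify identity/location
    # and attach a "paid for by" disclaimer before the ad can run.
    return ad.advertiser_id in verified_advertisers and has_disclaimer

ad = Ad(advertiser_id="acct_123", text="Vote for better schools this election")
print(can_run(ad, verified_advertisers=set(), has_disclaimer=False))  # False: political, unverified
```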
And, because most Russian ads weren’t expressly political, we had to figure out how to define a social ad. We also had companies wanting to voluntarily comply with the Honest Ads Act - the legislation that Senators Mark Warner, Amy Klobuchar, and others introduced. It defined ads in scope as any ad that:
is made by or on behalf of a candidate; or
communicates a message relating to any political matter of national importance, including—
    a candidate;
    any election to Federal office; or
    a national legislative issue of public importance
What is a national legislative issue of public importance, you might ask? Good question. I could never get a clear answer on that from policymakers. But you’ll see it incorporated into a bunch of platforms’ policies.
At Facebook, we talked to all sorts of experts and groups - each with a different take. The only thing they all told me in common was, "Good luck. I wouldn't want to be in your shoes."
We ended up using the Comparative Agendas Project. This is the central mechanism for comparative policy process studies by academics. Each entry is coded into one of 21 major topics and 220 subtopics. The first set of social issues came from these major topics.
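To give a sense of how a taxonomy like that gets operationalized, here is a simplified sketch - the topic names echo the Comparative Agendas Project’s major topics, but the keywords and matching logic are purely illustrative:

```python
# Illustrative subset of major-topic categories in the spirit of the
# Comparative Agendas Project codebook (names abbreviated, not exhaustive).
MAJOR_TOPICS = {
    "Health": ["healthcare", "insurance", "medicare"],
    "Environment": ["climate", "emissions", "pollution"],
    "Civil Rights": ["discrimination", "voting rights", "free speech"],
    "Immigration": ["immigration", "border", "visa"],
}

def match_topics(text: str) -> list[str]:
    """Return every major topic whose keywords appear in the text."""
    text = text.lower()
    return [topic for topic, keywords in MAJOR_TOPICS.items()
            if any(k in text for k in keywords)]

print(match_topics("New rules on health insurance and border security"))
# -> ['Health', 'Immigration']
```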
Using broad topic categories like that was a blunt approach, and it ended up catching a lot more actors than those who would traditionally be considered political. But we were willing to make that tradeoff. Eventually, there was so much blowback from people who didn’t want to be labeled political that you’ll notice Meta now uses the phrase “Social Issues, Elections or Politics.” Over the years, the company has further narrowed down what is in scope.
In the fall of 2019, the debate about political ads flared up again. This is when Twitter decided to ban political ads and Zuckerberg gave his big Georgetown speech about freedom of expression. Many other platforms also banned political ads. But banning them or not, these platforms still had to define what was political. Axios had a good rundown then about how hard it is to define what is political that they could basically reprint today. Here’s what they said was the bottom line:
“Without a regulatory body enforcing political ad rules, private companies have to set up their own rules around political ads. But even when they do, there are so many ways to define a political ad that those rules are hard to enforce without human oversight and judgment calls.”
Judgment calls. Those are nearly impossible to make at scale.
Want to know what else started happening? Different countries have different rules for political ads. So you’ll notice that Meta and Google break out their policies by country.
Now we get to another twist in this conversation. Even before Meta announced in 2021 that it would reduce the amount of political content in its feeds, it had found a need to build a way to identify organic political content - meaning posts that people didn’t pay to spread.
People generally feel much more comfortable with rules for political advertising but less so with general political speech.
A new twist to all of this is how platforms handle political content in their artificial intelligence tools. OpenAI, Google’s Bard, and others have said they will prohibit political use. To do that, they, too, have to define what is political, and we’ve already seen the challenges they have in enforcing these rules. This will continue to evolve as we create guidelines for what content large language models are trained on and what prompts are allowed. Thus far, most of the conversation has been about labeling the content these models generate.
As part of my research for this newsletter, I compared how various platforms define “political” today. You can see that comparison, plus a tab with every platform’s definition and a link to its policy. As you can see, once you get past content about a candidate, political party, referendum, etc., it’s a real mixed bag - with social issues being the fuzziest category.
I haven’t even gotten into how seemingly non-political things can quickly become political. Is anything mentioning Bud Light or Taylor Swift political? What about election officials just trying to get information out about voting? You might be tempted to say that the platforms should list who can do what. To answer that, I’d point you to my post about the complexity of building seemingly simple lists.
This debate will flare up a lot this year, and I imagine for a good bit going forward. This is a worthwhile conversation, but we should be clear about what we are trying to define and protect as we do so.