Digging Into Ten Overused Tech Tropes
Commonly used arguments about online platforms that deserve more nuance
I’ve been wanting to write this newsletter for a while. When I was at the Aspen/Columbia University AI and elections event last week, I started keeping track of some of the well-trodden talking points that annoy the heck out of me, in addition to some of the key takeaways. I wrote about the key takeaways last weekend, and today, I want to talk about the tropes.
Before I do, I want to give some caveats. I’m not saying any of these are false; all have elements of truth in them, and many are things we should do some version of. However, people of influence and power sometimes simplify too much, prescribing one fix to an entire industry that is, in fact, quite diverse. As humans, we might intellectually know there’s no silver bullet, but we desperately want one. It is harder to accept that maybe we don’t have the solutions yet.
My goal in highlighting these is to provide some context and continue to push us to move beyond the talking points into these nuances so that we can more productively spend our time. Each topic could be its own newsletter, so forgive the brevity.
A huge thank you to the many of you who contributed ideas. I had over ten to choose from, and you’ll see I combined some. Let me know what I missed or if you would characterize things differently!
Please support the curation and analysis I’m doing with this newsletter. As a paid subscriber, you make it possible for me to bring you in-depth analyses of the most pressing issues in tech and politics.
Companies know what to do, but they won’t do it.
I also heard that they won’t do it because it would cost them money, that they have enough money and smart people to fix the problem if they wanted to, and that tech is run by engineers who only care about tech.
It is true that companies allocate resources and set priorities all the time, which means that work in other areas gets stopped or slowed down, including work in the trust and safety space. That said, when I asked trust and safety people about this, the number one thing I heard is that they wish people realized how hard it is to build good content detection tools and just how bad we humans are at content moderation. Many smart people have been working on these issues for years, and it’s not helpful to be told, in the same breath, that you are doing a bad job and that you should be smart enough to figure it out.
Tech needs to hire more humans for content moderation.
A related assumption: if a decision is extremely obvious to humans (or to one particular human), it should always be easy to catch with automated systems or AI.
Here, I point you to the talk Dave Willner recently gave at Berkman, where he argues that the truth is we are just bad at content moderation and classification. (If you prefer a rough transcript, I have one from Otter AI here.) Humans have different value systems and different expertise; they make mistakes, and they can only hold so much knowledge in their heads. This is where AI could be really helpful, and I’d point you to Dave and Samidh’s Tech Policy Press piece for more.
However, there are challenges to just turning this all over to machines. More times than I can count, a human has caught a violating piece of content or behavior that a machine missed. This is why many think we need more humans. However, that approach does not scale. As Alice Hunsberger outlines in her newsletter, humans will still be needed at many points in this process.
Tech companies are pulling back on trust and safety/content moderation.
For this narrative, we have Elon to thank. When he gutted Twitter in November 2022 and started making erratic content policy decisions, the narrative immediately held that the site was going to hell in a handbasket. This, plus additional layoffs across tech, announcements by the companies that they would no longer remove content about the 2020 election, and headlines saying companies are “surrendering to disinformation,” added fuel to the fire.
Have layoffs happened and approaches changed? Absolutely. Does that mean these companies don’t care about the work anymore? No. The numerous trust and safety leaders I’ve talked to said it’s not true that layoffs disproportionately hit their teams or that they were the first to go. In fact, Yoel Roth tweeted that Twitter’s Trust and Safety team had been hit by only 15 percent, versus 50 percent for much of the rest of the company. Most platforms are pouring resources into elections and other issues; just look at their comments at the AI/elections event and everything they cover in their own announcements. OpenAI is working on using AI for content moderation. These teams continue to be stretched, but they are not gone.
Content moderation decisions are all on purpose.
This one often comes with an accusation that a decision was made for political reasons, or some other conspiracy theory. Another version is the assumption that content moderation can be done at scale without mistakes.
Mistakes get made all of the time. And when you are talking about millions - if not billions - of daily decisions by humans and machines, even a small error rate affects many users. Most of the time, when something was wrongly removed, it was a mistake - either human error or a bug - not someone liberal wanting to mess with conservatives. I covered this more in my everything is politics but politics isn’t everything post.
Fewer resources are being allocated to countries outside the US/other languages.
While it is true that platforms have to prioritize which countries and languages they work in, this is another one of those tropes where the answer isn’t just more money. One of the biggest challenges to building tools for other languages is the availability of training data in those languages. This is a gap many companies are hoping AI will help fill, though that has its own challenges, as this Center for Democracy and Technology report outlines. What companies could do is be more transparent about the challenges of building and testing these tools in different locations and languages, to set expectations amongst regulators and civil society. This is absolutely an area where more cross-sector collaboration could be very helpful. It’s also worth looking at the various tech company announcements for the upcoming international elections.
AI is turbocharging mis/disinformation.
This is one of my more recent rants. Be sure to check out next week’s podcast with Olga Belogolova, where we will dive into this. Daily, there are multiple headlines about how AI will worsen this problem. While there are certainly examples of it being used to create fake content, we must separate the signal from the noise - what it could do versus what it is actually doing. Moreover, we must separate how we tackle the challenges of what AI can help generate from how that content will be distributed. Those distribution channels will not just be online - they will include television, radio, the mail, and any other way that humans communicate. The platforms went into this more at the AI and elections event as well.
AI is going to allow for hyper-targeted messages.
At SXSW, someone outlined a pretty harrowing scenario of how AI could be used to wreak havoc on an election. Imagine getting a call from your boss saying that they know it’s Election Day, but they really need you to come in ASAP, and that election officials have told them you can vote tomorrow instead. Except it wasn’t your boss - it was an AI version of their voice. Hyper-targeted content from a person of authority. Eeek.
Now, this could happen. But it won’t necessarily be distributed online. As Watts said in New York, disinformation in private spaces is more dangerous than in public ones. Moreover, most tech companies limit how narrowly you can target content (an audience usually has to include at least a few hundred people). That means anyone trying to do this 1:1 would likely have to rely on data bought from brokers. The cost to an adversary is also still quite high, so while this is possible, it is probably unlikely to be widespread. If I were a bad guy, I’d look at a swing county like the one where my family lives in Green Bay. I’d look at the biggest employers in the area - Bellin Health and Schneider Trucking - to figure out who the top brass are and try to scale messages that way. Even still, that’s a lot of work.
OMG, this foreign adversary is doing this new thing to influence the information environment.
While there are certainly new tactics and platforms that foreign adversaries are using to sow chaos and distrust, many of them aren’t new. Just this week, there was a headline about China’s efforts to influence the US elections that both Olga and Renee DiResta debunked; David Agranovich and Clint Watts also debunked it at the Aspen event. We should absolutely research and report on these efforts, but we need to get the terminology correct (looking at you, NYT - China is engaged in disinformation, not misinformation) and give more context about what is new and what is not.
Removing Section 230 will solve the problems.
I’ll admit, I am surprised that many prominent people still think this is a viable solution. Reform, perhaps. But removing Section 230 would have downstream implications that I don’t think many people are really thinking about. FIRE has a good piece outlining the arguments here. Granted, they are biased and more pro-free speech than others. That said, I think more discussion of those downstream consequences is warranted.
Transparency is the answer.
Now, I am all for more transparency from companies. I think this, along with establishing more checks and balances amongst tech, government, civil society, and others, is an important thing to keep working towards. However, the meme about asking for more data and expecting the insights to magically appear also comes to mind.
We have to pair transparency with the resources and ability of people to analyze it. It drives me bonkers when people ask for more data/information and then just expect the data fairy to appear with all of the analysis and insights. That’s not how it works; it’s a lot of data. The EU’s DSA database already has over 15 billion statements of reasons from online platforms explaining why they removed content - and that’s just since late September, from 16 companies.
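To make that point concrete, here is a minimal sketch of what even the most basic analysis of that data involves. It assumes you have already downloaded one of the DSA Transparency Database’s daily CSV exports; the filename and column names below (such as platform_name and category) are illustrative assumptions, not the database’s actual schema, so check the official documentation before running anything like this.

```python
# Minimal sketch: tally one day's statements of reasons by platform and category.
# Assumes a locally downloaded CSV export from the DSA Transparency Database;
# the filename and column names are illustrative assumptions, not the real schema.
import csv
from collections import Counter

by_platform = Counter()
by_category = Counter()

with open("sor-daily-export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        by_platform[row.get("platform_name", "unknown")] += 1
        by_category[row.get("category", "unknown")] += 1

print("Statements by platform:")
for platform, count in by_platform.most_common(10):
    print(f"  {platform}: {count:,}")

print("Statements by category:")
for category, count in by_category.most_common(10):
    print(f"  {category}: {count:,}")
```

Even this trivial counting exercise requires someone to fetch and store large files, learn the schema, and decide which fields matter - and it gets you nowhere near the harder questions about whether the underlying decisions were any good.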
Moreover, being public about how everything works can be challenging, because people will then game or exploit it. As many of you know, reading a company’s transparency report often raises more questions than it answers. Daphne Keller and Max Levy have a great piece on getting transparency right at Lawfare.
I want to end with another tweet by Yoel before he left the service forever, “If there’s one takeaway, it’s this: What matters most in platform governance is how decisions get made. You can armchair quarterback specific choices and mistakes all day. But the real work is figuring out how to make principled decisions when all you have are bad options.”
I list all of these above not to discourage us from doing anything because there might be unintended consequences - those will happen no matter what. Instead, I’m trying to focus the debate and move us out of our proverbial corners, where we all have our talking points and keep lobbing them at one another.
What did I miss? What did I get wrong? Let me know in the comments or respond to this email. These are tricky issues, and I want to hear and elevate different perspectives.