Dispatches from TrustCon 2023
Themes from a week of geeking out about the online trust and safety field
Please consider supporting this newsletter. For $5 a month or $50 a year, you’ll get full access to the archives and newsletters just for paid subscribers. You’ll also support my ability to do more research and writing on issues at the intersection of technology and democracy. For instance, your support would cover the cost of my travel to TrustCon this week, which I paid for myself. A huge thank you to everyone who has subscribed!
Hello from San Francisco, where I’ve spent the past week hanging out with some of my favorite people in the world - the trust and safety community.
Two events happened this week. The first was Monday’s first-ever trust and safety hackathon, which came together after Jeff Dunn floated the idea in a LinkedIn post that exploded. In a few short months, the organizers brought together hundreds of people, in person and virtually, to hack on ideas for how AI could help protect users online. If you are interested, you can see the four winning ideas here.
The second was TrustCon - the Trust and Safety Professionals Association conference. TSPA was founded about three years ago, and this was only the second time the conference has been held.
Well over 800 people attended this year, and many more wanted to get in but couldn’t because it sold out. Attendees were mostly folks who work or have worked at tech companies in the trust and safety space, but government officials, civil society, academia, and members of the media were there too.
The main TrustCon ballroom.
I wanted to share some themes from the conference that stood out to me and others I chatted with.
Community - For a group working on gnarly topics, the energy all three days was incredibly positive. This profession clearly needed a place to gather across the industry to support one another and share ideas. It’s also a great way for those laid off and abruptly pulled from their company community to stay engaged. This can be lonely work, so having a community matters - it’s a reminder that help and support are out there.
The community has power - I often get asked why the industry won’t come together to set standards for things like hate speech, the information environment, and harassment. While I think it’s unlikely the industry will come together officially, I do think some standards will emerge from this community of trust and safety professionals. These are the front-line workers doing the day-to-day work of keeping platforms safe, and as they collaborate more, I expect similar values and standards to spread across the industry.
Professionalization of the trust and safety field - An interesting split is emerging between those of us who sort of fell into these jobs and people actively choosing to pursue a career in this space. It’s a sign that this field is maturing and growing; the fact that associations like TSPA and the Integrity Institute exist is another. I was telling people it would be cool to map out where different people in trust and safety started their careers and where they are now. The Facebook, Google, and Twitter diaspora is spreading across various organizations, taking some of those companies' standards and values with them. I think that’s another thing that will help build more consistency across the field.
Regulatory Compliance - When I asked members of the Integrity Institute for their takeaways, co-founder Jeff sent me this meme:
A LOT of sessions were about how people are preparing to comply with the EU’s Digital Services Act (DSA). Numerous vendors (a topic I’ll get to in a minute) were pitching their ability to help online platforms comply. There was a lot of confusion about what exactly is required for compliance, and smaller platforms expressed concern that they are expected to have the same resources as larger ones.
Generative AI - No surprise that artificial intelligence was a topic at the conference. Alex Rosenblatt wrote on LinkedIn, "Human review is no longer the gold standard, and in 5 years, the vast majority of content moderation will be performed by models like GPT-4.” No one thought humans would be completely eliminated from these processes, but there’s no doubt AI will change this space, making parts of the job easier and other parts more challenging.
Elections, Elections Everywhere - Won’t lie: it warmed my elections-obsessed heart that all of the elections happening in 2024 kept coming up. People were worried about how much companies might pull back on this work. They think platform leadership will eventually reinvest, but the question is, what happens in the meantime? Those questions, plus the sheer number of elections happening next year, caused most of us to start looking like Woody.
Transparency in decision-making - One of the first speakers at the conference, from Witness, made a good point. He said, “Who and how by which ‘solutions’ are identified, prioritized, designed, and deployed are just as important as the solutions themselves.” People in this field, and many outside the tech companies, don’t just want to hear what a company decided; they want to hear the thought process behind it - what tradeoffs were weighed, and why certain things were prioritized. Some noted how hard it is to scale this for every decision, but many agreed it would be a good place to start with decisions that are precedent-setting or fall outside the regular rules.
Diversity of online platforms plus vendors - The trust and safety space isn’t just about what’s happening on platforms like Facebook, Twitter, or YouTube. Gaming, messaging, and dating platforms were in attendance, as were a lot of vendors now providing support in this space. Some have built tools to help platforms scale their content moderation; others provide consulting services to help companies ramp up quickly. All of their booths were packed during every break.
For-profit vs. non-profit - One of my favorite panels was the last one on Thursday, about funding in this space. I’ve been grappling with which information and tools are best offered as open source, and where open source starts to cannibalize people like me who are trying to make a living as consultants or tool builders. There’s agreement that both types are needed, but I think this community will have to figure out how to find that balance.
Trust and safety workers are under attack - Twitter’s former head of trust and safety, Yoel Roth, did a fireside chat on Thursday. One of his messages was how to keep yourself safe from online harassment, especially given the focus Congress and the courts have put on this work. Yoel has unfortunately been one of the higher-profile targets, but many in the room have had to take measures for their own security. Here’s what he recommended:
Prepare to be a target and for it to happen quickly. For Yoel, it happened in a day.
You can do immediate things now to protect yourself, such as using tools like DeleteMe, TallPoppy, and BlockParty to remove information like your address from the internet.
We need to talk about this. We act as though we must stop worrying about resilience once we become managers. Instead, we need to acknowledge how vulnerable and scary it is, and share resources.
Integrate employee safety into your company strategy.
Remember the mission. When it happens to you, it is easy to get caught up in fear and embarrassment. Try not to let it throw you off the mission.
When you’re under attack, you won’t know how to ask for help or what you need. So if you want to help a person under attack, be proactive and specific about the support you can provide.
Here are a few other resources from the conference. First, recommendations on where to go for trust and safety information:
LinkedIn - Alice Hunsberger is a good starting point
Also, Aaron Berman did a great session on writing for tech executives. Many of his tips are in his newsletter, The Blue Owl. During the session, I mentioned the traffic-light grid for framing decision tradeoffs and wanted to link to it here.
Finally, I just had to share the conference swag. It’s so good.
I’m off to Sacramento for the weekend and then back to DC. Since I’m just putting this newsletter out today, I won’t do one this weekend, but we’ll get back to our bi-weekly posting schedule next week. Thanks!
What I’m Reading
Digital Trust and Safety Partnership: The Digital Trust & Safety Partnership Releases Inaugural Trust & Safety Glossary of Terms
The New York Times (Opinion) - Kate Klonick: The Future of Online Speech Shouldn’t Belong to One Trump-Appointed Judge in Louisiana
Morning Consult - What’s Driving Threads’ Early Success — And Why Twitter Is in Trouble
New York Magazine - The 2024 Election Will Be an Informational Nightmare
Pew Research Center - Republican Gains in 2022 Midterms Driven Mostly by Turnout Advantage
Knight First Amendment Institute - Complaint against Texas TikTok Ban
The New York Times - ChatGPT Faces Investigation by FTC
Trusted Accounts - Bluesky Community Moderation Proposal
Twitter Blog - Freedom of Speech Not Reach: New Updates and Progress
Financial Times - Nick Clegg: Openness on AI is the way forward for tech
Financial Times - Marietje Schaake: It’s time to let go of the global internet dream
Emily McDowell's Substack - The Truth About Going Mega-Viral
Emily McDowell's Substack - The Truth About Going Mega-Viral: Part II
The Wall Street Journal - Disney's Iger Hints at Strategic Partner for ESPN, Possible Sale for Linear TV Assets