I forgot to mention in my last newsletter that I was taking last week off. I was in Turks and Caicos for my 40+1 birthday party. I’m calling it that because I was supposed to go last year with some friends for my 40th, but due to COVID, we had to push it back a year.
Before I jump into this week’s topic, I also wanted to flag that I’m doing my first Washington Post Live this afternoon. It starts at 4:30 pm Eastern with an interview with Rep. Adam Schiff, and then I’ll be coming on around 5 pm. We’ll be talking about the weaponization of misinformation, the efforts to hold social media companies accountable, and the push for more media literacy to combat the spread of conspiracy theories. Register to watch here.
This week two organizations - the Aspen Institute and the Center for American Progress - released comprehensive reports with recommendations on dealing with the societal problems unleashed by the internet. They are the latest in a long line of reports, articles, books, white papers, and speeches over the years offering ideas on what policymakers and regulators should do about tech.
As a voracious consumer of this content, I appreciate the thoughtfulness behind it. Most authors genuinely want to find good paths forward and do their best to provide practical ideas on how to do that. However, I keep running into the same frustration when reading these reports: an unwillingness to get into the nitty-gritty details, which is exactly where these proposals usually start to fall apart.
Sidenote: Another frustration I have is how most of these reports focus on the United States because that's where the funding and expertise are. But I'm working on a future newsletter about the lack of international attention to these issues.
I’m going to focus on the Aspen Institute’s report for some examples. However, I want to say at the outset that overall I found the report comprehensive, and I appreciated that the commissioners looked at where the government, media, companies, and others need to step up their game. It wasn’t just a long list of things that tech companies should do. It’s worth a read, and I hope it is a discussion starter. In that spirit, here are some of the things I’d like to see in future reports from any organization (full disclosure, I also hope to help write some of these reports):
Prioritization: The Aspen report has 15 recommendations in it. Most reports have more than one thing that they want companies and governments to do. But what I rarely see is guidance on how the authors think companies or governments should prioritize implementing those recommendations. Now, some of these should be done in parallel, but for others, choices will need to be made. I’d like to see more discussion about the principles that companies and governments should use when deciding where to focus their time, energy, and money.
Definitions: I was on a panel yesterday where they wanted feedback about a suggestion to support local journalism. The Aspen report has a similar one. I totally agree that it should be supported, but how do you define who is or isn’t a journalist? How do you determine whether a news organization is legitimate? Aspen acknowledges that answering these questions will be challenging, but I want to see more than just an acknowledgment that they are challenging. I want a commission or report to start putting some definitions around this. The same goes for defining who counts as a politician, what counts as a political or issue ad, and who is a legitimate researcher.
Product Understanding: Because of how opaque tech companies are about how their products work, it’s tough for authors of these reports to make helpful recommendations that companies could actually implement. This is where I’m hopeful that an organization like the Integrity Institute can help more. Examples include:
Product features: Most recommendations in the reports I read focus on product interventions that kick in after content has been posted - features such as labels, downranking, fact-checking, or slowed sharing. We should also be looking more at what sort of product speed bumps could be put in place to deter bad activity in the first place. For instance, the friction added to running a political or issue ad on Facebook made it much harder for bad actors to run misleading ads at all.
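To make the "speed bump" idea concrete, here is a minimal sketch of what upstream friction checks look like in code. The field names and checks below are hypothetical, loosely inspired by Facebook's ad-authorization flow - they are not a real API:

```python
# Sketch of an upstream "speed bump": checks an advertiser must clear
# *before* a political ad can run, instead of moderating the ad after
# the fact. All names here are illustrative stand-ins, not a real API.
from dataclasses import dataclass


@dataclass
class Advertiser:
    identity_verified: bool = False    # government ID confirmed
    location_confirmed: bool = False   # code mailed to a physical address
    disclaimer_on_file: bool = False   # "Paid for by ..." string registered


def can_run_political_ad(a: Advertiser) -> bool:
    # Every friction step must be cleared up front; failing any one of
    # them stops a misleading ad from ever being published.
    return (a.identity_verified
            and a.location_confirmed
            and a.disclaimer_on_file)


print(can_run_political_ad(Advertiser()))  # False: no checks cleared
```

The design point is that each check raises the cost of abuse before any content exists to moderate, which is much cheaper than chasing a misleading ad after it has already run.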
Understanding algorithms: I hate to break it to you, but if you think there’s an easy way for a tech company to tell you exactly why you saw a piece of content, I’ve got a bridge to sell you. I appreciate the intent of the Aspen Institute’s idea of an amplification flow tool, but it’s just not how AI works. Now, you might be able to show which high-profile accounts amplified a post, but that too is tricky, as many accounts don’t reshare a specific post but instead repost the same content themselves. Regardless - lovely sentiment, but nowhere near as simple as it seems. If you want to learn more about this, the Integrity Institute has a whole deck on how the algorithms work.
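As a toy illustration of why a single "reason you saw this" rarely exists, here is a sketch of a ranking score built from hundreds of signals. The weights and features are random stand-ins, nothing like any platform's real model; the point is only that the final score is a blend of many tiny contributions, none of which maps to a human-readable explanation:

```python
# Toy illustration of why "tell me exactly why I saw this post" is hard:
# a feed-ranking score blends hundreds of learned signals. Weights and
# features here are random stand-ins, not any platform's real model.
import random

random.seed(0)
NUM_FEATURES = 300  # real systems use far more signals than this

# Stand-in for learned model weights (in production these come from
# training, not from any rule a person wrote down).
weights = [random.uniform(-1, 1) for _ in range(NUM_FEATURES)]


def rank_score(features):
    # The final score is a sum of many small contributions.
    return sum(w * f for w, f in zip(weights, features))


post_features = [random.random() for _ in range(NUM_FEATURES)]
contribs = sorted((abs(w * f) for w, f in zip(weights, post_features)),
                  reverse=True)

# Even the single biggest signal explains only a sliver of the score.
top_share = contribs[0] / sum(contribs)
print(f"largest single-feature share of the score: {top_share:.1%}")
```

With hundreds of signals, even the most influential one accounts for a tiny fraction of the score, which is why an "amplification flow" readout can't reduce a ranking decision to one cause.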
A nuanced look at content moderation: For all future report authors, I beg you - please stop with the generic recommendation that tech companies should be more transparent about their content policies and give people specific explanations for why something was removed. That’s all true, but there’s so much more that needs to be dug into here. If this is all the guidance given, then you’ll have platforms like Facebook who will be like, “Great, we already do that.” In reality, people want not only the specifics of the policies; we should also be asking for more transparency on how effectively those policies are enforced and what resources companies have put into proactively finding violating content. We also have to acknowledge the inherent tension between speed and transparency. Integrity teams will often uncover new harms where there are no policies. They’ll want to move fast to take action even if there’s no specific policy to point to. The policy teams will want something on the books so the action is publicly defensible, and those policies take time to develop. So back to request number one - let’s have a conversation about prioritization.
Audience/Group Size: One of the hardest and most interesting conversations we need to have is about when it becomes appropriate to moderate conversations. Is it cool if it’s a 1:1 conversation? What about a small group of eight to ten people? We don’t moderate these kinds of conversations in the real world, but they happen online more and more. What should our principles be for moderating real-time conversations that may never be heard again?
Radio and television are not easier to monitor: The Aspen report has a line that says, “Content shared on radio and television is viewable by anyone.” While it’s technically accurate that I can easily access any station available in my geographical location, that does not mean I have an easy way of monitoring what is being said across them at all times. I don’t know of any public ad database to search every ad run on radio and TV, who saw or heard it, and who purchased it. (Clarification: I do know about the public file that stations have to keep, but the point I was trying to make is that the file doesn’t include a copy of the ad, nor is it easily searchable in the way the tech companies’ archives are. Thanks to Alex Howard for pointing this out to me.) Is there an easy way to look at what topics were discussed across talk radio on any given day? Some of these stations only reach a few thousand people - online, that would be considered microtargeting. Moreover, despite being de-platformed by Facebook, Twitter, and YouTube, Steve Bannon continues to have a vast audience, not necessarily through Gettr - where he live-streamed turning himself in to officials Monday - but through his podcast, which also airs on television and radio through deals he has made. Let’s not only ask for transparency from the platforms but from other types of media as well.
I do hope to do my part over the coming months and years to contribute to the thinking on the issues above. Digging into these details is the next step we all need to take if we’re going to get to any meaningful regulation in this space. I don’t think it’s fair to throw this to some government agency and ask them to figure it out.
What I’m reading
Ethics and public policy in technology evening course: Stanford is offering this evening course for professionals again in early 2022. I took it at the beginning of this year and found it fascinating.
Nick Clegg Op-Ed: A Bretton Woods for the digital age can save the open internet
New York Times: Covid-19 Misinformation Goes Unchecked on Radio and Podcasts
Atlantic Council: How to get Biden’s democracy summit right