Greetings from Austin. I’m down here for SXSW with the Bipartisan Policy Center. It’s a much chiller SXSW than in the years before COVID, but I like it. I’m really enjoying getting to see people I haven’t in a long time.
While here I gave a keynote about the future of tech and elections. I spoke a little about the history of how tech has impacted democracy and what we can learn from that for the next twenty years. A huge thank you to everyone who came. For those who couldn't, I wanted to share my remarks as written below. You can also see the slides I made here.
As always, I appreciate any feedback. It's a long speech, so I totally understand if you don't read all of it. But I want to keep refining this talk, so any thoughts will help. Moreover, many of you have been telling me how you are reading the newsletter, and it means a lot to know that you find it helpful.
Also, if you are in DC on March 23rd the R Street Institute and the Integrity Institute are doing an event: Content and Governance Online: Working Together for a Better Internet. I’ll be moderating the second panel. Would love to have you join us if you are interested.
SXSW Speech - Future of Democracy: How Tech Will Shape Elections
Thank you all for coming. I’m really excited to be here today to talk about a topic that has been near and dear to my heart for nearly twenty years - tech, democracy and elections.
My name is Katie Harbath. I’m currently the founder and CEO of a company called Anchor Change that I started after leaving Facebook nearly a year ago today. Through my company I work on a portfolio of projects at the intersection of tech and democracy at organizations that include the Bipartisan Policy Center, the International Republican Institute, the Integrity Institute, the National Conference on Citizenship and the Atlantic Council.
Prior to this I spent ten years at Facebook where I was on the public policy team. While there I built and led global teams that worked with government and political figures on how to use the social network as well as coordinated the company’s elections work from 2014 to the end of 2019. Over the course of those five years we would track at least 200 elections a year.
One evening in January 2020 I was sitting in a hotel bar in Washington DC talking with a mentor of mine about my career. I was no longer working on elections at Facebook and I was heartbroken about it. I was trying to figure out a role for myself and where I could make an impact in the long-term. Being the elections geek I am, I started thinking through the elections calendar for the next five years.
I chose this period of time because every five years the world tends to have a huge year of elections. It happened in 2014 and again in 2019. I then started doing the math as to which countries would be going to the polls in 2024. There was the U.S. presidential election. Then I realized India and Indonesia would be voting too. Mexico is on a six-year cycle and last voted in 2018, so it would be up, as would the UK and the European Parliament. More digging revealed that Ukraine and Taiwan would be going to the polls as well.
This is what started my obsession with 2024. I started thinking about how different those elections could look depending on what tech regulation was passed, how threats evolved online, what new platforms were being used for political communication and how much the tech companies themselves invested in protecting the integrity of the elections.
I had no idea then what was in store for us over the next two years. We are all witnessing in real time that protecting democracy around the world is getting harder as Ukrainians bravely fight against the Russian invasion of their country. In the past two and a half weeks the world has taken unprecedented steps to help the people of Ukraine. Tech companies have had to balance keeping their platforms up in a country where they are some of the few channels not controlled by the Russian government. Yet they also don't want to allow the spread of Russian propaganda.
Moreover, just last week V-Dem - a leading democracy measurement organization - found that in the last decade we've lost 30 years' worth of democracy building, and we are now back to levels last seen in 1989. Clearly, we need a new approach to building democracy in the digital age.
This is why I am here today. I want to:
Raise awareness of the electoral tsunami coming our way
Provide some history of how we got to this point and how we can learn from the past
Explain how tech companies are structured and think about these problems
Talk about how we must start preparing today: building up capacity, running simulations to understand and fill gaps, and having an always-on strategy - even after Election Day.
History of tech and democracy
First, a little history and context of how we got to where we are today. When I talk to students these days, many are shocked when I remind them that Facebook didn't exist when I graduated college in 2003. My first job in DC consisted of answering the phones, but also learning how to send out a mass email and code a website. MySpace was all the rage, people were creating blogs on Blogspot (including myself) and the iPod had just come on the scene. YouTube, Facebook, Twitter, Instagram, WhatsApp, TikTok, Clubhouse, Telegram - none of those existed either.
Through 2015, much of the world was very optimistic about the impact the internet could have. Politics first came online in the US with the first White House website in 1994. Google was founded four years later in 1998. In 2000, Senator John McCain was doing the novel thing of mentioning his website URL in all of his speeches and raising small-dollar donations online during the Republican primary. This was also the first presidential campaign in which campaigns started building email lists. The 2002 US midterms saw the Republican National Committee start to beta test new ways of microtargeting television, radio and direct mail ads by matching consumer data with voter files. Microtargeting is a tactic used by both parties to send a tailored message to a subgroup of the electorate on the basis of unique information about that subgroup. For instance, based on the type of car you drive or the TV shows you watch, campaigns can combine that information with voter files to determine that people who watch the show CSI are more likely to be Republican, and then choose to advertise - or not - during those shows depending on which voters they want to reach. Those efforts were refined for 2004, which led to that election being known as the presidential election that "is notable for the creation of modern-day microtargeting, capitalizing on innovations in messaging technology."
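At its core, the consumer-data-plus-voter-file technique is a simple dataset join followed by a scoring rule. Here is a minimal sketch in Python; all of the field names, records and the "CSI viewers lean Republican" rule are hypothetical illustrations, not a description of any real campaign system.

```python
# Illustrative sketch only -- the data fields and matching logic below
# are hypothetical, not any actual campaign's methodology.

# Voter file: registered voters, keyed here by name for simplicity.
voter_file = [
    {"voter_id": 1, "name": "Alice", "district": "TX-10"},
    {"voter_id": 2, "name": "Bob", "district": "TX-21"},
]

# Consumer data: lifestyle signals of the kind bought from data brokers.
consumer_data = {
    "Alice": {"watches_csi": True, "car": "pickup"},
    "Bob": {"watches_csi": False, "car": "sedan"},
}

def match_and_target(voters, consumers):
    """Join the two datasets and flag voters for a tailored ad buy."""
    targets = []
    for voter in voters:
        profile = consumers.get(voter["name"], {})
        # Toy scoring rule: CSI viewers are treated as likelier
        # recipients of the tailored Republican message.
        if profile.get("watches_csi"):
            targets.append(voter["voter_id"])
    return targets

print(match_and_target(voter_file, consumer_data))  # [1]
```

Real-world matching is far messier (names are ambiguous, records are incomplete), which is part of why these programs took years of refinement.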
The 2004 presidential election - the first I worked on - is also well known as the one where bloggers - who were not professional journalists - had a big impact on the national conversation, culminating in the scandal involving Dan Rather and President Bush's military record. At the RNC we were experimenting with funny online videos and websites - including having interns dressed as dolphins follow the Kerry campaign around to highlight his tendency to flip-flop on issues. We didn't have YouTube at the time, so the team would FTP the video files to me overnight each day, and I would then edit them and create six different versions - high-bandwidth and low-bandwidth versions for Windows Media, QuickTime and RealPlayer. We had to pay $30,000 a month just to have our own streaming server.
This was also the election where Howard Dean's campaign revolutionized the use of tools like MeetUp and took online fundraising to a whole new level. Joe Trippi's The Revolution Will Not Be Televised is one of the first books about the impact of the internet on politics and campaigning.
The election in 2004 is an important one to study as the very beginning of our journey to where we are today. Many of the issues and questions we face around microtargeting and political advertising, the state of journalism, fact-checking, amplifying content, polarization, and the use of outrage and anger to elicit action were all at play. Back then the big concerns online were emails that people would forward with incorrect information about the candidates - with no easy way to refute it - or what bloggers were saying to their readers.
After 2004 things became an absolute whirlwind of change and innovation in how technology would shape political campaigning and governing around the world. Facebook was started in early 2004 by Mark Zuckerberg, began opening up beyond college campuses in 2005, and let anyone sign up by 2006. Also in 2005, YouTube came on the scene, and by 2006 candidates such as George Allen of Virginia were seeing their political careers end because they were not only caught on video saying things they shouldn't have, but that video was easy to upload and distribute to the public. Then came Twitter in 2006, where anyone could share real-time thoughts in a short, easily digestible format. By the 2008 presidential campaign, iPhones were also on the scene, and the underdog, scrappy campaign of President Obama was about to take the use of technology and data to a whole new level once again. They had access to platforms such as Facebook and YouTube that the campaigns in 2004 or even 2006 didn't have. They had the courage to use them and try new things. It is credited as a big part of why he won.
Four short years in politics, but long years in technology, brought even more tools and platforms for campaigns to use in 2012. Online advertising was finally starting to take off, and campaigns were ecstatic at being able to run ads to more precise locations than television advertising allowed. It meant they could run ads to people in New Hampshire without also wasting money on people in Boston. This was also when the big shift to mobile was happening as the iPhone grew in popularity. People no longer had to be in front of a computer to consume content; they were doing so while commuting, waiting at the doctor's office or standing in line to buy groceries. Campaigns utilized the social graphs of platforms to ask people to phone bank not strangers, but rather their friends or people they might have something in common with, such as being fans of the same sports team. This social networking effect was also being used to help increase support for gay marriage across the country. And it was starting to be used to gather more information on people to append to voter files - including by a company called Cambridge Analytica, created in 2013.
Innumerable articles and books have been written about the phenomenon that was the 2012 campaign and how the use of digital tools was revolutionizing politics. Overseas, the Arab Spring had happened in 2011, and many - including myself - were feeling pretty darn good about the promise of technology. In fact, at Facebook we were determined to make 2016 known as the Facebook election. It ended up being that way - but not in the way we wanted.
That's because concerns were starting to emerge. As candidates and parties in other countries started to see what was happening in the States, political consultants from the US would go to Canada, the UK, Australia and many other places around the world to teach them how to build and use voter files. Europe - which is well known to be much more concerned about data privacy - started to have its data protection authorities (DPAs) express concern over the use of data this way in campaigning. That didn't stop candidates across the world from adopting these tactics to win their elections, including Narendra Modi in India, Mauricio Macri in Argentina, Justin Trudeau in Canada and David Cameron in the United Kingdom.
In the United States we were starting to see more accusations from people on the right that these technology companies were skewed toward helping Democrats over Republicans. Facebook was facing questions about its "I'm a Voter" button, which we showed at the top of the News Feed asking people to share that they had voted. In 2012 we published a study in the journal Nature showing that more people went to vote if they saw that their friends had voted. But people on both sides of the aisle questioned how they could trust that we were truly showing it to everyone and not only a subset of people.
By the middle of 2015 the 2016 presidential race was well underway. Meerkat - a livestreaming platform introduced here at South by Southwest - was all the rage, and some declared that 2016 would be the Meerkat election. That didn't exactly come true, as Twitter and Facebook created their own livestreaming platforms that quickly eclipsed Meerkat, but it was true that this would be the first election to really utilize live video in a whole new way. There was the first-ever debate hosted by Facebook and FOX in August 2015, where Donald Trump became the first candidate to try our new Facebook Live platform - filming himself arriving in Cleveland on his helicopter.
In December of 2015 came one of the first major decisions platforms had to make about the content of a candidate's post, when Trump said that Muslims should be banned from the United States. Calling for a complete ban on a group of people based on their religion was against our community standards, but we also didn't feel comfortable not letting people see what presidential candidates were saying. That tension continues today.
Overseas, the campaign in the Philippines was in full swing, where news sites like Rappler - run by Maria Ressa - were able to grow thanks to their use of social media. But you also had candidates like Rodrigo Duterte using social media to harass, intimidate and spread false rumors about people like Maria. It all started to really come into focus for me on May 9, 2016 - the start of what I call the reckoning phase for the internet and democracy.
On May 9 I was in Manila for election night in the Philippines. I was about to do a TV hit talking about the amount of conversation we'd seen on Facebook about the candidates when I took a quick look at my Facebook feed and saw a story from Gizmodo in which contractors accused the company of suppressing conservative content. I turned to the comms colleague I was with, handed her my phone and said I really hoped the news didn't travel too fast across the Pacific. I did my final hit - the news hadn't reached Manila yet - and then high-tailed it back to my hotel. I spent the next week and a half or so, along with others, working with Facebook leadership to understand what had happened, answering questions from MANY angry folks on the right - including Senators - and pulling together a small group of Republicans to talk to Mark and Sheryl at headquarters.
A month later Brexit happened, and many blamed it on the Leave campaign's use of microtargeting and the messages they shared - and amplified - on social media. In September, The Economist published a cover story about post-truth politics. I remember thinking how much of a problem that was going to be for the German and French elections coming up in 2017. I also remember talking to some at Facebook about it, and no one was sure how we could ever police what is true or not. In November, Donald Trump stunned the world by winning the presidency. In many ways his campaign continued to innovate on how data and technology could be used in an election, just as President Obama's had four years earlier. But this time the internet was also being vilified for enabling the spread of fake news, foreign interference and increasing polarization.
Now companies such as Facebook were forced to figure out how to solve problems like the spread of false information on their platforms (from those who do it to make money to those who do it to cause havoc or win elections), how to define and combat foreign interference, and how to bring more transparency to political ads - problems the companies are still grappling with today. Governments and regulators weren't stepping in to tell us how to do it, and everyone across the political spectrum and around the world had different viewpoints on what the solutions should be.
You also saw the internet and social media continue to be used by people to organize efforts across the country. Millions attended the Women’s March the day after the inauguration thanks to it being organized on Facebook. People connected with one another to protest various travel bans and to help immigrants get legal support. Friends and former co-workers of mine raised millions of dollars online to help families at the border.
By the time the 2020 election came around, the landscape once again looked very different. In 2018 you had the US midterms and an election in Brazil, and 2019 saw a huge number of elections in India, Indonesia, Israel, Australia, the European Parliament, Thailand and many other places. Each struggled with issues of hate speech, false news and transparency online. These were the first elections where Facebook and other tech companies introduced their political and issue ad transparency requirements and ad archives. Domestic interference online was more problematic than foreign interference because it was much harder for platforms and governments to put rules around. In 2019, some companies like Twitter decided to ban political ads altogether. Google reduced the targeting options for ads. Facebook, on free expression grounds, said that political figures' posts wouldn't be penalized by fact-checking, but it also maintained a robust political and issue ads transparency database where people could see who had paid for the ads, the range of money spent, and the age, gender and location of who saw them.
Innovations in how campaigns and other political groups used technology continued as well. Text messaging - which had been used by campaigns for a while - became much more mainstream. Snapchat, Instagram and TikTok were also used more by candidates to reach younger voters. Candidates were working with online influencers, asking them to share campaign content - blurring the definition of what is considered an advertisement even further. Online advertising, email and data continued to dominate too.
2019 is also when the regulatory period for the internet started to emerge. While tech execs started to get dragged to hearings before 2019, this is when you started to see actual bills introduced, such as the UK Online Safety Bill and the DSA/DMA in Europe. This is when the EU enacted the Code of Practice on Disinformation - announced in April 2018 - ahead of its elections in May 2019. Bills - lots and lots of bills - got introduced in the U.S. Congress. Australia passed legislation to make big tech companies pay local publishers. And by the end of 2021, according to Freedom House, authorities in at least 48 countries had pursued new rules for tech companies on content, data, and competition over the past year. (Many of these are very problematic for the effect they will have on minority voices.)
As concerns continued to mount about whether President Trump would accept the outcome of the 2020 election, coalitions of experts and civil society groups came together to do threat ideation exercises on what could happen and what the President might try to say or do. Then COVID hit, and campaigning moved almost entirely online. The racial justice protests of June brought more tough content policy calls by platforms and a move to start labeling content. People and organizations came together to utilize the internet to get people the right information about how to vote. It was also used to help normalize the fact that we likely wouldn't know a winner on Election Night and that the overall process could take us to January, which we all know it did. And on that night of January 6th and in the days following, the platforms and other internet service providers did what many thought would never happen - they deplatformed President Trump and others involved in the riots.
Now we are entering the decentralization and disruption phase. Back in early February I wrote about how I thought the winds were changing for tech. Every day I feel that more and more. I think the web we know today is going to look drastically different, potentially as early as the end of this year but definitely by the end of 2024. With Google's announcement about the privacy changes it's making to the Android platform, as well as Europe's planned regulation of targeted ads, I think digital advertising is going to look and work dramatically differently. I was in Brazil recently, and I wasn't hearing a lot of concerns about Facebook or WhatsApp (though people do still have some), but they were talking about Telegram and how popular TikTok and Instagram Reels were. YouTube also came up more than I was expecting.
We are moving into an era where we are no longer just focusing on the big tech platforms but the use of a lot of different ones. Good and bad actors are pushing their content out on a variety of platforms online and offline. People are sharing things with smaller groups on messenger apps versus publishing for all to see. They post content that only lives for 24 hours. They’re engaging live on platforms like Twitter Spaces and Clubhouse.
For most of us, this is just allowing us to segment our lives a bit more. For bad actors, it's making it harder to track them and even harder to know how to combat the disinformation they are trying to spread. This is why I think transparency legislation needs to be at the top of any policymaker's list. We need easier ways to monitor what is happening so we can even know what the actual problems are.
Moreover, ever since Frances Haugen came forward last fall, I have seen regulators, journalists and others realize how important it is to talk to the actual people building these products. At the Integrity Institute, we've been blown away by the interest in talking to our members.
This has all led me to wonder: how do we responsibly move into this new era? A colleague reminded me recently that Silicon Valley is moving to web3 and the metaverse, regulators are focusing on the present, but we still have nearly three billion people - 37 percent of the world - who have never used the internet.
So not only are we going to be stretched thin trying to monitor what is happening across many more platforms, but we also have to think about both the rules for future tech and how we keep the platforms still used by so many across the world safe. How do we help build digital resiliency for those three billion people who will likely someday get on the internet?
Tech and Elections
When I first started building out the global politics and government team at Facebook back in 2013 I had never traveled anywhere outside the United States other than Canada. Over the next six years I was lucky to visit 28 countries and work on at least one election in every country that has them around the world.
To say the experience changed me would be an understatement. I quickly realized that the United States is the exception rather than the rule when it comes to elections and politics. There's no strong central election commission (instead, it's a decentralized system across 8,000 election officials), the campaign periods are longer, there are fewer regulations (like political ad blackouts) and far more money is spent on elections here. Most countries have more than two parties, many have a parliamentary-type system, some have multiple election days, and cycles of three, five or even six years exist. Some have four-year cycles, but few go to the polls every two years.
On top of that, you have over 7,100 languages and a myriad of cultural norms, turns of phrase, and images or symbols that can mean one thing in one country and something else entirely somewhere else. The Oversight Board recently dealt with something like this when it upheld Facebook taking down hate speech in South Africa but had previously ordered the restoration of the same word in a different context in India.
All of these factors make protecting the integrity of these elections exponentially harder: you can't just build a tool such as a hate speech classifier or political and issue ad transparency and then scale it to the rest of the world. Rather, you usually need to customize some of it to fit each country's structure and norms to be effective.
Let's first take a look at how these companies are roughly structured around the globe. Most companies have split the world up into regions:
North America (which covers the U.S. and Canada)
LatAm (which includes Mexico, Central America, the Caribbean and South America)
Europe, Middle East and Africa (sometimes shortened to EMEA)
India and Asia-Pacific (sometimes shortened to APAC). For some companies India is a part of their APAC teams and for others - such as Facebook - they are separated out.
The personnel in each of these regions are usually non-product teams consisting of people from public policy, communications, partnerships and sales. Some hubs do have large operations teams and even product teams. Some of these people will be in charge of just one country, some a few, and some will cover the entire region. Those regional leaders then usually report to someone back in the United States.
It then usually varies between companies as to which initiatives are coordinated regionally vs centrally as well as where they choose to build up their support teams to work with partners.
When developing a global strategy, one needs a set of principles that can apply globally but be adapted locally. To give an example, when I was at Facebook we had principles - such as making sure to provide the same information to all candidates and parties - that we applied globally, but then we had a menu of different options of what we could do in each country depending on what was needed. For instance, in some countries like Russia, with strict laws around foreign involvement in their elections (ironic, I know), the most we could really do was monitor for potentially problematic activity or requests similar to what Apple and Google got recently. In other countries like India we would do a wide range of work, from Election Day reminders to integrity operations centers.
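The "global principles, local menu" structure can be sketched as a simple lookup: a fixed set of global commitments combined with a per-country list of allowable programs. The country codes and option lists below are purely illustrative assumptions, not Facebook's actual playbook.

```python
# Hypothetical sketch of "global principles, local menu" planning.
# Country codes and menu contents are illustrative, not real policy.

GLOBAL_PRINCIPLES = ["same information for all candidates and parties"]

# Menu of election-integrity options applied per country, shaped by
# local law and need; "default" covers countries with no custom plan.
COUNTRY_MENU = {
    "default": ["monitor for problematic activity"],
    "IN": ["election day reminders",
           "integrity operations center",
           "monitor for problematic activity"],
    "RU": ["monitor for problematic activity"],  # strict local law limits options
}

def plan_for(country_code: str) -> list[str]:
    """Return the election program for a country: the global
    principles plus whatever the local menu allows."""
    menu = COUNTRY_MENU.get(country_code, COUNTRY_MENU["default"])
    return GLOBAL_PRINCIPLES + menu

print(plan_for("IN"))
```

The design point is that the principles never vary; only the menu does, which keeps the company consistent globally while staying legal and useful locally.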
Setting the priority of what to do in each country is a difficult task for any company. There simply isn't enough people power to do everything everywhere. Nor is it always right to assume that something which works well in the U.S. will work in other places. That said, I am keeping a close eye on whether the companies will do for international elections what they did for the 2020 U.S. election.
In the U.S. we saw many new efforts put into place, such as labeling content, more robust voter information centers and eventually the unprecedented move of deplatforming President Trump. No question the 2020 election was unique in many ways; however, given that many more candidates and political parties around the world - from Germany to Brazil and Peru - are adopting some of the rhetoric and tactics from last year, I sincerely hope we continue to see this work built upon for the rest of the world.
One other important thing to mention: the fantastic work that groups outside of the companies are doing to help protect the integrity of elections. They range from my colleagues at the Atlantic Council's Digital Forensics Research Lab to the good folks at NDI, IRI, IFES, the Kofi Annan Foundation, the Alliance of Democracies, the Athens Democracy Forum and many others. It may sound like corporate-speak (I've said it about a bajillion times myself), but partnerships like these are essential.
What Can We Learn From the Past
It's quite breathtaking to look at all that has happened over the last twenty years in technology and democracy. Rarely does a day go by when I don't think about what I might have done differently knowing then what I know now. Some of you might be sitting there thinking: why listen to me about the way forward when I clearly was part of getting the world into this mess? While that's a very fair thought, I also think my history and experience can help in learning lessons from the past.
Before we move on to what we should do moving forward I do want to share ten themes that have emerged for me about what those past lessons actually are.
First, decisions made today might not have repercussions until years down the line. The mantra of Silicon Valley seems to be: let's build it and deal with the consequences later. In April 2010, Mark Zuckerberg announced a new philosophy of how the web should be structured. He called it Open Graph. Developers were given ways to access people's networks for whatever purpose they thought valid. In the political world, this allowed those fighting for gay rights to mobilize support by showing people how those they were connected with would be affected. President Obama's team used it to connect phone bankers to voters with whom they had something in common. As I mentioned above, it was also used by Cambridge Analytica - a controversy that didn't come to light until eight years later, a full three years after Facebook had shut the functionality down.
Second, good intentions do not mean good outcomes. I don't think anyone in Silicon Valley had nefarious motives when they built new technologies. But I do think we get so caught up in all the positive possibilities of tech that we fail to think about how to build it responsibly. Early in a startup's life, it worries about growth because that's what the VCs care about in order to keep giving it more money. They all think they'll deal with any problems later. Or they hold on too tightly to the idea that their intentions were good, and fail to look at what the actual consequences are. I think this is partly what happened at Facebook with meaningful social interactions and how those changes ended up making people angrier.
Third, we have to care about the impact that new and current technologies have around the world. Far too often, in the quest for growth, platforms rapidly get picked up around the world, but the platform is not prepared to enforce its community standards for those languages and cultures. I see mitigation measures built for the United States or the English-speaking world and not expanded elsewhere. Technologists have to be globalists. They have to understand geopolitics. Just look at the most recent discussion about what platforms should do in the fight between Russia and Ukraine. Don't wait until you're forced to confront the global impact of your technology. Think about it from the beginning.
Fourth, all tech decisions impact democracy even if they aren’t specifically about politics. Whether it’s targeted advertising, your community standards, how you design your algorithm or how you design your product - it all will have an impact on the political discourse on your system. It may not happen right away (see lesson number one) but it will happen eventually and it’ll behoove you to at least think about it from the beginning.
This leads me to the fifth lesson: you can't get rid of political content, even if you ban it. In the last few years we've seen platforms like Twitter ban political ads. LinkedIn is trying a no-politics button. Platforms say the reason is that people don't want to see politics in their feeds, but another reason is that they don't want to get dragged into these geopolitical situations. In my experience, no matter how much you try to run away from politics, it will find you. People may say they don't like it in their feed, but the closer you get to an election, the more they tend to talk about it. Candidates and political actors will find ways around the bans. I'm all for giving people the choice, but don't think you'll get out of making some of these tough decisions just because you say you don't allow that type of content.
Sixth, every level of your company needs to work through potential hard questions and how you might respond to them. No one is going to accurately predict every situation you might find yourself in. However, decisions such as deplatforming a sitting President of the United States should not be made in the heat of the moment. Regularly run threat-ideation exercises about how various platforms might be used in these elections, the narratives and messages that might be problematic, and how foreign and domestic adversaries might push disinformation. I truly think the online landscape will look very different in 2024, and we need to prepare for that. Think through what resources you might need: people who understand the local languages and cultures, the right products, and the right policies or rules. I also think we need to have an honest conversation about how we prioritize different countries and elections. It’s a reality of life that most platforms or organizations aren’t going to be able to do the same level of work in every country. Nor should they have to. We should be able to divide the work, but to do so we first must have honest conversations about it. Think through where the gaps in your policies are that you need to fill. Start thinking about what you might do in the most unlikely of situations - because today, far too often, the most unlikely of situations are becoming reality.
Seventh, rule violators are not going to split 50/50 across the political spectrum. Congressional Republicans in particular love to demand that when companies take action against pages or accounts on the right, they show an equal number of actions taken against the left. This is a false equivalence. Sometimes you will have more rule violators on one side than the other. I think that is OK as long as you are transparent about what the rules are and it is clear how people are violating them.
Eighth, more transparency is key. I fully understand that being fully transparent about rules can help bad actors get right up to the line without actually crossing it, and I don’t think platforms have to be 100 percent transparent. But more transparency helps people understand what the rules are and why actions are taken against them. This does come with a trade-off: being transparent means taking the time to write policies and notify people of them, yet some bad activity comes to light so rapidly that you know you should take action before you have policies to point to. Perfect should not be the enemy of the good, and being transparent about even that fact can help people understand why certain things may or may not be happening.
Ninth, we need to acknowledge that online platforms do not exist in silos. The content that appears on them has a symbiotic relationship with other media and the offline world. Thus, we should not look at this in terms of just what companies like Meta, Google or Twitter should do. Rather, we need to also include the broader media ecosystem when thinking about how we improve the information environment.
Finally, this lesson is specific to elections, though I suppose it could be broadened. Far too often roadmaps are planned only six months or a year out. That’s sometimes too late to properly build the resources necessary to protect the integrity of elections. Longer-term thinking is needed. Moreover, we need to stop thinking of efforts to protect democracy as something that ends after Election Day only to be picked up six months before the next one. Instead, it needs to be an always-on strategy, with structures in place to make sure people don’t get burnt out.
What Should We Do Now?
This leads to the question of what we should be doing now. All of this can feel so daunting as to seem impossible. Here are some high-level ideas that I think can and should be implemented right now.
Lawrence Lessig has a great framework from his book Code, which Ethan Zuckerman reminded me of in his own book from last year, Mistrust: Why Losing Faith in Institutions Provides the Tools to Transform Them. Lessig said that change usually comes about through a combination of law, code, social norms and markets. For us to figure out the path forward, it will take work in all four of these areas by people and organizations across governments, the private sector, civil society and the media.
First is Lessig’s law pillar: we need to figure out what the right regulation is in this space. Government is not designed to move as fast as technology is created. By the time laws are introduced, debated and passed, the technology they were originally meant to regulate is likely very out of date. For instance, I remember arguing in front of the FEC in 2012 about why Facebook ads shouldn’t have disclaimers, because at the time they were really small and appeared just on the side of the newsfeed. The FEC agreed that it was OK for a disclaimer to be a click away - a ruling that made no sense only a few years later when ads moved into the newsfeed in a bigger format. So we have to think about what regulations to pass that also have the flexibility to adjust and change. In Europe, lawmakers are much further along in considering legislation in this space, with efforts such as the European Code of Practice Against Disinformation, the Digital Services Act and the Digital Markets Act. Some countries, like Canada, have already passed updated laws regulating political advertising online.
We also have a lot of work to do to bridge the knowledge gap between lawmakers and how today’s technology works. Having had to figure out some of this on our own at Facebook, I know how challenging it is to decide what should be done, not only in the US but in countries around the world, as well as how countries should interact with one another. For instance, is it OK for a Canadian organization to run ads to people in the United States asking them to support trade legislation that might affect their country as well? I loved trying to figure these questions out, and I think the lessons Facebook and other companies have learned from implementing some of these solutions before laws were passed can help governments pass better laws. Those lessons include how to define what a political or issue ad is, how to verify advertisers’ identity and location, and what data to make transparent without violating people’s privacy - all things I think should be decided by regulators, not companies.
That takes us to the second pillar, code, which I’m going to broaden to mean the places where the private sector can take the lead on presenting solutions. There are efforts such as the ones I discussed around political ads transparency, as well as efforts around labeling, combating misinformation, foreign interference and more. There are also efforts such as the Oversight Board, which Facebook stood up in 2019 and which became active in late 2020. This is an independent group of what will become 40 experts from across the globe who can hear appeals from people and organizations who have had content removed from Facebook and who ask for it to be reinstated. The board can also make suggestions to Facebook about where it should change its policies. It has already announced rulings on a few cases, with the very high-profile question of whether President Trump should be reinstated kicked down the road to early 2023.
The corporate world - and tech in general - can move a lot faster than governments or civil society in building, implementing, scaling and iterating on many of these solutions. We need to be encouraging more companies, not fewer, to work on building potential solutions, as well as working with researchers to study their effects. This will require finding privacy-safe ways to share data. Some techniques - such as differential privacy - are being developed, but there is still work to be done. Moreover, companies need to do more to help get people the information they need to be part of the political process, from registering to vote to knowing where their polling place is, what documentation they need to bring and what their rights are. I’m proud of the work Facebook and other tech companies have done in this space. Just as government, the private sector and civil society came together after 2016 to combat foreign interference - and as we are seeing today regarding what is happening in Ukraine - we need to do the same to determine how to handle domestic actors who want to disrupt democracy.
Third, we need to acknowledge that these are really hard problems that sometimes involve impossible tradeoffs. Developing new social norms to help governments, civil society, companies and the media make those tradeoffs will be key. A friend of mine from UW, Ben Thompson, has a great column called Stratechery that is read throughout Silicon Valley and the world. He has written numerous columns about how different companies, people and organizations prioritize the things they are trading off when making decisions. It’s not that people don’t want to do all of the things they care about; sometimes they have to make the hard decision to pick one over the other. For instance, adding more friction to the election process - whether that means requiring more transparency and reporting from campaigns and organizations or changing how people can vote - trades greater transparency and more secure voting against fewer people participating, because they don’t want to jump through all of the hoops that friction brings.
To help think through these tradeoffs, we will need updated social norms, and the chaos we are feeling and seeing right now is the world going through that metamorphosis. Now is the time to debate and mold those norms before they harden and shape the next few decades of life.
For instance, the international community needs to take a stronger role in developing a new approach to building democracy in the digital age. While there are many organizations doing great work in this space, we need a bigger culture shift. The last time the world faced challenges like this was after World War II and the Cold War. We need to adjust our thinking so that we are looking at how to responsibly protect democracy across future technologies such as the metaverse, AR/VR and crypto; present issues such as mis- and disinformation; and those who have yet to get online.
Russia’s invasion of Ukraine and many examples before that have repeatedly shown that we need new societal norms across democracies about what behavior by governments is acceptable or not. Authoritarians are using regulation to silence voices and online tools to spread their propaganda. We shouldn’t want, nor expect, tech companies to make those decisions on their own.
Second, we need new norms about whose speech gets amplified, whether by the media or by Internet platforms - especially when that speech comes from candidates for office or current elected officials, and even more so when those officials are spreading false information. Each of the cable networks took a different approach to covering President Trump toward the end of his presidency, and last year the Plain Dealer made a very public announcement that it would not cover one of the Ohio candidates for Senate because of the false things he was saying and a desire not to amplify them. I personally have been grappling with this. I have long believed that people should be able to hear and know what those who represent them in government, or want to, have to say so they can make informed choices at the ballot box, and I supported Facebook keeping up many of Trump’s and others’ posts because of that belief. However, I also see the harm this is causing, and I’ll be honest: I haven’t quite figured out where I personally land on keeping President Trump permanently off of Facebook. I worry about companies being able to pick and choose which politicians people hear from.
Lastly, as good global citizens we need to model the behavior that we want to see in others and teach our children. We need to take more care not to demonize people who think, look or sound different from us. We need to understand our unconscious biases, work to recognize them and do the even harder work of incorporating diversity and inclusion into every aspect of our lives. We need to learn how to listen and to rethink our assumptions. If you haven’t read Adam Grant’s latest book, “Think Again,” I highly recommend it. He argues that intelligence is usually seen as the ability to think and learn, but in a rapidly changing world there’s another set of cognitive skills that might matter more: the ability to rethink and unlearn. I think that’s a lot of what we as individuals and leaders need to do to find the best path forward.
Finally, markets are about choice and how people’s behavior can bring about change - just as Elon Musk’s Tesla helped popularize electric cars in a way that regulation could not. Part of the conversation about moving forward is about where people have the ability to choose what they consume versus what is recommended to them. When someone signs up for Facebook, Twitter, Instagram, etc., they first make choices about whom they will follow. Then the companies come into play, using an algorithm to sort that content to show you what they think you most want to see and recommending other things you may want to follow. This can have real benefits, but we also know how it can lead people into more dangerous areas. Consumers will need to decide which tradeoffs they are willing to make between convenience and giving more of their data to companies. We need to give people tools, from early in their lives, for understanding how their data is used and how they want to govern it. We need to iterate on the guardrails that should be put on algorithms that decide what to show and amplify. This will require a lot more work thinking about definitions of authoritative information, defining news, and what civic spaces need to be built for the public good. For instance, C-SPAN was created in the late 1970s as a “cable-industry financed nonprofit network for televising sessions of the U.S. Congress, other public affairs events, and policy discussions.” One of the things I’d love to work on is a version of this financed by tech companies that exists online to give people better insight not only into the federal government but into their local governments as well.
There is no finish line in this work. We’ll solve some problems and new ones will always emerge as new technologies are invented. We need to start anticipating sooner how to identify and mitigate any side effects. We need more research, debate and iterating on ideas. There’s no silver bullet here, but I’m optimistic we will figure it out.
That said, we need to turbocharge our efforts to be ready for future elections. In fact, I think this year is our best opportunity to enact regulation and other new norms and rules in time for the next three years of elections. This year we’ll have elections in the United States, Brazil, the Philippines and Kenya, among many others. In 2024 the future of the Internet is on the line, as the world will see for the first time ever not only a US presidential election but elections in India, Indonesia, Ukraine, Taiwan, Mexico, potentially the UK, and the European Parliament - all around the same time. It’s a huge geopolitical moment in which the governments of all of those major countries could change in a very short period. Companies, researchers, regulators, the media, the international community and civil society all need to start working together now to build, learn and iterate on the tools we’ll need to keep those elections free and fair. If we fail, it could set us back decades.
Technology has brought so many good things to our lives. These past two years, so many of us would have been a lot lonelier, more depressed and more bored if we didn’t have technology to keep us connected to friends and family, let us keep doing our jobs, watch way too much Netflix, and order groceries and many other supplies without ever leaving our homes.
I think technology still has the potential to bring positive change and engagement to our civic lives. Do we have more work to do on mitigating the harms? Absolutely. But we can also do more to help connect people to their local governments, to restore trust in institutions and to bring transparency to the process. We can use the internet and the power of friends and influencers to get more people to not only register to vote but actually go and vote in more elections than just for President. Speaking of voting, we can find ways to safely expand options for people to vote while protecting against fraud. We’re seeing more young people, people of color, women and many others wanting to run for office. We’re seeing people use the web to shine a light on problems we as a society have ignored for way too long and to organize to demand change. We need to help the future of journalism - especially local journalism - adapt to the way people consume news now. We need to start teaching kids in elementary school how to be not only productive members of their communities but also good digital citizens. We need to do this not only in the US but around the world.
If it all sounds really daunting - it is. Everywhere you look there’s a ton of problems that need to be solved and a ton of ideas on how to solve them. As Glennon Doyle said in her book Untamed, sometimes you just need to take the next right step, then the next and then the next. That’s my philosophy as I continue to look at how I can contribute to finding solutions, and I encourage you to do the same. We need to make sure we are discussing the actual problems and acknowledging the hard tradeoffs involved in working on these issues. I also ask you to be willing to work with people you don’t agree with. I often find that we have more in common about where we want the world to go than not; we just disagree on how to get there. It’s time for all of us to work on finding common ground rather than finding ways to shut people or ideas out.
Thank you for giving me the time today to share some of the history of democracy and the internet, as well as where we go from here. We barely scratched the surface on all of this, but I hope you found it interesting and helpful. I’m excited to take your questions.
What I’m reading
The Washington Post - American views on regulating social media companies aren't as partisan as you may think
Verfassungsblog - The DSA's Industrial Model for Content Moderation
TJ McIntyre - Tech Law & EU Sanctions (Twitter Thread)
Wall Street Journal - TikTok Struggles to Find Footing in Wartime
Atlantic Council DFR Lab - Divisive ad campaign targets candidates ahead of Colombian presidential primary