Anchor Change with Katie Harbath
There’s a lot to be concerned about, but we must be careful
“It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of light, it was the season of darkness, it was the spring of hope, it was the winter of despair.”
― Charles Dickens, A Tale of Two Cities
Last week, Morning Consult released a poll that said, “Half of Americans expect misinformation spread by AI to impact who wins the 2024 election — and one-third say they'll be less trusting of the results because of artificial intelligence.”
However, that poll also showed that no generation feels “very familiar” with AI.
The Axios story accompanying this information concluded that this lack of trust most likely stems from an overall distrust in tech and institutions. I’m sure that’s true, but I think there’s another reason.
All people hear about every day in the news is how things are going wrong or might go wrong.
Tech influencers like Tristan Harris say 2024 will be the last human election.
Headlines scream that Democracy isn’t ready for its AI test and that Disinformation Researchers Raise Alarms About A.I. Chatbots.
I could go on and on and on.
You hear less about the positive use cases of this technology. That same Morning Consult poll found that, “Of those who have used AI to complete a task, 64% said they felt what the AI produces is better quality than what they could do on their own.”
Or that AI will also help companies do content moderation better. Or that while tech company layoffs have impacted trust and safety teams, it doesn’t mean they’ve stopped the work altogether.
This leaves people with a lopsided perspective on a very complicated world where technology has both positive and negative attributes. Rather, it’s how the industry builds and deploys that technology, how it’s regulated, and how well people are educated about it that can shift the outcome.
This doesn’t mean bells shouldn’t be rung and concerns voiced. Heck, since January 2020, my mantra has been about all the elections happening next year and my worry that we aren’t ready to deal with so many at once.
But I am worried that pushing the panic button on everything might inadvertently contribute more to the decline in trust in our institutions and electoral processes rather than make us more resilient. I worry it could negate the positive benefits of pre-bunking and other work to educate the public.
I’m worried that we all get pushed into our proverbial corners and just shout at one another about how the other is responsible for the decline of democracy rather than working together to figure out a new path forward.
I’m worried that as we get more polarized over what content should or should not be allowed, people will keep getting hurt. This week, Yoel Roth published a powerful essay in The New York Times about the harassment and risk tech employees face when they are personally called out. It’s one of the ultimate impossible tradeoffs tech executives face - follow the directive of a government or risk your employees’ safety. And those doing the front-line work, whether at an election office, company, academic institution, NGO, or think tank, have to weigh being a public part of the conversation against the potential safety risks to them and their families.
The media plays a big role in this as well. Its job is to hold those in power to account, and that means naming names. But naming names also puts people at risk.
I, unfortunately, don’t have the answers for how we balance this, but I do think the first step forward is to hold ourselves accountable for debating these issues responsibly and pragmatically as we go into the next 16 months.
Some of the ways I hope to do this include:
Drawing lines as clearly as possible between where I’m speculating about what could happen and what we’re actually seeing happen. Speculation and red teaming are important for identifying and closing gaps - but that doesn’t mean those scenarios will actually come to pass.
Not prescribing any one tactic, company, or incident as having an outsized impact on our overall environment without acknowledging that human beings are complex and get inputs from many places - all of which shape their worldviews.
Remembering that tech companies do care. Even if we might not think the leaders of these companies care (cough, Musk, cough), there are many employees on the frontlines who do. And, frankly, most executives do care. I guarantee people are working on these problems even if they aren’t ready to make them public yet. (Though I sure wish platforms talked about their plans sooner rather than later to alleviate some anxiety.)
Recognizing that we are in the middle of reshaping our societal norms around speech and how we hold people accountable for that speech. I think it’s good we have this debate out in the open. Reasonable people can disagree on what the right path forward is.
Trying really hard to recognize the panic bait and not take it. Or at least examining all sides of the bait before jumping in.
Some say AI is going to destroy the world. Some say AI is going to save us all. The truth will be somewhere in the middle, and it’s important to keep that in mind so that when we really need to hit the panic button, people will listen, and we won’t find ourselves living Aesop’s fable of the boy who cried wolf.
Please support the curation and analysis I’m doing with this newsletter. As a paid subscriber, you make it possible for me to bring you in-depth analyses of the most pressing issues in tech and politics.