Aspen & Columbia University AI and Elections Event Key Tech Takeaways
Below are my key takeaways from the tech company representatives who spoke at an event hosted by Aspen Digital and Columbia University on March 28, 2024.
Full Transcript: https://docs.google.com/document/d/10P43yBH7Be_nEVQAXMJHTFXCBqAlmaDB
NOTE: The transcript was generated by Otter.ai and may contain mistakes. Check any quotes against the video.
Anna Makanju - Vice President of Global Affairs, OpenAI
Part of the reason OpenAI didn't open source its models was concern over how they could be used to interfere with elections. "But open AI, you know, a relatively young company, this is something that's been top of mind for us for years, in fact, GPT two, which was several years ago, and you know, quite, you know, embarrassing compared to what exists now at the time, it was state of the art, it could produce paragraphs that were texts, like a human could write. And even then, we thought, Oh, well, like the possibility for this to be used to interfere with democracy and electoral processes, very significant. And so we made a decision then not to open source it."
Important to distinguish between the companies that build the tools that generate AI content - like OpenAI - and those that distribute content. The problem sets and how you approach them are different.
OpenAI recently took down a number of state actors that were using its tools.
Yasmin Green, CEO, Jigsaw (Google)
Gen Z looks for social signals to determine whether something is fake. "Well, I'll tell you, who reads the comments. Gen. Z … But headline, comments, and then the article. Why would they be doing it in that order? Because and this is, according to them. And this research that we did, they want to know if the article is fake news. … I think increasingly ... they're looking for social signals about how to situate the kind of the information, the claims and the relevance to them". Google did a study on this last year.
With people looking for social signals to decide whether to trust content, we need to pay attention to synthetic accounts in addition to synthetic content. "In addition to synthetic content, we have synthetic accounts, we have accounts that are going to be we talked about this earlier, but that you know, these human presenting chatbots. ... Where do you think people are gonna go to check that They're gonna go to other people in the social spaces, the signal. So we need to invest in humans and also invest in, in ensuring that the human presenting chatbots and not do not have an equal share of influence that."
We need new mental models to think about authority and authenticity in the AI age. "I think this is interesting thinking about this tension between authority and authenticity. You know, those are the mental models that we have from, from the last decade of search, and social media. It's like, if it's coming from an institution that I trust, or even Google search, you know, I'm, I'm, there's a lot of trust there. So the stakes are really high, you better get it right. Or if it's social media, if it's coming from my friend, there or my social network, then I trust them. Of course, Gemini via AI is neither of those. It's not authoritative. It's not summarizing what the internet says, and giving you this destination of something that's authoritative. And it also sounds like a human, but it's not a human that you know. So I think we're in an we don't have mental models to deal with, with generative AI output."
Clint Watts, General Manager, Microsoft Threat Analysis Center
People are mostly watching video, so adversaries are putting out video content. "So you know, our monitoring list in 2016 were Twitter or Facebook accounts, linking to Blogspot. In 2020, it was Twitter or Facebook, a few other platforms, but mostly linking to YouTube. And today, if you go to it, it's going to be all video."
The simplest manipulations will travel the furthest on the internet. "And what I would say in 2024, there will be fakes, some will be deep, most will be shallow, and the simplest manipulations will travel the furthest on the internet. So in just the last few months, the most effective technique that's been used by Russian actors has been posting a picture and putting a real news organization's logo on that picture."
AI-generated fake content is more dangerous in private settings than in public ones. "The place to worry is private settings. When people are isolated, they tend to believe things they wouldn't normally believe."
The medium matters a lot when looking at AI; AI-generated audio is the one to be worried about. "Video is the hardest to make. Text is the easiest. Text is hard to get people to pay attention to video people like to watch. Audio is the one we should be worried about. AI audio is easier to create because your dataset is smaller. And you can make that on a lot more people. It takes a much smaller data set and you can put it out and there's no contextual clues for the audience to really evaluate."
The most effective content is not fully synthetic but a mix of real and fake. "And that kind of comes to the other thing to look for is there's a intense focus on fully synthetic AI content, the most effective stuff is real, a little bit of fake and then real. Blending it in to change it just a little bit. That's hard to fact check. It's tough to like chase after. So when you're looking at it private settings and audio with a mix of real and fake. That is, that's a powerful tool that can be used."
Context and timing are important. People are more likely to believe fake content when news is breaking. "People immediately, you know, rush to things and when you're feared, or there's something you've never seen before, you tend to believe things that you wouldn't normally believe." "if people know the target, well, they're better at it deciding whether something is fake or not, if you've seen it over and over again, but if you don't know the target Well, or the context, well, they are not as good at it. So there's always the presidential candidate, presidential candidate will be a deep fake, and it will change the world and make her heads explode. Probably not. But if it's a person who's working at election spot, somewhere out in a state, and a deep fake is made, or maybe they're not even a real person. It's these contextual situations that we have to be prepared for in terms of response."
We need to raise the cost on the adversary to interfere, rather than raising the cost on ourselves to protect democracy. "I think the key point is that you have to raise the costs on the adversary at some point, rather than raising the cost on yourself to function as a democracy."
David Agranovich, Director, Global Threat Disruption, Meta
The threat landscape has gotten larger and more diverse. "The days of a network of fake accounts on Facebook and network of fake accounts on Twitter, somewhat, you know, closed ecosystems are gone right now. I think the largest number we've ever seen is 300 different platforms implicated in a single operation from Russia, including local forum websites, things like next door but like for your neighborhood. I as well as more than 5000 just web domains used by a single Russian operation called doppelganger that we reported on last quarter. So what that means is the responsibility for countering these operations is also significantly more diffuse right platform companies don't just have responsibility to protect people on their platforms like the work that our teams do, but also to share information."
Operations are increasingly domestic and commercialized. "The second big trend to think that we've generally been seeing is that these operations are increasingly domestic and increasingly commercialized. It's their commercial actors who sell capabilities to do coordinate what we call coordinated inauthentic behavior. Disinformation, for hire something unknown Maria's organization has written a lot about the Philippines, in the commercialization of these tools, democratizes access to sophisticated capabilities that used to be basically nation state capabilities, and it conceals the people that pay for them, it makes it a lot harder to to hold the threat actor accountable by making it harder for teams like ours or teams in government to figure out who's behind it."
The AI techniques we fear are being used more by scammers and spammers right now. "I think we've generally seen AI, I would say cheap fakes or shallow fakes are not even AI enabled. But just things like Photoshops are repurposed content from other events, mainly being used by those sophisticated threat actors, Russia, China, Iran. But where we do see AI enabled things like deep fakes or text generation being used by scammers and spammers. ... What I what we should all be alert to is the tactics and techniques that the scammers and spammers use being adopted by more sophisticated actors over time. So if you want to look to see what's coming, that's where I would be looking to see where things are coming."
Watermarking can help identify AI-generated content across encrypted platforms. "There's some really exciting, I think, integration between some of the technical standards that we've talked about things like steganographic watermarking, that can be programmatically carried through on platforms, and ensuring that robust and reliable encryption remains in place for people all over the world so that their communications can't be spied on by governments or particularly in authoritarian regimes. ... Because there is there is a world in which we can, and I think it's really important to retain fundamental encryption standards, while still making sure that we are doing our due diligence and our responsibility to protect the broader information environment."
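To make the watermarking point concrete, below is a minimal, hypothetical sketch of the idea behind steganographic watermarking: a provenance tag is written into the least-significant bits of an image's pixels, so a platform or client can check for the mark in the media itself rather than in strippable metadata. This is only an illustration of the general concept discussed on the panel, not Meta's or any standard's actual method; production approaches such as C2PA are far more robust, and the function names and TAG value here are my own invention.

```python
import numpy as np

TAG = "AI-GENERATED"  # hypothetical provenance marker


def _to_bits(text: str) -> list:
    """Expand a string into a flat list of 0/1 bits."""
    return [int(b) for byte in text.encode("utf-8") for b in f"{byte:08b}"]


def embed_watermark(pixels: np.ndarray, tag: str = TAG) -> np.ndarray:
    """Return a copy of the image with the tag written into the
    least-significant bits of the first len(tag) * 8 pixel values."""
    bits = _to_bits(tag)
    flat = pixels.flatten().astype(np.uint8)
    if len(bits) > flat.size:
        raise ValueError("image too small to hold the tag")
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | np.array(bits, dtype=np.uint8)
    return flat.reshape(pixels.shape)


def detect_watermark(pixels: np.ndarray, tag: str = TAG) -> bool:
    """Check whether the tag's bit pattern is present in the image,
    without relying on metadata that could be stripped in transit."""
    bits = _to_bits(tag)
    flat = pixels.flatten()
    if flat.size < len(bits):
        return False
    return [int(v) for v in flat[: len(bits)] & 1] == bits


if __name__ == "__main__":
    image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    marked = embed_watermark(image)
    print(detect_watermark(image), detect_watermark(marked))  # expected: False True
```

Because detection only reads bit patterns already present in the shared media, a check like this could in principle run on the recipient's device, which is one way a watermark signal can coexist with end-to-end encryption rather than requiring platforms to inspect message contents.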