Leading up to the 2024-25 U.K. general election, democratic authenticity was under threat.
The reason? Facebook ads.
According to The Guardian, this past January Facebook displayed more than 100 AI deepfake video advertisements impersonating U.K. Prime Minister Rishi Sunak. These ads, which cost $13,872 to broadcast to British citizens ahead of the Conservatives’ contest with the Labour Party, were reported to have reached 400,000 viewers despite blatantly violating Facebook’s anti-misinformation policy. They were also the first pieces of media to counterfeit Sunak’s image on a nationwide scale.
Not only did these ads misinform U.K. voters about their incumbent prime minister and his responses to current events, but they also falsely portrayed other public figures: another fake ad showed BBC newsreader Sarah Campbell appearing to announce a business scandal involving Sunak, defaming his image and demonstrating the need for caution and literacy around technology, specifically AI’s role in politics. With upcoming elections set to determine governments in India, the European Union, Mexico, the United States and more, AI poses a danger to voters around the globe, who will face the exhausting struggle of discerning political fact from fiction.
In fact, according to a survey from the Wilson Center, half of respondents could not differentiate real videos from their deepfake counterparts, and the majority of that half were from older generations. These older generations are the most vulnerable to AI-powered misinformation, especially in a fast-moving climate where education about the technology lags behind.
AI’s political presence will likely contribute directly to the rise of electoral misinformation. Given that distrust in democracy is already a growing problem, with most Americans believing political candidates’ agendas are rooted in self-interest, it is imperative to restore lost faith by ensuring elections are fair and decided solely by the votes cast.
But the broader conversation surrounding technological misinformation cannot concern only the future, because technology has already had a hand in manipulating history. For example, in 1992, the Slovenian government used its power to erase thousands of residents who identified as ethnic non-Slovenes from its registries, stripping them of rights like citizenship and legal employment. Originally published in 1903, The Protocols of the Elders of Zion used the reach of the Russian press to masquerade as “minutes from a Jewish conspiracy to control the world.” And Stalin doctored photographic records to hide his execution of allies he deemed disloyal during the Great Purge, which began in August of 1936.
What AI has in common with digital deletion, propaganda publishing and photo doctoring is not its computational backing but its potential to become an instrument of power when made too accessible. Technology, when misused, can work against the very ideals it was created to serve, such as connection and freedom. Unfortunately, these historical examples show that the decline is already evident, and AI only seems to be accelerating its pace.
This historical precedent is why AI’s power to falsify important information must be monitored now. We have seen what happens when technology has unregulated influence in even our own elections: in 2016, Russian President Vladimir Putin ordered an influence campaign that digitally spread political propaganda to undermine the Democratic Party’s presidential campaign and Americans’ faith in the electoral process.
Fast forward eight years, and AI still has the potential to play this harmful role, feeding off the reliance on the internet that Americans developed during the COVID-19 pandemic. According to LA Times staff writer Jeffrey Fleishman, the current electorate is not only politically polarized but also dependent on digital, often AI-powered, sources of information that can be used to “exploit divisive issues such as race and immigration.” This means that as the country grows more divided, those wielding AI will have more opportunities to turn people’s vulnerabilities and beliefs against them.
It is imperative to fund organizations like the Electronic Privacy Information Center, which advocate against this type of disinformation, as well as efforts dedicated to educating voters about AI and its capabilities, so that we can remain informed citizens in an age of technological change. We must remember that while technology should be democratized for the public, it should never be allowed to deceive us on our journey toward a better, healthier democracy.