The student news site of The Archer School for Girls

The Oracle

Column: What if I told you AI was democracy’s new rival?

Photo credit: Allie Yang
Pictured is a question I typed into ChatGPT, a well-known AI chatbot, about its role in a democratic society. While the bot responds vaguely and positively, I argue AI instead threatens democracy's foundation when it is used to misinform people and advance selfish political agendas.

In the lead-up to the next U.K. general election, democratic authenticity is under threat.

The reason? Facebook ads.

According to The Guardian, this past January Facebook displayed more than 100 video advertisements, powered by AI deepfakes, that impersonated U.K. Prime Minister Rishi Sunak. These ads, which cost $13,872 to broadcast and targeted Britain's Labour Party, reportedly reached 400,000 viewers despite blatantly disregarding Facebook's anti-misinformation policy. They were also the first pieces of media to counterfeit Sunak's image on a nationwide scale.

Not only did these ads misinform U.K. voters about their incumbent prime minister and his reactions to current events, but they also falsely portrayed his colleagues. Another fake ad showed BBC newsreader Sarah Campbell announcing a business scandal involving Sunak, defaming his image and demonstrating the need for caution and literacy around technology, specifically AI's involvement in politics. With upcoming elections determining governments in India, the European Union, Mexico, the United States and more, AI poses a potential danger to voters around the globe, who will face the all-consuming struggle of discerning political fact from fiction.

In fact, in a survey conducted by the Wilson Center, half of the respondents could not differentiate real videos from their deepfake counterparts, and the majority of that half came from older generations. These older generations are the population most vulnerable to AI-powered misinformation, especially in a fast-moving climate that lacks widespread technological education.

AI's growing political presence will likely contribute directly to the rise of electoral misinformation. Growing distrust in democracy is already an emerging problem, with most Americans believing political candidates' agendas are rooted in self-interest. It is therefore imperative to restore lost faith by ensuring elections are fair and determined solely by the voters.

But the broader conversation surrounding technological misinformation cannot concern only the future, because technology has already had a hand in manipulating history. In 1992, for example, the Slovenian government used its power to erase thousands of residents who identified as ethnic non-Slovenes from its registries, stripping them of privileges like citizenship and employment. Originally published in 1903, The Protocols of the Elders of Zion used the publicity afforded by Russian journalism to masquerade as "minutes from a Jewish conspiracy to control the world." And Stalin edited malleable photographic records to hide his effort to execute disloyal allies during his Great Purge, beginning in August 1936.

What AI has in common with digital deletion, publishing and photo editing is not just its computational backing, but its potential to become a power grab when made too accessible. Technology, used in the wrong ways, can promote the opposite of the ideals it was created with, such as connection and freedom. Unfortunately, these historical examples show this decline is nothing new, and AI only seems to be accelerating its pace.

This historical precedent is why AI's power to falsify important information must be monitored now. We have seen what happens when technology has unregulated influence even in our own elections: In 2016, Russian President Vladimir Putin ordered an influence campaign that digitally spread political propaganda to undermine the Democratic Party's entire agenda.

Fast forward eight years, and AI still has the potential to continue its harmful role, feeding off Americans' reliance on the internet that deepened during the COVID-19 pandemic. According to Los Angeles Times staff writer Jeffrey Fleishman, the current electorate is not only politically polarized but also dependent on digital, often AI-powered, sources of information, which are prone to "exploit divisive issues such as race and immigration." This means that as our country grows more divided, AI will only take advantage of the chance to use people's vulnerabilities and beliefs against them.

It is imperative to fund organizations like the Electronic Privacy Information Center that advocate against this type of disinformation, as well as efforts dedicated to educating voters about AI and its capabilities, so that we remain informed citizens in an age of technological change. We must remember that while technology should be democratized for the public, it should never deceive us on our journey to a better, healthier democracy.

About the Contributor
Allie Yang
Allie Yang, Columnist
After serving on Archer's yearbook, Hestia's Flame, for a year as a staffer, Allie Yang joined the Oracle as a senior reporter in 2022. She became a columnist in 2023. Her column discusses aspects of rising technology such as AI, social media, and more.
