A deep dive into deepfakes


In the new era of technological advancement where artificial intelligence (AI) has the capacity to generate everything from essays to works of art, “deepfakes” have emerged to further blur the lines between what is artificially generated and what is real.

The term “deepfake” is a portmanteau (a blend of two or more words) combining “deep learning,” a method of machine learning that teaches AI to process information in a way that mimics the human brain, with the word “fake.” Essentially, the term comes from the way deep-learning technology is used to trick an audience into believing falsified content of a person or voice is real.

Deepfakes are a specific type of synthetic or manipulated media. They are created with AI platforms, which can replace one person’s voice or face with that of another to make it seem as though someone has done or said something they didn’t do. Although deepfakes are occasionally seen in the form of pictures, fabricated videos and audio recordings are more common.

Deepfakes require a significant amount of data to accurately recreate a person’s face or voice. They are often made with deep neural networks such as autoencoders or Generative Adversarial Networks (GANs). In the common autoencoder approach, an encoder network analyzes the source content (i.e., a person’s face) and extracts its crucial features. These features are then fed into a decoder network, which in turn generates new content (i.e., a manipulated face). This process is repeated until the AI produces the result its user wants. For video deepfakes, the swap must be performed on every frame so that the effect stays consistent, making them significantly more challenging to create.
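The encoder-decoder idea described above can be sketched in a few lines of code. The toy model below is purely illustrative (made-up sizes, untrained random weights); real deepfake software trains far larger networks on thousands of images, but the routing trick is the same: one shared encoder, one decoder per identity, and the "swap" comes from decoding person A's features with person B's decoder.

```python
# Hypothetical toy sketch of the shared-encoder / per-identity-decoder
# face-swap idea; sizes and weights are illustrative, not a real pipeline.
import numpy as np

rng = np.random.default_rng(0)

LATENT = 8   # size of the compressed "features" the encoder extracts
PIXELS = 64  # tiny stand-in for a face image (8x8 grayscale, flattened)

# One shared encoder learns identity-agnostic facial features...
W_enc = rng.normal(scale=0.1, size=(LATENT, PIXELS))
# ...while each person gets their own decoder that renders those
# features back into *their* face.
W_dec_a = rng.normal(scale=0.1, size=(PIXELS, LATENT))
W_dec_b = rng.normal(scale=0.1, size=(PIXELS, LATENT))

def encode(face):
    # Compress a face into its crucial features.
    return np.tanh(W_enc @ face)

def decode(features, W_dec):
    # Render features back into an image with a given person's decoder.
    return W_dec @ features

face_a = rng.normal(size=PIXELS)  # one frame of person A

# Normal reconstruction: encode A, decode with A's own decoder.
recon_a = decode(encode(face_a), W_dec_a)

# The "swap": encode person A's frame, but decode with person B's
# decoder, yielding B's face in A's pose. For video, this is repeated
# on every single frame.
swapped = decode(encode(face_a), W_dec_b)

print(recon_a.shape, swapped.shape)  # both (64,)
```

Because the encoder is shared while the decoders are separate, the same extracted features produce different faces depending on which decoder receives them, which is why video deepfakes must rerun this step frame by frame.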

Deepfakes became more prominent in 2017 when a Reddit user created a forum entitled “Deepfakes.” On this forum, they posted videos using face-swapping technology to insert celebrities into pre-existing pornographic videos. However, since then, deepfakes have gained popularity and are seen much more frequently. Some well-known examples of deepfakes include a photo of Pope Francis in a puffer jacket and a popular TikTok channel (@deeptomcruise) dedicated to videos of Tom Cruise dancing and giving speeches.

Out of 53 Marlborough students surveyed, more than 70% of respondents were familiar with the concept of deepfakes, and 60.8% of them had encountered artificially generated disinformation before.

“I’ve seen quite a lot of deepfakes of political figures usually depicted saying or doing something ridiculous,” Wynter Williams ’25 said.

Media and personal information breaches were a concern before deepfakes emerged in 2017. Worries about invasions of privacy in media have been growing since 2005, when Google began processing information on an individual basis to better personalize advertisements, and most social media companies now rely on similar systems of personalized content to keep users scrolling. The rise of social media has also made others’ personal information and images easy to access, fueling the spread of deepfakes. Now, citizens’ privacy can be violated by both social media companies and other individuals.

Currently, some social media platforms are facing scrutiny for their mismanagement of deepfake videos propagating false information. In May 2019, Facebook was confronted with heavy criticism over a doctored video of Nancy Pelosi slurring her words as though intoxicated, but the company refused to take down the video. Because deepfakes drive user engagement, and federal law shields social media networks and their CEOs from being treated as the publishers of the media that appears on their platforms, companies like Facebook have little incentive to filter or control the spread of deepfake video or audio.

However, deepfakes are more than falsified videos of celebrities. The misuse of deepfake technology poses significant risks to individuals’ security and to trust in digital media, as deepfakes are often used to manipulate the identities of well-known figures, including politicians like Pelosi, as well as people in one’s own community, such as classmates or coworkers.

“Eventually, deepfake technology will become sophisticated enough to build a Hollywood film on a laptop without the need for anything else,” said Victor Riparbelli, the co-founder of Synthesia, the world’s leading platform for AI video creation.

Political Impact

The use of robocalls and other deepfake propaganda by supporters of political candidates has led many to nickname the 2024 election year “the AI election.”

The increase in artificially generated content of political figures will play a large part in our elections from here on out, according to political expert Callum Hood, the Head of Research for the Center for Countering Digital Hate (CCDH).

“You can generate as many of those [deepfakes] as you want very, very quickly,” Hood said in an interview with Politico.

Even political candidates and their campaigners are able to utilize AI deepfakes to generate images almost indistinguishable from real photos. According to generative AI expert Henry Ajder, these false creations can sow doubt into the electoral process. 

“The question is no longer whether AI deepfakes could affect elections, but how influential they will be,” Ajder said in an interview with The Associated Press.

Through these and other forms of election tampering, deepfakes can influence presidential elections in many ways, even before the election cycle truly begins. Deepfake campaign material, for example, can spread false information and cast doubt on the accuracy of what voters see on television.

One particularly influential deepfake ad was created by the Republican National Committee criticizing Joe Biden’s re-election campaign, specifically targeting his foreign and domestic policies. The ad included visuals of regional banks shutting down across the United States, Taiwan being invaded by China, San Francisco overrun with criminal activity and more, all presented as consequences of Biden’s re-election. However, it is important to note that the ad was explicitly labeled as using artificially generated images. Still, many were shocked by how realistic it looked.

“I think it’s well put together,” Annalie Quigley ’26 said. “If broadcasted or gone viral, I think it could actually make a change to people’s perspective on Biden.”

This is the first widely viewed political campaign ad created entirely through AI-generated images, and many believe it has marked the beginning of an era of increased political misinformation.

“I’ve seen videos of Joe Biden and Donald Trump on Instagram that were made from AI but looked real,” Quigley said.

Common political deepfakes include Trump posing with Black and Latino voters. Cliff Albright, the co-founder of the campaign group Black Voters Matter, said in a BBC interview that these deepfakes are designed to show Trump as being popular in communities that have been the targets of an increasing amount of misinformation campaigns since the 2020 election.

“There have been documented attempts to target disinformation to black communities again, especially younger black voters,” Albright told the BBC.

Deepfakes can also impersonate political candidates’ voices, including those of presidential nominees, which can be used to manipulate the public into following those candidates’ supposed instructions. In January 2024, more than 20,000 New Hampshire residents received a robocall from an AI-generated impersonation of Biden telling them not to vote in the primary election, causing widespread confusion among voters. Many felt the deepfake was an attempt to manipulate the primary, and lawsuits have been filed against the creator of the robocall, Steve Kramer, as well as the companies involved, Lingo Telecom and Life Corporation. Biden’s campaign manager Julie Chavez Rodriguez spoke out against the undermining of the electoral process.

“Spreading disinformation to suppress voting and deliberately undermine free and fair elections will not stand, and fighting back against any attempt to undermine our democracy will continue to be a top priority for this campaign,” Rodriguez said in an interview with CNN.

One of the biggest concerns about deepfakes in the upcoming 2024 presidential election is their power to discourage people from voting. The spread of AI deepfakes has already affected the 2024 general election in India, where a deepfake video of Prime Minister Narendra Modi was widely disseminated on the instant messaging app WhatsApp. Because several different languages are spoken in India, the fake video was easily re-dubbed into the language of each region to reach a wider audience.

Another fake recording targeted Michal Šimečka, a candidate in Slovakia’s 2023 parliamentary election, just days before the polls opened. The audio was manipulated to make it sound as though he had discussed rigging the election, which led many people not to vote; Šimečka’s party then lost, as there was not enough time to prove the recording was AI-generated. Many American voters fear that what happened in India and Slovakia will occur during the 2024 U.S. presidential election: in a recent poll by The Associated Press, 6 in 10 adults said they believe AI will be used to spread false information during the upcoming election.

Impact in Schools

Beyond misinformation, deepfakes can be — and often are — used to cause harm to underaged students, most commonly girls, by generating fake images of them.

As the photo editing software required to create deepfakes becomes more accessible, students nationwide have begun to create and spread inappropriate falsified images of their underaged female classmates, causing distress among teenage girls throughout the country.

The phenomenon has occurred in schools across the U.S. and has left school administrators unsure of how to handle the harmful possibilities of the new technology.

In 2024, major Southern California middle and high schools, including Laguna Beach High School, Fairfax High School and Beverly Vista Middle School, dealt with issues regarding the circulation of fabricated explicit images. Los Angeles school officials have opened investigations to determine which students are responsible for disseminating these images, while Beverly Vista Middle School, according to a message sent out by principal Dr. Kelly Skon, intends to enact harsher restrictions to mitigate the propagation of the inappropriate images.

“Any student found to be creating, disseminating, or in possession of AI-generated images of this nature will face disciplinary actions, including, but not limited to, a recommendation for expulsion,” Skon said in an email sent out to parents on Feb. 21, 2024.

Because of the recent and rapid rise of deepfakes in educational environments, students and their parents have not previously been informed of how to properly report a falsified sexual image. In February, the Federal Bureau of Investigation issued a public alert stating that creating altered sexual images of minors is illegal under federal law, and that anyone who comes across such images or videos should immediately report the child sexual abuse material to the police.

Most victims of this illegal act have been teenage girls targeted by classmates attempting to harass or bully them. Victims have spoken up and reported intense trauma and serious mental health struggles, which has led sexualized deepfake imagery to be labeled a form of abuse and cyberbullying by people around the country.

In 2023, an anonymous non-Marlborough student found out that one of her male friends had created nude deepfake images of her. Although she was able to stop them from spreading throughout her school, the creation of the deepfake had a long-lasting psychological impact on her.

“I remember being so terrified that one of my friends created fake sexual images of me because he wanted to embarrass me,” the student said. “It is hard to know who to trust knowing that software programs like the one he used are easily accessible and can greatly affect your life.”

Initially, several victimized teenage girls from the anonymous student’s school came together in hopes of creating more restrictions on the technology used to create deepfakes. Because it is difficult to take away the ability to create these altered images once people have learned how to make them, the girls decided to focus instead on finding software that can detect when an image has been altered.

“Realistically, we know that getting rid of deepfake-creating technology entirely is basically impossible,” the anonymous student said. “We are more focused on finding easy ways to figure out that an image is a deepfake or has been altered in any way. We have since figured out that artificial intelligence can figure out if an image is fake.”

All around the country, school administrators continue to try to find ways to prevent these images from spreading as many underaged girls and young women have expressed their fears around this abuse potentially happening to them.

“I worry about someone potentially creating a deepfake of me,” Williams said.
