AI deepfakes spread dangerous misinformation

Deepfake videos and images made with artificial intelligence tools can spread across media outlets in a matter of seconds, and their number is increasing each year. In 2023, deepfakes have become convincing enough to pass for actual images of real people, and many viewers struggle to determine whether online images or videos of celebrities are genuine or AI-generated. 

Deepfake images and videos of politicians can spread misinformation, which can potentially have catastrophic effects on society. (Photo illustration courtesy of UC Berkeley)

One of the main dangers of AI-generated deepfakes is the targeting of political figures. Deepfake videos depict politicians appearing to spread misinformation or hate speech. When viewed by large audiences, these videos can lead people to believe false information and form an inaccurate view of the politician.  

Allie Funk, a researcher who specializes in AI-generated deepfakes, believes that this issue can be detrimental to the nation and the world. “Internet freedom is at an all-time low, and advances in AI are actually making this crisis even worse,” Funk said. “It’s going to allow for political actors to cast doubt about reliable information.”  

Not all AI experts see deepfakes as a serious threat. Some researchers argue that people will not be easily persuaded to believe everything they see online, since the growing popularity of AI tools has made many people more familiar with what deepfake media looks like. 

With the rise of artificial intelligence, researchers believe deepfakes are making it harder to distinguish between real and fake photos and video content. (Image courtesy of End Time Headlines)

Despite this growing awareness, according to a report published by the American Psychological Association, deepfakes spread via social media are often unimpeded by factual corrections due to the “echo chamber” effect. According to the report, behavioral modeling shows that “rapid publication and peer-to-peer sharing allow ordinary users to distribute information quickly to large audiences, so misinformation can be policed only after the fact [if at all]. ‘Echo chambers’ bind and isolate online communities with similar views, which aids the spread of falsehoods and impedes the spread of factual corrections.”

The one point deepfake experts do agree on is that people need to be able to tell accurate online information from false information. Past incidents have shown how deepfakes can promote propaganda, fostering uninformed opinions and perpetuating discrimination throughout society. 

Hany Farid, a professor at UC Berkeley’s School of Optometry, noted that historical figures have long manipulated photographs deliberately. “Stalin manipulated photographs,” Farid said. “Hitler did it. Mao did it. There’s power in visual imagery. You change images, and you change history. Over half of the content you see online is either generated by bots or is simply not true.”  
