Repeated Exposure to Deepfake Videos Increases Belief in False Claims, NTU Study Finds

A new international study led by Nanyang Technological University, Singapore (NTU Singapore), has revealed that repeated exposure to deepfake videos makes individuals more likely to believe their claims, even when they are false. The study, which involved 8,070 participants from eight countries, highlights the growing risk of misinformation in the age of artificial intelligence (AI).

The Power of Repetition in Deepfake Misinformation

The study specifically examined viral AI-generated videos featuring well-known public figures, such as Kim Kardashian, Mark Zuckerberg, Vladimir Putin, and Tom Cruise. Interestingly, researchers discovered that participants who had previously encountered these deepfakes were more likely to believe their false claims when exposed to them again. This finding suggests that repeated exposure to misinformation can increase its perceived credibility.

The researchers attribute this to the illusory truth effect, a psychological phenomenon in which repeated exposure makes information feel more familiar and, as a result, more believable, regardless of its accuracy. They also found that individuals who rely primarily on social media for news are at greater risk of falling for deepfakes, since they encounter such content more often.

Cognitive Ability Does Not Provide Protection

Surprisingly, the study found that higher cognitive ability did not protect individuals from believing repeated deepfake claims. This contradicts previous research suggesting that analytical thinking can help counter misinformation. Instead, the study suggests that mere repetition can override critical thinking, leaving even those with higher cognitive ability vulnerable to deepfake deception.

Implications for Governments and Tech Companies

Assistant Professor Saifuddin Ahmed, who led the research at NTU’s Wee Kim Wee School of Communication and Information, emphasised the urgent need for policymakers and tech companies to address the dangers of deepfakes.

“As deepfakes become increasingly common, there is a pressing need for governments, tech companies, and media outlets to collaborate on solutions that mitigate their impact,” said Asst Prof Ahmed.

He also suggested that policymakers could use the illusory truth effect to their advantage, employing repetition to spread accurate information—such as public health messages—to counteract misinformation.

Singaporeans Among the Least Deceived

The study also revealed significant national differences in susceptibility to deepfakes. Respondents from Singapore were the least likely to believe false deepfake claims, followed by those in Vietnam and the Philippines. Researchers attribute this to higher digital literacy levels and proactive government efforts to combat misinformation.

However, the study found that many Singaporeans who did not believe the deepfakes were still uncertain about their accuracy, reflecting a general scepticism toward online information. While scepticism can help guard against misinformation, lingering uncertainty can also leave individuals open to future manipulation, as repeated exposure may gradually erode their doubts.

A Growing Threat in the AI Era

With the rapid advancement of AI-generated content, deepfakes are becoming more sophisticated and widespread. This study underscores the need for stronger digital literacy initiatives, fact-checking mechanisms, and AI-driven detection tools to combat the spread of misinformation.

As deepfakes continue to blur the line between fact and fiction, ensuring the public is equipped with the knowledge to recognise and challenge deceptive content is more critical than ever.

Stay tuned to us at Adam Lobo TV for more news like this.