Do deepfake videos challenge the integrity of online information?
Deepfakes – highly realistic synthetic audio-video content depicting actual individuals – may increase the dissemination of disinformation online, but research on the phenomenon and how it affects users is sparse. A new research project spearheaded by the IT University’s Human-Centered Data Science research group tackles the issue.
Imagine logging on to your social media platform of choice one morning to find a video of the Prime Minister on the rostrum in parliament declaring war on a neighbouring country. The video is a fake – a hyper-realistic synthetic audio-video depiction of a real person generated with machine learning and artificial intelligence – but many people may take it at face value. Imagine the societal consequences if part of the population believed that the country was at war.
What not long ago seemed like a futuristic pipe dream – technology capable of creating realistic fake audio-video representations of real people – is a common occurrence today. The advent of deepfake technology has prompted a host of new legal, ethical, sociological, and psychological problems that researchers are only now starting to engage with.
Given the current gap in research, it is, for instance, unknown how effective deepfakes are as tools of disinformation when placed in context. We also do not know which cognitive factors make them successful at disseminating disinformation. This uncertainty prevents the development of effective countermeasures to protect the integrity of social networks.
That is where a new ITU research project, funded by Meta to the tune of USD 100,000, comes into the picture. The project, Investigating Persuasiveness of Contextualized Disinformation Across Media, is undertaken by ITU’s Human-Centered Data Science research group (HCDS), which focuses on the human factors in the conception, use, and understanding of data, and aims to close this gap.
Fighting disinformation online
By conducting online surveys in which participants are presented with deepfake videos without their knowledge, the researchers seek to find out whether deepfakes, in the context of deceptive social media posts, influence participants’ beliefs, attitudes, and willingness to share significantly more than material generated by simpler technical means. Similarly, the project aims to investigate whether covariates – stimulus fluency, stimulus vividness, feeling of familiarity with the stimulus, and emotional intensity of the stimulus – correlate with a person’s likelihood of being deceived.
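In rough terms, an analysis like the one described could relate each covariate to whether a participant was deceived. The sketch below is a minimal, hypothetical illustration with simulated survey data – the variable names, scales, and generated responses are assumptions for illustration, not the project’s actual instruments or results.

```python
# Hypothetical sketch: correlating stimulus covariates with deception.
# All data here is simulated; names and scales are illustrative only.
import random
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(0)
covariates = ["fluency", "vividness", "familiarity", "emotional_intensity"]

# Simulate 200 participants: each rates the stimulus on a 1-7 Likert
# scale per covariate; "deceived" marks whether they took the deepfake
# at face value (generated so higher ratings make deception more
# likely, purely so the sketch has a signal to recover).
participants = []
for _ in range(200):
    ratings = {c: random.randint(1, 7) for c in covariates}
    score = sum(ratings[c] for c in covariates) / len(covariates)
    ratings["deceived"] = 1 if random.random() < score / 8 else 0
    participants.append(ratings)

for c in covariates:
    r = pearson([p[c] for p in participants],
                [p["deceived"] for p in participants])
    print(f"{c}: r = {r:+.2f}")
```

A real analysis would of course use validated scales and proper inferential statistics (e.g. logistic regression with significance tests) rather than raw correlations on simulated data.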
The goal of the project is to reveal the impact of deepfake technology on users of online information systems. By comparing disinformation generated with deepfake technology, and placed in context, to content created with simpler technology, the project will uncover the extent to which recent technological advances pose fundamentally different challenges of online deception.
The project also aims to understand the cognitive factors that determine the persuasive power of online disinformation. As such, its outcomes will help fortify online social networks against the emerging forms of deepfake-based disinformation that threaten democratic stability.
“We are very interested in finding out if deepfake videos are more effective at spreading misinformation than more conventional types of content,” says Aske Mottelson, Assistant Professor at ITU’s Digital Design department and the project’s PI. “We are looking at the effect on the human mind and, in doing so, trying to create a better understanding of the technology and better grounds for developing policies.”
Theis Duelund Jensen, Press Officer, tel: 2555 0447, email: firstname.lastname@example.org