Could you spot the fake? One-third of Britons can't tell AI from reality

  • UK-wide survey finds 67% of the public is deeply concerned about the lack of regulations around AI-generated political content
  • Nearly half (48%) of those surveyed wouldn’t trust political content they saw on social media in the run-up to an election
  • Only a third of Britons say they could spot whether videos (33%) or images (34%) had been manipulated by AI

BOURNEMOUTH, UK – April 30, 2024 – ESET, a global leader in cybersecurity, today released the findings of a UK-wide survey into the threat of AI-generated misinformation ahead of the election. Nearly four in ten Britons have stumbled upon deepfakes online, and many are none the wiser until it's too late. The research found that the proliferation of AI-generated misinformation is eroding public trust in the democratic process, with almost half of respondents (48%) reporting that they would not trust political content they saw on social media in the run-up to an election.

Imagine logging onto your favourite social media platform only to find a video of a political or other public figure saying something they never actually said. This isn’t the plot of a dystopian novel; it’s a reality faced by the 36% of the British public who have encountered deepfakes – sophisticated AI-generated videos that mimic reality with unnerving accuracy.

What's more worrisome? More than a third of us admit we're not sure if we could tell authentic content from AI-generated fakes when it comes to videos (34%), images (34%), and audio (38%).

In January alone, more than 100 deepfake video advertisements impersonating Rishi Sunak were paid to be promoted on Facebook, raising the alarm about the risk that AI poses before the general election.

In the UK, recent legislative action has addressed the misuse of deepfake technology, particularly the creation and distribution of non-consensual deepfake imagery. Amendments to the Criminal Justice Bill, part of a broader effort to combat online harms, criminalise the sharing of intimate deepfake images.

According to a separate ESET study, nearly two-thirds of women worry about falling victim to deepfake pornography, and 10% have already fallen victim to it, know someone who has, or both. Of those who have sent intimate images of themselves to others in confidence, 33% say their images were misused, with 28% having their photos posted publicly without permission.

The research shows that our democratic processes, built on the informed consent of the governed, are now under threat from these easy-to-create and increasingly convincing digital doppelgangers. 67% of the public say they are concerned by the lack of regulation around AI-generated political misinformation, and they hold the government (63%) and big tech companies (53%) responsible for stopping it.

Jake Moore, Global Cybersecurity Advisor, ESET, commented: “Recent advancements in AI-powered tools have led to a resurgence in the use of deepfakes, making it easier than ever to manipulate various forms of media, including images, videos, and sound. Using this technology to impersonate well-known and political figures is extremely easy, and such fakes can be made in little or no time at all. Today’s deepfakes can look extremely good, but there is often a noticeable tell, such as a strange head movement, a flicker of light or blurred parts of the face, that gives the game away. However, as this technology advances, distinguishing real from fake is becoming increasingly challenging, especially in the midst of an election or when there is a powerful narrative or agenda to serve.”

Summary of findings:

  • Over a third (36%) of respondents have been exposed to deepfake content online, showing just how prevalent this issue already is
  • Respondents aren’t confident they could spot video (34%), audio (38%) and imagery (34%) that had been manipulated by AI
  • Nearly half (48%) of those surveyed wouldn’t trust political content they saw on social media in the run-up to an election
    • This rises to 61% of respondents aged 55+

  • Over two-thirds (67%) are concerned about the lack of regulation and fact-checking around AI-generated content
    • 65% are concerned about AI-generated political information becoming more common
    • 65% are concerned about not recognising what information is AI-generated
    • 57% are concerned about its ability to change their opinion on matters
    • 56% are concerned about its impact on their voting choices

Methodology:

The research was conducted by Censuswide, among a nationally representative sample of 2,016 UK respondents, aged 18+. The data was collected between 18th April 2024 – 23rd April 2024. Censuswide abides by and employs members of the Market Research Society and follows the MRS code of conduct and ESOMAR principles. Censuswide is also a member of the British Polling Council.

About ESET:

For more than 30 years, ESET® has been developing industry-leading IT security software and services to protect businesses, critical infrastructure, and consumers worldwide from increasingly sophisticated digital threats. From endpoint and mobile security to endpoint detection and response, as well as encryption and multifactor authentication, ESET's high-performing, easy-to-use solutions unobtrusively protect and monitor 24/7, updating defences in real-time to keep users safe and businesses running without interruption. Evolving threats require an evolving IT security company that enables the safe use of technology. This is backed by ESET's R&D centres worldwide, working in support of our shared future. For more information, visit www.eset.com or follow us on LinkedIn, Facebook, and Twitter.