New research from VerifyLabs.AI has found that non-consensual deepfake imagery is the top concern for many Britons, particularly among younger demographics, amid increasing threats from AI-generated content.
Deepfakes and public concern
A survey of 1,000 UK adults conducted by VerifyLabs.AI revealed that 35% fear deepfake nudes or videos of themselves or their children more than any other consequence of AI-generated content. The concern is notably higher among 16 to 34-year-olds, with 50% of respondents in this age group identifying such material as their primary worry.
The study indicated that more than one in three respondents (36%) are also worried about the impact deepfakes could have on their family and friends. These findings point to serious emotional and psychological risks associated with the malicious use of deepfake technology, especially when it targets individuals or their loved ones.
Financial risks associated with deepfakes remain a prominent fear. According to the research, more than half of those surveyed (55%) cited the use of deepfakes in scams and fraud as a major concern. Almost half (47%) pointed to sophisticated business fraud, including blackmail, criminal activity, and the potential loss of life savings. A further 44% are apprehensive about AI-generated content being used to gain unauthorised access to personal or sensitive information.
Appetite for detection tools
The research also identified strong demand for practical solutions to address deepfake threats. Over half of respondents (57%) said they would use a deepfake detection tool if provided by a trusted provider such as their bank or employer. This indicates a broad desire for technology to safeguard everyday digital interactions.
Despite evident concerns, the survey found that 10% of participants are unsure what constitutes a deepfake call, demonstrating a need for greater public education on the forms and risks of audio-based deepfake scams.
Deepfake detection made accessible
Coinciding with these findings, VerifyLabs.AI has announced the launch of its Deepfake Detector suite, offering individuals and professionals the means to identify AI-generated threats within images, video, and audio content.
The company states that advanced detection technology previously reserved for government and large enterprise use is now widely accessible. The aim is to enable people to manage their own online safety and address digital impersonation or fraud risks both at home and in the workplace.
"Not all deepfakes are bad; a meme or a bit of satire can be harmless fun. But when they're used to mislead, scam, abuse, or incite hate, they can be devastating for the people targeted. VerifyLabs.AI gives people the opportunity to take back control of their own online safety and easily identify when things are not quite what they seem. Whether it's used to support compliance, confirm someone's identity, or check for signs of fraud, it puts the power of deepfake detection directly into people's hands, taking control away from the criminals," comments Nick Knupffer, CEO of VerifyLabs.AI.
The VerifyLabs.AI Deepfake Detector operates at up to 98% accuracy, using pattern analysis to detect the subtle indicators left by AI-generated content. The tool examines media frame by frame, word by word, and pixel by pixel to determine whether content originated from a human or a machine.
The system searches for typical hallmarks such as unnatural lighting and facial inconsistencies in video, overly polished writing, or robotic voice patterns, cues that may go unnoticed by human observers.
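In outline, frame-by-frame detection of this kind amounts to scoring each frame and aggregating the results into a verdict. The sketch below is purely illustrative and assumes a hypothetical per-frame classifier: `classify_media`, its threshold, and the score inputs are assumptions for explanation, not the proprietary VerifyLabs.AI method.

```python
# Illustrative sketch only: the real detector's internals are not public.
# Assumes some upstream model has already produced one score per frame,
# where higher values mean "more likely AI-generated".

def classify_media(frame_scores, threshold=0.5):
    """Label media by averaging hypothetical per-frame deepfake scores.

    frame_scores: list of floats in [0, 1], one per analysed frame.
    Returns a (label, average_score) tuple.
    """
    if not frame_scores:
        raise ValueError("no frames to analyse")
    avg = sum(frame_scores) / len(frame_scores)
    label = "ai-generated" if avg >= threshold else "human"
    return (label, avg)

# Example: three frames with high synthetic-artefact scores.
print(classify_media([0.9, 0.8, 0.7]))  # averages to 0.8 -> "ai-generated"
```

Real systems would combine many per-frame signals (lighting, facial geometry, audio texture) rather than a single score, but the aggregation pattern is similar.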
The Deepfake Detector is currently available as a browser extension and as an app for iOS devices, with an Android version scheduled for future release.