IT Brief UK - Technology news for CIOs & IT decision-makers

AI deepfakes erode trust and reshape UK dating apps

Wed, 18th Feb 2026

AI-generated images, messages and deepfakes are reshaping behaviour on UK dating apps. A new survey suggests growing mistrust is pushing some users towards chatbots and AI companions instead of human-led online dating.

A nationally representative poll of 2,000 UK dating app users, commissioned by identity verification and anti-fraud provider Sumsub, found 84% believe AI content has made it harder to trust people or date successfully. That is up sharply from 64% in the 2025 edition of the same research series.

Concerns are rising alongside adoption. The survey found 32% use AI tools such as ChatGPT for coaching or to write messages to potential partners. Another 36% said they have used an AI companion instead of dating apps, highlighting a market where AI is both a risk and a coping mechanism.

The poll also found that 30% say their dating experience has been negatively affected by receiving AI-generated content. While users often link harm to scams and manipulation, the data also points to wider effects, including confusion over what is authentic and how much editing is acceptable in profiles.

Identity fraud remains a central driver of anxiety. Sumsub's 2025 Identity Fraud Report ranked the dating sector joint top for identity fraud, with a 6.35% rate alongside online media, based on analysis of more than 4 million fraud attempts. It also pointed to online romance scams that cost victims more than £100 million last year.

Deepfake anxiety

Deepfakes and synthetic media have become a focal point. The survey found 81% worry deepfakes will become more common, and 28% are not confident they can spot deepfakes or AI-manipulated profiles.

That lack of confidence sits alongside direct experience of deception. Some 61% said they have been deceived by fake profiles, or know a friend or family member who has, suggesting fraud affects perceptions beyond individual victims.

Sumsub linked the shift to the wider availability of consumer AI tools that can generate realistic text and images at low cost. It said this has improved the quality of deception in romance scams and catfishing, making moderation and user judgement less effective.

At the same time, AI is moving into mainstream product design on dating platforms. Sumsub pointed to features such as Tinder's AI chat feature "The Game Game" and Hinge's Convo Starters prompts as examples of automated tools shaping the user experience.

Editing norms

Attitudes to AI-enhanced self-presentation are mixed. The survey found 54% are open to, or already using, AI to edit or create images of themselves for dating profiles, but many draw a clear line.

While 60% believe some AI-altered content should be allowed on dating platforms, 42% said they have zero tolerance for any image alterations. The results point to fragmented expectations that could complicate policy decisions for app operators, especially where rules on misleading content were designed for conventional photo editing.

The poll also tested views on AI's broader social impact in dating culture. It found 73% think AI tools like Grok risk normalising the objectification of women online. Separately, 81% said dating platforms should share responsibility for fraud, scams and malicious content on their services, alongside law enforcement or government agencies where relevant.

Verification pressure

Users appear to be taking more responsibility for their own safety. The survey found 67% are proactively taking steps to verify online matches, such as cross-checking photos, requesting video calls, or validating social media presence.

The findings also point to growing pressure on platforms to clarify what is permitted and to detect malicious uses of AI. Sumsub argued that basic checks are becoming less effective as scammers adopt more sophisticated methods. That dynamic is being watched beyond dating, as identity verification spreads to other sectors facing account takeover and impersonation risks.

Sumsub, which provides identity verification and anti-fraud services, said it advises organisations including Interpol and the UN on deepfake and AI fraud detection. It described online dating as a high-risk environment where the balance between user experience and safety measures is increasingly difficult to maintain.

Nikita Marshalkin, Sumsub's Head of Machine Learning, said platforms need to respond without penalising ordinary users who are adopting AI tools that are increasingly common across consumer products.

"Platforms have a clear responsibility to protect users without restricting how they choose to engage online," Marshalkin said. "The response from the dating industry is going to be watched very closely by businesses in other sectors who are waking up to how basic verification checks can't compete with the increasingly sophisticated methods scammers use today.

"Users can't be blamed for using AI features offered to them, nor can they be expected to manage the resulting wave of AI content without support. A blanket ban isn't the answer, but without exhaustive governance and improved user awareness around deepfakes and misleading content, online dating will soon become more trouble than it's worth."

Demand for AI features remains strong, but users want clearer rules on altered media and stronger accountability for harmful content as AI-generated deception becomes more common across major platforms.