IT Brief UK - Technology news for CIOs & IT decision-makers

AI deepfakes force firms to rethink trust & security

Thu, 29th Jan 2026

Artificial intelligence is reshaping the cyber threat landscape, as security and data protection specialists warn that deepfakes and AI-driven attacks are undermining traditional trust models around identity and authentication.

As organisations mark Data Privacy Day, industry figures are highlighting how AI tools have expanded both the sophistication and reach of criminal activity, and are urging companies to reassess how they verify users, secure systems and educate staff.

Deepfake threat

Marco Ramilli, Chief Executive Officer at identity verification firm identifAI, said AI-enabled deepfakes are eroding confidence in long-standing security methods, including biometrics.

"Cybercriminals are now able to deploy more sophisticated attacks on a much grander scale than we have ever seen, largely driven by AI. Today, deepfake technology can surpass even the most advanced biometric tools, completely reframing the trust we have in what was once thought to be the most robust method of user authentication," said Ramilli.

Security teams have reported more frequent attempts to impersonate senior leaders through synthetic media. These incidents often seek to trigger urgent financial transfers or the release of sensitive data.

Executive impersonation

Recent high-profile cases have involved attackers cloning the voices or likenesses of chief executives and other senior leaders. These incidents have raised concerns among Chief Information Security Officers, who oversee defences against fraud and social engineering.

Ramilli said the range of deepfake formats now in circulation has increased the challenge for corporate security.

"In recent years, there have been a number of high-profile deepfake attacks targeting CEOs and other senior executives, putting this issue more firmly in the lap of the CISO. Whether it is voice cloning, deepfake imagery, or synthesised video content, it is clear that we can no longer trust what we see. In response, businesses must emphasise the importance of using multiple means of verification when it comes to securing their organisation's perimeters, both from a technological and physical point of view," said Ramilli.

Security specialists are advising firms to combine technical controls with procedural checks. These measures include secondary approvals for high-value transactions and independent verification of unusual instructions from senior leaders.
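The secondary-approval control described above can be illustrated with a short sketch. This is purely hypothetical code, not drawn from any named product; the threshold, role names, and function signature are assumptions made for illustration:

```python
# Hypothetical sketch of a dual-approval rule for high-value transfers.
# The threshold and data shapes are illustrative, not from any real system.

HIGH_VALUE_THRESHOLD = 10_000  # illustrative limit

def transfer_allowed(amount: float, approvers: set[str], requester: str) -> bool:
    """Require at least two distinct approvers, neither of whom is the
    requester, before a transfer above the threshold can proceed."""
    if amount < HIGH_VALUE_THRESHOLD:
        return True  # routine transfers follow the normal workflow
    independent = approvers - {requester}
    return len(independent) >= 2

# A large transfer "approved" only by its own requester is rejected;
# the same transfer with two independent approvers is allowed.
print(transfer_allowed(50_000, {"alice"}, "alice"))
print(transfer_allowed(50_000, {"bob", "carol"}, "alice"))
```

The point of the sketch is that the rule is procedural rather than technological: even a perfect deepfake of a senior leader cannot satisfy a policy that demands independent human sign-off.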

Staff training

Companies are also reviewing staff training as attackers adopt more convincing impersonation techniques. Phishing and business email compromise scams are increasingly supported by voice and video fabrications that can appear legitimate to untrained employees.

Ramilli said organisations should embed a sceptical mindset and structured response procedures into their workforce.

"Employees should be educated on the advancing attack vectors, and policies put in place to respond to them. Take the example of a voice or video call from a CEO, requesting that money be transferred to a new account immediately. This may look or sound legitimate, but requesting other means of verification following the call is an easy way to overcome what may be a phishing attack that uses voice-cloning or deepfake technology. Advanced deepfake detection tools can also greatly assist with reducing these threats, but creating a culture that challenges assumptions is also crucial," said Ramilli.
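The verification step Ramilli describes amounts to an out-of-band check: the request is acted on only once it has been confirmed through a channel other than the one it arrived on. A minimal sketch of that idea follows; the channel names, registry, and function are hypothetical, invented here for illustration:

```python
# Hypothetical sketch: out-of-band confirmation of an urgent request.
# The channel registry and names are illustrative only.

REGISTERED_CHANNELS = {"ceo": {"desk_phone", "corporate_email"}}

def actionable(requester: str, arrival_channel: str,
               confirmed_channels: set[str]) -> bool:
    """Act only if the request was confirmed on at least one registered
    channel different from the one it arrived on."""
    known = REGISTERED_CHANNELS.get(requester, set())
    independent = (confirmed_channels & known) - {arrival_channel}
    return bool(independent)

# A video call on its own is not enough; a callback to the CEO's
# registered desk phone satisfies the policy.
print(actionable("ceo", "video_call", set()))
print(actionable("ceo", "video_call", {"desk_phone"}))
```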

Regulatory focus

At the same time, regulators are tightening oversight of how technology providers manage data and security. Authorities in major markets are updating privacy and cybersecurity rules in response to growing concerns about data misuse and systemic vulnerabilities.

Vinod Kashyap, Chief Product Officer at Digital Envoy, said regulatory initiatives are pushing organisations to take a broader view of cybersecurity and privacy risk.

"There is growing appreciation of cybersecurity's real-world relevance, especially from a privacy standpoint. Government regulators are imposing tougher responsibilities on a wider range of tech providers to ensure the digital services that underpin consumers' daily habits and activities can better protect both systems and the personal data flowing through them. At the same time, it's clear that sensitive information is at increasing risk as AI amplifies both the sophistication and scale of cyberattacks, and the technology becomes more deeply integrated into how we live and work. Stepping up technical security measures, by implementing tools that support User and Entity Behaviour Analytics (UEBA), Identity and Access Management (IAM), for example, is going to be essential to prevent misuse. This should also include smarter threat monitoring tools, such as Endpoint Detection & Response (EDR/XDR) solutions, that can quickly spot potential signs of foul play, including behavioural shifts and anomalies," said Kashyap.
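Behavioural analytics of the kind Kashyap mentions typically works by flagging departures from a user's historical baseline. The following is a deliberately simplified sketch of that core idea using a z-score over past login hours; real UEBA products model many more signals, and the threshold and data here are hypothetical:

```python
import statistics

def is_anomalous(history: list[float], observed: float,
                 threshold: float = 3.0) -> bool:
    """Flag an observation whose z-score against the user's history
    exceeds the threshold. Illustrative only: production UEBA tools
    combine many behavioural signals, not a single metric."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return observed != mean
    return abs(observed - mean) / stdev > threshold

# Logins clustered around 9am; a 3am login stands well outside the baseline.
baseline = [9.0, 9.5, 8.75, 9.25, 9.0, 8.5, 9.75]
print(is_anomalous(baseline, 3.0))
print(is_anomalous(baseline, 9.1))
```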