AI & cybersecurity trends to watch for in 2025

Mike Britton, CIO at Abnormal Security, has provided his predictions for security and artificial intelligence (AI) trends in 2025, including developments in multi-factor authentication (MFA), social engineering attacks, generative AI, and deepfake technologies.

Britton predicts that the enforcement of MFA by major platform providers such as Google will force cybercriminals to adapt their strategies. "In 2025, cybercriminals will be forced to adapt their attack techniques as major platform providers like Google begin to enforce MFA," he says. While other major providers, such as Microsoft, may be reluctant to mandate such security measures because of user adoption concerns, Britton argues they are now essential. "Many vendors have been hesitant to mandate security measures like this because they worry that users will find them too complex, and that this could slow adoption. But in today's threat landscape, defences like MFA are table stakes, and should be 100% mandatory," Britton explains.

Wider adoption of MFA, however, is expected to push attackers towards additional bypass techniques. "With more MFA solutions in place, we'll likely see more attackers utilise additional MFA bypass techniques, like session hijacking, launching MFA fatigue attacks, and exploiting single sign-on," Britton adds. He emphasises that while MFA significantly strengthens security, it must sit within a layered strategy offering complete visibility and control across the cloud application ecosystem. "MFA can greatly enhance security, but it's not a silver bullet. Organisations will need a layered security strategy that should absolutely include MFA, but should also deliver complete visibility and unified control across the cloud application ecosystem," Britton advises.
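
As a rough illustration of the layered-visibility idea Britton describes, the Python sketch below flags a possible MFA fatigue attack by counting denied push prompts for a user within a short window. The log schema, field names, and threshold are illustrative assumptions, not any particular product's API.

    from collections import defaultdict
    from datetime import datetime, timedelta

    # Hypothetical log schema: each event has "user", "result"
    # ("approved"/"denied") and "timestamp". Threshold values are
    # placeholders; tune them against real authentication logs.
    WINDOW = timedelta(minutes=10)
    THRESHOLD = 5  # denied prompts inside WINDOW suggesting a fatigue attack

    def flag_mfa_fatigue(events):
        """Return users who denied THRESHOLD+ MFA push prompts inside WINDOW."""
        denials = defaultdict(list)
        flagged = set()
        for event in sorted(events, key=lambda e: e["timestamp"]):
            if event["result"] != "denied":
                continue
            times = denials[event["user"]]
            times.append(event["timestamp"])
            # Keep only denials inside the sliding window.
            while times and event["timestamp"] - times[0] > WINDOW:
                times.pop(0)
            if len(times) >= THRESHOLD:
                flagged.add(event["user"])
        return flagged

    # Six denials a minute apart trips the threshold for "alice".
    events = [
        {"user": "alice", "result": "denied",
         "timestamp": datetime(2025, 1, 6, 9, 0) + i * timedelta(minutes=1)}
        for i in range(6)
    ]
    print(flag_mfa_fatigue(events))  # {'alice'}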

Turning to social engineering, Britton expects attackers to lean increasingly on SaaS platforms themselves. "The proliferation of SaaS applications will add fuel to the fire for social engineering attacks," he remarks. This shift could see more cybercriminals impersonating legitimate SaaS services to conduct attacks. "Whereas traditional social engineering saw attackers impersonate trusted contacts via email, we'll likely see increasing impersonations of legitimate SaaS services, like DocuSign and Dropbox," Britton continues.

He warns that these attacks, which involve creating genuine SaaS accounts to send phishing messages, are difficult to detect. "In these attacks, cybercriminals create genuine accounts on SaaS services and trigger notifications from the platform that prompt targets to view a file. Because these messages originate from real accounts, with safe-looking links and no malicious attachments, they typically slip past undetected," Britton says. Organisations are advised to vet SaaS vendors rigorously and assess their security practices, while also taking proactive steps of their own to mitigate the risk. "While it will be important to rigorously vet SaaS vendors and assess their efforts to reduce malicious impersonations, there's only so much customers can control. Organisations shouldn't rely exclusively on the vendor's security practices, and should stay proactive about exercising their own due diligence to protect the business from cybercrime," he advises.
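
Because these notifications come from genuine platform infrastructure, simple sender-domain checks pass. One form of due diligence an organisation could script, sketched below in Python, is to judge the account behind the notification instead: cross-check the originating account named in the message against known business contacts, and hold anything from an unrecognised sender. The message fields, platform list, and contacts list are all illustrative assumptions.

    # Hypothetical notification records: the SaaS platform sends the mail,
    # but the account that triggered it is named in a separate field.
    # Field names and both lists below are placeholders for illustration.
    TRUSTED_CONTACTS = {"legal@partnerfirm.com", "hr@ourcompany.com"}
    SAAS_PLATFORMS = {"docusign.net", "dropbox.com"}

    def review_notification(msg):
        """Return 'deliver' or 'quarantine' for a SaaS-platform notification."""
        sender_domain = msg["from"].rsplit("@", 1)[-1].lower()
        if sender_domain not in SAAS_PLATFORMS:
            return "deliver"  # not a SaaS notification; other controls apply
        originator = msg.get("on_behalf_of", "").lower()
        # The mail itself is genuine, so judge the account behind it instead.
        if originator in TRUSTED_CONTACTS:
            return "deliver"
        return "quarantine"  # real platform, unknown account: hold for review

    msg = {"from": "dse@docusign.net", "on_behalf_of": "attacker@gmail.com"}
    print(review_notification(msg))  # quarantine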

In the realm of AI, Britton suggests the initial enthusiasm for generative AI will give way to more practical assessments of its value. "In 2025, organisations will continue to heavily adopt generative AI, but as we move past the initial hype phase, organisations will begin to realise that simply adopting it will not be enough. Now, they will need to undergo a dedicated effort around understanding how it can deliver results," he comments.

This understanding involves evaluating which work processes could benefit from AI integration to enhance efficiency and achieve desired outcomes, such as cost savings or greater user engagement. "To do this, businesses should start by identifying which parts of their workflows are highly manual, which can help them determine how AI can be overlaid to improve efficiency. Key to this will be determining what success looks like. Is it better efficiency? Reduced cost? Stronger user engagement? This will allow the business to measure ROI and more accurately determine AI's potential," Britton explains.
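
A back-of-the-envelope version of that ROI exercise might look like the Python sketch below: compare the cost of a manual workflow against an AI-assisted one and express the net saving against the tool's cost. Every figure here is an invented placeholder to be replaced with real measurements.

    # Illustrative ROI estimate for overlaying AI on a manual workflow.
    # All numbers are placeholders; plug in measured values.
    tasks_per_month = 2_000
    manual_minutes_per_task = 12
    ai_minutes_per_task = 4          # measured after AI assistance
    hourly_cost = 45.0               # fully loaded staff cost, per hour
    ai_monthly_cost = 3_000.0        # licensing plus integration upkeep

    minutes_saved = (manual_minutes_per_task - ai_minutes_per_task) * tasks_per_month
    gross_saving = minutes_saved / 60 * hourly_cost
    net_saving = gross_saving - ai_monthly_cost
    roi = net_saving / ai_monthly_cost

    print(f"Gross monthly saving: £{gross_saving:,.0f}")   # £12,000
    print(f"Net monthly saving:   £{net_saving:,.0f}")     # £9,000
    print(f"ROI: {roi:.1f}x the tool's monthly cost")      # 3.0x

The same template works for Britton's other success measures: swap the cost terms for engagement or throughput metrics and compare before and after.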

Britton also notes the increasing presence of deepfakes and their potential impact. "While the 'Year of the Deepfake' is probably still a couple years away, in the year ahead, we're going to steadily see more incidents of malicious deepfake activity," he predicts. He identifies legal proceedings and forensic processes as areas where deepfakes could be particularly problematic, due to the ease with which evidence can be manipulated. "Some of the most immediate and concerning use cases we could see may involve the use of deepfakes in legal proceedings and forensics, as CCTV footage and other evidence become much more easily manipulated," Britton warns.
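
One widely used counter-measure to the evidence-tampering risk Britton describes is cryptographic fingerprinting: record a hash of footage at the point of capture, then verify the file against that hash before it is relied on. The Python sketch below illustrates the idea only; a real chain-of-custody system would also sign and timestamp the digests.

    import hashlib

    def sha256_file(path, chunk_size=1 << 20):
        """Return the SHA-256 hex digest of a file, read in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                digest.update(chunk)
        return digest.hexdigest()

    def verify(path, recorded_digest):
        """True if the footage still matches its capture-time fingerprint."""
        return sha256_file(path) == recorded_digest

    # At capture time, store the digest alongside the footage (filename
    # is hypothetical): recorded = sha256_file("cctv_2025-01-06.mp4")
    # Later, any edit to the file makes verify(...) return False.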
