AI drives 80 percent of phishing as US$112 million is lost in India
Artificial intelligence has become the predominant tool in cybercrime, according to recent research and data from law enforcement and the cybersecurity sector.
AI's growing influence
A June 2025 report revealed that AI is now utilised in 80 percent of all phishing campaigns analysed this year. This marks a shift from traditional, manually created scams to attacks fuelled by machine-generated deception. Concurrently, Indian police recorded that criminals stole the equivalent of US$112 million in a single state between January and May 2025, attributing the sharp rise in financial losses to AI-assisted fraudulent operations.
These findings are reflected in the daily experiences of security professionals, who observe an increasing use of automation in social engineering, malware development, and reconnaissance. The pace at which cyber attackers are operating is a significant challenge for current defensive strategies.
Methods of attack
Large language models are now being deployed to analyse public-facing employee data and construct highly personalised phishing messages. These emails replicate a victim's communication style, job role and business context. Additionally, deepfake technology has enabled attackers to create convincing audio and video content. Notably, in one incident in Hong Kong this year, a finance officer transferred HK$200 million after joining a deepfake video call that mimicked the likeness of their chief executive.
Generative AI is also powering the development of malware capable of altering its own code and behaviour within hours. This constant mutation enables it to bypass traditional defences like endpoint detection and sandboxing solutions. Another tactic, platform impersonation, was highlighted by Check Point, which identified fake online ads for a popular AI image generator. These ads redirected users to malicious software disguised as legitimate installers, merging advanced loader techniques with sophisticated social engineering.
The overall result is a landscape where AI lowers the barriers to entry for cyber criminals while amplifying the reach and accuracy of their attacks.
Regulatory landscape
Regulators are under pressure to keep pace with the changing threat environment. The European Union's AI Act, described as the first horizontal regulation of its kind, became effective last year. However, significant obligations covering general-purpose AI models take effect from August 2025. Industry groups in Brussels have requested a delay to compliance deadlines due to uncertainty over some of the rules, but firms developing or deploying AI will soon face financial penalties for failing to adhere to the regulations.
Guidance issued under the Act directly links the risks posed by advanced AI models to cybersecurity, including the creation of adaptive malware and the automation of phishing. This has created an expectation that security and responsible AI management are now interrelated priorities for organisations. Company boards are expected to treat the risks associated with generative models with the same seriousness as data protection or financial governance risks.
Defensive measures
A number of strategies have been recommended in response to the evolving threat environment. Top of the list is the deployment of behaviour-based detection systems that use machine learning in conjunction with threat intelligence, as traditional signature-based tools struggle against ever-changing AI-generated malware. Regular vulnerability assessments and penetration testing, ideally by CREST-accredited experts, are also regarded as essential to expose weaknesses overlooked by both automated and manual processes.
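The behaviour-based approach can be illustrated with a minimal sketch. The telemetry features below (files written and outbound connections per minute) and the function names are illustrative assumptions, not part of any specific product: the idea is simply to flag a process whose behaviour deviates sharply from a learned baseline, rather than matching it against known malware signatures.

```python
from statistics import mean, stdev

# Hypothetical per-process telemetry used as a learned baseline:
# (files_written_per_min, outbound_connections_per_min)
BASELINE = [(3, 1), (5, 2), (4, 1), (6, 2), (4, 2), (5, 1)]

def zscores(sample, history):
    """Score each feature of `sample` by its deviation from the baseline."""
    scores = []
    for i, value in enumerate(sample):
        column = [row[i] for row in history]
        mu, sigma = mean(column), stdev(column)
        scores.append(abs(value - mu) / sigma if sigma else 0.0)
    return scores

def is_anomalous(sample, history, threshold=3.0):
    """Flag behaviour that deviates more than `threshold` standard
    deviations on any feature -- no signature of the malware is needed."""
    return any(z > threshold for z in zscores(sample, history))

# A ransomware-like burst of file writes and connections stands out,
# even though its code has never been seen before.
print(is_anomalous((40, 25), BASELINE))  # anomalous
print(is_anomalous((5, 2), BASELINE))    # within baseline
```

Because the detector models normal behaviour rather than known code, a constantly mutating payload still triggers it the moment its activity diverges from the baseline.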
Verification protocols for audio and video content are another priority. Using additional communication channels or biometric checks can help prevent fraudulent transactions initiated by synthetic media. Adopting zero-trust architectures, which strictly limit user privileges and segment networks, is advised to contain potential breaches. Teams managing AI-related projects should map inputs and outputs, track possible abuse cases, and retain detailed logs in order to meet audit obligations under the forthcoming EU regulations.
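The audit-logging requirement above can be sketched in a few lines. This is a generic illustration, not a prescribed EU AI Act mechanism: each model input/output pair is appended to a log record that is hash-chained to its predecessor, so that any later edit to the history is detectable during an audit.

```python
import hashlib
import json
import time

def append_audit_record(log, prompt, output):
    """Append one model interaction to an audit log, chaining each record
    to the previous one by SHA-256 hash so tampering is detectable."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {
        "ts": time.time(),   # when the interaction occurred
        "prompt": prompt,    # model input
        "output": output,    # model output
        "prev": prev_hash,   # link to the previous record
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return record

def verify_chain(log):
    """Recompute every hash; any edited or reordered record breaks the chain."""
    prev = "0" * 64
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if rec["prev"] != prev or digest != rec["hash"]:
            return False
        prev = rec["hash"]
    return True
```

In practice such records would also capture the model version and the calling user, but even this minimal chain gives auditors a tamper-evident trail of what the system was asked and what it produced.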
Staff training programmes are also shifting focus. Employees are being taught to recognise subtle cues and nuanced context, rather than relying on spotting poor grammar or spelling mistakes as indicators of phishing attempts. Training simulations must evolve alongside the sophistication of modern cyber attacks.
The human factor
Despite advancements in technology, experts reiterate that people remain a core part of the defence against AI-driven cybercrime. Attackers are leveraging speed and scale, but defenders can rely on creativity, expertise, and interdisciplinary collaboration.
"Technology alone will not solve AI-enabled cybercrime. Attackers rely on speed and scale, but defenders can leverage creativity, domain expertise and cross-disciplinary thinking. Pair seasoned red-teamers with automated fuzzers; combine SOC analysts' intuition with real-time ML insights; empower finance and HR staff to challenge 'urgent' requests no matter how realistic the voice on the call," said Himali Dhande, Cybersecurity Operations Lead at Borderless CS.
The path ahead
There is a consensus among experts that the landscape has been permanently altered by the widespread adoption of AI. It is increasingly seen as necessary for organisations to shift from responding to known threats to anticipating future methods of attack. Proactive security, embedded into every project and process, is viewed as essential not only for compliance but also for continued protection.
Borderless CS stated it "continues to track AI-driven attack vectors and integrate them into our penetration-testing methodology, ensuring our clients stay ahead of a rapidly accelerating adversary. Let's shift from reacting to yesterday's exploits to pre-empting tomorrow's."