Generative AI raises cyber risk in machine learning

Wed, 29th Apr 2026
Sean Mitchell, Publisher

Heriot-Watt University has published research warning that businesses using generative AI in machine learning systems could increase their exposure to cyber-attacks, data breaches and bias. The study was led by Professor Michael Lones of the university's School of Mathematical and Computer Sciences.

The paper examines how generative AI is being used to design, build and run machine learning systems across sectors including finance, insurance and healthcare. It argues that adding large language models can introduce hidden risks that organisations may struggle to detect, mitigate or explain.

Machine learning systems have long been used to identify patterns in data and support decisions, from spam filtering and product recommendations to fraud detection and insurance claims processing. The research argues that the recent push to insert generative AI into these systems has outpaced understanding of the trade-offs involved.

Use cases

Lones reviewed four main uses of generative AI in machine learning workflows: as a component within a machine learning pipeline, to design and code pipelines, to create synthetic training data, and to analyse outputs. Each use carries risks, the study found, and those risks increase when large language models are used repeatedly within the same system.
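To make one of those uses concrete, the sketch below shows roughly what generating synthetic training data with a large language model looks like. It is illustrative only and does not come from the paper: `call_llm` is a hypothetical stand-in for whatever model client an organisation actually uses, and the spam-email framing borrows from the examples mentioned earlier in this article.

```python
# Illustrative sketch: an LLM producing synthetic labelled examples
# to augment a training set. call_llm is a hypothetical placeholder,
# not an API from the study.

def call_llm(prompt: str) -> str:
    """Hypothetical model call; wire up a real client here."""
    raise NotImplementedError("replace with a real model client")

def synthetic_examples(label: str, n: int) -> list[tuple[str, str]]:
    """Ask the model for n labelled examples to pad out a training set."""
    examples = []
    for _ in range(n):
        text = call_llm(f"Write one realistic example of a {label} email.")
        # The study's warning applies at this step: any error or bias in
        # the generated text flows silently into whatever classifier is
        # later trained on it.
        examples.append((text, label))
    return examples
```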

The paper also highlights concerns about so-called agentic models, which can use external tools autonomously to complete tasks. In such cases, interactions between different AI elements may become harder to predict and harder for developers and businesses to monitor.
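The structural reason is easy to see in outline. The following is a minimal sketch of the agentic pattern, again under assumptions: `call_llm` is a hypothetical client and both tools are placeholders, none of it from the paper. Because the model chooses its next action on every loop iteration, the sequence of tool calls is not scripted in advance, which is exactly what makes such systems hard to anticipate.

```python
# Bare-bones agent loop: the model picks a tool, sees the result, and
# decides again. call_llm and both tools are hypothetical placeholders.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("replace with a real model client")

TOOLS = {
    "lookup": lambda query: f"(stub) records matching {query!r}",
    "flag": lambda case_id: f"(stub) case {case_id} sent for review",
}

def run_agent(task: str, max_steps: int = 5) -> str:
    context = task
    for _ in range(max_steps):
        decision = call_llm(
            f"Task so far:\n{context}\nTools: {sorted(TOOLS)}\n"
            "Reply 'TOOL <name> <argument>' or 'DONE <answer>'."
        )
        if decision.startswith("DONE"):
            return decision[len("DONE"):].strip()
        _, name, arg = decision.split(maxsplit=2)
        # Each call below is a step no developer scripted, which is why
        # monitoring these interactions is hard in practice.
        context += f"\n{name}({arg}) -> {TOOLS[name](arg)}"
    return "step limit reached without an answer"
```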

One central concern is that large language models can make mistakes, generate false information and reach poor decisions. Because these systems are often opaque, it can be difficult to assess when they are wrong and why they have produced a particular result.

Compliance pressure

That creates a problem in regulated sectors, where companies may need to show that an automated system is reliable and explain how it reached a decision. The study points to medicine and finance as areas where transparency and accountability are especially important.

Bias is another issue raised in the research. Hidden flaws in generative AI-supported systems could lead to unfair outcomes for underrepresented groups, particularly where machine learning tools are used in decisions that affect people's health, finances or livelihoods.

Cost pressure is part of the backdrop to this trend. Many organisations are exploring generative AI as a way to cut spending and automate more work within data and software processes, but the study warns that those savings may come with new technical and legal risks.

Expert view

Lones set out his view of those trade-offs in comments published alongside the research.

"Machine learning developers need to be aware of the risks of using Gen AI in machine learning and find a sensible balance between improvements in capability and the risks that might come with that," said Professor Michael Lones, School of Mathematical and Computer Sciences, Heriot-Watt University.

"Given the current limitations of generative AI, I'd say this is a clear example of just because you can do something doesn't mean you should," said Lones.

He also warned against adding too many layers of generative AI to the same workflow.

"If you have Gen AI working in a number of different ways within your machine learning workflows or system, then they can interact in unpredictable and hard-to-understand ways.

"My advice at the moment is to avoid adding too much complexity in how we use Gen AI in machine learning, particularly if you're in a high-stakes sector that affects people's lives and livelihoods," added Lones.

The study adds to a wider debate over how companies are adopting generative AI tools faster than governance, testing and compliance frameworks are evolving. Businesses in heavily regulated industries have faced increasing scrutiny over how automated systems make decisions, handle data and affect customers.

For firms using large language models to support machine learning, the issue is not only whether the technology works, but whether errors, bias or security weaknesses can be identified before they cause harm. The paper suggests that challenge grows as AI systems become more complex and less transparent.

Lones said the concern should matter not only to developers and businesses, but to the public as well.

"In areas like medicine or finance, there are laws about being able to show that the machine learning system is reliable, and that you can explain how it reaches decisions.

"As soon as you start using LLMs, that gets really hard, because they're so opaque. It's important for the general public to be aware of the limitations of GenAI systems.

"Companies will deploy these systems to do things like cut costs, and this may improve the experience end users get, but it may also have negative consequences, such as bias and unfairness," added Lones.