Half of UK tech staff turning to risky ‘shadow AI’
Half of UK technology workers are using unauthorised artificial intelligence tools at work every week, as pressure to meet deadlines pushes staff towards what specialists describe as "shadow AI", according to new research from STEM workforce consultancy SThree.
The data indicates that these tools are now embedded in day‑to‑day activity despite concerns over data security and regulatory scrutiny.
SThree's STEM Workforce Report, based on a survey of more than 5,000 professionals in several major economies, found that 50% of UK tech workers use unapproved AI tools at least once a week. Almost a quarter, 23%, use such tools daily.
Four in five respondents, 81%, said they recognise the risks of using unapproved AI applications. Two in five, 39%, said they rely on them to speed up their work. Seventeen per cent said they would struggle to meet deadlines without them.
Security worries
The report highlights concern over the impact of these practices on corporate systems and sensitive information. Seventy‑nine per cent of UK tech professionals said they believe the use of unapproved tools presents a serious or moderate threat to data privacy and security.
Just over half, 51%, said they believe the risks outweigh the rewards, yet many continue to use the tools regardless. Respondents cited frustration with official systems: 22% described approved tools as inefficient, while 27% said shadow AI tools are faster and simpler to use.
Researchers said the results suggest a growing gap between formal AI deployment inside organisations and the tools that staff actually use under time pressure.
Rakesh Patel, Managing Director for the UK and Rest of Europe at SThree, said workers often turn to unapproved systems because they feel constrained by existing processes.
"AI has become a double-edged sword for the UK's tech workforce - vital for productivity, yet risky when used outside secure systems. The rise of 'Shadow AI' isn't about carelessness but rather a response to pressure. When official tools are too slow or restrictive, people naturally look for faster ways to get the job done, putting companies at risk.
"The challenge for employers is to bridge that gap by giving teams access to secure, efficient AI that keeps pace with modern workloads. Productivity shouldn't come at the expense of security - but right now, too many workers feel forced to choose between the two and there is real risk of more data breaches to come," said Patel.
Regulatory backdrop
The findings come as policymakers in the UK, European Union and United States place greater focus on the security and governance of AI systems. The Alan Turing Institute has set out a new mission in the UK that focuses on cyber‑resilience and safe AI adoption. The EU's AI Act is moving through its final stages, while US agencies have issued guidance on AI risks in critical infrastructure and national security.
The survey positions the UK experience within a wider international context. Respondents were drawn from six countries that SThree said together account for around half of global R&D spending and international patent filings: the UK, US, Netherlands, Germany, UAE and Japan.
The research suggests that as formal regulation advances, informal AI use in workplaces is already widespread. It also suggests that many organisations lag in rolling out approved tools that match the pace and ease of consumer‑grade generative AI products.
Workplace tensions
The report points to a developing ethical and operational dilemma inside technology teams. Staff are aware of the risks of uploading sensitive material to external systems. They also face tight deadlines, lean staffing and expectations for rapid delivery.
This tension is visible in the figures on dependency. With 17% of UK tech workers saying they could not meet their deadlines without unapproved AI tools, some workflows and performance expectations now appear to assume the use of such software, despite formal bans or restrictions.
SThree said the findings underline the need for employers to review policies, training and tooling. It also said organisations will need to align internal systems with regulatory requirements, while recognising that many staff already rely on AI as part of their routine work.
Patel said organisations face a growing risk of data exposure, but also an opportunity to reshape how AI is embedded in everyday tasks as oversight regimes evolve.