
Less than 15% of organisations have a Gen AI security policy

Tue, 5th Sep 2023

It's hard to tell which is happening faster: organisations adopting ChatGPT or banning it over potential security issues.

While the productivity gains and benefits of the technology are clear, do organisations have policies or training in place to help staff use it safely and successfully?

In new research by CybSafe, 1,000 office workers were asked: 'Are you aware of any measures taken by your company to inform and support employees on emerging cybersecurity threats related to generative AI?'

The research found that only 9% of respondents said they have a policy, while a further 4% said they have a policy but don't know where to access it. Some 56% of respondents said they do not have a policy, and 14% said they do not know whether they have one. A further 10% said they have access to general information, and only 7% said they had received training on the topic.

The research also found that 64% of office workers who use generative AI have entered work information into a generative AI tool, with 38% admitting to sharing data they wouldn't casually reveal to a friend in a bar.

Previous research conducted by CybSafe found that just 1 in 10 workers remembers all of their cybersecurity training, calling into question how effectively employers are getting the message across about the importance of security compliance online.

"The emerging changes in employee behaviour also need to be considered," says Dr Jason Nurse, CybSafe's director of science and research and current associate professor at the University of Kent.

"If employees are entering sensitive data sometimes on a daily basis, this can lead to data leaks. Our behaviour at work is shifting, and we are increasingly relying on Generative AI tools. Understanding and managing this change is crucial," he says.

"Generative AI has enormously reduced the barriers to entry for cyber criminals trying to take advantage of businesses. Not only is it helping create more convincing phishing messages, but as workers increasingly adopt and familiarise themselves with AI-generated content, the gap between what is perceived as real and fake will reduce significantly," Nurse says.

"As Generative AI infiltrates the workplace, it's building a cyber-superhighway for criminals. Half of us are using AI tools at work, and businesses aren't keeping pace," he says. 

"We're seeing cybercrime barriers crumble, as AI crafts ever more convincing phishing lures. The line between real and fake is blurring, and without immediate action, companies will face unprecedented cybersecurity risks."
