
Security challenges of generative AI in software development


Legit Security has released a report highlighting the significant security challenges in using Generative AI (GenAI) for software development, despite its high adoption rate among developers and security teams.

The report, titled "Use and Security of GenAI in Software Development," presents insights gathered from over 400 security professionals and software developers across North America, revealing that both groups face critical security challenges associated with GenAI usage.

Liav Caspi, Co-Founder and CTO at Legit Security, commented, "As generative AI transforms software development and becomes increasingly embedded in the development lifecycle, there are some real security concerns among developers and security teams. Our research found that teams are challenged with balancing the innovations of GenAI and the risks it introduces by exposing their applications and their software supply chain to new vulnerabilities. While GenAI is undoubtedly the future of software development, organizations must be mindful of its new risks and ensure they have the appropriate visibility into and control over its use."

The survey indicates that generative AI is significantly altering software development processes by automating tasks and enhancing efficiency and productivity. According to the report, 88% of developers are using GenAI within their development organisations, pointing to a broad shift towards AI as teams contend with tight deadlines and complex projects.

However, despite these efficiencies, security concerns remain prominent. Legit Security's prior research has shown that LLMs and AI models often contain bugs and vulnerabilities that can lead to AI supply chain attacks. The recent survey corroborates these findings, detailing a number of critical concerns.

Among the survey's key findings, 96% of the professionals reported that their companies are deploying GenAI-based solutions for application development. Notably, 79% stated that all or most of their development teams are using GenAI regularly.

The use of GenAI-powered code assistants in development is a particular concern, with 84% of security professionals worried about unknown and potentially malicious code being introduced into development projects.

There is also a growing consensus on the necessity for improved security measures. A remarkable 98% of respondents believe security teams need better oversight of GenAI-based solutions, and 94% say they need more efficient practices for managing GenAI use in application development.

Security apprehensions persist, with 85% of developers and 75% of security professionals expressing concerns over the potential pitfalls of relying heavily on GenAI solutions.

The report also highlights concern that reliance on AI could erode critical thinking: 8% of developers cited this worry, compared with only 3% of security professionals.

Despite these challenges, sentiment towards GenAI remains positive, with 95% of those surveyed predicting greater reliance on GenAI in the software development field in the coming five years, and none anticipating a decrease.

The findings emphasise the critical role of GenAI in the evolution of software development while also highlighting the need for enhanced security practices and collaboration between developers and security teams in order to manage the emerging risks effectively.
