IT Brief UK - Technology news for CIOs & IT decision-makers

Lineaje survey finds AI code confidence outpaces visibility

Thu, 23rd Apr 2026

Lineaje has published survey findings showing a wide gap between companies' confidence in securing AI-generated code and their actual visibility into it. The research is based on responses from 100 cybersecurity professionals attending RSA Conference 2026.

The survey found that 86% of respondents have already integrated AI-generated code into their workflows. Yet only 17% said they had full visibility into that code, while 89% believed they could secure it.

That mismatch points to weak oversight as AI-generated software moves deeper into mainstream development. More than half of respondents (51%) said adoption is outpacing their ability to maintain oversight.

Governance Concerns

Security leaders identified AI governance as their leading challenge for 2027. The rise of agentic AI and autonomous systems followed closely behind, suggesting a shift from questions of adoption to questions of control.

The findings also show how limited visibility remains across many organisations. Some 45% of respondents said they had only partial visibility into their code, while 35% said they had virtually none.

Those figures suggest many businesses are operating without a complete view of software assets created or influenced by AI tools. In practice, that makes it harder to enforce policy, track provenance, and spot hidden exposures across development environments.
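Tracking provenance at this level typically means maintaining an inventory that records whether each component was written by a human, generated by an AI tool, or produced with AI assistance. The sketch below illustrates the idea with a minimal, hypothetical inventory model; the field names and the `visibility` metric are illustrative assumptions, not part of any standard or of Lineaje's products.

```python
# Minimal sketch of a provenance inventory for AI-influenced code.
# Field names ("origin", "tool", "reviewed") are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional, List


@dataclass
class ProvenanceEntry:
    path: str                     # file or component in the codebase
    origin: str                   # "human", "ai-generated", "ai-assisted", or "unknown"
    tool: Optional[str] = None    # generating tool, if known
    reviewed: bool = False        # has a human reviewed this component?


def visibility(entries: List[ProvenanceEntry]) -> float:
    """Fraction of components whose origin is actually known."""
    if not entries:
        return 0.0
    known = [e for e in entries if e.origin != "unknown"]
    return len(known) / len(entries)


inventory = [
    ProvenanceEntry("src/auth.py", "human", reviewed=True),
    ProvenanceEntry("src/parser.py", "ai-generated", tool="copilot"),
    ProvenanceEntry("src/utils.py", "unknown"),
]
print(f"visibility: {visibility(inventory):.0%}")  # visibility: 67%
```

A policy check can then be layered on top of such an inventory, for example flagging AI-generated components that have not been human-reviewed.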

"Confidence without visibility is a false sense of security. The findings reveal that while enterprises are racing to embrace AI-driven speed, they are doing so with a significant blind spot," said Javed Hasan, Chief Executive Officer and Co-Founder of Lineaje.

"To bridge this 'confidence gap,' organizations need more than manual oversight; they need an autonomous policy orchestrator that provides a complete AI Bill of Materials. Only by embedding governance directly into the development workflow can enterprises ensure their agentic AI applications are secure-by-design," Hasan said.

Trust Plateau

The survey also suggests that trust in AI has stopped rising. Seven in ten respondents said their trust in AI had not increased since the previous year, including 21% who said it had declined.

That points to a more cautious phase in the market. Rather than treating AI as a broad solution to software and security problems, organisations appear to be weighing its operational risks more closely.

Lineaje presented the latest results as part of a three-year shift in priorities. In 2024, its research found that 84% of organisations had not yet implemented a Software Bill of Materials. In 2025, attention turned to transparency, with 88% of leaders looking to AI as a route to better software supply chain visibility.

The latest survey marks another shift: from software component risk to the governance of AI-generated code and autonomous systems. It reflects broader industry concern over how to monitor code produced by generative tools and apply policy consistently as software creation becomes increasingly automated.

Control Platforms

The survey found broad support for a more centralised approach. Nine in ten respondents said a unified platform for governance, security, and policy compliance was essential.

That demand comes as companies try to manage both AI-generated code and broader agentic AI applications. A single control layer is increasingly seen as a way to bring together inventory, policy enforcement, and risk management, especially where development teams use multiple AI tools across different workflows.

Lineaje's recently launched UnifAI product is aimed at that requirement. It is described as an autonomous AI policy orchestrator designed to provide a central view of AI systems, map an AI Bill of Materials, and apply security guardrails in real time.

The findings add to a growing cybersecurity debate over whether organisations are introducing AI into software development faster than they can govern it. The central message is that confidence in security remains high even as direct visibility remains low, leaving many companies exposed to risks they may not yet be able to measure.

Among the clearest figures in the data is the extent of that divide: 89% of respondents said they were confident they could secure AI-generated code, but only 17% said they had full visibility into it.