
Legit Security unveils AI features to improve app vulnerability fixes
Legit Security has introduced new artificial intelligence features to its Application Security Posture Management (ASPM) platform aimed at enhancing vulnerability prioritisation and remediation in software development.
The new AI-powered capabilities are designed to help security teams identify and address vulnerabilities more efficiently across the application security lifecycle. The updates focus on advanced discovery for code-to-cloud correlation, improved precision in issue prioritisation and scoring, and AI-assisted remediation.
Liav Caspi, Co-Founder and Chief Technology Officer of Legit Security, said, "While AI enables developers to write complete applications in seconds, security has taken a backseat. With AI allowing faster development, the code generated is often susceptible to exploitable vulnerabilities, bugs, and security risks. In addition, organisations' understanding of the governance of code and logic they create has dropped dramatically."
"This has become a pressing issue, with the European Union and United States introducing new compliance requirements to address AI. We are solving this challenge by leveraging AI within our ASPM platform to rapidly find, fix, and prevent vulnerabilities."
The platform is intended to enable organisations to identify exploitable weaknesses and misconfigurations, while also enforcing better application security practices throughout the software development process. Legit Security uses AI to help organisations prevent vulnerabilities from entering software releases, aiming to reduce both time and cost associated with application security.
According to the company, Legit Security's platform is the only ASPM solution that applies AI throughout the entire application lifecycle, encompassing discovery, prioritisation, and remediation. Users have control over how AI is deployed within the platform, allowing them to choose when and where the technology is used based on organisational risk tolerance and policy preferences.
The AI-powered discovery functionality now provides consolidated code-to-cloud correlation, broadening oversight across more development pipelines and increasing discovery accuracy. This enables organisations to automate the detection of malicious models and insecure AI implementations, gaining insight into where AI-generated code is used within their environments.
Regarding prioritisation, the platform leverages AI to enhance risk assessment by providing a more precise and explainable risk score. The system evaluates dozens of risk factors to deliver contextual scores that surpass conventional formula-based calculations. Legit Security has also extended its AI-based secrets scanning to reduce false positives and better prioritise critical security issues within code.
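The company does not disclose how its scoring model is built. As a purely illustrative sketch of what contextual, multi-factor scoring can look like in general, the snippet below blends a base severity with factors such as exploit availability, exposure, and asset criticality; the factor names, weights, and formula are assumptions for explanation only, not Legit Security's implementation.

from dataclasses import dataclass

# Hypothetical sketch of contextual risk scoring: combine several risk
# factors into a single 0-100 score rather than relying on severity alone.
# Factor names and weights are illustrative assumptions, not a vendor's model.

@dataclass
class Finding:
    severity: float          # base severity, 0.0-1.0 (e.g. normalised CVSS)
    exploit_available: bool  # is a public exploit known?
    internet_exposed: bool   # is the affected asset reachable from the internet?
    asset_criticality: float # business criticality of the asset, 0.0-1.0

def contextual_risk_score(f: Finding) -> float:
    """Blend base severity with contextual factors into a 0-100 score."""
    score = f.severity
    if f.exploit_available:
        score *= 1.5
    if f.internet_exposed:
        score *= 1.3
    score *= 0.5 + 0.5 * f.asset_criticality  # dampen findings on low-value assets
    return round(min(score, 1.0) * 100, 1)

# An exposed, exploitable finding on a critical asset scores near the top.
print(contextual_risk_score(
    Finding(severity=0.7, exploit_available=True,
            internet_exposed=True, asset_criticality=0.9)
))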
For remediation, the new AI features provide actionable guidance to development teams, including embedded code suggestions and integration within developer workflows such as pull-request checks. This is designed to help developers validate code more efficiently and address security findings at speed.
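The article mentions pull-request checks without detailing the integration. As a minimal, hypothetical illustration of that general pattern, a CI step can fail a pull request when a scan report contains findings at or above a severity threshold; the report path, JSON schema, and threshold below are assumptions, not a Legit Security format.

import json
import sys

# Hypothetical pull-request gate: fail the CI job if a security scan report
# contains findings at or above a severity threshold. The report path and
# JSON schema are illustrative assumptions.

SEVERITY_ORDER = {"low": 1, "medium": 2, "high": 3, "critical": 4}
THRESHOLD = "high"

def blocking_findings(report_path: str) -> list[dict]:
    with open(report_path) as fh:
        findings = json.load(fh)  # expected: a list of {"id", "severity", "title"}
    limit = SEVERITY_ORDER[THRESHOLD]
    return [f for f in findings if SEVERITY_ORDER.get(f["severity"], 0) >= limit]

if __name__ == "__main__":
    blockers = blocking_findings("scan-report.json")
    for f in blockers:
        print(f"BLOCKING {f['severity'].upper()}: {f['id']} - {f['title']}")
    sys.exit(1 if blockers else 0)  # a non-zero exit fails the pull-request check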
The ASPM platform offers organisations real-time visibility into their software development processes, including asset inventories, ownership, security controls, vulnerabilities, and the relationships among these components. Legit Security states that this comprehensive approach supports developer productivity while enabling teams to manage the security implications of incorporating AI into application development.
Use cases highlighted by Legit Security include securing customer-facing applications with AI enhancements, helping fast-moving teams to generate and validate secure code through AI, and safeguarding AI-generated code and applications during development.
For code discovery, Legit Security's AI-driven code-to-cloud capabilities aggregate data from various scanning tools in a vendor-agnostic manner. AI then correlates results and conducts code analysis, aiming to enhance the identification of business risks and improve the depth of contextual information available to organisations.
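Vendor-agnostic aggregation of this kind generally means translating each scanner's output into one shared shape and then grouping overlapping results. The sketch below is a simplified, hypothetical example of that idea; the tool names, field names, and sample data are assumptions rather than details of Legit Security's correlation engine.

from collections import defaultdict

# Hypothetical sketch of vendor-agnostic aggregation: map findings from
# different scanners into a common schema, then group overlapping results
# by file path so related issues surface together.

def normalise(tool: str, raw: dict) -> dict:
    """Translate a tool-specific finding into a common schema."""
    if tool == "sast":
        return {"tool": tool, "path": raw["file"], "issue": raw["rule_id"]}
    if tool == "secrets":
        return {"tool": tool, "path": raw["location"], "issue": raw["secret_type"]}
    raise ValueError(f"unknown tool: {tool}")

def correlate(findings: list[dict]) -> dict[str, list[dict]]:
    """Group normalised findings by file so overlapping results are correlated."""
    grouped: dict[str, list[dict]] = defaultdict(list)
    for f in findings:
        grouped[f["path"]].append(f)
    return grouped

sample = [
    normalise("sast", {"file": "app/login.py", "rule_id": "sql-injection"}),
    normalise("secrets", {"location": "app/login.py", "secret_type": "api-key"}),
]
for path, items in correlate(sample).items():
    print(path, [i["issue"] for i in items])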
With its prioritisation feature, security teams receive targeted, contextual insights, including the detection of false positives in AI-produced findings, exposure of sensitive information, and risk scoring that highlights the most urgent issues to address.
Caspi added, "With AI allowing faster development, the code generated is often susceptible to exploitable vulnerabilities, bugs, and security risks. In addition, organisations' understanding of the governance of code and logic they create has dropped dramatically. This has become a pressing issue, with the European Union and United States introducing new compliance requirements to address AI. We are solving this challenge by leveraging AI within our ASPM platform to rapidly find, fix, and prevent vulnerabilities."