Navigating the Challenges of GenAI Adoption for Modern AppSec Teams
Generative AI (GenAI) is everywhere, with virtually all businesses looking for ways to implement and integrate it into daily workflows. Its ability to increase productivity and allow developers to work faster has led to many excited early adopters. In fact, recent research sponsored by Snyk and conducted by Enterprise Strategy Group (ESG) indicates that 92% of organizations say their developers are using GenAI tools to generate code for their software.
Despite its benefits, GenAI can introduce serious security challenges. One of the biggest issues is the common belief that AI-generated code is “secure enough” to deploy when, in reality, it is inherently insecure.
Additionally, the speed of AI-assisted development makes it difficult for security teams to keep pace. The resulting security gaps and unknown vulnerabilities further expose an organization to breaches and other cybersecurity threats.
The following data from our report with ESG highlights the challenges AppSec teams face with GenAI adoption and why they need a modern security solution and approach that supports development at scale.
The rise of GenAI in developer workflows
Development teams can speed up their workflows with GenAI, allowing them to focus on more high-value tasks. However, this shift introduces complex security risks that legacy tools and processes weren’t built to handle. Traditional security solutions and practices can’t keep up with the rise in GenAI vulnerabilities and adversarial AI — or developers' overreliance on insecure AI-generated code.
Despite these risks, however, the vast majority of organizations are choosing to adopt GenAI to remain competitive: 92% of organizations indicate that their developers use GenAI tools to generate application code for their software.

The GenAI footprint within these organizations is also likely to keep growing, with survey respondents indicating that the share of developers using GenAI tools will rise to 52% over the next 12 to 24 months.

With almost half of developers expected to use GenAI tools within the next two years, security must scale and evolve alongside development. Legacy tools and solutions can no longer support the speed and scale at which development teams move, resulting in inefficiencies and security gaps. Ultimately, organizations are left either limiting their use of AI or remaining exposed to serious threats.
Why organizations need guardrails for GenAI
As development teams embrace AI to move faster, security teams must evolve just as quickly. Although 71% of organizations said AI had increased developer productivity, more than half said it also increased the number of vulnerabilities. This isn’t surprising: developers often believe that AI-generated code is more secure than human-written code and scrutinize it less as a result, making it easier for vulnerabilities to slip into codebases, applications, and projects without the proper guardrails.

Developers also face pressure to work faster. Sixty percent of organizations said developers were under pressure from upper management to increase their productivity and get to market sooner. This pressure often leads developers to bypass security measures and quality checks to save time, leaving organizations vulnerable to more serious threats.
However, while organizations are aware of increased vulnerabilities and other risks associated with GenAI adoption, many aren’t strengthening their security posture in response. Instead, 67% of organizations say they have increased their risk tolerance in the software development process due to the adoption of GenAI — a shift that could further normalize insecure coding practices and leave organizations more exposed. To counterbalance this growing risk, organizations need security guardrails that can integrate seamlessly into AI-assisted development workflows.
How Snyk modernizes security
A key part of addressing the challenges of GenAI adoption is implementing security guardrails and solutions designed for modern AppSec teams. Without tools that match developer speed and make security an easy choice, organizations will be at an increased risk for vulnerabilities and threats.
By implementing Snyk, a leader in AI security, AppSec teams can continue to adopt GenAI while ensuring security needs are met. With Snyk’s DeepCode AI, developers can secure AI-generated code in seconds. Trained on hundreds of thousands of open source repositories, not customers’ code, DeepCode AI provides top-of-the-line application security.
Snyk secures developers’ code as they work, without requiring them to switch contexts or tools. As it finds issues, it generates fixes that developers can apply with a single click, helping them build fast and stay secure.
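To make this concrete, here is a minimal sketch of what such a guardrail can look like in practice: a small pre-commit-style gate that shells out to the Snyk CLI’s snyk code test command and blocks the commit unless the scan comes back clean. It assumes the Snyk CLI is installed and authenticated; the wrapper script itself is a hypothetical illustration, not an official Snyk integration.

#!/usr/bin/env python3
"""Hypothetical pre-commit guardrail: block commits until a Snyk Code scan passes.

Assumes the Snyk CLI is installed (for example via `npm install -g snyk`) and
already authenticated (`snyk auth` or a SNYK_TOKEN environment variable).
"""
import subprocess
import sys


def main() -> int:
    # Run a static (SAST) scan of the current project with Snyk Code.
    # A non-zero exit status means the scan found issues or failed to run.
    result = subprocess.run(["snyk", "code", "test"], capture_output=True, text=True)

    if result.returncode == 0:
        print("Snyk Code scan passed; nothing blocking this commit.")
        return 0

    # Surface the CLI's own report so developers can fix findings in place.
    print(result.stdout)
    print(result.stderr, file=sys.stderr)
    print("Commit blocked: resolve the reported issues (or the scan error) and retry.")
    return 1


if __name__ == "__main__":
    sys.exit(main())

In practice, most teams get the same gate through Snyk’s IDE plugins and CI integrations rather than a hand-rolled hook like this; the point is simply that the scan runs where developers already work, so AI-generated code is checked before it ships.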
To learn more about securing GenAI code and how Snyk can help modernize your AppSec team, book a demo today.
Secure AI with Snyk
Discover how Snyk helps you secure your development teams’ AI-generated code while giving your security teams full visibility and control.