It’s safe to say that AI coding assistants are no longer considered an “emerging” technology.
In our recent survey of 250+ AppSec stakeholders, we found that 87% of organizations have adopted tools like GitHub Copilot, Cursor, or Claude Code to some extent, and over a third are already at widespread or full adoption.
At StackHawk, we are also all-in on AI coding assistants, but we know (from personal experience and from our customers) that the productivity boost comes with tradeoffs—as most things do.
Our survey found that over half of respondents (53%) view AI coding assistants as a moderate or significant security risk. And when asked about their biggest challenges for 2026, “keeping up with rapid development velocity and AI-generated code” was the number one “significant challenge” cited.
Here’s the thing: the conventional narrative about why AI development creates security risk is misguided.
The Constant Narrative (And Why It Misses the Point)
For years, influencers, vendors, and practitioners have gone head-to-head on this question: is the code that AI coding assistants produce inherently more vulnerable?
Ambulance chasers and doomsdayers say yes: AI coding assistants generate vulnerable code, developers accept it without reviewing properly, and vulnerabilities ship to production. So don’t ditch your tools and add more checks for code that came through those IDEs.
We don’t disagree, but this is only a fraction of the story.
AI assistants trained on millions of repositories have actually internalized common patterns, including secure ones. For routine implementations—input validation, standard authentication flows, common API patterns—AI-generated code is often more consistent than what a junior developer might write from scratch.
The “AI writes insecure code” narrative misses that human-written code was never a security gold standard. It’s also trained on all the open source scanning tools out there. We can expect to see AI coding assistants get much, much, much better at vulnerability detection (or prevention from the get-go) as time goes on.
But what’s more interesting than the quality of AI-generated code is the impact it has on AppSec programs as a whole: how the heck do we keep up with velocity using existing tools and processes?
The Risk Dimensions That Actually Matter
Our survey data points to a set of compounding challenges that go far beyond “AI writes bad code.” Here’s what’s actually breaking.
1. Developers Have Less Context
When writing code line by line, developers have an innate intuition about how it works, what it impacts, and what the business impact may be. They understand the authorization rules and data flows because they had to think through the logic and implement it.
That changes when developers shift to reviewing AI-generated code. They’re asking a different question: “Does this work?” Not “Is this secure?” Not “What are the authorization implications?” Not “How does this interact with our authentication system?”
AI-assisted development means less time spent in the codebase. Developers understand features at a functional level but may not trace the security implications. That knowledge gap compounds. Six months later, nobody quite remembers why a particular API endpoint exists or what data it can access. This isn’t a training problem you can solve with secure coding guidelines. It’s a structural shift in how software gets built.
2. Manual Processes (And Existing Tools!) Can’t Keep Pace
When development velocity increases 5-10x, everything downstream breaks. Security reviews, architecture approvals, asset documentation, attack surface tracking, any process that relies on humans keeping pace with development is now permanently behind.
But it’s not just manual processes. Your tools have a glaringly critical math problem.
When code volume increases 5-10x, so do findings from code security tools. The same AppSec team that was already drowning in alerts now faces an impossible backlog. Our survey found that half of AppSec teams spend more than 40% of their time just triaging and prioritizing findings—and 71% cited alert fatigue as a moderate-to-critical challenge.
And legacy DAST? It was never built for modern development velocity. Weeks of manual configuration, specialized security engineers, testing production environments after code already shipped—too slow, too late, too operationally expensive. If legacy DAST couldn’t keep up with human-paced development, it definitely can’t keep up now.
3. New Attack Surfaces Are Emerging Faster Than Ever
Meanwhile, only 30% of AppSec stakeholders are “very confident” they know 90%+ of their attack surface. The rest are working from incomplete inventories that grow more incomplete with every AI-assisted sprint.
But AI’s not only accelerating how code gets written, it’s changing what gets built. Our survey found 77% of organizations are now building LLM/AI components directly into applications: chatbots, RAG systems, AI-powered features.
These introduce entirely new vulnerability classes—prompt injection, context poisoning, guardrail bypasses—that traditional AppSec tools weren’t designed to detect. SAST can’t find prompt injection vulnerabilities. It can’t validate whether your RAG system properly segregates customer data.
“Understanding and securing new AI/LLM attack surfaces” was the second most-cited significant challenge for 2026. Organizations are building faster, with less context, into an attack surface that’s expanding in dimensions their tooling doesn’t cover.
What Actually Needs to Change
The old AppSec model assumed developers had intimate knowledge of their code. Security tools focused on helping them find issues in code they understood. Review processes assumed developers could trace security implications because they’d built the underlying systems. Manual discovery methods could keep pace because development moved at a human speed.
That model breaks when code ships faster than humans can document, developers accept implementations they didn’t author, and attack surfaces expand in dimensions existing tools don’t cover.
The organizations getting this right aren’t trying to slow down AI adoption. That ship has sailed. They’re building security programs that match the new reality:
- Visibility first. You can’t secure what you don’t know exists. When developers ship faster than documentation can track, you need automated attack surface discovery from source code, not quarterly surveys or manual spreadsheets.
- Runtime validation. When developers have less context about the code they’re shipping, you need testing that validates how applications actually behave, not just how code looks statically. Runtime testing catches the authorization bypasses, business logic flaws, and API security gaps that cause actual breaches.
- Intelligence over volume. The answer to 5x more code isn’t 5x more findings to triage. It’s smarter prioritization that connects vulnerabilities to business risk, so finite AppSec resources focus on what actually matters.
Do you need to replace your existing tools to accomplish this? Maybe, maybe not. But you absolutely must be thinking about how to build the intelligence layer that makes them effective when development has fundamentally changed.
The Path Forward
Our full research—including detailed survey findings and a practical playbook for building intelligence-first AppSec programs—is available in The 2026 AppSec Leader’s Guide to Survival in the AI Era.
It covers what’s changed, why traditional approaches are breaking, and what modern AppSec programs need to look like in practice. No product pitches—just the guidance you need to build programs that actually scale with AI development.

