The 2026 State of AI-Era AppSec: Key Findings from Our Survey

Payton O'Neal   |   Jan 22, 2026


AI-driven development has moved from emerging trend to operational reality in record time. But how are AppSec teams actually adapting? What’s working, what’s breaking, and where are organizations investing?

To find out, we surveyed 250+ AppSec stakeholders in December 2025 through an independent third-party provider. Respondents ranged from individual contributors to C-level executives, with 71% serving as decision-makers for application security. The majority support mid-to-large development organizations, with 84% of AppSec teams consisting of 4 or more people, and they span industries from technology and financial services to healthcare and manufacturing.

The findings paint a picture of an industry at an inflection point: AI adoption is nearly universal, AppSec tools are abundant, but the foundational functions needed to excel in this new environment—visibility, prioritization, and risk-based measurement—remain elusive for most organizations.

Based on those survey results, we put together a state of AppSec playbook: The 2026 AppSec Leader’s Guide to Survival in the AI Era. 

Keep reading for a rundown of the survey results, or download the full report.

AI Development Is the New Normal

AI adoption has crossed the tipping point. 87% of organizations surveyed have adopted AI coding assistants such as GitHub Copilot, Cursor, or Claude Code to some extent. More than a third (35%) reported widespread or full adoption, meaning AI-assisted development is already embedded into standard workflows, not confined to pilot programs or select teams.

The “should we adopt AI coding assistants?” debate is over. The question now is how to secure development environments where AI is a default participant.

Balancing velocity and security is the #1 challenge. When respondents were asked about their biggest challenges for 2026, “keeping up with rapid development velocity and AI-generated code” was the most frequently cited significant challenge. With their existing headcount and tooling, AppSec teams are struggling to keep pace with how fast new code, new applications, and new attack surface are created.

The perception of risk is mixed. The AppSec market has debated whether AI-generated code is inherently less secure. Our responses make it clear that the industry hasn’t aligned on an answer. About half of respondents (53%) view AI coding assistants as a moderate or significant security risk, while the other half sees AI assistants as neutral, low risk, or even a security benefit.

This split suggests the industry hasn’t reached consensus on how to think about the risk AI-assisted development poses to organizations as a whole. But the risk isn’t simply that AI writes vulnerable code. It’s that the shift from writing code to reviewing code fundamentally changes what developers know about their applications. This context gap compounds over time: applications grow more complex while the humans responsible for them understand less about how they actually work.

Testing Tools Are Abundant, Intelligence Is Scarce

AppSec tool adoption isn’t the challenge. 94% of organizations use at least one application security testing tool, with the majority using two or more categories. The most common: Software Composition Analysis (56%), API Protection (51%), and Dynamic Application Security Testing (48%).

But despite the tooling (plus penetration testing, which 84% of respondents run regularly), risks still make their way to production.

And there are more risks than ever. The survey asked which risks teams focus on through automated testing versus manual efforts like pen testing and bug bounties. The familiar suspects are there, but so are some new ones that didn’t exist two years ago: AI/LLM-specific risks like prompt injection and data leakage are already on the radar for 35% of teams.

The results also reveal interesting methodology trends. Authorization and access control issues top both lists—61% through automated testing and 57% through manual. This makes sense: broken access controls consistently rank among the most exploited vulnerabilities, and teams are throwing both approaches at the problem.

But the divergence is telling. API-specific vulnerabilities see significantly more automated coverage (53%) than manual (42%). Business logic flaws are the only category where manual testing outpaces automated.

Triage consumes half of AppSec’s time. Despite this tool investment, 50% of respondents report their teams spend 40% or more of their time triaging and prioritizing findings—determining what’s real and what matters before any actual remediation work begins.

This is a math problem that doesn’t scale. When AI development increases code volume 5-10x but AppSec headcount stays flat, the triage burden becomes unsustainable. Alert fatigue was cited as a moderate to critical challenge by 71% of respondents.
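
To make that math concrete, here’s a back-of-the-envelope sketch. Every number in it (team size, findings per week, minutes per triage decision, and the 7x multiplier) is an illustrative assumption, not a survey result:

```python
# Back-of-the-envelope illustration; all numbers below are assumptions, not survey data.

def triage_hours_per_analyst(findings_per_week, minutes_per_finding, analysts):
    """Weekly triage hours each analyst absorbs."""
    return findings_per_week * minutes_per_finding / 60 / analysts

# Baseline team: 5 analysts, 200 findings/week, ~10 minutes to triage each.
baseline = triage_hours_per_analyst(findings_per_week=200,
                                    minutes_per_finding=10,
                                    analysts=5)

# Same team, same tooling, but code volume (and findings) up ~7x.
with_ai = triage_hours_per_analyst(findings_per_week=200 * 7,
                                   minutes_per_finding=10,
                                   analysts=5)

print(f"Baseline:           {baseline:.1f} hours/week per analyst")  # ~6.7
print(f"With ~7x code flow: {with_ai:.1f} hours/week per analyst")   # ~46.7
```

Even under these generous assumptions, per-analyst triage load scales linearly with finding volume: a team comfortable at under a day of triage per week ends up owing more than a full workweek of triage alone.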

The Accountability Gap Is Growing

Boards are asking harder questions. 73% of respondents report their board or executive leadership has asked about application attack surface or risk posture in the past 12 months. Nearly a quarter (24%) face these questions frequently, with detailed inquiries about security practices and tooling. 

But teams are reporting activity, not risk. The most commonly reported metrics tell a different story.

The top metrics—scans performed and vulnerabilities found—measure activity. The metrics that would actually answer board questions about risk posture and attack surface coverage sit lower on the list.

There is a clear gap between what AppSec teams are measuring and what boards are asking. Boards want to know: “What’s our risk posture? How is it trending? Are our security investments working?” AppSec teams answer: “We fixed 500 vulnerabilities and ran 10,000 scans.” These aren’t the same conversation. The gap stems from not having the underlying intelligence infrastructure to connect security activity to business risk.

Visibility remains a challenge. Only 30% of respondents are “very confident” that they have visibility into 90% or more of their application attack surface.

When asked how they discover APIs and application components, 37% use manual spreadsheets and quarterly surveys, and 42% rely on external attack surface management or production monitoring tools.

This fragmentation suggests most organizations don’t have a single, reliable, continuous method for understanding what they’re protecting. At the same time, more and more organizations are required to report test coverage metrics to executives (41% already do). But if you’re only measuring coverage against an incomplete inventory, those numbers are misleading. You can achieve “90% test coverage” while leaving significant portions of your actual attack surface completely untested—because you didn’t know those applications existed.
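
Here’s a minimal sketch of that coverage math. The inventory counts are hypothetical, invented only to show how the reported number diverges from reality once unknown applications are included:

```python
# Hypothetical inventory numbers, chosen only to illustrate the gap.

known_apps   = 100  # applications in the tracked inventory (assumed)
tested_apps  = 90   # of those, apps with automated test coverage (assumed)
unknown_apps = 40   # shadow apps/APIs missing from the inventory (assumed)

reported_coverage = tested_apps / known_apps                   # what gets reported
actual_coverage   = tested_apps / (known_apps + unknown_apps)  # reality

print(f"Reported coverage: {reported_coverage:.0%}")  # 90%
print(f"Actual coverage:   {actual_coverage:.0%}")    # ~64%
```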

Where Teams Are Investing in 2026

AI/LLM security is a top priority. 77% of organizations are building LLM or AI components into applications: chatbots, RAG systems, AI-powered features. And many are already running multiple applications with AI features in production or treat AI integration as core to their business model. To address this expanding attack surface, 82% of organizations have a specific strategy for securing LLM/AI applications: 41% are using dedicated LLM security testing tools, 27% have comprehensive AI security programs with dedicated resources, and 14% are taking a red team approach.

Investment is increasing across the board. Organizations are investing in breadth (more coverage), depth (AI-specific security), and maturity (better metrics and training). When asked about 2026 investment priorities, respondents reported moderate or major increases for initiatives spanning all three areas.

The biggest challenges ahead. Respondents also rated the significance of various AppSec challenges for 2026, and the issues rated as moderate to critical by the highest percentage of respondents share a common thread.

The pattern: speed, complexity, and visibility. Organizations are trying to move faster, with more tools, against a larger and less-understood attack surface.

What This Means for AppSec Leaders

The survey data points to an industry at a crossroads. The old playbook (comprehensive static analysis, manual asset tracking, activity-based metrics) was designed for a world where humans wrote code at human speed. That world is gone.

Three shifts define the path forward:

  • From testing-first to visibility-first. You can’t secure what you don’t know exists. When only 30% of organizations are confident in their attack surface visibility, and AI development creates new applications faster than manual processes can track, automated discovery is no longer optional.
  • From static testing to runtime testing. When half of AppSec time goes to triage, something has to change. Clear insight into what’s real, what’s exploitable, and what poses actual business risk is crucial, and runtime testing is emerging as the leading way to get it. Plus, static tools miss the new LLM risks and business logic flaws that pose the greatest danger.
  • From activity metrics to risk metrics. Boards are asking about risk posture and ROI—not scan activity. Closing that gap requires connecting security findings to business context—mapping vulnerabilities to application criticality, exposure, and data sensitivity, then tracking risk reduction over time.
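
As a rough sketch of what that last shift could look like in practice, here’s a context-weighted scoring example in Python. The fields, weights, and sample findings are all illustrative assumptions, not a prescribed model:

```python
# Illustrative only: fields, weights, and sample findings are assumptions, not a prescribed scoring model.

from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    severity: float          # 0-10, e.g. a CVSS base score
    app_criticality: float   # 0-1, how business-critical the application is
    exposure: float          # 0-1, internet-facing vs. internal-only
    data_sensitivity: float  # 0-1, whether the app handles PII or financial data

def risk_score(f: Finding) -> float:
    """Weight raw severity by business context so triage order reflects risk."""
    context = 0.4 * f.app_criticality + 0.35 * f.exposure + 0.25 * f.data_sensitivity
    return f.severity * context

findings = [
    Finding("critical CVE in an internal tool", 9.1, 0.2, 0.1, 0.3),
    Finding("medium flaw in a customer-facing app", 6.5, 0.9, 1.0, 0.9),
]

# The medium flaw on the customer-facing app outranks the internal critical once context counts.
for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{risk_score(f):4.1f}  {f.name}")
```

Summing context-weighted scores per application and watching the trend over time gives you a risk-reduction number that speaks to the board questions above far more directly than scan counts do.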

Get the Full Playbook

This survey reveals the state of AppSec in the AI era. But knowing the challenges is only half the battle.

The AppSec Leader’s Guide to Survival in the AI Era provides a practical framework for building intelligence-first AppSec programs—covering visibility, runtime testing, prioritization, and measurement. It’s the playbook for adapting your program to the realities this survey reveals.
