What Is the AI-DLC, and How Is It Changing AppSec?
The software development lifecycle has been reinvented before. Waterfall gave way to Agile. Agile evolved into DevOps. Each shift changed how teams plan, build, and ship software — and each time, the security model had to catch up.
We’re in the middle of what may be the biggest shift yet. AI is helping developers write code faster and is fundamentally changing every stage of the lifecycle — from planning and requirements, to code generation, to testing and deployment. AWS formalized this shift as the AI-Driven Development Lifecycle (AI-DLC) in mid-2025, and since then, Microsoft, CircleCI, and others have published their own takes on what the AI-native SDLC looks like.
Most of that conversation has focused on velocity: shorter sprints, faster code generation, more autonomous agents. Less of it has focused on what this acceleration means for security teams trying to keep pace.
What is the AI-DLC?
The AI-DLC isn’t a single framework — at least not yet. Today, it’s more of a set of emerging ideas about what happens when AI moves from tool to teammate, from helping to acting, across the development lifecycle.
The core thesis: traditional SDLC processes were designed for human-driven, sequential work. Planning took days. Development took weeks. Testing happened at defined checkpoints. Even when work happened continuously and in parallel, the dependencies were still more or less linear.
AI compresses and subverts those gates. Code that used to take a sprint gets generated in hours. Requirements get drafted by LLMs. Test cases get auto-generated. The boundaries between phases start to blur.
Different Definitions of AI-DLC
- AWS’s version defines three phases—Inception, Construction, and Operations—where AI leads the work and humans validate it.
- CircleCI frames it as the SDLC becoming a feedback loop rather than a linear flow, with AI participating in every stage simultaneously.
- Microsoft is betting on spec-driven development, where agents execute against structured specifications with human oversight at checkpoints.
The specifics vary, but the direction is consistent: AI is moving from assistant to active collaborator, and the lifecycle is getting faster, less linear, and harder to secure with the tools most teams have today.
We’ve been learning from and sharing with our customers. What follows are early observations from our own experience and from the security and development teams we work with every day.
What’s Changing, Phase by Phase
Planning and Requirements
AI tools now draft requirements documents, user stories, and technical specifications from a business prompt. What used to take a product owner and an architect a week of meetings can happen in an afternoon.
- People: Product managers and architects are shifting from authors to editors. Instead of writing specs from scratch, they’re reviewing and refining AI-generated drafts. The skill set is changing — the ability to evaluate and pressure-test AI output is becoming as important as the ability to create it.
- Process: Planning cycles are compressing. Some teams are moving from sprint planning to what AWS calls “bolts” — shorter work cycles measured in hours or days. The handoff from planning to development is getting shorter, and in some cases disappearing entirely as AI-generated specs feed directly into code generation.
- Technology: LLM-powered planning tools, AI spec generators, automated story creation, and tools like GitHub’s Spec Kit that make specifications executable artifacts rather than static documents. Sometimes it’s simply a general-purpose AI chatbot doing this work well.
Code Generation
This is where most of the AI-DLC conversation lives today. Developers using Copilot, Cursor, or Claude Code generate code significantly faster than they did by hand. The productivity gains can’t be ignored. Full REST APIs, database integrations, and auth implementations are all generated from prompts.
- People: The developer’s role is shifting from writing code to directing it and reviewing it. Junior developers can produce more code faster, but the review burden on senior engineers is increasing. Understanding what AI-generated code actually does — not just that it compiles and passes tests — is a new and under-appreciated skill gap.
- Process: Traditional code review workflows weren’t designed for the volume of AI-generated code. Pull requests are getting bigger and more frequent. Some teams are leaning on AI-assisted code review to keep up, which creates an interesting loop: AI writing code that AI reviews.
- Technology: AI coding assistants, agentic coding tools that can execute multi-step tasks, automated scaffolding, and increasingly autonomous agents that can build and iterate on entire features from a prompt.
Testing
With AI coding assistants, the line between code generation and testing is blurring. AI is now natively generating test cases, running regression suites, and doing automated code review while code is being written. With the recent launch of Claude Code Security, we can see the paradigm shifting from “write code and detect vulns” to “write secure code and validate it.”
- People: QA roles are evolving from writing test cases to defining test strategies and reviewing AI-generated coverage. The question is shifting from “did we write enough tests” to “are we testing the right things.”
- Process: Testing is becoming more continuous and less phase-gated. AI can generate and run tests as code is written, rather than waiting for a dedicated testing phase. The challenge is that speed of test generation doesn’t automatically mean quality or completeness of test coverage.
- Technology: AI-generated unit and integration tests, automated QA platforms, AI-assisted code review tools, and SAST tools that scan during development. Functional testing is getting a lot of AI attention. Security testing, less so.
Deployment and Operations
CI/CD pipelines are getting smarter. AI can optimize deployment paths, predict failures, and automate rollbacks. Some teams are moving toward continuous deployment where code ships multiple times a day with minimal human intervention.
- People: DevOps and platform engineering teams are becoming orchestrators of AI-driven pipelines rather than manual operators. The human role is increasingly about setting guardrails and handling exceptions rather than managing routine deployments.
- Process: Release cadences are accelerating from weekly or biweekly to multiple times per day. The window between “code written” and “code in production” is shrinking to hours or minutes. Any process that depends on a human checkpoint between development and production is either getting automated or getting skipped.
- Technology: AI-optimized CI/CD, automated deployment orchestration, predictive monitoring, self-healing infrastructure, and increasingly autonomous pipeline management.
API and Attack Surface Expansion
This isn’t a traditional SDLC phase, but it’s one of the most significant changes in the AI-DLC, and skeptics of AI-assisted development have pointed it out from the start. AI-assisted development generates new APIs, endpoints, and integrations faster than any manual process can track, and with them comes more tech debt and exposure than humans can ever keep up with manually.
- People: Nobody owns the complete picture of the application attack surface. Developers are creating APIs faster than architecture teams can review them. When you add LLM integrations — calls to external AI services, vector database connections, RAG pipelines — the people who understand the full scope of what’s deployed and exposed are increasingly rare.
- Process: API documentation, inventory management, and architecture review processes were built for a cadence where new endpoints were created weekly, not hourly. Most teams are still discovering APIs after deployment through production scanning or manual inventory.
- Technology: Auto-generated APIs, LLM integrations (LangChain, vector DBs, MCP servers), microservice proliferation, and a growing ecosystem of AI agent-to-agent communication that creates machine-to-machine attack surface most teams aren’t tracking yet.
The New Burden on AppSec
Every shift in the development lifecycle has created new pressure on security teams. The AI-DLC is no different, except the pressure is compounding faster than we’ve ever seen.
More code means more findings, but SAST tools don’t distinguish between theoretical and exploitable ones. When code velocity doubles, your triage backlog doubles with it. Developers accepting AI-generated code understand it less deeply than code they wrote by hand, so they need more guidance to evaluate and fix findings — which means more work for security teams that didn’t get additional headcount.
Meanwhile, attack surfaces are expanding faster than they’re being documented. New APIs, endpoints, and LLM integrations ship before the security team knows they exist. And the tools most teams rely on — scheduled production scans, legacy DAST that runs on a weekly cadence — were built for a release cycle that no longer exists. When code ships multiple times a day, security testing that doesn’t run in the pipeline doesn’t run at all.
And as with any emerging technology, organizations adapt unevenly. For some, the technology lags while solid processes are already in place; others go all-in on solving technology problems but don’t adapt their culture, hiring, or standard operating procedures (SOPs).
How AppSec Needs to Adapt
The AI-DLC doesn’t require a new security model from scratch. But it does require three shifts.
Discovery has to start at the source. If your first view of a new API is when it shows up in a production scan, you’re behind. Discovery needs to happen from source code so you’re mapping APIs, endpoints, and LLM components as they’re committed, not after they’re deployed.
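As a rough illustration of source-level discovery, the sketch below scans a repository for common Python web-framework route decorators. The regex and decorator conventions are assumptions for the example; a real tool would parse the AST and cover many more frameworks and languages.

```python
import re
from pathlib import Path

# Matches common Python web-framework route decorators, e.g.
# @app.get("/users") or @app.route("/orders", methods=["POST"]).
# A simplification for illustration; real discovery parses the AST.
ROUTE_PATTERN = re.compile(
    r'@\w+\.(?:route|get|post|put|patch|delete)\(\s*["\']([^"\']+)["\']'
)

def inventory_endpoints(repo_root: str) -> dict:
    """Map each source file under repo_root to the routes it declares."""
    found = {}
    for path in Path(repo_root).rglob("*.py"):
        routes = ROUTE_PATTERN.findall(path.read_text(errors="ignore"))
        if routes:
            found[str(path)] = routes
    return found
```

Run on every commit, even a sketch like this surfaces new endpoints at the moment they’re written, rather than weeks later in a production scan.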
Testing has to move into the pipeline — and test at runtime. Static analysis catches code-level patterns, but it can’t tell you whether a running application enforces authorization correctly or is vulnerable to prompt injection through an LLM integration. Runtime testing in CI/CD fills that gap — including testing aligned to the OWASP LLM Top 10 for AI-specific risks like sensitive data disclosure and improper output handling.
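A minimal sketch of one such runtime check, probing for broken object level authorization (the kind of flaw static analysis can’t see). The `fetch` callable, endpoint, and token are hypothetical stand-ins for however your pipeline reaches a staging deployment.

```python
from typing import Callable

def check_bola(fetch: Callable[[str, str], int],
               resource_path: str, other_users_token: str) -> bool:
    """Probe for broken object level authorization (BOLA).

    `fetch(path, token)` performs an authenticated GET against a
    running deployment and returns the HTTP status code. Requesting
    a resource owned by a different user should be denied; any
    success status means data is served across user boundaries.
    """
    status = fetch(resource_path, other_users_token)
    return status in (401, 403, 404)
```

In CI, `fetch` would wrap an HTTP client pointed at the staging URL, and a `False` return value would gate the deploy.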
Program visibility has to be continuous. AppSec leaders need to see which applications are being tested, how frequently, what’s covered, and where risk is trending. Not scan counts and ticket volumes — actual program intelligence that answers “is our security keeping pace with development?”
The AI-DLC is still taking shape. But the security implications are already here: more code, more APIs, more attack surface, and less time to deal with all of it. The development lifecycle is forever changed. We need to work together as an industry to make sure security not only adapts but stays ahead of that change.