
A Developer’s Guide to Writing Secure Code with Claude Code

Matt Tanner   |   Jun 23, 2025


The rise of AI-powered development tools has changed how we write code, but it’s also introduced new security challenges. While tools like Claude Code can speed up development, they can also introduce vulnerabilities that are easily missed. In this guide, we’ll show you how to combine the power of Claude Code’s AI-powered terminal agent with StackHawk’s dynamic application security testing (DAST) to create a secure development workflow, whether you’re using AI to augment your day-to-day coding or fully automating your latest app from the command line.

What is Claude Code and How Does It Generate Code?

Claude Code is an agentic AI coding tool that lives directly in your terminal, understanding your entire codebase and helping you code faster through natural language commands. Developed by Anthropic, Claude Code has quickly gained adoption among engineers at major tech companies and represents a fundamental shift in how developers interact with AI assistance. You’d be hard-pressed to find a developer who isn’t experimenting with Claude Code or an equivalent terminal-based agent somewhere in their workflow. These platforms are being adopted rapidly, and their impact on codebases across the globe is already massive.

How Claude Code Works

Claude Code leverages Anthropic’s most advanced models, including Claude Opus 4 and Claude Sonnet 4, to provide intelligent coding assistance through several key features:

Agentic Task Execution: Claude Code can complete complex, multi-step tasks end-to-end, from reading GitHub issues to writing comprehensive implementations, running tests, and submitting pull requests—all through natural language commands.

Deep Codebase Understanding: The AI maps and explains entire codebases in seconds, using agentic search to understand project structure and dependencies without requiring manual context selection.

Terminal-Native Integration: By living directly in your terminal, Claude Code integrates seamlessly with your existing command-line tools, git workflows, and CI/CD pipelines without the compatibility issues that plague IDE extensions.

Multi-File Coordination: Claude Code’s understanding of your codebase and dependencies enables it to make powerful, coordinated edits across multiple files that actually work together.

Autonomous Workflow Management: Claude Code’s most advanced capability allows it to handle entire development workflows autonomously, executing commands and managing complex tasks while keeping developers informed of progress.

The Double-Edged Nature of AI Agent Development

While Claude Code’s capabilities are impressive, the speed and convenience of AI-generated code come with inherent security risks that developers must be aware of. Because of how these platforms work, several factors can quickly introduce security problems deep within the code:

Optimization for Functionality Over Security: AI models are trained to generate working code that solves the problem. When developers ask for “quick” or “minimal” solutions, security considerations often take a backseat to functionality.

Pattern Replication Without Context: AI learns from vast datasets of existing code, including code that may contain security vulnerabilities. Without specific security guidance, AI can replicate insecure coding patterns found in its training data.

Lack of Security Awareness: Unlike human developers who can apply security knowledge contextually, AI lacks a true understanding of security implications. It generates code based on patterns rather than security principles.

Implicit Trust in Generated Code: The sophistication of AI-generated code can create a false sense of security, leading developers to trust the output without a proper security review.

System-Level Access Risks: Unlike browser-based AI tools, Claude Code operates with direct terminal access, so it can potentially modify system configurations and environment variables, and execute commands that could have security implications.

Of course, how much of an issue security becomes also depends heavily on the prompts given to the platform. Explicitly prompting with security in mind, for instance, at least gets you closer to a secure application. That said, prompting this way takes expertise and knowledge, which is sometimes at odds with the “anyone can code” mantra these platforms are pushing.

Why AI-Generated Code Creates New Security Challenges

Even when prompted correctly, with security in mind, AI coding tools show concerning patterns in how they handle security requirements. When security isn’t explicitly mentioned, or when developers prioritize speed over security, these tools consistently generate vulnerable code.

Common Security Issues in AI-Generated Code

Often, the generated code is closer to a proof-of-concept application, especially when prompting is minimal. This means that issues developers would normally expect in early-stage applications tend to appear in these apps. Those unaware of this may launch applications into the wild that are extremely vulnerable to both basic and sophisticated attacks. These common issues include:

Missing Input Validation: AI often generates endpoints and functions without proper input sanitization, leading to injection vulnerabilities.

Weak Authentication Implementation: When asked to create authentication systems without specific security requirements, AI may implement overly simplistic or flawed authentication mechanisms.

Inadequate Error Handling: AI-generated code frequently lacks proper error handling that prevents information disclosure to attackers.

Insecure Defaults: AI tends to use permissive configurations and defaults that prioritize ease of use over security.

Omitted Security Headers: Web applications generated by AI often lack essential security headers like Content-Security-Policy, X-Frame-Options, and others.

Command Injection Vulnerabilities: Claude Code’s terminal-native approach introduces unique risks where AI agents can be susceptible to prompt injection attacks that could lead to unintended command execution.

Again, a developer who knows these issues exist can prompt the agent to fix them, or include those requirements in the initial prompts. However, it’s hard to fix what you’re unaware of. These issues pop up in traditional, non-AI-generated applications too, but those projects usually get dedicated cycles of security review. Implicit trust in AI-generated code is common, but it shouldn’t be.
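To make these patterns concrete, here is a minimal sketch in hypothetical Express code (the createUser helper and both routes are invented for illustration, not taken from any real AI output), contrasting the kind of endpoint a minimally prompted request tends to produce with a version that closes the gaps above:

const express = require('express');
const helmet = require('helmet'); // middleware that sets common security headers

const app = express();
app.use(express.json());

// Stand-in for real persistence, just to keep the sketch runnable.
const createUser = async (data) => ({ id: 1, ...data });

// Typical just-make-it-work output: no validation, no security headers,
// and raw error details echoed back to the caller.
app.post('/users', async (req, res) => {
  try {
    const user = await createUser(req.body);                 // trusts the body as-is
    res.json(user);
  } catch (err) {
    res.status(500).send(err.stack);                         // leaks internals to attackers
  }
});

// A hardened variant of the same endpoint.
app.use(helmet());                                           // applies to routes registered below it
app.post('/v2/users', async (req, res) => {
  const { email, name } = req.body ?? {};
  if (typeof email !== 'string' || !/^[^@\s]+@[^@\s]+$/.test(email) ||
      typeof name !== 'string' || name.length === 0 || name.length > 100) {
    return res.status(400).json({ error: 'invalid input' }); // reject bad input early
  }
  try {
    const user = await createUser({ email, name });          // only whitelisted fields
    res.status(201).json({ id: user.id });
  } catch (err) {
    console.error(err);                                      // log details internally
    res.status(500).json({ error: 'internal error' });       // return nothing sensitive
  }
});

app.listen(3000);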

The “Security by Afterthought” Problem

One of the biggest challenges with AI-powered development is that security becomes an afterthought rather than being built into the foundation of the application. This happens because:

  • Developers focus on getting working code quickly
  • AI responds to immediate functional requirements rather than to holistic security needs
  • Security requirements are often implicit rather than explicit in development requests
  • The rapid pace and trust in AI-powered development can bypass traditional security review processes
  • Claude Code’s autonomous capabilities can compound security issues across multiple files and systems

It’s not that security being an afterthought is unique to AI-generated applications and code. The issue is the sheer volume and velocity at which AI-generated code can be deployed: months of work can be done in hours, potentially even by non-technical or extremely junior developers. Although this is great for companies that want to expedite development, releasing a business-critical system, especially one holding sensitive data that can be easily breached, is a real risk. This is where adding security testing into the puzzle makes a lot of sense.

Why DAST is Critical for AI-Generated Code

Dynamic Application Security Testing (DAST) has become essential for validating the security of AI-generated code. Unlike static analysis tools that examine code at rest, DAST tests running applications to identify vulnerabilities that only manifest during execution. This approach to testing brings some very powerful value props to those using AI to build applications.

The Unique Value of DAST for AI Code

Runtime Vulnerability Detection: DAST identifies security issues that emerge when AI-generated code interacts with real data, user inputs, and system resources – scenarios that static analysis cannot fully simulate.

Black-Box Testing Approach: DAST doesn’t need to understand the code structure or AI generation patterns. It evaluates security from an attacker’s perspective, testing how the application actually behaves under malicious conditions.

API-First Testing: Modern applications are API-driven, and AI tools excel at generating API endpoints. DAST tools like StackHawk are designed to test APIs comprehensively, to ensure AI-generated endpoints are secure.

Validation of Security Controls: DAST can verify if security measures suggested by AI (authentication, input validation, etc.) actually work as intended in the running application.

Terminal Workflow Integration: DAST tools can integrate directly into the same command-line workflows that Claude Code uses, providing seamless security validation without disrupting development velocity.

Why Traditional Security Testing Falls Short with AI Code

Although traditional testing methods should still be deployed in certain scenarios, layering a modern DAST solution into the security stack is necessary. Traditional testing methods tend to fall flat in the age of AI for a few reasons:

Speed of Development: AI accelerates development to the point where traditional security review cycles can’t keep pace. DAST automation helps bridge this gap by providing immediate security feedback.

Volume of Generated Code: AI can produce large amounts of code quickly, making manual security review impractical. Automated DAST scanning scales to match AI development speed.

Subtle Logic Flaws: AI-generated code may contain subtle security logic flaws that are hard to catch in code review but become apparent when testing the running application.

Integration Vulnerabilities: Security issues often emerge at the integration points between AI-generated components and existing systems – something DAST is good at identifying.

Autonomous Workflow Validation: Claude Code’s ability to execute multi-step workflows autonomously requires testing that can validate the security of these complex, coordinated operations.

With DAST being a critical component of the AI-coding stack, finding your way through the multitude of solutions can be tough. Of course, at StackHawk, we have built one of the most comprehensive DAST and API security platforms on the market.

Why StackHawk is the Best DAST for AI Development

StackHawk represents a new generation of DAST tooling, designed for modern development practices and AI-powered workflows:

Developer-Centric Design

Unlike traditional security tools built for security teams, StackHawk is designed for developers who are creating and deploying AI-generated code. It provides feedback in terms that developers understand and can act upon quickly.

Comprehensive API Testing

StackHawk tests REST, SOAP, GraphQL, and gRPC APIs – the types of services AI tools generate. This coverage ensures AI-generated endpoints are tested for vulnerabilities.

CI/CD Integration

StackHawk is designed to run in DevOps pipelines, so you can run security testing every time AI-generated code is committed. This keeps security testing in step with AI-accelerated development.
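As a rough illustration, a pipeline step might look like the GitHub Actions sketch below. Treat it as an assumption-laden example: the action version, its inputs, and the HAWK_API_KEY secret name are placeholders based on StackHawk’s published action, so check StackHawk’s CI/CD documentation for the current details.

# Sketch of a CI job that boots the app and runs a StackHawk scan on every pull request.
# Assumes stackhawk.yml sits at the repo root and the API key is stored as a repo secret.
name: hawkscan
on: [pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && (npm start &) && sleep 5        # start the API the scanner will test
      - uses: stackhawk/hawkscan-action@v2             # runs hawk scan using stackhawk.yml
        with:
          apiKey: ${{ secrets.HAWK_API_KEY }}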

Fast Feedback Loops

Traditional DAST tools were designed for periodic scanning of production applications. StackHawk provides feedback during development so you can catch and fix AI-generated vulnerabilities before they hit production.

Terminal-Native Compatibility

StackHawk’s CLI-first approach aligns perfectly with Claude Code’s terminal-native workflow, allowing security testing to be initiated through the same interface where development occurs.

Want to see exactly how StackHawk can be injected into your Claude Code workflow? Let’s look at a step-by-step tutorial on how it can be done in minutes!

Using Claude Code and StackHawk Together

To create a more secure application, we will combine insights from StackHawk with security-informed prompting. The workflow is straightforward: test the application with StackHawk, feed the results into Claude Code, let the Claude Code agent fix the security issues, and then retest with StackHawk to ensure that the fixes actually solve the underlying issues.

To do this, we will need:

  • Claude Code installed and configured in your terminal
  • A StackHawk account and active license

The steps below can be used with any type of application or API; however, in this example, we will use a simple Node.js application to show how the workflow works. You can even use this workflow from the outset of an AI-automated project to make sure the application stays secure as it evolves.
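As a stand-in for the demo application, assume something like the hypothetical Express API below: it exposes the GET /search endpoint that the scan will flag later, written the way minimally prompted AI output often writes it, with user input concatenated straight into SQL.

const express = require('express');
const { Pool } = require('pg');   // assumes a Postgres backend for the demo data

const app = express();
const pool = new Pool();          // connection settings come from the PG* environment variables

// Search endpoint: the `name` query parameter is interpolated directly into the
// SQL string, which is exactly the injection pattern StackHawk flags below.
app.get('/search', async (req, res) => {
  const { name } = req.query;
  const sql = `SELECT id, name FROM products WHERE name = '${name}'`;
  const { rows } = await pool.query(sql);
  res.json(rows);
});

app.listen(3000, () => console.log('Demo API listening on :3000'));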

First, let’s create a StackHawk app for our application. Once you’ve created an account and logged into StackHawk, your next step will be to create an app by either connecting your repository, such as GitHub or Bitbucket, or setting one up manually. For simplicity in this tutorial, we will forgo adding our repository (although it’s highly recommended so you get the full functionality of StackHawk). So, let’s click the link to manually set up an app.

Next, a modal will appear, walking you through each step. In the first screen, you’ll be prompted to download StackHawk. Since I’m running this locally, I will download the appropriate installer for my OS. However, if you want to run this on a different OS or in CI/CD, you can check the downloads page to get the right version. Once you have it downloaded and installed, click Initialize Scanner in the bottom right.

After this, we will initialize StackHawk by running hawk init in a terminal. Once successfully completed, click App Details in the bottom right corner of the modal.

After it has been initialized, we can finally put in the details for our app. Here we will set our Application Name, Environment, and the base URL of our app or API. Once you’ve filled in the details, click App Type in the bottom right.

Next, we select the type of application we want to test. Here, we can choose from a single-page app, static site, API, or other options.

In our case, we will select API, since the code we will test with Claude Code is a Node.js API. We will then need to select the type of API we will be testing. Since our Node.js project contains REST APIs, we will pick REST / OpenAPI.

Lastly, we will add the path to our OpenAPI spec, since this is the best way to get accurate testing with StackHawk when testing REST APIs. I can either supply a URL path if the OpenAPI spec is hosted and available at a particular URL, specify a file path within the project where the tests will run, or skip this configuration section. In this case, I will specify where the OpenAPI spec is within my project so that StackHawk can pick it up to assist with testing the API. Once you have the app config dialed in, click Create App in the bottom right corner.

Now, we need to download or copy the generated stackhawk.yml file and add it to our project.
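For reference, the generated file looks roughly like the sketch below. Treat it as illustrative only: the application ID and spec path are placeholders, and the exact field names should come from the file StackHawk generates for you (or its configuration reference).

# Illustrative stackhawk.yml for a local Node.js API (values are placeholders).
app:
  applicationId: xxXXxxXX-xxxx-xxxx-xxxx-xxxxxxxxxxxx   # generated by StackHawk for this app
  env: Development
  host: http://localhost:3000                            # base URL of the running API
  openApiConf:
    filePath: openapi.yaml                               # local path to the OpenAPI spec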

In my case, I’ve added it in the root directory of my app in VS Code. However, you can use the IDE of your choice if you’re not using VS Code.

Once you’ve added the stackhawk.yml file to your project, you can return to StackHawk and click Finish to exit the modal. From here, you’ll now see the app is created, and StackHawk is waiting for you to run your first scan.

Now, heading over to your project terminal where Claude Code is available, we can do the initial scan of the app. With your application already up and running, in the terminal, run:

hawk scan

This will run the first round of tests to get the base state of the application and detect any security vulnerabilities in the API.

Once the scan is completed, in the case of my application, I can see that a SQL injection vulnerability has been detected in the code.

If I scroll to the bottom of the console output, I’ll see a link to the complete scan report.

Clicking on this, I’ll be taken to the StackHawk app to explore things further in a more comprehensive report.

Clicking on the vulnerability, you can see more details about the cause and steps for remediation. Of course, the nice part is that with Claude Code, we don’t need to worry about fixing this manually; we can rely on the agent to make the correct fix. Optionally, you can take any of the remediation advice here and feed it into Claude Code; however, it’s generally pretty good at figuring it out on its own.

Back in the terminal with Claude Code available, I’m going to feed in the SQL injection finding as context and ask Claude Code to fix the issue. In this case, my prompt will look like this:

StackHawk has found a SQL injection vulnerability I would like you to fix. Here are the details on it:

1) SQL Injection

   Risk: High

   Cheatsheet: https://github.com/OWASP/CheatSheetSeries/blob/master/cheatsheets/SQL_Injection_Prevention_Cheat_Sheet.md

   Paths (1):

     [New] GET /search?name=name' AND '1'='1' -- 

Please analyze the code, identify the vulnerability, and implement a secure fix following OWASP best practices.

Then, let Claude Code get started on the fix.
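Your actual diff will vary, but for this class of finding the change usually looks something like the sketch below, continuing the hypothetical /search endpoint from earlier: the concatenated query becomes a parameterized one, and the input is validated before it ever reaches the database.

// Continuing the stand-in app from earlier (same app and pool); illustrative only.
app.get('/search', async (req, res) => {
  const { name } = req.query;
  if (typeof name !== 'string' || name.length === 0 || name.length > 100) {
    return res.status(400).json({ error: 'invalid name parameter' }); // basic input validation
  }
  // The $1 placeholder lets the driver handle escaping, so quotes in `name`
  // can no longer change the structure of the query.
  const { rows } = await pool.query(
    'SELECT id, name FROM products WHERE name = $1',
    [name]
  );
  res.json(rows);
});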

Although I won’t go into depth here, once Claude Code has performed the fix, you’ll want to run your tests to ensure that functionality hasn’t been affected. With functionality confirmed and tests still passing, let’s rescan the application to make sure the vulnerability is, in fact, fixed.

Back in StackHawk, on the Findings screen, click the Rescan Findings button at the top right.

This will bring up a modal that will show a command you can run in the terminal to rescan the application. Copy this CLI command.

Back in your terminal, paste in the command and press Enter to run it. This will kick off another scan of the application and help you identify whether the previous finding has been remediated.

In this particular instance, we can see that Claude Code’s changes have fixed the SQL injection vulnerability that StackHawk found earlier.

By clicking on the link at the bottom of the terminal output, like we did previously, we can then see the updated report. Just as we saw in the terminal output, StackHawk is now showing that the SQL injection vulnerability has been fixed and marked as such.

If you are still finding that the fix did not remediate the vulnerability, you can let Claude Code know, allow the agent to try and fix it again, and then rescan until it’s successfully remediated.

Of course, in this case, we only remediated a single vulnerability. If there are multiple that are of concern, you can feed in a larger prompt containing multiple vulnerabilities you’d like to remediate at once, if doing everything in one fell swoop is your thing. That being said, there can be a benefit to tackling vulnerabilities one at a time or clustering related ones together so that code changes are more targeted and the underlying agent is less likely to get confused.

At this point, you’ve seen how StackHawk can be used with AI agents to automate security fixes for your applications. Easy, right?

Conclusion

The combination of Claude Code’s AI-powered terminal capabilities with StackHawk’s modern DAST platform creates a powerful workflow for building secure applications in the age of AI-powered development. By understanding the security challenges inherent in AI-generated code and implementing robust testing practices, developers can harness the productivity benefits of AI while maintaining strong security postures.

The key is recognizing that AI tools like Claude Code are powerful accelerators that require appropriate security guardrails. StackHawk provides those guardrails by offering comprehensive, automated security testing that scales with AI-accelerated development cycles.

As AI continues to transform software development, the organizations that successfully combine AI productivity with robust security practices will have a significant competitive advantage. Start implementing these practices today to build more secure software while taking full advantage of the incredible capabilities that AI-powered development tools provide.

Ready to get started? Sign up for a free StackHawk trial and start experimenting with Claude Code to begin building more secure applications with AI-powered development today.
