StackHawk

Secure Code in the Age of AI: Challenges and Solutions

Share on LinkedIn
Share on X
Share on Facebook
Share on Reddit
Send us an email
Joni Klippert Blog Image

The pace of AI-based technologies is growing faster than any technology we’ve yet to see. It has sparked curiosity into the elements of how AI and Large Language Models (LLM) tools can help drive innovation and improve efficiencies industry-wide and has offered some playful observations, such as asking ChatGPT to describe the value of StackHawk as if it were The Godfather.

Playfulness aside, the software industry is rapidly adopting new solutions that incorporate generative AI to drive innovation and provide more value to customers at an accelerated rate. Recently, the R&D and Marketing teams at StackHawk wrapped our own AI-focused hack-a-Thon, with a focus on driving innovation into our everyday practices utilizing various AI and LLM solutions. It was a pretty exciting week, and I’m looking forward to sharing our learnings at a later date (intentional teaser, stay tuned). However, as the excitement continues to unfold, we are very mindful of ensuring secure code, after all, we should be, we’re a security company.

Security implication of using AI

As our news feeds fill with new solutions and company pitches backed by GenAI and LLM-based technologies, we continue to proceed with caution as we see evidence that using GenAI in code introduces more security vulnerabilities for many organizations. In a recent survey published by Snyk , they shared the following:

“On one hand, just over three-quarters of respondents (77%) said AI tools improve code security. At the same time, however, 59% are concerned that AI tools trained using general-purpose large language models (LLMs) will introduce security vulnerabilities that first appeared in the code used to train them.”

The adoption of code AI reminds us of the adage “what’s old is new,” meaning that just as we’ve watched the rollout of email and spammer fraud, new technology will always have its place for those that use it for good and that those will try to manipulate it and use it in nefarious ways.

We will eventually reach a point where generative AI can provide developers with prescriptive feedback on what security issues to fix regarding the specific language and frameworks they work with, as our CSO Scott Gerlach mentioned recently . But we aren’t there yet, and the power behind AI and LLMs requires a new level of responsibility when developing secure code.

When Developers don’t understand the code they are copying and pasting, it’s easy to pull in vulnerable code unintentionally, making the need for continuous testing of your running applications even more crucial. Which, spoiler alert, DAST solutions like StackHawk are perfect for testing AI-generated code at run time.

DAST uniquely positioned to test & secure AI-generated code

APIs powered by generative AI present a new place for attackers to gain a toehold in disrupting the market. As organizations deploy new applications utilizing LLMs, a two-part combination of data and code is released, leading to the importance of security testing the running application. Static code security solutions such as SAST won’t help in this situation; testing applications at runtime is something only modern DAST solutions can perform.

Secure Code in the Age of AI: Challenges and Solutions -Pic 1

The primary capability of DAST solutions is to send various iterations of data to an input and check its outputs for responses that might indicate a vulnerability. Testing how LLMs act upon input can only be tested by trying inputs and checking how the output behaves at run-time, a perfect match for DAST.

Another way to look at this is that LLMs are built to answer the same prompt differently. This non-determinism makes testing them difficult and requires a multitude of different inputs to run over a number of tests to validate they are probabilistically safe to deploy. This technique is similar to fuzzing applications and again points to DAST as a great testing solution.

With the rollout of the new OWASP Top 10 for LLM project, created to identify specific vulnerabilities based on LLMs, we believe that 6 of the 10 will benefit from DAST’s very core testing concept.

API Security Testing & StackHawk

At its core, StackHawk was built to bridge the trust gap between AppSec and Developer teams to ensure the delivery of safe code faster. StackHawk focuses on testing APIs and web applications during runtime, prior to production, and gives engineering teams the ability to find and fix code more effectively while running in their CI/CD workflows. With the acceleration of new technologies built on AI and LLM, StackHawk’s DAST solution is well poised to continue to be a very critical part of the security landscape for testing and adopting a secure code mindset. Keep an eye on this space as we continue to share our learnings and discuss more.

More Hawksome Posts

Understanding LLM Security Risks: OWASP Top 10 for LLMs (2025)

Understanding LLM Security Risks: OWASP Top 10 for LLMs (2025)

As LLMs like ChatGPT moved from research to real-world applications, traditional security frameworks fell behind. OWASP’s Top 10 for LLM Applications highlights new risks—from prompt injection to model poisoning and system prompt leakage—that come with AI-driven systems. Understanding these threats is key to securing the next generation of applications. StackHawk helps teams find and fix vulnerabilities early, including those in AI-powered apps.

Top Security Testing Strategies for Software Development

Top Security Testing Strategies for Software Development

Security testing is a critical step in modern software development, ensuring applications stay resilient against evolving cyber threats. By identifying vulnerabilities early in the SDLC, teams can prevent breaches, protect data, and maintain user trust. This article explores key security testing types, benefits, challenges, best practices, and essential tools to help you strengthen your application’s defense—from code to runtime.

A Developer’s Guide to Dynamic Analysis in Software Security

A Developer’s Guide to Dynamic Analysis in Software Security

Running software under real conditions reveals vulnerabilities that static code checks miss. This guide breaks down dynamic analysis, how it works, when to run it, which tools to use, and where it fits in modern security testing workflows to help developers catch runtime issues before they reach production.