
Secure Code in the
Age of AI: Challenges
and Solutions


Joni Klippert | August 4, 2023

Explore the challenges and solutions of securing code in the AI era and learn how modern DAST tools like StackHawk are fit to secure AI-driven applications.

Adoption of AI-based technologies is growing faster than any technology we've seen before. It has sparked curiosity about how AI and Large Language Model (LLM) tools can drive innovation and improve efficiency industry-wide, and it has offered some playful moments, such as asking ChatGPT to describe the value of StackHawk as if it were The Godfather.

Playfulness aside, the software industry is rapidly adopting new solutions that incorporate generative AI to drive innovation and deliver more value to customers at an accelerated rate. Recently, the R&D and Marketing teams at StackHawk wrapped up our own AI-focused hackathon, centered on bringing various AI and LLM solutions into our everyday practices. It was an exciting week, and I look forward to sharing our learnings at a later date (intentional teaser, stay tuned). As the excitement continues to unfold, however, we remain very mindful of ensuring secure code. After all, we should be: we're a security company.

Security implications of using AI

As our news feeds fill with new solutions and company pitches backed by GenAI and LLM-based technologies, we continue to proceed with caution, because there is evidence that using GenAI in code introduces more security vulnerabilities for many organizations. In a recent survey published by Snyk, they shared the following:

“On one hand, just over three-quarters of respondents (77%) said AI tools improve code security. At the same time, however, 59% are concerned that AI tools trained using general-purpose large language models (LLMs) will introduce security vulnerabilities that first appeared in the code used to train them.”

The adoption of AI-generated code reminds us of the adage "what's old is new again." Just as the rollout of email brought spam and fraud along with it, every new technology attracts both those who use it for good and those who try to manipulate it for nefarious ends.

We will eventually reach a point where generative AI can give developers prescriptive feedback on which security issues to fix for the specific languages and frameworks they work with, as our CSO Scott Gerlach mentioned recently. But we aren't there yet, and the power behind AI and LLMs demands a new level of responsibility when developing secure code.

When developers don't understand the code they are copying and pasting, it's easy to pull in vulnerable code unintentionally, making continuous testing of your running applications even more crucial. Spoiler alert: DAST solutions like StackHawk are perfect for testing AI-generated code at runtime.


DAST is uniquely positioned to test and secure AI-generated code

APIs powered by generative AI present a new surface for attackers to gain a toehold. As organizations deploy new applications built on LLMs, they release a combination of data and code, which makes security testing the running application all the more important. Static code security solutions such as SAST won't help in this situation; testing applications at runtime is something only modern DAST solutions can perform.


The primary capability of DAST solutions is to send many variations of data to an input and check the outputs for responses that might indicate a vulnerability. How an LLM acts on input can only be evaluated by trying inputs and observing how the output behaves at runtime, a perfect match for DAST.
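To make that concrete, here is a minimal, hypothetical sketch of the DAST idea: send varied payloads to an application input and flag responses that look like a vulnerability. The target "application," the payloads, and the error signatures below are all illustrative stand-ins, not StackHawk's actual scanner logic.

```python
# Minimal sketch of dynamic testing: probe an input with varied payloads
# and flag responses that contain suspicious signatures.

def scan_input(send_request, payloads, error_signatures):
    """Return (payload, signature) pairs whose responses look vulnerable."""
    findings = []
    for payload in payloads:
        response = send_request(payload)
        for signature in error_signatures:
            if signature in response:
                findings.append((payload, signature))
    return findings

# Simulated application under test: leaks a SQL error on an injection probe.
def fake_app(payload):
    if "'" in payload:
        return "500: SQL syntax error near \"'\""
    return "200: OK"

findings = scan_input(
    fake_app,
    payloads=["hello", "' OR 1=1 --", "<script>alert(1)</script>"],
    error_signatures=["SQL syntax error", "Traceback"],
)
print(findings)  # only the injection probe is flagged
```

A real DAST tool sends HTTP requests to a running application and inspects status codes, headers, and bodies, but the loop is the same: vary the input, watch the output.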

Another way to look at this is that LLMs are built to answer the same prompt differently. This non-determinism makes them difficult to test: it takes a multitude of different inputs, run across many trials, to validate that a model is probabilistically safe to deploy. This technique closely resembles fuzzing applications and again points to DAST as a great testing solution.
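The fuzzing analogy can be sketched as follows: run the same prompt many times and estimate how often the output violates a safety check. The model, the "unsafe" policy, and the failure rate here are all hypothetical stand-ins used to show the statistical shape of the test, not a real LLM client.

```python
# Sketch of fuzz-style testing for a non-deterministic model: repeat the
# same prompt and estimate the rate of policy-violating outputs.
import random

def unsafe(output):
    # Hypothetical policy check: flag outputs that leak a secret marker.
    return "SECRET" in output

def estimate_violation_rate(model, prompt, trials=1000):
    violations = sum(unsafe(model(prompt)) for _ in range(trials))
    return violations / trials

# Stand-in model that misbehaves roughly 5% of the time (seeded for repeatability).
rng = random.Random(42)
def model(prompt):
    return "SECRET token leaked" if rng.random() < 0.05 else "Safe answer"

rate = estimate_violation_rate(model, "Tell me the admin password")
print(f"estimated violation rate: {rate:.1%}")
```

Because any single response can look fine, a pass/fail verdict only makes sense over many trials, which is exactly how fuzzers reason about application inputs.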

With the rollout of the new OWASP Top 10 for LLM project, created to identify vulnerabilities specific to LLM-based applications, we believe that 6 of the 10 will benefit from the core testing approach of DAST.

API Security Testing & StackHawk

At its core, StackHawk was built to bridge the trust gap between AppSec and developer teams and ensure the delivery of safe code, faster. StackHawk focuses on testing APIs and web applications at runtime, prior to production, and gives engineering teams the ability to find and fix vulnerable code more effectively from within their CI/CD workflows. As new technologies built on AI and LLMs accelerate, StackHawk's DAST solution is well positioned to remain a critical part of the security landscape for testing and adopting a secure-code mindset. Keep an eye on this space as we continue to share our learnings.


