StackHawk

API Discovery Now Uses Code Context to Infer REST API Structure, With Full Transparency and Control

Scott Gerlach   |   Jun 12, 2024


We're excited to share a new capability in HawkAI: it now analyzes small, relevant portions of source code to infer how REST APIs are structured, enabling faster, smarter API specification generation. This functionality currently supports:

  • Java/Spring
  • Node/Express

This update boosts your team's ability to test early and often, with zero manual spec work.

Just as important: StackHawk remains deeply committed to protecting your source code, PII, and proprietary data. Here's exactly how that works, and what's changing (and not changing).

What's New: Code Context for REST API Spec Generation

To build API specifications automatically, HawkAI now sends minimal, targeted slices of source code to our trusted AI provider. These slices are limited to what's necessary to understand how detected REST APIs are constructed, such as route definitions, controller logic, and request structure.

  • ✅ Only applies to REST APIs built in Java/Spring or Node/Express today; more language/framework pairs are coming soon
  • ✅ Only files relevant to API routes and their data dependencies are shared
  • ❌ Never unrelated files, full repositories, or sensitive business logic

These snippets are used only for real-time analysis and are never stored or used for training.
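To make "minimal, targeted slices" concrete, here is a small conceptual sketch, in Python for brevity, of how route-relevant fragments could be pulled out of a Java/Spring controller while unrelated business logic stays behind. This is an illustration of the idea only, using a hypothetical `extract_route_slices` helper; it is not StackHawk's actual extraction logic.

```python
import re

# Match a Spring route annotation plus the method signature that follows it.
# Everything else in the file (fields, helpers, business logic) is ignored.
SPRING_ROUTE = re.compile(
    r'@(?:Get|Post|Put|Delete|Patch|Request)Mapping\([^)]*\)\s*'
    r'(?:public\s+)?[\w<>,\s]+\s+\w+\([^)]*\)'
)

def extract_route_slices(source: str) -> list[str]:
    """Return only the annotation + method-signature fragments."""
    return [m.group(0).strip() for m in SPRING_ROUTE.finditer(source)]

controller = '''
@RestController
public class UserController {
    private final SecretBilling billing;  // unrelated logic, never extracted

    @GetMapping("/users/{id}")
    public User getUser(@PathVariable long id) { ... }

    @PostMapping("/users")
    public User createUser(@RequestBody UserRequest req) { ... }
}
'''

slices = extract_route_slices(controller)
# Only the two route definitions survive; SecretBilling never appears.
```

The point of the sketch is the filtering step: only fragments that describe how an endpoint is shaped (path, verb, parameters, request body) are candidates for sharing.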

What Hasn't Changed: No Code Sharing in Attack Surface Discovery or Sensitive Data Detection

StackHawk's Attack Surface Discovery and Sensitive Data Detection features continue to operate entirely within StackHawk systems. These features:

  • Do not send any source code to AI providers
  • Do not use external AI inference
  • Analyze patterns and metadata only, not source code contents

This means your sensitive logic, secrets, and proprietary app structure remain completely private when using these detection features.

Our Updated Data Privacy Commitments

We've refined our AI principles to match the new functionality:

Approved Vendors Only: We currently use OpenAI, which meets StackHawk's rigorous security and privacy review requirements.

Minimal & Controlled Code Sharing: Only the smallest possible slices of relevant code are sent, and only for supported REST APIs.

No PII: PII and sensitive data are never included in the shared context.

No Code Retention or LLM Training: Code sent to the AI provider is not stored or used to train models.

Where Is Code Sent?

REST API spec generation involves secure transmission of code fragments to a selected AI provider (currently OpenAI) over encrypted channels from StackHawk's infrastructure. All data handling aligns with our Third-Party Risk Management Policy and with StackHawk's data privacy and security commitments.

  • Code is shared only when the repository integration and API Discovery are enabled, a REST API is detected, and a supported framework is in use
  • Code is used exclusively for real-time inference
  • Code is never retained or reused
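For a sense of what the inference produces on the output side, here is a hypothetical sketch, again in Python, of mapping a Node/Express-style route to an OpenAPI path item. The `express_route_to_openapi` helper and its output shape are assumptions for illustration, not StackHawk's actual spec generator.

```python
import re

def express_route_to_openapi(method: str, route: str) -> dict:
    """Convert an Express route like '/users/:id' into an OpenAPI
    path item, translating ':param' segments into path parameters."""
    params = re.findall(r':(\w+)', route)
    openapi_path = re.sub(r':(\w+)', r'{\1}', route)

    entry = {"responses": {"200": {"description": "OK"}}}
    if params:
        entry["parameters"] = [
            {"name": p, "in": "path", "required": True,
             "schema": {"type": "string"}}
            for p in params
        ]
    return {openapi_path: {method.lower(): entry}}

# An Express handler for GET /users/:id becomes a spec fragment
# the scanner can use to exercise the endpoint.
spec_fragment = express_route_to_openapi("GET", "/users/:id")
```

A generated spec like this is what lets the scanner start testing an endpoint immediately, without anyone hand-writing an OpenAPI document.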

Which AI provider are you leveraging?

StackHawk currently uses OpenAI, but our system is designed to be adaptable, allowing other large language models to be integrated as needed. This flexibility lets us continuously evaluate and adopt the most effective AI solutions based on research and testing.

Can I opt out of AI usage?

Yes. API Discovery is enabled by default only through the source code repository integration. If you would like to use other SCM integration features without allowing AI access, read the docs for instructions on how to disable HawkAI on your account.

Why This Matters

Legacy DAST and API testing tools require manual specs, brittle recordings, or slow onboarding. With API Discovery:

  • You get faster setup with accurate, auto-generated API specs
  • Your code and customer data stay safe
  • You maintain full control and visibility
