This doesn’t happen to every CISO, but the vast majority struggle to integrate AppSec into the developer workflow without causing friction. There are simply too many antiquated, siloed security concepts and too many security-focused tools for any other outcome. Let’s break down why these seemingly good ideas end up causing chaos and confusion for developers.
Making Developers Learn the Language of Security
Training developers on AppSec may seem like the right idea, but in execution, developers will struggle with misaligned incentives and murky translation from concepts into results.
Integrating security into the developer work style – aka the get-features-out-the-door work style – is swimming against the current. Security training for developers often involves confusing new terminology and concepts alongside inarguably horrible tools, whether commercial or open source, that take months to learn.
Further, teams frequently choose their highest-performing developers to attend security training. Though this seems like a good move on its face, these are most often the team members who need the training the least, since they have the most experience. Developers return from training and are expected to teach junior developers what they have learned. With a topic as big and difficult to master as AppSec, this is the equivalent of a game of telephone with Shakespeare’s Hamlet as the starting message.
I liken this to when company execs ask the accounting team for finance metrics: you don’t see accountants sending executives links about how the general ledger works. Instead, they send tools and data that present a clear picture of the company finances so executives can make informed decisions.
Luckily, companies in the security space are starting to emerge that focus on providing developers with tooling to make these informed decisions. For example, Snyk has a toolchain-native integration that keeps developers updated on out-of-date or vulnerable packages in their apps.
A useful extension of this tooling might be one able to assess if a developer is actually using one of those libraries in such a way that exposes the vulnerability.
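To make the idea concrete, here is a minimal sketch of such a reachability check: given a package flagged as vulnerable, it inspects the app’s source to see whether that package is actually imported anywhere. The package names and source snippets are hypothetical, and real tools analyze call graphs down to the vulnerable function, not just import lines.

```python
# Naive "reachability" sketch: is a flagged dependency actually imported?
# Hypothetical example; real scanners trace usage to the vulnerable call site.
import ast

def imports_package(source: str, package: str) -> bool:
    """Return True if the source imports the given top-level package."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            if any(alias.name.split(".")[0] == package for alias in node.names):
                return True
        elif isinstance(node, ast.ImportFrom):
            if node.module and node.module.split(".")[0] == package:
                return True
    return False

app_code = "import requests\nresp = requests.get('https://example.com')\n"
print(imports_package(app_code, "requests"))  # True: flagged package is in use
print(imports_package(app_code, "yaml"))      # False: flagged package is unused
```

A check like this lets a team deprioritize an alert for a vulnerable dependency that no code path ever touches, which is exactly the kind of context developers need to triage findings.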
We Broke Your Thing! Doesn’t That Make You Feel Great? Because We’re Super Excited!
The relationship between the security team and other departments is historically adversarial. Some security teams avoid this, but for most, there is an undercurrent of the Culture of No. The belief, misplaced or otherwise, that security is holding back the organization from achieving goals is pervasive, and our behavior as an industry is not helping to repair that relationship. There are a couple of things that exacerbate this feeling:
- Security teams treat pentesting as a black box and don’t ask developers what they’re most concerned about or where they can provide the most value.
(side note: I’ll talk about the jacked up procurement process that actually favors minimal findings vs. robust AppSec programs another day)
- Security teams are SUPER EXCITED to find issues. They should be, because that means they have done their job. But at times, the excitement means they forget that not everyone shares their enthusiasm. Their findings mean a lot of work for someone else.
- When security teams find bugs, they throw them over the Jira wall and hope someone (you, the developer) takes the time to fix them…despite the fact that your team’s entire focus is feature delivery, not security improvements.
While this may sound harsh, it’s a judgment-free look at how security teams can be perceived from the outside. Developers who are aware of this ahead of time can use the insider knowledge to more easily identify with and connect to security teams.
We Found a Billion Vulnerabilities! We Have to Fix Them All Now!
The value of existing security tools is predicated on their ability to find vulnerabilities. While this can make for a good tool, it can also make for a nonsensical alert machine. Consider a QA engineer who files 1,000 tickets for “bugs” that are unlikely to degrade the user experience. Is she doing her job well?
Security teams can get lost in tallying the number of vulnerabilities they can find. Though security teams may wish to find more bugs, it is more important to consider the context in which these vulnerabilities are found, how they are communicated to developers, and how fixing them makes the application more secure. All of these components are crucial for developers to efficiently identify and prioritize vulnerability fixes based on their impact on the bottom line.
Compounding this issue, vulnerabilities are commonly found only after the app has been put into production. This forces developers into an expensive trade-off between bug fixes and new feature development long after they have moved on to something else.
To efficiently resolve vulnerabilities, developers need to be able to prioritize them based on what they impact and how big the impact will be. They need to combine insights about the vulnerabilities with information like how valuable the app or feature is to the organization and what kind of sensitive data the app handles. This is information that developers are responsible for and hold already as they develop new features.
This makes me wonder: if developers are able to find AppSec bugs while they are writing the code, can they simultaneously understand the impact of the vulnerability in the app? In reality, they should be able to immediately fix, defer, or backlog the bug, streamlining the process in an informed way. Developers already do this with bugs EVERY DAY…the bugs have just never had the AppSec label.
Why are we building StackHawk?
A dev-native AppSec tool that easily integrates into the development process should be table stakes in the industry, but it isn’t. Developers need a tool that is integrated into the build pipeline and helps them write bug-free code from day one. Find issues before your code is pushed to production. Just like you shouldn’t wait to run the linter for the first time two months after the code is released to production, you shouldn’t be testing for security bugs then either. Code quality is code quality.
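As a sketch of what “security findings gate the build like lint errors” could look like, here is a minimal pass/fail decision over scan results. The findings format and severity names are hypothetical; real scanners emit their own report schemas, and this is only an illustration of the pipeline pattern, not any particular tool’s API.

```python
# Sketch: fail the build on security findings, exactly like a linter fails
# on errors. Finding shape and severity levels are made up for illustration.
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2}

def build_passes(findings: list[dict], fail_at: str = "high") -> bool:
    """Return False if any finding meets or exceeds the severity threshold."""
    threshold = SEVERITY_RANK[fail_at]
    return all(SEVERITY_RANK[f["severity"]] < threshold for f in findings)

findings = [
    {"id": "sql-injection", "severity": "high"},
    {"id": "missing-csp-header", "severity": "low"},
]
print(build_passes(findings))  # False: the high-severity finding blocks the build
print(build_passes([{"id": "missing-csp-header", "severity": "low"}]))  # True
```

The threshold is the team’s call: a strict pipeline might fail at "medium", while a team just adopting scanning might start at "high" and ratchet down, the same way lint rules get tightened over time.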
At StackHawk, we believe in speaking the developer language and pointing to fixes that make a difference. Developers shouldn’t have to fix every AppSec bug that comes up; there is an opportunity cost associated with risk that needs to be taken into account when making fixes. More importantly, developers shouldn’t have to wait to find and fix bugs until the eleventh hour. Make AppSec seamless.
My time working in the security industry with developers has shown me time and again that, in order to succeed at AppSec, we need to replace work in isolation with work that is easily integrated into the development processes and evaluated on a regular basis.
Developers should own AppSec because they truly own code quality. Tell your security team that you and other developers can work on the easy parts of AppSec. Initiate a conversation with the security team, instead of the other way around. Security is a joint responsibility, and a feature should not be considered done unless security has been taken into account. If you can work with the security team such that it benefits both teams, you’ll find the whole process becomes a lot easier. In fact, the security team may even buy you a taco or two for it!