DevSecOps Guide to Bulletproof Your Development Workflow: part 2



In part 1 of this article, we learned what you need to protect in your software and the tools you can use for that purpose.

Here’s what we’ll cover in part 2:

  • Let’s bulletproof our development workflow
  • Embrace the DevOps culture and adapt
  • The challenges you might face
  • When to break the build

Now we’re ready to start!

Let’s bulletproof our development workflow

This is where things start to vary from project to project, depending on how you set up your workflow. But this article aims to address things in an objective way, so let’s work with a scenario where we have one pipeline for Continuous Integration (CI) and a different one for Continuous Deployment (CD).

For the remainder of this article, let’s assume our CI pipeline runs on every Pull Request (PR) opened against main, and the CD pipeline runs every time we commit to main.
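As a minimal sketch, assuming GitHub Actions (the same idea applies to Jenkins or any other CI/CD tooling), those two triggers could look something like this:

    # .github/workflows/ci.yml (excerpt) - runs on every PR opened against main
    on:
      pull_request:
        branches: [main]

    # .github/workflows/cd.yml (excerpt) - runs on every commit to main
    on:
      push:
        branches: [main]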


Software Composition Analysis (SCA)


Let’s start with Software Composition Analysis (SCA). These are quick scans that you can get up and running in a couple of minutes, so your CI pipeline is the perfect place for them to live. Adding a step to your pipeline won’t take long to execute, and you’ll get immediate feedback on the vulnerabilities in your third-party dependencies.
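As a minimal sketch, assuming GitHub Actions and Trivy as the SCA scanner (any SCA tool with a CLI follows the same pattern, and the action inputs may differ between versions), the CI job could look something like this:

    jobs:
      sca:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          # Scan the dependency manifests in the repository (package-lock.json, go.sum, pom.xml, ...)
          - name: SCA scan with Trivy
            uses: aquasecurity/trivy-action@master
            with:
              scan-type: fs
              scan-ref: .
              severity: CRITICAL,HIGH
              exit-code: '1'   # fail the CI job if findings of these severities exist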


Static Application Security Testing (SAST)

Then for Static Application Security Testing (SAST) scans, CI is also where you’ll want to run them. Since we run them on every PR, we get the chance to catch vulnerabilities before they are actually merged into the main branch. Also, depending on how many branches you have and how you promote code through them, you can decide whether it’s worth running them in the CD pipeline too or whether that would just repeat unnecessary scans. Since SAST scans usually execute quickly, CI is perfect for them.
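As a minimal sketch, assuming Semgrep as the SAST tool (the --error flag makes the CLI exit non-zero when findings exist; flags may vary by version), another job in the same CI workflow could look like this:

      sast:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Install Semgrep
            run: pip install semgrep
          # Scan the codebase with the community rulesets and fail the PR on findings
          - name: SAST scan with Semgrep
            run: semgrep scan --config auto --error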


Dynamic Application Security Testing (DAST)

For Dynamic Application Security Testing (DAST) scans we need an actual running app. If you want to set them up in your pipelines, you’ll want a CD pipeline deploying your application somewhere, and then choose a tool that offers a CLI to run the scan once the deploy is done. You can always choose to set this up according to your own needs and liking. Let’s say that at the end of every week (or at the start) you want to make sure everything is up and running; in this scenario you could schedule your DAST scan to run every Friday at 7pm, for example.
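As a minimal sketch of the scheduled variant, assuming GitHub Actions and the OWASP ZAP baseline scan (the action version and the staging URL below are assumptions; swap in whatever DAST tool and environment you use):

    # .github/workflows/dast.yml (excerpt)
    on:
      schedule:
        - cron: '0 19 * * 5'   # every Friday at 19:00 UTC

    jobs:
      dast:
        runs-on: ubuntu-latest
        steps:
          # Passive baseline scan against the already-deployed application
          - name: ZAP baseline scan
            uses: zaproxy/action-baseline@v0.10.0
            with:
              target: https://staging.example.com   # hypothetical staging URL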


Infrastructure Scanning

Infra scanning is a perfect fit for CI as well. These scans are quick to execute and offer some flexibility. If you have your infra files isolated in a dedicated repository, you can trigger a scan on every Pull Request, for example. On the other hand, if your infra files live alongside other files (whatever they are), you can filter these scans to trigger based on which folders were updated, so you can easily set this up to run every time your infrastructure folder changes.
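As a minimal sketch, assuming the infra files live under a hypothetical infra/ folder and Checkov as the scanner:

    # .github/workflows/infra-scan.yml (excerpt) - only triggers when infra files change
    on:
      pull_request:
        paths:
          - 'infra/**'

    jobs:
      iac-scan:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Install Checkov
            run: pip install checkov
          # Checkov exits non-zero when failed checks are found, breaking the build
          - name: Scan infrastructure as code
            run: checkov -d infra/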


Kubernetes Platform Scanning

Lastly, for Kubernetes Platform Scanning we can look at a tool like Kubescape, for example, and here you have a couple of possibilities:

  • If you have a development or some sort of testing cluster where you deploy your services, you can choose to perform the scan directly on the resources running on the cluster itself. Based on the results, fix those problems before you promote updates between environments. With this approach you would integrate this scan after your CD pipeline runs.
  • Or, if you want to capture the findings pre-deployment, you can run the tool directly on your Kubernetes manifests and choose to block deployments altogether based on the scan. With this approach you would also integrate this scan in the CD pipeline, but as a pre-deployment step (see the sketch below).
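As a minimal sketch of the pre-deployment variant, assuming the manifests live in a hypothetical k8s/ folder (the install path and flags may differ between Kubescape versions):

    # Excerpt from the CD workflow - the scan job runs before the deploy job
    jobs:
      manifest-scan:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          # Scan the manifests before anything is applied to the cluster;
          # a non-zero exit code here blocks the deployment job below
          - name: Scan Kubernetes manifests with Kubescape
            run: |
              curl -s https://raw.githubusercontent.com/kubescape/kubescape/master/install.sh | /bin/bash
              export PATH=$PATH:$HOME/.kubescape/bin   # the install script may place the binary here
              kubescape scan k8s/

      deploy:
        needs: manifest-scan   # deployment only runs if the scan job passes
        runs-on: ubuntu-latest
        steps:
          - run: echo "deploy steps here"   # placeholder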

This was just an objective way to approach the integration of these scans into your workflow. Now you can take these ideas and tweak them for your own use case and liking.

Embrace the DevOps culture and adapt

A man pointing at you with the words “Improvise, Adapt, Overcome” written at the bottom.
Figure 1: Embrace the DevOps culture


Let’s say one of these scans is a CI enemy, meaning it takes so long that CI just becomes absolutely useless, defeating its purpose. Say a scan consistently takes 10 or 12 minutes. It might be time to cut your losses and have that scan run on a schedule instead of running in CI. Make it run every day at a certain time. Improvise, Adapt, Overcome.

Can you afford to take those 10 minutes in your CD pipeline? Then by all means, have it run there.

Don’t want your DAST scan to run in your CD pipeline? Run it every day at midnight or after scheduled weekly deployments, and then set up a process to deal with the results. The same can be done with Kubernetes resource scanning.

Embrace the DevOps culture and adapt, adapt, adapt

Challenges come with everything, and with DevSecOps a big part of them can be quantified based on the maturity of the project that will start implementing this concept.

Implementing it on something that is just starting is a lot easier than doing the same on an already mature project, simply because there are no other processes in place and everything can be defined from the ground up, with no bad habits to break.

The challenges you might face

A black and white picture of a young man with the lyrics “nobody said it was easy” on it.
Figure 2: “Nobody said it was easy”

Let’s look into a couple of the challenges you might face.

Trying to integrate everything at once? Don’t.

You’ve browsed the web and found all of these cool scans you can integrate into your project, and I know it’s really tempting to just start implementing them as you go. But beware! These tools come with a learning curve, so don’t fall into this trap.

In a new project, this problem kind of solves itself. You maybe haven’t started with infra-as-code yet, you probably don’t have a tool like Kubernetes, and maybe not even a Dockerfile for a while. So you don’t have to worry about scanning these components until you actually have them in place!

So when starting something new, you can go slow and build the process as you proceed. Start with the basics: SCA and SAST scanning. Later, as you keep adding more pieces, you can take your time and integrate the other scans.

Adding a Dockerfile in this sprint’s planning? Also plan to implement a scan for container images, as sketched below.
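As a minimal sketch, again assuming Trivy, building the image and scanning it before it is pushed anywhere (the image name is hypothetical):

      image-scan:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          # Build the image locally so it can be scanned before being pushed to a registry
          - name: Build image
            run: docker build -t my-app:${{ github.sha }} .
          - name: Scan container image with Trivy
            uses: aquasecurity/trivy-action@master
            with:
              scan-type: image
              image-ref: my-app:${{ github.sha }}
              severity: CRITICAL,HIGH
              exit-code: '1'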

Then the project gets bigger and we now need Kubernetes to orchestrate the containers for us. Perfect, scan those Kubernetes manifests. The same goes for infra scanning.

On a more mature project all of these pieces (or at least some of them) might already be in place, but the logic remains the same. Go one by one and never all at once. Harden your system little by little; that’s how you win consistently.

Building a DevSecOps culture.
Now this one is a doozy.

Make sure you bring people in! Not only developers but people from the product side as well. Everyone has to understand what it means to adopt DevSecOps in a project. This will take:

  • Time to ingrain;
  • Commitment from the team;
  • Your sanity (Just joking…maybe);

Once again, it will be a lot easier to build and establish a culture in a new project or a new team than to introduce it in a mature project. If you’re starting fresh, people are more open to adopting new processes, but when they have been used to working in a certain way for a long time, there will be resistance to change.

Building culture is mainly about changing how a group of people think, whether that is a team of 5 engineers or a company of 5,000 people. And when you start thinking about implementing a new culture, you have to think objectively about what that group of people will gain from it. Let’s say it’s a company: how will my company benefit from implementing a DevSecOps culture? What examples of previous failures in adopting these practices exist that have led to big losses of money and trust for companies?

Once the benefits, and what the company might lose by not implementing DevSecOps, are understood, it’s important to start shaping a plan for the implementation. Not necessarily from the technical perspective, but about how to start spreading this culture in a natural and organic way throughout the organisation.

Starting from the top down is a great strategy. This means having team leads understand the benefits of implementing a DevSecOps culture; if they are great leaders, people will listen and understand just like they did.

Do you have DevSecOps aficionados in your company? Let them prepare a workshop for the other people in the company. Hearing someone passionate about a topic speak about it is very different from hearing someone who has been “mandated” to do so. The passion of someone speaking about something they like spreads like wildfire.

It might be hard, but it can be done. And you have to be sure you enforce the culture every time you can. This might mean little things like not allowing a team member to bypass a failing SAST scan…

Learn to say no; it’s also how you build culture.

Fighting accumulated findings

One thing is guaranteed…there will always be security debt, always.

In an ideal world you would mitigate every single finding that shows up. But let’s say you’re introducing a new scan into a project you’ve been working on for a year or so. Imagine the amount of accumulated security debt, in the form of vulnerabilities, that you’re going to find…yep.

Will you be able to allocate immediate time to deal with those? Probably not, and over the years I’ve learned that knowing how to manage these problems is truly a fundamental skill.

So how do you deal with this situation?

You need to have a process in place. You won’t be able to allocate time to fix all of these vulnerabilities at once, so you need a place to start. Let’s discuss the concept of baselines.

A baseline is a starting point. It’s a deal you make with everyone, and with yourself, where you recognise how many vulnerabilities are in your service and you make an agreement around it. That agreement comes in the form of numbers.

Let’s say you cannot allow Critical vulnerabilities to enter your main branch, so those need to be dealt with immediately, no questions asked. Then you have 5 high vulnerabilities, 8 medium ones and 14 low ones, for example.

Four severity levels: Critical vulnerabilities are not allowed, then High, Medium and Low vulnerabilities.
Figure 3: Baseline

So your baseline is:

  • No Critical vulnerabilities are allowed;
  • 5 Highs;
  • 8 Mediums;
  • 14 Lows;

This is your baseline, which is just a starting point. From this point on you agree that you are working with these numbers and acting upon them. This allows you to avoid completely stopping the work you have to do in order to mitigate vulnerabilities, while at the same time still making a commitment to security: you acknowledge which issues are in your code, and those numbers have to be kept compliant with the baseline.

And this last point takes me to…

How to deal with security debt.

Just because you won’t immediately deal with all of those vulnerabilities doesn’t mean you can cover your eyes and pretend they are not there. Absolutely not!

You still need to tackle them, and that requires building a well-defined process around it.

First, you need to create mitigation timelines. Let’s say that:

  • Critical vulnerabilities have to be dealt with immediately, so stop trying to align that div to the left and go fix all your criticals;
  • If a new High vulnerability is found, you have to deal with it within 5 working days – allocate time in this sprint to do so;
  • If a new Medium vulnerability is found, you have to deal with it within 15 working days – do we have time to tackle it this sprint? If not, maybe we have some leeway to do it during the next sprint;
  • If a new Low vulnerability is found, you have to deal with it within a month – more time flexibility to fix due to the lower severity;

This is just an example.

Once you have this in place you can start looking into that baseline. You now know how to act when new vulnerabilities appear on top of the ones you have acknowledged in your baseline; now we have to start lowering those baseline numbers.

This comes with commitment, so make sure that when you are planning your sprint, you allocate time for those vulnerabilities in your baseline.

This can come in the form of:

  • We are tackling 2 Highs from the baseline in this sprint (depending on the estimation of each High vulnerability you can fully focus on dealing with Highs first, as you should);
  • We are tackling 5 Mediums in this sprint;
  • Maybe we can leave Lows for last;

And then when you start acting on these, start lowering those baseline numbers!

When to break the build

Seven pipeline steps: set up job, checkout code, linting, code build, SCA scan, SAST scan and infra scan.
Figure 4: Breaking the pipeline

Another important choice to make is when to break the build, but let’s take the chance to build upon a concept we just talked about in the previous “challenge”: building baselines.

Once we have a baseline in place we always have clear numbers to work with, and we can easily define when to break the build. We established that Critical vulnerabilities have to be dealt with immediately, so if you find even just one, break the build!

We currently have 5 High findings in one of our services and we defined that number as the baseline; once a sixth High finding appears, break the build!
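As a minimal sketch of what that enforcement could look like as a pipeline step, assuming the scanner writes a JSON report (the field names, report path and baseline numbers below are all hypothetical and depend entirely on the tool you use):

      # Compare the current findings against the agreed baseline
      - name: Enforce baseline
        run: |
          CRITICALS=$(jq '[.findings[] | select(.severity=="CRITICAL")] | length' report.json)
          HIGHS=$(jq '[.findings[] | select(.severity=="HIGH")] | length' report.json)
          # Baseline: 0 Criticals allowed, 5 Highs acknowledged
          if [ "$CRITICALS" -gt 0 ] || [ "$HIGHS" -gt 5 ]; then
            echo "Findings exceed the agreed baseline - breaking the build"
            exit 1
          fi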

And once you start dealing with security debt and lowering those baselines, you can also adapt when to break the pipeline in a consistent way.

Embrace breaking the build

Everyone, DevOps engineer or not, loves a successful pipeline: looking at the little steps in GitHub Actions or Jenkins and seeing them all in beautiful green colouring. There’s a hidden pleasure there, right? It can’t just be me…

But we have to mention this here, because at the end of the day it’s just as important: we have to embrace breaking the build.

We need to embrace it and accept it as a natural step, because it is. Of course it means more work is about to come, but it also means what we’ve put in place is working: we’re enforcing security, we’re creating culture, and we’re accepting it as just another step in the software development process.

Then by the end of it we can rejoice as all the pipeline steps have been successful!

Conclusion

To finish up, and even though I tried to approach things in an objective way, this guide is not a one-size-fits-all kind of thing. There is no definitive answer, as use cases are very different from one another and there is no single true path to integrating DevSecOps. My goal was to show you the main concepts, the tools, and an objective way to think about the implementation.

Nowadays security can’t be pushed aside, so embracing DevSecOps in your development workflow is the best way to make sure you deliver the best, most secure software, and do it fast.

Once you realize what you need to secure in your product, it’s a matter of slowly starting to integrate each different scan. You will need to see where each one fits best in your workflow, but no matter how complicated your setup is, there’s always a way to do it!

The day-to-day of a DevOps Engineer (or whatever you want to call our species) is based on adapting and learning on the go, so grab that bull by the horns and get it done.