Combat Automation in OPSEC - How to UnF$@#! Your Code

Anyone who runs a successful SOC operation knows this one important thing: the tools you wrote are, at best, an unmaintainable dumpster fire just waiting to happen. That's not meant to be demeaning, but rather inspiring. It's an acknowledgement that even in 2018, some of the best SOCs, who write the coolest toolkits in the world, ALSO have technical debt issues. That should be liberating for the rest of us mere mortals.

This technical debt usually comes in one of two flavors: under-engineered or, worse, over-engineered. The former is actually easier to solve than the latter. With under-engineered code you're usually dealing with missing abstraction and duplication, not complexity. Most of the time you're dealing with one or two really large functions. These functions simply need to be broken up into a series of steps and abstracted away from the main. This is relatively trivial because the function(s), while awkward, follow pretty straightforward logic.
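As a minimal sketch of that kind of refactor (the `process_feed` names and the feed format are invented purely for illustration, not taken from any real toolkit):

```python
# Hypothetical under-engineered shape: one oversized function doing
# cleanup, parsing, and filtering all at once.
def process_feed(lines):
    results = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        indicator, tags = line.split(',', 1)
        if 'whitelist' in tags:
            continue
        results.append(indicator)
    return results


# The same logic, broken into named steps and pushed away from main.
def clean(lines):
    """Drop blank lines and comments."""
    for line in (l.strip() for l in lines):
        if line and not line.startswith('#'):
            yield line


def parse(lines):
    """Split each line into (indicator, tags)."""
    for line in lines:
        indicator, tags = line.split(',', 1)
        yield indicator, tags


def keep(records):
    """Filter out whitelisted indicators."""
    for indicator, tags in records:
        if 'whitelist' not in tags:
            yield indicator


def process_feed_v2(lines):
    return list(keep(parse(clean(lines))))
```

Nothing about the behavior changes; the win is that each step now has a name a reviewer can read in isolation.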

Over-engineered code is a bit harder to process. It usually involves jumping from function to function, class to class, and object to object simply to figure out what's going on. Instead of understanding things in a linear fashion, the reader is left with a headache. Code abstraction in and of itself is not a bad thing, but it can be overdone depending on the language. If you're writing Python and find yourself writing classes too early in the process, or worse, overriding `__new__`, you're probably doing it wrong. Abstracting down (e.g., pushing obvious layers down away from the user) is a good thing; abstracting up (making me think about your Mixins) is not.
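A contrived sketch of the difference (the `Indicator` class and `is_ip` helper are hypothetical names, invented just to make the point):

```python
# Abstracting "up": a premature class, complete with an overridden
# __new__ that interns instances in a registry -- machinery the reader
# has to untangle before any evidence the program needs it.
class Indicator:
    _registry = {}

    def __new__(cls, value):
        # instance caching via __new__ -- clever, and a headache to trace
        if value not in cls._registry:
            cls._registry[value] = super().__new__(cls)
        return cls._registry[value]

    def __init__(self, value):
        self.value = value

    def is_ip(self):
        return len(self.value.split('.')) == 4 and \
            all(p.isdigit() for p in self.value.split('.'))


# Abstracting "down": the obvious layer, readable in one linear pass.
def is_ip(value):
    parts = value.split('.')
    return len(parts) == 4 and all(p.isdigit() for p in parts)
```

Both do the same work; only one of them forces the next reader to reverse-engineer an object lifecycle to review a four-line check.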

There's a balance between these two scenarios, and it sometimes takes years of just writing (and supporting) open code to understand the nuance. If you've ever published openly, you'll instantly recognize what I'm talking about. Open code usually lends itself to more scrutiny and feedback, if only because it's open and users want to understand it. They ask questions, find bugs, and make pull requests, so over time the code tends to strike more of that balance than non-open code.

A Pull Request at a Time

The question then becomes: how do you automate and get in front of this problem? We all want to write nice, clean code, especially code we can be proud of. Sometimes life just happens; this is combat programming, remember. Often we have two or three minutes to hammer out a pull request and even less time to think about its complexity or duplication. We're just happy if we have tests and they pass.

If we're just writing a quick patch, the patch itself doesn't (usually) introduce any kind of over-engineered complexity to the code. The irony here is that, over time, the SUM of many under-engineered patches can lead to an overly complex product. The only solution is finding automations that can quickly assess each pull request within the context of the application as a whole.

This is where tools like CodeClimate come in handy. It obviously isn't the only tool out there, but it's one of the more prevalent ones and helps introduce the next step in your testing pipeline. So your tests and security checks all pass on TravisCI, but is what you're submitting creating too much technical debt? How much time do you spend on code reviews? How many of the folks reviewing your code are able to do it 30 seconds after you submit a PR? How long does your PR sit dormant in a queue waiting for the analysis to happen?
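As a sketch of what wiring this in can look like, here's a minimal `.codeclimate.yml` for a Python project. Treat the keys and thresholds as assumptions, not a drop-in config; check the current CodeClimate documentation for the exact schema:

```yaml
# Hypothetical .codeclimate.yml sketch -- verify against current docs.
version: "2"
checks:
  method-lines:          # flag functions that grow too large
    config:
      threshold: 25
  similar-code:          # flag copy/paste duplication
    enabled: true
plugins:
  pep8:                  # Python style checks
    enabled: true
exclude_patterns:
  - "test/"
```

Once something like this sits next to your CI config, every PR gets the same complexity and duplication sniff test, automatically.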

How to Become a Better Combat Coder

What I've found over time is that automating a lot of this stuff at the PR level forces you to learn a few things:

  1. It forces you to keep your pull-requests smaller and more concise

  2. It teaches you the best practices for each language, without having to read any books

  3. It teaches you how to write more advanced code by having some of the best bots in the industry "code review" for you, in 30 seconds or less.

The less technical debt you carry as a SOC or CSIRT, the faster and less expensive it is to iterate on those tools. The faster you iterate on those tools, the more experienced you become. The more experienced you become, the more you win (or, in a CSIRT, the more you don't lose). None of this happens overnight, but over time you'll find yourself becoming a significantly better combat programmer. One pull request at a time…

These automated checks are a big part of the reason why, when you submit a pull request to any of the csirtgadgets projects, we usually just hit merge.

Did you learn something new?