How to Build a Faster IOC

In part 1 of this series I showed what you get when you treat the indicator as the platform. I'll be the first to admit, I didn't understand what I was unraveling until I was about halfway through that post. That's sort of the joy of writing these: a topic gets picked so I can think through it; the post is just an aftereffect of that process. Most of the time, the posts force me to push through a problem and solve its immediate parts, if only so I have something to write about.

Towards the end of that piece I started seeing a much larger picture: how you interconnect indicators with other services. Traditionally (CIF or otherwise) you'd pass indicators throughout a framework as packages, rather than thinking about the indicators individually as "things moving throughout a system". I realize that sounds funny, but let's assume that over time the indicator is the thing that's going to change, while the operations on that indicator (geo, FQDN resolution, HTTP transport) probably won't. This means baking those "things that don't change" down into the indicator actually makes practical sense.
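
To make that concrete, here's a minimal Python sketch of the idea; the class and method names are my own illustration, not CIF's actual API:

```python
import socket


class Indicator:
    """An indicator that carries its own stable operations."""

    def __init__(self, indicator):
        self.indicator = indicator

    def resolve(self):
        # FQDN resolution is baked into the indicator itself
        return socket.gethostbyname(self.indicator)

    def geo(self):
        # GEO resolution would live here too (e.g. a MaxMind lookup);
        # stubbed out since it needs a local database
        raise NotImplementedError('wire up your GEO reader here')


print(Indicator('example.com').resolve())
```

The indicator changes constantly; `resolve()` and `geo()` almost never do, so they ride along with the object.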

Assuming you're constantly iterating on a problem, this kind of abstraction starts happening naturally. You want the most complex and fast-changing code at the top of the stack, and the slower-changing, more stable parts at the bottom. This abstracts away the complexity of things that aren't prone to change (again, things like GEO resolution, HTTP transport, etc). New users are now free to think more clearly about the actual harder problems (e.g. probability, AI, correlation, threat actor attribution) instead of digging through a functional mess (pun intended!). There are probably 100 ways to solve the former, but only a few agreed-upon ways to solve the latter (e.g. MaxMind for GEO).

I'd like to think I'm brilliant, but as I write this I've come to realize it's a pattern I've observed in the past. I stumbled upon it in the guts of various ZeroMQ projects, where the higher-level pattern describes a "message protocol" and the implementation of that "message" has the transport functions baked right into the object itself. Instead of having separate handlers outside of the message protocol, each message object carries its own send/recv set of functions.
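
Roughly the shape of it, assuming pyzmq (the framing below is my own simplification, not any particular project's wire protocol):

```python
import json

import zmq


class Message:
    """A message object that knows how to move itself over a socket."""

    def __init__(self, mtype, payload):
        self.mtype = mtype
        self.payload = payload

    def send(self, socket):
        # the message puts itself on the wire...
        socket.send_multipart([self.mtype.encode(),
                               json.dumps(self.payload).encode()])

    @classmethod
    def recv(cls, socket):
        # ...and pulls itself back off, no external handler required
        mtype, payload = socket.recv_multipart()
        return cls(mtype.decode(), json.loads(payload))


ctx = zmq.Context()
a = ctx.socket(zmq.PAIR)
b = ctx.socket(zmq.PAIR)
a.bind('inproc://demo')
b.connect('inproc://demo')

Message('indicator', {'fqdn': 'example.com'}).send(a)
print(Message.recv(b).payload)
```

The caller just writes `msg.send(sock)` and `Message.recv(sock)`; the framing details stay inside the object.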

I realize this is not new and feels a little like RPC, but it's an interesting way to think about a set of objects. It makes everything a bit more self-contained and it lets you abstract any "magic transport things" away from the end user, so they can simply focus on the important thing: the results. I don't need a bunch of extra code to set up a connection to https://csirtg.io or https://farsightsecurity.com or https://spamhaus.org; I just call the function and get a set of results in return. In the early stages the extra functions get a bit funky, but those can be more cleanly abstracted as new patterns emerge.
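
Something like this, as a hedged sketch (the endpoint, params, and token handling here are hypothetical, purely for illustration):

```python
import requests


class Indicator:
    def __init__(self, indicator, token=None):
        self.indicator = indicator
        self.token = token

    def csirtg(self):
        # hypothetical URL and params -- the point is the caller never
        # touches connection setup, headers, or error handling
        resp = requests.get(
            'https://csirtg.io/api/search',
            params={'q': self.indicator},
            headers={'Authorization': f'Token {self.token}'},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()
```

One call (`Indicator('example.com', token='...').csirtg()`), one set of results back.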

Another interesting aspect of this 'pattern' (indicator as the focal point) comes into view when you think about formatting an indicator. In some of my Ruby code (the things that power https://csirtg.io) I've started playing with the idea that formats themselves are just functions of the Indicator object. This means, again, that instead of passing the indicator object through a function that converts it to CSV or JSON or STIX or BRO or SNORT, those formats are just functions of the indicator itself.
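
The Ruby version lives in csirtg.io's guts; here's the same idea sketched in Python, with illustrative field names:

```python
import csv
import io
import json


class Indicator:
    def __init__(self, indicator, itype='fqdn'):
        self.indicator = indicator
        self.itype = itype

    def to_json(self):
        # the format is just a function of the indicator...
        return json.dumps(vars(self))

    def to_csv(self):
        # ...so no external converter needs to exist
        out = io.StringIO()
        csv.writer(out).writerow([self.indicator, self.itype])
        return out.getvalue().strip()
```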

This doesn't really change much code in and of itself; the complexity is probably similar (if not a bit higher, especially if you come from a functional programming stance). However, it does subtly change how you call the code when it comes to the last mile. It makes your code a bit more readable at the higher levels and even a bit more flexible too. You don't have to think too much about passing things THROUGH functions to get the results you want; rather, you let the functions pass through them.
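
A tiny, self-contained example of what that reads like at the last mile (the lookups are stand-ins; each method returns `self`, which is what lets the calls chain):

```python
class Indicator:
    def __init__(self, indicator):
        self.indicator = indicator
        self.meta = {}

    def geo(self):
        self.meta['cc'] = 'US'           # stand-in for a real GEO lookup
        return self                      # returning self lets calls chain

    def resolve(self):
        self.meta['ip'] = '192.0.2.1'    # stand-in for real DNS resolution
        return self

    def to_csv(self):
        return ','.join([self.indicator,
                         self.meta.get('ip', ''),
                         self.meta.get('cc', '')])


# reads left to right, no intermediate plumbing:
print(Indicator('example.com').geo().resolve().to_csv())
```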

Also, the less complex your code is, the easier it is to take advantage of the advanced features your language provides (e.g. Python generators). The more advanced features you're able to use, the faster your code is and the more efficient your framework is. CIFv1 and v2 started out as massive Perl monoliths that required TONS of resources. It was a functionally driven framework (e.g. written with lots of functions) rather than one built around indicators as objects. If you wanted to understand CIFv2 you had to spend a lot of time thinking through how indicators were passed around, not what was happening to them.
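
For contrast, this is the kind of thing generators buy you (a toy feed stands in for the real thing):

```python
def normalize(indicators):
    # a generator: yields one indicator at a time instead of building a
    # second multi-million element list in memory
    for i in indicators:
        yield i.strip().lower()


feed = ['Example.com\n', 'BAD.example.org\n']   # stand-in for a real feed
for i in normalize(feed):
    print(i)
```

Nothing is computed until something iterates, so memory stays flat no matter how big the feed is.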

When CIF was re-written in Python 2 I was able to take advantage of SOME of the native memory-saving operations, but not all. The only reason CIFv3 required significantly fewer resources was that we were able to re-use some Python generator logic (thanks Justin!). The problem was that v3 still had a TON of functions you had to follow to see what was happening to your indicator as it was processed. As we added more generators to that logic (to conserve memory when processing millions of indicators), the logic became harder and harder to follow.

The result: we weren't able to chain as many generators together as we'd have liked (e.g. re-using memory rather than leaking it). This means we're probably still over-using computational resources in places. While CIFv3 does run on significantly fewer resources, v4 is geared to reduce that further. Why? The average amount of data a v4 instance should handle is orders of magnitude higher than its predecessors', and the amount of data our users need to process is going up exponentially. The competitive edge comes from performance, not user experience.
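
The kind of chaining I'm talking about, in miniature (the stage names and stand-in feed are mine):

```python
def read(feed):
    for line in feed:
        yield line.strip()


def dedup(indicators):
    seen = set()
    for i in indicators:
        if i not in seen:
            seen.add(i)
            yield i


def tag(indicators):
    for i in indicators:
        yield {'indicator': i, 'tags': ['suspicious']}


feed = ['example.com', 'example.com', 'bad.example.org']
# each stage hands one item to the next; no stage buffers the whole feed
for record in tag(dedup(read(feed))):
    print(record)
```

Every extra stage you can bolt on like this is one more copy of the feed you never have to hold in memory.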

Did you learn something new?