Hunting for Threats Like a Quant.

The overconfidence effect is a well-established bias in which a person’s subjective confidence in his or her judgements is reliably greater than the objective accuracy of those judgements, especially when confidence is relatively high.
— https://en.wikipedia.org/wiki/Overconfidence_effect

Confidence has enabled me to get away with a lot over the years.

There are plenty of times in my life I've taken on more risk than I should have. There are just as many times (if not more) that I've totally blown out because of [over]confidence. In trading we call that "trading too big", meaning the risk is unquantified and there's no statistical evidence to back up the trade. I put all my money into $AAPL because I "know" it's going up. 15 minutes later I'm down a couple of zeros (not A zero but a COUPLE of zeros). I rationalize the result to myself: "it's just a hiccup, these things happen, nobody else understands, THEY SOLD 10-MILLION IPHONES THIS WEEKEND!".

Eventually I puke out the position (figuratively, and in this instance, literally) and move on to the next stupid, ego-driven decision. This doesn't mean confidence isn't useful; without confidence, how would anyone take on risk? It's about learning to normalize its utility: recognizing what IT is and why you're leveraging it. Is it the best measuring stick you have right now? Do you have enough knowledge about the problem to actually quantify it? Without confidence we'd probably never get off the couch. What fun is your life if you're not confident in your ability to execute on the unknown?

Naivete.

When we started CIF (~2007?), confidence (as naive as it was) was all we had. We used it to measure which threat intel made it into our devices and which was treated as "context data" (eg: for searches). We dubbed it "the Wes coefficient", meaning "on that day, when Wes wrote the doc, how did he subjectively feel about this threat data?". Then a number between 1 and 100 was pulled magically out of a hat and applied to the data-set. It stayed with that data-set until someone complained; in most cases, for years.

We found that there were a handful of data authors who intuitively understood the context of the data they were sharing, and then there was everyone else, who had no clue. HERE'S AN IP, NO TIMESTAMP, NO DESCRIPTION, JUST BLOCK IT! It boggles the mind how many $REALLY_LARGE_CORPORATIONS still do this to this day… and people PAY them for this!

Typically those scores were either 65, which was 'just better than a coin toss', 75 or 85. 95 was reserved for "things a human we had a trusted relationship with submitted manually through a web interface". Why 95 and not 100? Who knows. Mostly, no feed ever received a 100: that "thing" they submitted will decay almost the instant you put it in, and 90 felt too low. We didn't know anything; we (I?) were just making it up with what felt right, hoping that over time we'd figure out a sound mathematical solution to the problem.
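To make that concrete, here's roughly what the scheme boiled down to. A minimal sketch, assuming made-up feed categories; the magic numbers are the ones above, but none of this is CIF's actual code:

```python
# A minimal sketch of the early "Wes coefficient": purely illustrative,
# not CIF's actual code. The feed categories are assumptions; the magic
# numbers (65/75/85/95) are the ones described above.
WES_COEFFICIENT = {
    'random_feed':      65,  # just better than a coin toss
    'decent_feed':      75,
    'knows_their_data': 85,
    'trusted_human_ui': 95,  # manual submission from a trusted human
}

def confidence_for(source: str) -> int:
    """Pull the number out of the hat; nothing ever earns a 100."""
    return WES_COEFFICIENT.get(source, 65)
```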

Then, 10 years went by.

I am astonished by a few things:

  1. This problem is still rarely taken into account when describing threats.

  2. Humans understand scales like 1-4, 1-10 or A-F, but show them 1-100 and their eyes glaze over. In fact, "thumbs up" or "thumbs down" is really all anyone cares about (notice the recent change in Netflix ratings?). "Should I care about this? Yes or no?" Anything greater than 50 is a yes, anything less is a no.

  3. There's a grey area between 32 and 68 (the belly of the bell curve), and a hardish yes/no on either side of it. Most people hate the grey area within bell curves; they want black and white. The faster you can get to 32 or 68, the more scalable and predictable your results become (see the sketch after this list).

  4. Because of this, humans will pretty much take any feed, toss it into a device and deal with the consequences if/when they appear.

  5. Humans will take a 65% confidence feed, throw it into a DNS sinkhole and then complain to you when a Google IP shows up in the feed. Even when, statistically speaking, that 65% feed over time ends up being closer to 75-85% accurate (we like to low-ball things).

  6. Whitelists of the top 10% of things (IP address space, popular domains, etc.) WILL SAVE YOU FROM YOUR EGO 99% OF THE TIME.

  7. 99% of the time, humans do not understand and/or make up statistics 100% of the time.

  8. 95% of the time, the magical numbers JUST WORKED (see #5 and #6 for clarification).
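Here's the sketch promised in #3: collapsing a 1-100 score into the yes/no/grey buckets, with the whitelist from #6 getting veto power. The function, names and sample values are hypothetical:

```python
# Illustrative only: bucketing a 1-100 confidence score. The thresholds
# (32/68) come from this post; everything else here is hypothetical.
def triage(confidence: int, observable: str, whitelist: set) -> str:
    if observable in whitelist:
        return 'ignore'   # the whitelist saves you from your ego (#6)
    if confidence >= 68:
        return 'block'    # hardish yes
    if confidence <= 32:
        return 'ignore'   # hardish no
    return 'review'       # the grey area humans hate

whitelist = {'8.8.8.8'}  # eg: a top-10% whitelist entry (Google DNS)
print(triage(65, '8.8.8.8', whitelist))      # -> ignore (see #5)
print(triage(85, '203.0.113.9', whitelist))  # -> block
```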

3000 Trades a Year.


As I find myself making well over 3,000 trades a year with my own capital, I am becoming more and more of a statistics freak (and losing lots of family and friends in the process, see #2 in the list above). Not in the sense that I won't make a trade if the math doesn't work out, but in a way that makes the decision process more mechanical and thus more predictable. Over time I have a trade success rate of well over 90% (that's not profit/loss, but trades that incurred $0.01 or more in profit, all fees being equal). Being mechanical and making decisions based on statistics rather than confidence (and actively trading lots of little things instead of passively one big thing) is the key driver of that.

TRANSLATION: lots of smaller feeds that are more predictable (eg: open, transparent and easily tested) will get you a more consistent and predictable (reads: less random) outcome. Think about it: many smaller viewpoints and more transparency equal more predictability. You could spend all your hard-earned dollars on one large trade (eg: a large commercial feed) and gain a false sense of confidence from it; you likely paid a LOT OF MONEY for that one large feed, thus you are confident in its perceived value. Nobody ever got fired for buying IBM, right?
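A toy simulation makes the point. All of the numbers below are made up; the only claim is that averaging several independent, noisy signals swings around far less than trusting a single one:

```python
# Toy check of the "many smaller feeds" claim: the averaged signal has a
# much tighter spread than any single feed. Standard library only.
import random

random.seed(42)

def observed_accuracy(true_rate=0.75, noise=0.15):
    """One feed's accuracy on a given day: the true rate, plus noise."""
    return true_rate + random.uniform(-noise, noise)

days = 365
one_big_feed = [observed_accuracy() for _ in range(days)]
ten_small_feeds = [sum(observed_accuracy() for _ in range(10)) / 10
                   for _ in range(days)]

spread = lambda xs: max(xs) - min(xs)
print(f"one feed, daily spread:     {spread(one_big_feed):.3f}")
print(f"ten feeds averaged, spread: {spread(ten_small_feeds):.3f}")
```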

Your results feel predictable; that steady stream of glowing alerts from your IDS logs makes you feel all warm and fuzzy on the inside. But then it hits: that one breach your "highly confident", "highly valuable", "highly predictable" feed misses. Just like the downturn of a single stock that takes your entire portfolio with it.

Overconfidence is what got us this far.


Over the years, confidence is really what made CIF what it is: a fast and [mostly] stable way of getting threat intel into your network so your network can DO something about it. Combined with a solid whitelist, the "65%" feeds acted more like "85%" feeds and the "85%" feeds were damn near 99%. However, there are a few problems with this which I'm trying to solve in CIFv4:

  1. The math isn't reproducible, which means it cannot scale exponentially as the problem-set does.

  2. If you simply keep making up numbers, humans will be hesitant to rely on them, which means less adoption, which means less intel makes it into the Internet fabric.

  3. Lots of teams, companies and organizations have some form of "reputation score", but they're very proprietary and black-box, and in my opinion that doesn't scale well.

  4. Humans don't necessarily care WHAT the math says, just as long as they can verify predictable results on a somewhat normalized scale.

  5. I don't care what insane price some bullshit analyst THINKS $AAPL is going to, nor do I know all the ins and outs of the Black-Scholes model, but over time I can verify that 84% of the time, based on that model, prices fall where the distribution says they should (see the quick check after this list). For pulling the trigger on a trade, that's good enough for me.

  6. Models tend to scale much better than humans, when implemented properly.

  7. Models TEACH humans how to think about a problem; confidence does not.
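On that 84% figure in #5: my read (an assumption, the post doesn't spell it out) is that it's the classic one-standard-deviation probability: under a normal distribution, a price finishes below a +1 sigma move about 84% of the time. It's easy to verify:

```python
# Quick check of the ~84% figure: Phi(1), the standard normal CDF at
# +1 sigma, is about 0.8413. Standard library only.
from math import erf, sqrt

def norm_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

print(f"P(move < +1 sigma) = {norm_cdf(1.0):.4f}")  # ~0.8413 -> ~84%
```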

It doesn't mean confidence is completely going away in v4 (at least, I don't think?). Confidence still has its place in various forms of estimation, and when used properly it's a decent "gut check" to help verify your probabilities. "Hey, this author usually has great data, why is the model counter-suggesting all of a sudden?" Is it a problem with the model? Did the human find something the model wasn't aware of? Is the human drunk?

Traditional threat hunting is useful.

We need threat hunters to focus on figuring out what the other humans are doing, to protect OUR humans from the BAD ones. We NEED that data to help inform the models. However, with all the shine and glitz that's been going on in DC, NY and "the Valley" recently, TOO much focus has been spent on nice shiny UIs that entangle users and keep the data compartmentalized within their ecosystems. Sure, you can hunt for threats, but your data stays here, using our black-box magic, so we can do stuff with it and sell it in other places.

If we are to succeed at making YOUR Internet a better place, we need that information to federate out among our peers. We need each of our models to be predictably influenced by our friends to help protect ourselves against threats we do not yet know about. Those models need to be transparent in order for us to gain confidence in them.

While option pricing models are pretty static these days, threat intel models tend to be a little more fluid given the expanse of the software universe. This is where probabilistic models require some subjective influence to help define them. Make no mistake, it's the probabilistic feature of the model that enables it to scale.


Did you learn something new?