In an earlier post, I described how to deploy a threat prediction microservice. I dove into the mechanics of prototyping a REST API that spits out a simple answer: yes, no, or maybe. What was missing from that post was not only a production example, but one that enriches the experience (eg: uses better, continuously updated models, etc). It's one thing to spit out a simple answer; the reality, however, is that most people won't trust that number out of the gate. They want context.
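As a rough sketch of what "context" could mean, here's one way a bare yes/no/maybe verdict might be wrapped with supporting detail. All of the names here (enrich_prediction, the thresholds, the response fields) are illustrative assumptions, not the actual contract of the service from the earlier post:

```python
def to_verdict(probability: float) -> str:
    """Map a raw probability to the simple yes/no/maybe answer."""
    if probability >= 0.7:
        return "yes"
    if probability <= 0.3:
        return "no"
    return "maybe"

def enrich_prediction(probability: float, indicators: list[str],
                      model_version: str) -> dict:
    """Wrap the bare answer with the context a user needs to trust it."""
    return {
        "verdict": to_verdict(probability),
        "probability": round(probability, 2),   # the number behind the word
        "model_version": model_version,          # which model said so
        "top_indicators": indicators[:3],        # the "why" behind the number
    }
```

The point of the extra fields is exactly the trust problem described below: a caller who can see the probability, the model version, and a few contributing indicators has something to sanity-check, instead of a bare verdict to take on faith.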
Learning to Trust Statistics
When I first started trading derivatives, I didn't trust the probabilities built into option prices. I thought I needed more context than a simple "this option has a 70% chance of expiring out of the money". I thought I needed to look at the stock price, read the news and, worst of all, try to think. That's how we're trained in life: if we think about something long and hard, it MUST be a good decision.
Our 'well thought out decisions' in life should come out 'better than a coin flip', right? The problem is, without trust in probabilities, we'd never accomplish anything. Said differently: you're probably going to get paid this month for your work, so you don't have to worry about finding another job right now. We're probably not going to have rain tomorrow, so we can plan to go camping. I could spend all my time researching the weather, or I could trust my weather app. Sometimes it's wrong, but most of the time I can handle the outliers.
In the early stages of trading I definitely tried to gather context about the price of an option (eg: check the news, technicals, PE ratio..). It wasn't so I could make a BETTER choice than what literally TRILLIONS of dollars in the market was already telling me; it was so I could start trusting those probabilities. Similarly, when you think about something like the Super Bowl or the World Cup, do you trust your ability to make good judgments (go Bills!), or do you trust the betting line?
Learning to Automate Statistics
I'm not suggesting that traditional analytics platforms aren't useful, or that threat research itself is overrated. After all, something has to influence the models. It's extending that research into the realm of automation that's important. We not only have to enable the tools to act on the patterns we discover from these efforts, but also provide a [contextual] bridge between the two. That way, others who want to make use of the data can feel comfortable with it.
That simple context check also provides an important feedback loop, which usually enhances the model. The more eyes you have on a problem, the better the chances your model goes from a baseline of 68% to something like 84% or 93%. Your users are looking to scale this into their environments, and since scale is effectively leverage, blind trust in any model is as bad as [or worse than?] not using one at all.
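One minimal way to close that feedback loop is simply to record whether users agreed with each prediction and watch the running agreement rate. This is a hypothetical sketch, not the service's actual mechanism; the class and method names are mine:

```python
class FeedbackTracker:
    """Accumulate user confirm/reject feedback on model predictions."""

    def __init__(self) -> None:
        self.agree = 0
        self.total = 0

    def record(self, user_agrees: bool) -> None:
        """Log one piece of feedback from a user's context check."""
        self.total += 1
        if user_agrees:
            self.agree += 1

    def agreement_rate(self) -> float:
        # Observed accuracy according to the users themselves; a rising
        # rate is the signal that the model is climbing off its baseline.
        return self.agree / self.total if self.total else 0.0
```

Even something this crude gives you the measurement you need before claiming the model moved from 68% to 84%; without it, "the model got better" is just a feeling.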
A funny thing happens once that bridge has been established: people start to use it. If you give them enough context in a response, they usually know pretty quickly whether the response "feels right" or "feels off". Granted, this is an emotional response they can't quite articulate. Over time, however, as your model starts producing warm fuzzies in them, they'll get tired of repeating the same tasks over and over, and instead start reading the API doc so they can automate it.
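What that automation often looks like in practice is a thin wrapper that acts on the confident answers and escalates the rest. This is a sketch under my own assumptions about the response shape (a dict with a "verdict" key); in a real deployment the dict would come from an HTTP call to the service:

```python
def triage(prediction: dict) -> str:
    """Route a prediction: act on confident answers, escalate the rest."""
    verdict = prediction.get("verdict")
    if verdict == "yes":
        return "block"      # confident threat: act automatically
    if verdict == "no":
        return "allow"      # confident benign: act automatically
    return "escalate"       # "maybe" still gets human eyes
```

Notice the design choice: the "maybe" bucket is where the human context-checking keeps happening, which is also where the feedback that improves the model comes from.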
What kinds of things could you build with access to a threat prediction service?