[jacobian.org] risk tools: two-scenario threat modeling; comfort scores
Two quick personal updates:
- I'm going to be traveling for the next month, so don't expect to see anything from me until mid-September at the earliest. And if you send me email, expect it to take a minute for me to get back to you.
- I'll be at DjangoCon US in Chicago in early September; come say "hi" if you're there!
In this newsletter:
- Two Scenario Threat Modeling (August 8)
Plus 1 bookmark.
New Posts
📝 Comfort Scores: A risk mitigation tool for pre-trip briefings (August 4)
Here's a tool I find useful at pre-trip briefings that can help the group assess its ability to tackle some tricky objective. It's especially good for groups with mixed skill levels where people aren't necessarily familiar with everyone else's skill set. I've used this in contexts like group kayak trips, group canyoneering trips, ambitious adventure runs, and so forth.
You do need a small baseline of trust and psychological safety with the group, since it requires being a little bit vulnerable about your comfort level. So it's probably best in situations with shared context, e.g. kayak or climbing clubs, or among groups with similar training backgrounds, e.g. a group of guides with similar certifications.
It works like this: everyone individually thinks about the trip and the group's objectives, and states their comfort with the trip as planned, giving a score of 1, 2, or 3.
- 3 means: "I'm super comfortable. This is well within my skill -- so much so that I'll be able to help others."
At a "3", you're not just getting through the day; you've got spare mental and physical capacity to help out others. This might be a Class IV boater on a Class III river they've run a bunch of times, or an experienced backpacker out for a short overnighter in easy terrain, etc.
- 2 means: "I can do this and look out for myself, but it will require focus."
At a "2", you're at your comfort level -- not above it, not overreaching -- but you're doing something that'll challenge you and require your full attention. You will not be available to help others.
- 1 means: "I'm uncomfortable or nervous, and I'll need some help getting through the day."
A "1" doesn't mean you're over your head, or that you shouldn't go on the trip; it means that there's a reason you're doing this in a group! You'll be stretching some, and may need a hand to have an optimal day. For example, I'm going on a canyoneering trip next week where I'm going to be a "1": it's a canyon I've done once before, and I know there's a very tricky move on the final rappel sequence. I needed an assist there last time, and I don't think I've improved enough to attempt that bit unassisted.
The group then totals up the scores. Your total score should be around double the size of your team -- e.g. if you're a team of 6, you'd like a total score of 12 or more. Anything above that mark indicates you've got more help available than you have need for help -- a good sign!
If the total score is below that mark, it's a sign you should stop and think carefully, and even consider cancelling the trip or changing objectives. An average below "2" indicates that you have more need for help than you have help available, and that might be dangerous. At the very least, you should carefully discuss how you can increase your safety margin.
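The totaling rule above is simple enough to sketch in a few lines of code. This is just my illustration of the arithmetic -- the function name and "go"/"reassess" labels are my own, not part of the exercise as described:

```python
def comfort_check(scores):
    """Given each member's comfort score (1, 2, or 3), compare the
    total against the rule of thumb: total >= 2 * team size."""
    if not all(s in (1, 2, 3) for s in scores):
        raise ValueError("scores must be 1, 2, or 3")
    total = sum(scores)
    threshold = 2 * len(scores)  # an average of "2" across the team
    if total >= threshold:
        return "go"        # more help available than need for help
    return "reassess"      # stop, discuss, maybe change objectives

# A team of six totaling 12 is right at the mark:
print(comfort_check([3, 3, 2, 2, 1, 1]))  # -> go
print(comfort_check([2, 2, 1, 1, 1]))     # -> reassess (7 < 10)
```

The comparison against `2 * len(scores)` is just the "around double the size of your team" heuristic; the judgment calls around a borderline total still belong to the group, not the number.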
This exercise can also prompt discussion with the "1"s about what and where they might need a hand. Often I've seen "3"s and "1"s connect during/after this exercise and make some specific plans for where/when/how to work together to increase safety/comfort.
I think this is probably most useful in wilderness risk mitigation, but you can probably apply it to other disciplines. If you do, I'd love to hear about it!
📝 Two Scenario Threat Modeling (August 8)
A trap that many people fall into when attempting threat modeling or risk planning is a fear of being incomplete that leads them to not even try. People think, "there are so many possible things that could go wrong, so many potential risks. It's going to be such a huge effort to enumerate all possible scenarios, and I don't have time, so I guess I can't do threat modeling." That is, threat modeling seems so big, so hairy, that people believe it's too complex to tackle.
This just isn't true! Some planning is always better than no planning. In fact, you can get a surprising amount of value out of a very simple and fast technique: imagine a couple of scenarios -- just two! -- and game out what you could do to mitigate them.
Scenario-based threat modeling
What do I mean by "scenario"? There are a variety of techniques for doing threat modeling: systems-oriented (diagram a system and consider threats at each node in the system); data-oriented (map all the data in your system and consider threats to each bit of it); attacker-oriented (enumerate the possible "bad guys"), and so forth.
One of the very simplest, though, is to tell stories. Make up something that might go wrong, imagine how it might happen, and think about how we might mitigate the risk. These can be stories we make up from whole cloth, "ripped from the headlines" scenarios we've seen happen elsewhere, or (most commonly) scenarios "based on true events" that mix some reality with some imagination.
Scenario-based planning tends to work really well because human beings are great at telling and remembering stories. We think in narratives, and stories prime our imagination. It's easy for us to keep our risk scenarios in mind — far easier than remembering some complex threat model or risk plan or attack tree.
You only need a few scenarios to generate a ton of insight
With other forms of threat modeling (especially systems-oriented and data-oriented techniques) incompleteness can be a big problem. For example, a systems diagram that omits the build server bridging testing and production leaves off a critical node, and leads to faulty insights about the security of your network perimeter. But with scenario-based planning, incompleteness is sort of inherent — there's a near-infinity of possible futures — and it only takes a very small number of scenarios to yield tons of insight.
It's a similar dynamic to usability testing. In that field, Jakob Nielsen famously found that very small usability studies -- five users -- offer similar results to much larger studies:
The most striking truth [...] is that zero users give zero insights.
As soon as you collect data from a single test user, your insights shoot up and you have already learned almost a third of all there is to know about the usability of the design. The difference between zero and even a little bit of data is astounding.
[...]
As you add more and more users, you learn less and less because you will keep seeing the same things again and again. [...]
After the fifth user, you are wasting your time by observing the same findings repeatedly but not learning much new.
In a very similar fashion, with scenario-based planning, the very first scenario you consider yields a surprising amount of information, and after that there are rapidly diminishing returns.
I'm not aware of any sort of formal study here, so I can't offer a number as specific as Nielsen's "five". But I can tell you, from a ton of personal experience, that considering just two scenarios -- as long as they're the right scenarios -- can yield nearly as much actionable insight as a much more in-depth, complex, formal threat model exercise.
Two-scenario threat modeling
Thus, we get to what I've been calling two-scenario threat modeling. In this exercise, you come up with two specific scenarios to guide your risk mitigation conversation:
- Worst-case scenario.
This is the big, existential threat; the scary thing that keeps you up at night. An avalanche on a ski trip; a bank losing customer funds; a total data breach; a medical device being compromised to harm a patient; and so on.
Usually this is fairly easy to imagine: most situations have a couple-three "really bad things" that everyone's already thinking about. Don't waste time trying to decide which of a few options is "worst": this doesn't have to be the worst-case scenario, it can simply be a worst-case scenario.
This scenario can be pretty unlikely — though, make sure it's at least reasonably possible — as long as it has very high impact.
- Most-likely scenario with tangible impact.
This one's a bit hard to describe. It's not the most likely scenario, since these are often boring. E.g., the most likely problem for a web app is probably some sort of minor crash without data loss which just doesn't have a lot of "meat" for discussing mitigation.
Instead, look for a scenario that ranks somewhat high on both impact and likelihood. Something that's fairly likely to happen, and that would really hurt if it did. Err on the side of higher likelihood: you want a scenario that's as likely to happen as possible, while still having at least some impact.
Some examples: a partial data breach; an attacker is able to escalate privileges, not to a full admin but to some sort of partially-privileged role like customer support; mild hypothermia; getting lost; a breach of embarrassing (but not existentially-threatening) internal documents; etc.
This can be done individually, but I recommend making this a group exercise. Brainstorming/imagination exercises almost always turn out better when done in a group context.
Be super-specific about both scenarios. Remember: tell stories. The examples I gave above are just starting points; a full scenario should include a detailed narrative. For example, instead of "partial data breach", go with something like:
We accidentally deploy a testing version of our web app with debug mode on to a public domain. An attacker discovers this testing app, and is able to generate a crash which, because of debug mode, reveals an AWS credential. This credential allows read-only access to some of our S3 buckets, one of which contains a partial backup of our user database. The attacker downloads this backup before we discover the error and take down the app. This backup contains about 30% of our user data, including names, emails, and zip codes; it doesn't contain passwords, hashed or otherwise.
Write down both scenarios in detail. There are then any number of things you could do from here, from informal (e.g. brainstorm mitigations and come up with some potential projects to reduce risk) to formal (e.g. construct formal attack trees for each scenario). Perhaps I'll write about some of those techniques in the future -- let me know if that sounds interesting. The very easiest, however, is simply to circulate these scenarios widely, and encourage people to keep them front-of-mind during their work. Once again, this leans into our propensity for stories; people usually find it pretty easy to remember a couple of scenarios, and avoid making decisions that increase risks in those areas.
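If you like keeping artifacts like this in a repo alongside your other planning docs, one hypothetical way to structure the write-up is a tiny record per scenario. The field names here are entirely my own, not part of the exercise as described:

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    kind: str        # "worst-case" or "most-likely with tangible impact"
    narrative: str   # the detailed story, told as specifically as possible
    mitigations: list = field(default_factory=list)  # brainstormed follow-ups

# Recording the partial-data-breach story from above:
breach = Scenario(
    kind="most-likely with tangible impact",
    narrative="Debug-mode test app on a public domain leaks an AWS "
              "credential; attacker downloads a partial user-db backup.",
    mitigations=["disable debug mode outside dev",
                 "scope S3 credentials to least privilege"],
)
print(breach.kind)  # -> most-likely with tangible impact
```

The structure matters much less than the narrative field: a scenario that's circulated and remembered is doing its job whether or not it's in a machine-readable form.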
So there you go: a threat modeling / risk planning exercise that doesn't take much time. Give yourself an hour for a brainstorm meeting, and a few more hours to write something up -- boom, you have a useful threat model in under a day.
Elsewhere...
- 🔗 This World of Ours (James Mickens) — Speaking of threat modeling, I was reminded of this classic paper in the threat modeling literature canon. Hilarious and also insightful — worth a read if you haven't seen it before.