the grugq's newsletter

December 11, 2025

Bad OPSEC Considered Harmful

I recently became aware of a GitHub repository collecting “Bad OPSEC” cases—instances where people were caught due to mistakes that allowed investigators to identify them.1 It’s a decent resource for the student of operational security, but one glaring issue stood out. The first line of the README, and presumably the rationale for the repository itself:

“The best way to learn about opsec is to learn how people fail.”

I believe this is categorically false. Learning from failures is probably the second worst way to learn OPSEC. The worst way, of course, is to get caught yourself.

Here’s the problem: OPSEC is the result of applied principles. The best way to learn it is to learn those principles and how to apply them—not simply to read examples of failures. What we have in this repository are case studies without the studies.

The Problem with Failure-Based Learning

There are two fundamental problems with learning OPSEC from collections of failures.

First, the sample is biased. You only see the people who got caught. The ones who succeeded—who maintained good OPSEC and were never identified—don’t appear in any repository. For every case study in a collection like this, there are doubtless operators who made different choices and were never caught, whether through better OPSEC or simple luck. You can’t learn what works by studying only what failed.

Second, extracting the right lessons requires a framework you probably don’t have yet. Without understanding the underlying principles, a reader might look at the Harvard case (see below) and conclude “don’t use Tor” or “use café wifi”—superficial pattern-matching that misses the actual failures of cover, concealment, and compartmentation. The case studies become a list of things not to do, rather than a foundation for reasoning about novel situations.

Reading raw case data leaves the beginning student with nothing but a list of past failures. Unless they manage to reverse-engineer the underlying principles violated in each case, they’ll be no better off than before. Case studies are valuable learning aids, but only when presented within a structured framework that encourages critical thinking about the theory behind each failure.

An Example: The Harvard Bomb Threat

Take the case of the Harvard student who called in a bomb threat. The repository presents it as a single bullet point with two links—one to a Slate article and one to the criminal affidavit. There’s no supplementary material explaining the security problems the student encountered or how he failed to address them. In 2013 I wrote about this case, and although I’d change a fair amount of that old post today, the underlying analysis remains sound.

To show how knowing the rules transforms examples into actual case studies, let’s examine the Harvard bomb threat through a proper analytical lens.

Background: The Three Cs

The three Cs of OPSEC are cover, concealment, and compartmentation.

  • Cover is the ostensibly legitimate reason for an activity—a plausible explanation that satisfies casual or even serious scrutiny.
  • Concealment is hiding activity from the adversary (in this case, police and university officials).
  • Compartmentation is isolating the components of an operation from each other, so that compromise of one part doesn’t compromise any others.

The Case

In December 2013, a Harvard student sent bomb threats to university buildings, hoping to avoid a final exam. He used Tor to anonymize his connection and sent the threats via a disposable Guerrilla Mail email address. Despite these precautions, the FBI identified him within days.

Where He Violated the Principles

The student’s operational plan had several critical flaws. He believed that Tor would hide his IP address and that a temporary email service would mask his identity. Technically true, but, as we’ll see, completely irrelevant.

  • He posted a panicked message to a class mailing list hours before the bomb threats, establishing a clear motive.
  • Guerrilla Mail did not hide the originating IP address—it showed the connection came from the Tor network (see the sketch after this list).
  • He connected to Tor using his university network connection, which required authentication with his student ID.
  • He relied exclusively on fragile technical solutions for anonymity.
  • He had no plausible reason for using these technical solutions—which themselves stood out.
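
A quick aside on the Guerrilla Mail point: establishing that a connection “came from the Tor network” takes no special tooling, because Tor exit addresses are public. Below is a minimal sketch, assuming the Tor Project’s bulk exit list endpoint; for a historical case like this one, the ExoneraTor service answers the same question for past dates. The IP shown is a documentation placeholder, not one from the affidavit.

  import urllib.request

  TOR_EXIT_LIST = "https://check.torproject.org/torbulkexitlist"

  def is_tor_exit(ip: str) -> bool:
      # Fetch the current list of Tor exit addresses and test membership.
      with urllib.request.urlopen(TOR_EXIT_LIST) as resp:
          exit_ips = set(resp.read().decode().split())
      return ip in exit_ips

  print(is_tor_exit("203.0.113.7"))  # placeholder address, not from the case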

Let’s examine these failures through the lens of the Three Cs.

Concealment failure. His attempt at technological concealment failed entirely. The link from the email to the Tor network was transparent, and the correlation between Tor usage and threat timing was trivial to establish. His concealment was neither deep nor durable, and when it failed, the only barrier in the entire operation was gone.

Cover failure. He had no “cover for action”—no plausible explanation for why he used Tor exactly once, at 4am, at the same time the bomb threats were sent. This mattered only because his concealment/compartmentation had already failed. But that’s precisely the point: OPSEC layers exist because individual measures fail.

Compartmentation failure. He failed to compartment the threat-sending persona from his student identity. He attempted technological concealment-and-compartmentation-in-one using Tor and a disposable email, but technological compartments are extremely brittle—when they fail, they fail catastrophically. The network logs showed his university account was literally the only one using Tor at the exact time the emails were sent. Stronger compartmentation would have meant a physical measure rather than a purely technical one: going off campus, for example, and using a public connection with no link to his identity.
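
To make that brittleness concrete, here is a toy sketch of the correlation that undid him. Assume the investigators hold authenticated campus network logs, where every flow maps to a registered account, plus a list of known Tor relay addresses from the public consensus. The file names, log format, and field names are invented for illustration; the affidavit describes the result, not the tooling.

  from datetime import datetime, timedelta
  import csv

  def tor_users_near(threat_time: datetime,
                     flows_csv: str = "campus_flows.csv",   # columns: timestamp,user,dest_ip
                     relay_file: str = "tor_relays.txt",    # one relay address per line
                     window: timedelta = timedelta(hours=1)) -> set[str]:
      # Which authenticated accounts touched a Tor relay near the threat time?
      relays = set(open(relay_file).read().split())
      suspects = set()
      with open(flows_csv, newline="") as f:
          for row in csv.DictReader(f):
              ts = datetime.fromisoformat(row["timestamp"])
              if abs(ts - threat_time) <= window and row["dest_ip"] in relays:
                  suspects.add(row["user"])
      return suspects

That the whole “investigation” fits in a dozen lines is the point: an authenticated network plus one distinguishing behavior is not a compartment, it is a signature.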

The Lesson

With a framework in hand, the case study becomes genuinely instructive. Without one, a reader is left with the superficial conclusions noted earlier (“don’t use Tor,” “use café wifi”) and misses the underlying principles entirely. The real lesson is about layered defenses, robust compartmentation, the brittleness of purely technological solutions, and the necessity of good cover.

A Second Example: The Doxbin Survivor

The Harvard case shows what happens when compartmentation fails. For contrast, consider what happens when it holds.

In November 2014, Operation Onymous—a joint FBI/Europol action—seized servers hosting dozens of Tor hidden services. Among them was Doxbin, a pastebin for publishing personal information, run by an operator known as “nachash” (נחש, Hebrew for “snake”). His server was seized. His logs were in law enforcement hands. In short, his technological concealment had failed completely.

And yet nachash was never arrested.

The reason, as he later explained on the tor-dev mailing list, was financial compartmentation. When he registered for hosting, he used fake information. When he paid, he used methods that didn’t trace back to his real identity. The server seizure gave investigators access to everything on the server—but the trail from server to operator led nowhere.

As nachash put it in his subsequent guide, “So, You Want To Be a Darknet Drug Lord…”: “If your box gets seized and your hosting company coughs up the info… your hosting information needs to lead to a dead end. All signs in Operation Onymous point to operators being IDed because they used real info to register for hosting service.”

Good financial compartmentation can cover a multitude of sins.

The Lesson

This case inverts the Harvard example. The Harvard student’s technical measures (Tor, disposable email) failed, and he had no other layers to fall back on. Nachash’s technical measures also failed—his server was physically seized—but his compartmentation at the financial layer held, and that single intact boundary was enough.

The principle is the same in both cases: OPSEC is layered defense. Any single layer can fail. The question is whether you’ve built enough independent layers that a failure in one doesn’t cascade into total compromise.
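
A toy calculation shows why the independence matters. The numbers below are invented; what matters is the shape of the arithmetic, and that the multiplication is only honest when the layers genuinely cannot drag each other down.

  def p_total_compromise(layer_failure_probs: list[float]) -> float:
      # Total compromise requires every layer to fail. With independent
      # layers the failure probabilities multiply; correlated layers (one
      # login tying everything together, as at Harvard) make this a fiction.
      p = 1.0
      for q in layer_failure_probs:
          p *= q
      return p

  print(p_total_compromise([0.2]))            # one layer:    0.2
  print(p_total_compromise([0.2, 0.2, 0.2]))  # three layers: 0.008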

From Examples to Understanding

You need to know the rules to see how they were broken. This holds for every case.

The Bad OPSEC collection provides real value by aggregating cases in one place. Researchers, educators, and the simply curious benefit from having these examples gathered and linked to primary sources. The problem isn’t the collection itself—it’s the framing that suggests the collection alone is sufficient for learning.

A list of failures without a framework is like a medical textbook that only shows photographs of diseases without explaining pathology. You might learn to recognize a specific rash, but you won’t understand why it appeared, what it indicates, or how to reason about symptoms you haven’t seen before. OPSEC requires the same kind of principled understanding.

For those who want to develop that understanding, the path forward involves pairing case studies with materials that explain the underlying rules of behavior:

  • The Moscow Rules—tradecraft principles developed by CIA officers operating under hostile surveillance during the Cold War. They’re phrased for physical operations, but the underlying logic transfers.
  • Nachash’s “So, You Want To Be a Darknet Drug Lord…”—practical guidance from someone who survived having his server seized, written with the authority of lived experience.
  • Rules of Clandestine Operation—a collection of rules from multiple sources, clustered to show how different communities phrase the same fundamental concepts. Drug dealers and terrorists have different operational requirements, but the basic principles of good security are remarkably consistent across domains.

The difference between reading failures and understanding OPSEC is the difference between knowing that someone got caught and knowing why they were always going to get caught. The latter is what keeps you from making novel mistakes and becoming a case study in Bad OPSEC.



  1. Some of these cases may involve parallel construction—where the public narrative of how someone was caught differs from the actual investigative path. Court affidavits show real evidence, but that evidence may have been found retroactively, after tips from sources that can’t be disclosed. This doesn’t invalidate the lessons, but it’s worth remembering that “how they say he was caught” and “how he was actually caught” aren’t always the same story.
