NP-hard does not mean easy
Recently the internet resurfaced my 2017 article, "NP-hard does not mean hard". I wrote the article mainly to express the nuance that NP-hardness only models the worst case of a problem, not the average case under any particular distribution—i.e., the instances you happen to encounter in the real world. More specifically, being NP-hard means that a problem has sufficient expressive power to model arbitrary boolean logic. But you can't blame NP-hardness for why you're bad at Super Mario.
One commenter made a remark that I've seen repeated elsewhere in various forms: that NP-hard problems like SAT and traveling salesman are "basically solved," due to the quality of modern SAT solving techniques like conflict-driven clause learning (CDCL), modern integer linear programming (ILP) solvers like Gurobi, and, I guess, decent heuristics for the traveling salesman problem (I find the focus on TSP strange because I haven't yet heard of anyone who really cares about solving TSP in practice, compared to packing and scheduling problems, which are everywhere).
Based on my experience, both directly at Google and indirectly through research and interviews I've conducted for my next book, Practical Math for Programmers, this is far from the case. Though all these tools for attacking NP-hard problems are good, everyone who actually uses them is acutely aware of the underlying difficulty, and we can hardly call the problems themselves "solved."
I used ILP solvers at Google to solve massive optimization problems. Or rather, we tried to make them massive, packing as much as we could into a single model. But we quickly hit the limits of what the world's best solvers could handle in a decent time frame (say, 12 hours).
A big part of the difficulty in my work at Google involved numerical stability and small numbers. As you might know, an ILP solver minimizes or maximizes a linear objective (in my case, cost to Google) subject to a set of linear constraints. My constraints collectively expressed many of the constraints of building and managing computers in datacenters. Originally we hoped to build one massive ILP to solve it all, but that just wasn't realistic.

For one, the decisions that needed to be made ranged in scale from the cost of individual memory sticks to the global power usage of all of Google's datacenters. These numbers differ by many, many orders of magnitude. And internally, an ILP solver decides where to search next by inspecting various ratios of constants from the problem definition. When those ratios get smaller than 10^(-16), floating point inaccuracy creeps in and makes the solver flounder by exploring the wrong parts of the search tree.
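To make the scale-mixing concrete, here's a toy sketch of a mixed-scale ILP. It is not the Google model; the packages, variable names, and dollar figures are all invented, and it uses the open-source PuLP modeling library with its bundled CBC solver rather than any internal stack.

```python
# Toy sketch: a single ILP mixing decisions whose costs differ by many orders
# of magnitude. Requires `pip install pulp`. All numbers are made up.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, PULP_CBC_CMD, value

prob = LpProblem("toy_capacity_plan", LpMinimize)

# Small-scale decision: how many memory sticks to buy per rack (~$80 each).
sticks = [LpVariable(f"sticks_rack_{r}", lowBound=0, cat="Integer") for r in range(3)]

# Large-scale decision: whether to provision an extra megawatt of datacenter
# power (amortized cost on the order of $10M).
extra_megawatt = LpVariable("extra_megawatt", cat="Binary")

# The objective mixes ~1e1-scale and ~1e7-scale coefficients. In a model that
# spans memory sticks through global power, the spread is far wider, and the
# solver's internal coefficient ratios approach machine precision (~1e-16),
# which is where floating point error starts steering the search badly.
prob += lpSum(80 * s for s in sticks) + 10_000_000 * extra_megawatt

# Each rack needs at least 24 sticks to meet its memory demand.
for s in sticks:
    prob += s >= 24

# If total memory exceeds the base power budget, the extra megawatt is forced.
# (Constants chosen only to keep the toy feasible.)
prob += lpSum(sticks) <= 72 + 1000 * extra_megawatt

prob.solve(PULP_CBC_CMD(msg=False))
print("sticks per rack:", [int(value(s)) for s in sticks])
print("buy extra megawatt?", bool(value(extra_megawatt)))
```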
Beyond that, ILP solvers do not always return optimal solutions. Instead they ask the user to configure an optimality gap, and the solver stops as soon as its best solution is provably within that gap of optimal. In particular, with a poor choice of units, that optimality gap can hide all the "real" hardness in your problem. Or worse, it can cause the solver to sacrifice the quality of many small decisions, the sum of which is small compared to the major decisions being made. That seems like an acceptable trade-off, until the team responsible for ensuring the small decisions are sensible comes yelling that the solver is making stupid decisions, and they're right.
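Here's a back-of-the-envelope illustration of how that plays out, with invented numbers; the gap formula below is the standard relative MIP gap for a minimization problem, not any particular solver's exact definition.

```python
# How a relative optimality gap can swallow every "small" decision in a
# mixed-scale model. Numbers are invented for illustration.

def relative_gap(incumbent: float, best_bound: float) -> float:
    """Solvers typically stop once (incumbent - bound) / incumbent falls
    below the configured gap, for a minimization problem."""
    return (incumbent - best_bound) / abs(incumbent)

total_cost_incumbent = 1.002e9   # best solution found so far, in dollars
best_lower_bound     = 0.997e9   # solver's proof of how low the cost could go
gap_threshold        = 0.01      # a typical-looking 1% gap setting

print(f"gap = {relative_gap(total_cost_incumbent, best_lower_bound):.4%}")
# ~0.50%, under the 1% threshold, so the solver declares victory and stops.

# But the slack the gap tolerates is about $5M in absolute terms:
print(f"slack = ${total_cost_incumbent - best_lower_bound:,.0f}")

# If all the memory-stick-sized decisions together only amount to ~$2M, the
# solver can get every one of them wrong and still be "within the gap."
small_decisions_total = 2e6
print(small_decisions_total < total_cost_incumbent - best_lower_bound)  # True
```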
Incrementally better solvers or a better model won't help you here. It's the core NP-hardness that matters: closing that remaining 0.01% of the gap is where the exponential growth comes in, because proving a given solution is optimal requires the solver to raise its lower bound on the optimal solution, and in that last mile the bound-raising is effectively a brute force search.
The way to deal with this is to decompose a massive problem into subproblems. Solve the models in order, or in parallel, or in the topological-sort order of a dependency graph. The decomposition naturally sacrifices global optimality so the damn solver can finish in time. In many cases that makes sense, since there is a natural decomposition: you can plan the large-scale details of a datacenter building separately, and that shouldn't have too large an impact on any individual server rack. But still, some decisions that must be made at scale and in aggregate prevent more optimal choices from being made later. So the challenge is to find a decomposition that is workable for the business. But it all starts from the failure to solve an NP-hard problem.
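A minimal sketch of that pattern, with hypothetical subproblem names and a stand-in solve function rather than anything resembling the real system:

```python
# Decomposition pattern: split one huge model into subproblems, order them by
# a dependency graph, and feed each subproblem's decisions downstream as fixed
# constants. Subproblem names and the solve function are hypothetical.
from graphlib import TopologicalSorter

# Each subproblem lists the subproblems whose decisions it depends on.
dependencies = {
    "datacenter_power_plan": [],
    "building_layout":       ["datacenter_power_plan"],
    "rack_assignment":       ["building_layout"],
    "memory_purchasing":     ["rack_assignment"],
}

def solve_subproblem(name: str, fixed_decisions: dict) -> dict:
    """Stand-in for building and solving one (much smaller) ILP, with the
    upstream decisions baked in as constants rather than variables."""
    return {name: f"solution({name} | given {sorted(fixed_decisions)})"}

decisions: dict = {}
for subproblem in TopologicalSorter(dependencies).static_order():
    # Upstream results are fixed inputs here; this is exactly where global
    # optimality is sacrificed so each solve can finish in time.
    decisions.update(solve_subproblem(subproblem, decisions))

for name, result in decisions.items():
    print(name, "->", result)
```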
Similar examples pop up all throughout applied solvers. I'll give two brief ones that I expand on in Practical Math. The first is version selection for package managers. The leading techniques are based on CDCL solvers (see PubGrub), but package managers can't use existing SAT solvers as a black box because they are too slow. The Conda package manager made a big deal of its performance problems in this article, and it boils down to "SAT solvers are slow." Even more modern tools like PubGrub needed to reimplement CDCL from scratch so they could inject custom domain-aware heuristics and make it performant enough for modestly sized package ecosystems.
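To see why version selection is boolean logic at heart, here's a toy encoding with made-up package names. Each variable means "this exact package version is installed," and the constraints are dependencies and conflicts. Real tools replace the brute-force loop below with CDCL plus domain-specific heuristics.

```python
# Toy version-selection-as-SAT example. Packages and constraints are invented;
# a real package manager would never brute force the assignments like this.
from itertools import product

variables = ["app-1.0", "libfoo-1.0", "libfoo-2.0", "libbar-1.0"]

def satisfies(assign: dict) -> bool:
    # We want to install app-1.0.
    if not assign["app-1.0"]:
        return False
    # app-1.0 depends on libfoo (either version) and on libbar-1.0.
    if not (assign["libfoo-1.0"] or assign["libfoo-2.0"]):
        return False
    if not assign["libbar-1.0"]:
        return False
    # libbar-1.0 conflicts with libfoo-1.0 (it needs the newer libfoo).
    if assign["libbar-1.0"] and assign["libfoo-1.0"]:
        return False
    # At most one version of libfoo may be installed.
    if assign["libfoo-1.0"] and assign["libfoo-2.0"]:
        return False
    return True

for bits in product([False, True], repeat=len(variables)):
    assign = dict(zip(variables, bits))
    if satisfies(assign):
        print({name for name, installed in assign.items() if installed})
        break
# Prints a consistent install set, e.g. {'app-1.0', 'libfoo-2.0', 'libbar-1.0'}
```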
Another, which I have been studying more recently after an excellent interview with Lily Xu, is how ILP solvers were applied to wildlife conservation. More specifically, wildlife sanctuaries around the world have to deal with poachers and other threats to wildlife. On the poaching side, the math problem is how to schedule ranger patrols in the park so as to best combat poaching. The original techniques that Xu and her colleagues tried, based on prior work of Milind Tambe and his group on patrol scheduling at airports for counterterrorism, used ILPs to solve a game-theoretic formulation of the scheduling problem. But they found ILP solvers couldn't scale to the problem sizes they needed. In fact, Tambe and his other students hit the same scaling problems even staying within the setting of patrol scheduling for counterterrorism. Both Xu's and Tambe's lines of work branch off into their novel approaches from the same starting obstacle: needing to solve impractically large ILPs. And in both cases, they have some manner of decomposing the problem into subproblems, with a lot of extra work to show that the results still produce an optimal solution (otherwise it's hard to publish, I'm sure).
All that is far from the hardest part of applying math to wildlife conservation, but it supports my claim that you can't treat NP-hard problems as if they were solved, even in practice and even on "real world" instances.