Problems harder than NP-Complete
People always talk about "P vs NP" like P problems are easy and NP problems are hard. This is a useful day-to-day model but also an oversimplification.
Problems can get way, way harder than NP.
(If you want a brief refresher on P and NP, check out my post NP-Complete isn't (always) Hard.)
PSPACE-complete
P is the set of all problems that can be solved in time polynomial in the size of the input. PSPACE is the set of all problems that can be solved with polynomial space. It's assumed, but not proven, that PSPACE is strictly larger than NP; in other words, that there are problems which take polynomial space to solve but superpolynomial time to verify.
The canonical PSPACE-complete problem is quantified satisfiability (QSAT). The normal SAT problem, which is NP-complete, asks for assignments to variables that satisfy a set of boolean clauses:
some ABCDEF:
  A | C
  !A | D | !E
  B | !D | E | F
  etc
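To make that concrete, here's a minimal brute-force sketch in Python. The clause encoding is my own, purely for illustration: each clause is a list of (variable, polarity) pairs. The interesting bit is that while the search over assignments is exponential, checking any single candidate is cheap; that cheap check is exactly the polynomial-time verification that puts SAT in NP.

```python
from itertools import product

# The clauses from above, in a made-up encoding: (variable, polarity).
clauses = [
    [("A", True), ("C", True)],                             # A | C
    [("A", False), ("D", True), ("E", False)],              # !A | D | !E
    [("B", True), ("D", False), ("E", True), ("F", True)],  # B | !D | E | F
]
variables = ["A", "B", "C", "D", "E", "F"]

def satisfiable(clauses, variables):
    # Try all 2^n assignments (exponential time), but note that
    # verifying one assignment is a quick polynomial-time scan.
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(any(assignment[v] == polarity for v, polarity in clause)
               for clause in clauses):
            return assignment
    return None

print(satisfiable(clauses, variables))
```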
The QSAT analog adds the forall and some quantifiers:
forall A:
  some BCDEF:
    A | C
    !A | D | !E
    B | !D | E | F
    etc
In other words, can we solve the problem regardless of what I choose for A? If I instead write
some BCDEF:
  forall A:
    ...
The problem is instead "is there a single fixed assignment which works both when A is true and when A is false?"
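To see why QSAT fits in polynomial space, here's a hedged sketch (my own toy evaluator, not a real QBF solver): recurse over the quantifiers, trying both values of each variable. The recursion holds only one partial assignment per level, so space stays polynomial in the number of variables, even though the time can be exponential.

```python
def eval_qbf(quantifiers, matrix, assignment=None):
    """Evaluate a quantified boolean formula.

    quantifiers: list of ("forall" | "some", variable) pairs.
    matrix: a function from an assignment dict to True/False
            (the unquantified clauses).
    Recursion depth is at most the number of variables, so this
    uses polynomial space, even though it can take exponential time.
    """
    assignment = assignment or {}
    if not quantifiers:
        return matrix(assignment)
    kind, var = quantifiers[0]
    results = (eval_qbf(quantifiers[1:], matrix, {**assignment, var: value})
               for value in (False, True))
    return all(results) if kind == "forall" else any(results)

# "forall A: some B: (A | B) & (!A | !B)" -- B must be A's negation.
print(eval_qbf(
    [("forall", "A"), ("some", "B")],
    lambda a: (a["A"] or a["B"]) and (not a["A"] or not a["B"]),
))  # True
```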
And just as many problems can be reduced to SAT, many problems can also be reduced to QSAT, such as finding optimal sorting networks. Model checking many simple systems (showing a specification has a property) is also doable in PSPACE.
EXPTIME-complete
EXPTIME is the set of all problems that are solvable in exponential time, i.e. the difficulty grows as O(2^n). It's suspected that EXPTIME is strictly larger than PSPACE and NP, meaning there are problems that take more than polynomial space to solve and more than polynomial time to verify.
I looked around and most EXPTIME-complete examples fall into one of two categories. First, game problems: given a configuration of an NxN Go/Chess/Checkers/Shogi board, does player 1 have a winning strategy?
The second, succinct circuits, are more interesting.1 An n-node simple graph can have up to n^2 edges,2 so writing out the edge list explicitly takes O(n^2) space. But some graphs can instead be defined algorithmically, by a boolean function f(A, B) which returns whether or not nodes A and B have an edge between them. If the function is small enough,3 then we can encode a graph with exponentially many nodes in polynomial space! The catch: the input is now exponentially smaller than the graph it describes, and determining if two nodes are connected is no longer a constant-time lookup, so any P-complete graph problem becomes an EXPTIME-complete succinct graph problem. Similarly, NP-complete problems become NEXPTIME-complete, which is its own complexity class.
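Here's a toy illustration of the idea (my own example, not one from the literature): the edge function below describes a hypercube graph on 2^32 nodes in a few lines, and adjacency is computed on demand rather than stored.

```python
# A toy succinct graph: nodes are 32-bit integers, so the graph has
# 2^32 nodes, but the whole description is this one small function
# instead of an exponentially large edge list.
N_BITS = 32

def has_edge(a: int, b: int) -> bool:
    # Connect a and b iff they differ in exactly one bit (a hypercube).
    diff = a ^ b
    return diff != 0 and diff & (diff - 1) == 0

# Adjacency is computed on demand; listing one node's neighbors only
# touches 32 candidates out of the 2^32 nodes.
neighbors_of_zero = [1 << i for i in range(N_BITS) if has_edge(0, 1 << i)]
print(len(neighbors_of_zero))  # 32
```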
I've not been able to find any industry use of succinct circuits.
2-EXPTIME-complete
2-EXPTIME is like EXPTIME except instead of the Big-O being O(2^n), it's O(2^(2^n)).
A lot of very niche and weird stuff falls under 2-EXPTIME. The canonical example is deciding Presburger arithmetic statements. Put roughly, that's any mathematical statement that uses positive numbers, variables, addition, equality, forall, and some, but not multiplication. This is a statement in Presburger arithmetic:
forall x:
  some y:
    y + y = x
    || y + y + 1 = x
Presburger arithmetic is decidable, so you can check the validity of any statement in at most doubly-exponential time. If you add multiplication, then statements become undecidable.
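That example statement just says every number is even or odd. As a sanity check, here's a finite spot-check in Python; to be clear, this is my own illustration and not a decision procedure for Presburger arithmetic, which has to handle all naturals at once.

```python
# A finite spot-check of the example statement, not a real Presburger
# decision procedure (those must decide the statement for all naturals).
def statement_holds(x: int) -> bool:
    # some y: y + y == x or y + y + 1 == x
    return any(y + y == x or y + y + 1 == x for y in range(x + 1))

print(all(statement_holds(x) for x in range(1000)))  # True
```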
Some kinds of program synthesis (taking properties and generating a matching program) are 2-EXPTIME-complete.
This paper finds that while "planning" is EXPTIME-complete, "planning with partial observability" is 2-EXPTIME-complete. I think this means finding a series of steps that solves a task, where in some cases you have to take steps without knowing the environmental state that affects you. Like beating Pokemon while blind and deaf.
Finally, this paper shows that checking certain formulae in "Relevance logic" is 2-EXPTIME-complete. RL seems to be a logic where A => B means that "A is relevant to proving B", so you can't do things like F => T, I think. Seems pretty neat.
ELEMENTARY-complete
ELEMENTARY is EXPTIME + 2-EXPTIME + 3-EXPTIME + (etc). If a problem is elementary, it means there is some n for which the problem is in n-EXPTIME.
There are provably no ELEMENTARY-complete problems. For every elementary problem, there is a harder elementary problem.
And there are problems harder than every elementary problem, like
TOWER-complete
Let's define tetration ^^ as repeated exponentiation, so 3^^2 = 3^3, 3^^3 = 3^(3^3), etc. TOWER-complete, then, is O(2^^n). Notice that for each n, 2^^n eventually resolves to some exponential "tower". But the problem isn't ELEMENTARY, because bigger instances of the same problem require taller towers, while ELEMENTARY requires all instances of the problem to fit under a tower of one fixed height.
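For intuition, tetration is a three-line function. A quick sketch using the definition above:

```python
def tetrate(base: int, height: int) -> int:
    # base ^^ height: a right-associated tower of `height` copies of base.
    result = 1
    for _ in range(height):
        result = base ** result
    return result

print(tetrate(3, 2))  # 27
print(tetrate(3, 3))  # 7625597484987, i.e. 3^(3^3)
print(tetrate(2, 4))  # 65536
```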
We are now deep in the math weeds. This paper (pg 3) finds a wide variety of TOWER-complete problems, the easiest of which to describe is (2): does a star-free regular expression (no Kleene star, but complement allowed) have no matches? The rest of his examples are about proving statements in various theories.
Kinda rushing past TOWER-complete to get to the real fun one:
Ackermann-complete
Define (a variant of) the Ackermann function like this:
A(1) = 1*1
A(2) = 2^2
A(3) = 3^^3
A(4) = 4^^^4
etc
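Here's one way to implement that variant in Python, sketching each level as an iterated version of the one below it (the hyperoperation numbering is my own convention for this snippet):

```python
def hyper(op: int, a: int, b: int) -> int:
    """Hyperoperation: op=1 is multiplication, op=2 exponentiation,
    op=3 tetration, op=4 pentation, and so on."""
    if op == 1:
        return a * b
    result = 1
    for _ in range(b):
        result = hyper(op - 1, a, result)
    return result

def A(n: int) -> int:
    # The variant above: A(1) = 1*1, A(2) = 2^2, A(3) = 3^^3, ...
    return hyper(n, n, n)

print(A(1), A(2), A(3))  # 1 4 7625597484987
# Don't try A(4) = 4^^^4: it has unimaginably more digits than
# there are atoms in the universe.
```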
Ackermann-complete problems take time growing with O(A(n)). This is the most obscene concept I've seen this month. And, unlike 2-EXPTIME and TOWER, it has a shockingly simple example.
Define a Vector Addition System as follows: you have a starting vector S, like (1, 12, 0, 3). You have a set of update vectors U, like (1, -1, -1, 7) and (95, 0, 10, -2). At every step, you can take any update vector in U and add it to S, as long as no element of the result goes below 0. So from the starting vector above you couldn't add the first update vector (the third element would drop to -1), but you could add the second. You can do this any number of times and use each vector as many times as you want.
From here, we can define the reachability problem: given a target vector T, like (7, 7, 7, 6), is there any sequence of steps that reaches T? Reachability was proven decidable in 1981, but nobody could pin down its complexity, and for a long time it was assumed to be EXPSPACE-complete (where n is the vector dimension). Then, just last year, it was proven to be Ackermann-complete.
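Reachability is also easy to state in code. Here's a hedged brute-force search on a toy instance of my own invention; it handles small cases, but the Ackermann-completeness result says no algorithm avoids this kind of blowup in general.

```python
from collections import deque

def reachable(start, target, updates, max_states=100_000):
    """Breadth-first search over vector states. Decides small instances
    only: the state space can explode, which is the whole point."""
    seen = {start}
    frontier = deque([start])
    while frontier and len(seen) <= max_states:
        state = frontier.popleft()
        if state == target:
            return True
        for u in updates:
            nxt = tuple(s + du for s, du in zip(state, u))
            # Only keep states where no element went below 0.
            if all(x >= 0 for x in nxt) and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    # False if we exhausted every reachable state; None if we gave up.
    return False if not frontier else None

# A toy instance: from (1, 0), can we reach (0, 3)?
print(reachable((1, 0), (0, 3), [(-1, 3), (1, -2)]))  # True
```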
It is so wild to me that such an easily-explained problem has such a mindbogglingly high complexity class.
Hyperackermann-complete
The paper Complexity Hierarchies Beyond Elementary introduces the HAck complexity class and gives examples of it. Both the definition and the examples are beyond my understanding.
What did we learn
Things can always get worse.
One interesting pattern I saw was "problem lifting". People take a class of problems and apply an extra complication that consistently changes the complexity class. Like if you replace graphs with succinct circuits, you turn P-complete problems into EXPTIME-complete problems. Or if you take planning problems and add in partial observability, you turn EXPTIME problems into 2-EXPTIME problems. That's neat.
I'm speaking at GOTO Chicago
May 22nd-24th! I will finally, finally be publicly giving my talk on the crossover project. You can see the talk page here!
Update for the Internets
This was sent as part of an email newsletter; you can subscribe here.
1. Thanks to Tyson Williams for telling me about succinct circuits. ↩
2. I originally wrote 2^n, which is wrong. I was thinking of the number of subgraphs, whoops. ↩
3. More specifically, if it's a boolean circuit. ↩
If you're reading this on the web, you can subscribe here. Updates are once a week. My main website is here.
My new book, Logic for Programmers, is now in early access! Get it here.