I appreciate the specificity here, and I love the clarity of your point that a specification is "sufficient" if every program it generates meets your needs.
I will say, though, that I think the practical wisdom of the original statement / cartoon here is that when you're talking about program generation with LLMs, there's a big difference between "every program so far" and "every conceivable program" (the phrase "comprehensive and precise enough to generate a program" sort of masks that difference, because you could read it either way). In your example, if my spec is vague on the miles-to-km conversion (e.g. it doesn't specify whether the factor is 1.6, 1.61, or 1.609), it could just so happen that every program the spec has generated SO FAR has used the precise value of 1.609, and so I assume my spec is precise enough. But a future generation / rewrite / refactor of the program could materially change that and just use 1.6, reasoning that it's still correct as far as the spec cares. And now maybe there's a half-inch gap in my boat hull, or whatever. You could protest that the spec obviously wasn't precise enough in hindsight, but that's kind of the point: language is nebulous, and unless a spec actually generates the same code every time, there's always room to later learn that you were imprecise in some way that usually, but not always, gets the right result.
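To make the failure mode concrete, here's a minimal sketch of two hypothetical generations from the same vague spec (the function names and the hull measurement are illustrative, not from any real system):

```python
# Spec says only: "convert miles to kilometers". Both functions below
# are arguably spec-compliant, yet they silently disagree.

def miles_to_km_gen1(miles: float) -> float:
    # One generation happens to pick the more precise factor.
    return miles * 1.609

def miles_to_km_gen2(miles: float) -> float:
    # A later regeneration rounds it, still "correct as far as the spec cares".
    return miles * 1.6

# A hypothetical hull measurement: the two "equivalent" programs diverge.
hull_miles = 0.005
drift_km = miles_to_km_gen1(hull_miles) - miles_to_km_gen2(hull_miles)
print(f"divergence: {drift_km * 1000:.2f} m")  # nonzero, though both "meet the spec"
```

Every test you'd written against gen1's output would have passed right up until the regeneration, which is exactly the "precise enough so far" trap.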
Point being, I think when people trot out the kind of statements the first character makes ("Some day we won't even need coders any more. We'll be able
to just write the specification and the program will write itself."), or related statements (like "The spec is the source of truth, not the code"), they are overlooking this combination of LLM non-determinism and language vagueness, and thus the right response really is "If that's what you want, it's code you're looking for."