The speaker claims that polymorphism and type classes are "simpler" than conditionals.
I don't remember the talk in detail, but I think I know where the claim comes from...
With conditionals you're making a choice and splitting the possible control flow — that's the point of conditionals after all. With parametric polymorphism you don't, it always works the same way. Thus it's simple.
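A minimal sketch of that regularity (`pairUp` is a made-up example, not from the talk): a fully parametric function knows nothing about its element type, so it can only rearrange, duplicate, or drop its arguments, never inspect them or branch on them.

```haskell
-- This function is parametric in 'a': it cannot look inside its
-- arguments, so there is no way for it to branch on their values.
-- The only thing its type allows is shuffling the inputs around.
pairUp :: a -> a -> [a]
pairUp x y = [x, y]

-- By parametricity it behaves the same way at every type:
-- ints, chars, functions, anything.
```

Whatever type you instantiate it at, there is exactly one control flow.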
Ad hoc polymorphism (through inheritance or another flavour of "virtual methods") is trickier. On one hand, at a call site, you still have a single control flow, at least in the source code. On the other hand, the actual method you invoke could be pretty much anything and do anything it fancies.
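The same tension shows up with a plain (lawless) Haskell class; this is a hypothetical sketch, with `Describe`, `Dog`, and `Bomb` invented for illustration. The call site `describe x` reads as a single control flow, but each instance is free to do something completely unrelated.

```haskell
-- A lawless, ad hoc class: nothing constrains what 'describe' does.
class Describe a where
  describe :: a -> String

data Dog  = Dog
data Bomb = Bomb

instance Describe Dog where
  describe _ = "a friendly dog"

-- Another instance can behave in an entirely different way;
-- the uniform-looking call site hides this divergence.
instance Describe Bomb where
  describe _ = take 10 (cycle "tick ")

-- One call site, many possible behaviours:
announce :: Describe a => a -> String
announce x = "Here is " ++ describe x
```

In the source you see one call; at runtime you get whatever the instance fancied.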
Type classes are somewhere in between ("making ad hoc polymorphism less ad hoc"): technically, methods implementing a type class still can do anything, though often they're also constrained by parametricity. And that's (another) reason why Haskell programmers like algebraic structures and lawful classes (and instances) so much: you know what they can and cannot possibly do beforehand. A monoid is always a monoid, and it always does only one thing. :)
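To make the "lawful instance" point concrete, here is a sketch of a Monoid (`MaxInt` is an invented example; `base` ships an equivalent as `Data.Semigroup.Max`). The laws — identity on both sides and associativity — pin the behaviour down before you ever read the implementation.

```haskell
-- A lawful Monoid: combining by maximum, with the smallest Int
-- as the identity element.
--
-- Laws every instance must satisfy:
--   x <> mempty == x
--   mempty <> x == x
--   (x <> y) <> z == x <> (y <> z)
newtype MaxInt = MaxInt Int deriving (Eq, Show)

instance Semigroup MaxInt where
  MaxInt a <> MaxInt b = MaxInt (max a b)

instance Monoid MaxInt where
  mempty = MaxInt minBound
```

Knowing it is a monoid tells you what it can and cannot do: it only ever combines, associatively, with an identity.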
Another way to look at it is from the unit-testing standpoint: how many tests do you need to cover the thing? With a conditional you need to cover both branches, with parametric polymorphism you have only one branch, only one behaviour. In theory, you need only one test: it either always works correctly, or always works incorrectly. :) But I need to think much more on this topic to really understand what's going on, and where simplicity (regularity) stems from. Ultimately, it stems from the parametricity theorem, but that's not a satisfactory answer... :)
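A small sketch of that counting argument, with `clamp` and `swap` as invented examples: the conditional has two code paths and so needs (at least) two tests, while the parametric function has a single path that one test at any concrete type exercises.

```haskell
-- A conditional splits control flow: two branches, so at least
-- two tests are needed to cover it.
clamp :: Int -> Int
clamp x = if x < 0 then 0 else x

-- A parametric function has exactly one behaviour; a single test
-- at one concrete type exercises the only code path there is.
swap :: (a, b) -> (b, a)
swap (x, y) = (y, x)
```

In practice "one test suffices" is the idealised, parametricity-theorem version of the claim, but it captures why the parametric case feels so much more regular.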