What hard thing does your tech make easy?
I occasionally receive emails asking me to look at the writer's new language/library/tool. Sometimes it's in an area I know well, like formal methods. Other times, I'm a complete stranger to the field. Regardless, I'm generally happy to check it out.
When starting out, this is the biggest question I'm looking to answer:
What does this technology make easy that's normally hard?
What justifies me learning and migrating to a new thing, as opposed to fighting through my problems with the tools I already know? The new thing has to have some sort of value proposition, like "better performance" or "better security". The most universal value, and the most direct to show, is "takes less time and mental effort to do something". I can't accurately judge two benchmarks, but I can look at two demos or code samples and tell which one feels easier to me.
Examples
Functional programming
What drew me originally to functional programming was higher order functions.
# Without HOFs
out = []
for x in input:
    if test(x):
        out.append(x)

# With HOFs
filter(test, input)
We can also compare the easiness of various tasks between examples within the same paradigm. If I know FP via Clojure, what could be appealing about Haskell or F#? For one, null safety is a lot easier when I've got option types.
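As a rough illustration of the mechanic, sketched in Python type hints rather than Haskell or F# (assuming a checker like mypy is running; the function and names are invented for this example): the Optional in the signature records that the value might be absent, and the checker rejects code that uses it without handling the None case.

from typing import Optional

def find_user(user_id: int) -> Optional[str]:
    # None is a possible result, and the signature says so.
    return {1: "alice", 2: "bob"}.get(user_id)

name = find_user(3)
if name is not None:
    # Using name.upper() before this check would be flagged by the checker.
    print(name.upper())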
Array Programming
Array programming languages like APL or J make certain classes of computation easier. For example, finding all of the indices where two arrays match. Here it is in Python:
x = [1, 4, 2, 3, 4, 1, 0, 0, 0, 4]
y = [2, 3, 1, 1, 2, 3, 2, 0, 2, 4]
>>> [i for i, (a, b) in enumerate(zip(x, y)) if a == b]
[7, 9]
And here it is in J:
x =: 1 4 2 3 4 1 0 0 0 4
y =: 2 3 1 1 2 3 2 0 2 4
I. x = y
7 9
Not every tool is meant for every programmer, because you might not have any of the problems a tool makes easier. What comes up more often for you: filtering a list or finding all the indices where two lists match? Statistically speaking, functional programming is more useful to you than array programming.
But I have this problem enough to justify learning array programming.
LLMs
I think a lot of the appeal of LLMs is that they make a lot of specialist tasks easy for nonspecialists. One thing I recently did was convert some rst list-tables to csv-tables. Normally I'd have to write some tricky parsing and serialization code to automatically convert between the two. With LLMs, it's just
Convert the following rst list-table into a csv-table: [table]
"Easy" can trump "correct" as a value. The LLM might get some translations wrong, but it's so convenient that I'd rather manually review all the translations for errors than write a specialized script that is correct 100% of the time.
Let's not take this too far
A college friend once claimed that he cracked the secret of human behavior: humans do whatever makes them happiest. "What about the martyr who dies for their beliefs?" "Well, in their last second of life they get REALLY happy."
We can do the same here, fitting every value proposition into the frame of "easy". CUDA makes it easier to do matrix multiplication. Rust makes it easier to write low-level code without memory bugs. TLA+ makes it easier to find errors in your design. Monads make it easier to sequence computations in a lazy environment. Making everything about "easy" obscures other reasons for adopting new things.
That whole "simple vs easy" thing
Sometimes people think that "simple" is better than "easy", because "simple" is objective and "easy" is subjective. This comes from the famous talk Simple Made Easy. I'm not sure I agree that simple is better or more objective: the speaker claims that polymorphism and typeclasses are "simpler" than conditionals, and I doubt everybody would agree with that.
The problem is that "simple" is used to mean both "not complicated" and "not complex". And everybody agrees that "complicated" and "complex" are different, even if they can't agree on what the difference is. This idea should probably be expanded into its own newsletter.
It's also a lot harder to pitch a technology on being "simpler". Simplicity by itself doesn't make a tool better equipped to solve problems. Simplicity can unlock other benefits, like compositionality or tractability, that provide the actual value. And often that value is in the form of "makes some tasks easier".
Comments

Reasonable post overall, but in defense of Python, instead of
[i for i, (a, b) in enumerate(zip(x, y)) if a == b]
you could do
[i for i in range(len(x)) if x[i] == y[i]]
...and in my new, currently-unannounced programming language, Voltair, one could do
[i for i in 0 to x.len if x[i] == y[i]]
the speaker claims that polymorphism and typeclasses are "simpler" than conditionals
I don't remember the talk in detail, but I think I know where the claim comes from...
With conditionals you're making a choice and splitting the possible control flow — that's the point of conditionals, after all. With parametric polymorphism you don't; it always works the same way. Thus it's simple.
Ad hoc polymorphism (through inheritance or another flavour of "virtual methods") is trickier. On one hand, at a call site, you still have a single control flow, at least in the source code. On the other hand, the actual method you invoke could be pretty much anything and do anything it fancies.
Type classes are somewhere in between ("making ad hoc polymorphism less ad hoc"): technically, methods implementing a type class still can do anything, though often they're also constrained by parametricity. And that's (another) reason why Haskell programmers like algebraic structures and lawful classes (and instances) so much: you know what they can and cannot possibly do beforehand. A monoid is always a monoid, and it always does only one thing. :)
Another way to look at it is from the unit-testing standpoint: how many tests do you need to cover the thing? With a conditional you need to cover both branches, with parametric polymorphism you have only one branch, only one behaviour. In theory, you need only one test: it either always works correctly, or always works incorrectly. :) But I need to think much more on this topic to really understand what's going on, and where simplicity (regularity) stems from. Ultimately, it stems from the parametricity theorem, but that's not a satisfactory answer... :)
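A rough Python rendering of the commenter's testing argument (TypeVar standing in for parametric polymorphism; both functions are invented for illustration):

from typing import TypeVar

T = TypeVar("T")

def first(xs: list[T]) -> T:
    # Parametric: this function knows nothing about T, so there is only one
    # code path; it can only select or rearrange its inputs.
    return xs[0]

def sign_word(n: int) -> str:
    # Conditional: two branches, two behaviours, at least two tests.
    if n < 0:
        return "negative"
    return "non-negative"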
Ugh, monads again...
Making it easier to sequence computations in a lazy language (or thread state through functions in general) is the most boring (almost useless) thing about monads. A monad is the shape of the computation your program performs; you program it 10 times a week whether you know it or not.
So the real thing is explicit representation (reflection) of monadic computation in a programming language. And explicit representation gives you new abilities: you can abstract over the thing. Like in your first example with HOF, when you have an explicit representation of functions (closures) you can start passing them around and write more abstract (more universal) functions.
The same with explicit monads: you can write functions passing monadic computations around, and write universal functions that work across iterators, async computations, exceptions and whatnot.
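A small Python sketch of that universal-function idea, with the monad's bind passed in explicitly since Python has no typeclasses (all names here are invented):

def kleisli_compose(bind, f, g):
    # Compose two monadic functions (a -> m b, b -> m c) into a -> m c,
    # without knowing which monad m is: the caller supplies its bind.
    return lambda x: bind(f(x), g)

def maybe_bind(m, f):
    # "Maybe" via None: stop at the first failure.
    return None if m is None else f(m)

def list_bind(m, f):
    # List: apply f to every element and flatten the results.
    return [y for x in m for y in f(x)]

def safe_half(n):
    return n // 2 if n % 2 == 0 else None

def neighbours(n):
    return [n - 1, n + 1]

print(kleisli_compose(maybe_bind, safe_half, safe_half)(8))   # 2
print(kleisli_compose(maybe_bind, safe_half, safe_half)(6))   # None
print(kleisli_compose(list_bind, neighbours, neighbours)(0))  # [-2, 0, 0, 2]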
But the important point is not monads; it's another property you've glossed over: making new things possible. Some techniques do not make old things easier; they make new ones possible in the first place.
It's the second part of "make simple things easy and hard things possible". ;)
Minor idea I've been wiggling over: the best way to explain a monad is "it's a mathematical abstraction with some nice properties that make it good for explaining a lot of things."
Why do I like this? Because matrices and numbers are the same way! Numbers aren't anything, they're just math. We just use them in so many contexts that we've internalized them as natural!
Similarly, enough use of monads and they become internalized.