
jacobian.org newsletter

January 24, 2022

[jacobian.org] work sample tests, welcome back, and a new newsletter

Hi there, it's been a minute. I sent my last newsletter back in November -- if I tried to convince you that November was only a week ago, would you believe me? Does anyone really understand time any more?

Actually, what happened was a couple of things. First, I was getting increasingly frustrated with my newsletter software, which just didn't support the workflow I wanted for this newsletter (a semi-automated flow where content I write and things I link are automatically pulled into a draft for me to edit and send). And, the weekly cadence I tried to set wasn't quite right; I don't produce enough content to make a weekly newsletter as "meaty" as I'd like it to be.

So, after a somewhat extended yak shave, I'm back. I've switched providers (to Buttondown), and going forward you can expect this newsletter to come out approximately fortnightly, rather than weekly.

In the meantime, I've been doing a fair bit of writing; let me catch you up:

My posts

📝 Series: Work Sample Tests

I closed out 2021 by writing a series of posts about work sample tests.

Work sample tests are an exercise, a simulation, a small slice of real day-to-day work that we ask candidates to perform. They’re practical, hands-on, and very close or even identical to actual tasks the person would perform if hired. They’re also small, constrained, and simplified enough to be fair to include in a job selection process. Work sample tests are a critical factor in effective hiring. Interviews aren’t enough; hiring without work sample tests risks selecting people who excel at interviewing but can’t actually perform the job.

All the posts in that series:

  1. Introduction to Work Sample Tests (Nov 9)

    What are work sample tests, and why do we need them?

  2. The tradeoff between inclusivity and predictive value (Nov 11)

    Good hiring processes try to maximize both inclusivity and predictive value, but unfortunately, work sample tests bring these goals into conflict: there's always a tradeoff between the two. The guiding principle is to construct a test that balances them. A fair work sample test is predictive enough to give you a high degree of confidence that you're making a good hire, while also being designed to be accessible to as many candidates as possible.

  3. A Framework for Good Work Sample Tests: Eight Rules for Fair Tests (Nov 17)

    What makes a work sample test "good" -- fair, inclusive, and with high predictive value? Here's my framework: eight principles that, if followed, give you a great shot at constructing a good work sample test.

  4. Coding “Homework” (Nov 23)

    Coding homework is my default work sample test: I use it for all engineering roles unless it's obvious that another kind of exercise is better. There are good reasons to make homework-style work sample tests the default: they're relatively easy to construct, they scale reasonably well to large hiring rounds, they're accurate simulations of real work, and they're easier than most other kinds of tests to construct in a way that maximizes inclusivity. Here's how to conduct a coding homework work sample test.

  5. Pair Programming (Nov 30)

    I tend to prefer asynchronous work sample tests. The flexible scheduling of asynchronous exercises (i.e. "work on this whenever you like") works better for the majority of candidates. But for some candidates, and some roles, synchronous exercises work better. By "synchronous" I mean: work sample tests that are explicitly scheduled, and that have both the interviewer and the candidate working directly together at the same time. In these cases, I often turn to pair programming.

  6. Bring Your Own Code (Dec 7)

    If you're hiring engineers, some candidates will already have code they can share: side projects, open source, and so on. It's silly to ask those candidates to write new code just for your interview if they already have code they can share. So, if you're asking candidates to code as a work sample test, you should also offer to let candidates submit something they've previously written. Here's how.

  7. ‘Reverse’ Code Review (Dec 15)

    For most software engineering roles, the best work sample test will be some combination of the exercises I covered earlier in this series. But not for every role; there are some circumstances where other types of tests fit better or are better at revealing some critical piece of information relevant to hiring. This post covers one of them: a "reverse" code review, where instead of you reviewing the candidate's code, you have them review yours.

  8. Labs & Simulation Environments (Dec 24)

    The work sample tests in this series so far all involve software development. But what about roles that don't involve day-to-day coding: roles like security analysis, penetration testing, technical support, bug bounty triage, project or program management, systems administration, technical operations, and so on? For those roles, I turn to simulated, "lab"-style environments. Here are some examples of that kind of test.

  9. What doesn't work (and why) (Dec 30)

    One thing I haven't covered is counter-examples: types of work sample tests that don't work. I tend not to do this sort of thing: I find it's usually more useful to talk about what does work than to pick apart what doesn't. But here, I think it's illustrative: looking at why certain kinds of work sample tests fail can help illustrate the principles of effective tests. Let's look at a few kinds of work sample tests that (usually) fail, and why.

  10. Wrap Up and Q&A (Jan 6)

    Finally, a wrap up: I'll address a few random points I couldn't quite fit in elsewhere, and answer some questions from readers.

Elsewhere...

🔗 Maintaining a healthy work culture is the first role of every executive - Graham says wrong things — A fantastic post about creating workplace culture. It's all great, but this part in particular is just incredible:

This is the part where I say something about how more diverse teams build better products, and how diversity of backgrounds, identities, and opinions leads to better decisions. That is all true. However, in this organization we value diversity and inclusivity because that is the morally and ethically correct thing to do. That it benefits us, our customers, and the company is nice. We will do it regardless of how true that is. If inclusivity fails to benefit us, our customers, or the company, we will seek to realign that conflict rather than cease being inclusive.

🔗 Shreyas Doshi on the hiring fallacy — Great Twitter thread with some hard truths about “we need to hire more engineers”.

🔗 Becoming a Better Writer in Tech — Great advice on getting better at writing.
