# Take-Home Interview Code
After writing about one kind of contrived sample code, I want to write about a different kind: the kind that is part of an interview process.
A disclaimer right up top: I do occasionally evaluate coding samples for Root, but I did not design the current coding problem, nor did I design the metric that is used to evaluate solutions. Nothing I say here should be taken to imply anything about Root's interview process. Anything I say here is a reflection of my experiences evaluating samples at other companies. Also, as you look at my advice, remember that different companies will look for different things, and as general as I try to make this advice, it's still limited to my own experiences.
What I'm talking about here is the take-home part of an interview process, where after an initial phone screen, a company will send a programming candidate a sample problem. Sometimes these are algorithmic, sometimes they are meant to be mini-versions of what the company does, but the idea is to get a working sample of code from the engineer before committing to an expensive full day of interviews.
## If you are giving these example problems
The first question, I guess, is whether these samples are at all useful, and whether using them has bad secondary effects. For example, are there groups of potential engineers who might do unfairly poorly on a take-home sample in a way that is not representative of their eventual work?
I don't have anything to do with the process at Root, but I did, for a while, run the hiring process at Table XI, and my eventual opinion was that samples were flawed, that I didn't have a better idea, and that some of the flaws could be mitigated with a little thought.
One potential problem with interview problems is that they might depend on specialized algorithmic knowledge that has little or nothing to do with day-to-day work in the actual job, and therefore favor people with a traditional CS background even though those skills may not be predictive of success in the job. This was definitely true at TXI, and we switched from a very algorithmic problem to one that was much less dependent on algorithm textbooks and much closer to our actual work.
A related problem is making the candidate guess what the criteria are, so that success is contingent on reading the mind of the eventual evaluator. Also known as the "Oh, you created a `WidgetFactory` class and we were looking for a `WidgetService`..." problem. What helps here is being very clear on what you are looking for. Is it just program correctness? Code structure? Object-oriented modeling? Readability? Those are all reasonable targets, but they'll each have you pushing different candidates forward. Be clear with the people who are evaluating the code about what they are looking for.
Make the criteria clear to the applicant. There's some resistance to giving information about the criteria to the applicant on the grounds that it feels like cheating. I think that's not the way to think about it -- part of what you are evaluating an applicant on is whether they can follow somewhat complex instructions, so giving them explicit goals feels to me like it should be part of the exercise. (I do worry a little bit about people actually pulling copies of successful solutions off the internet, but in practice that's almost never been an issue for me.)
A big problem with these take-home exercises is how much time they can take. The time can be a real burden on a candidate, especially if they are going through multiple interview processes simultaneously, and it can be a real issue for applicants who may be caregivers or have responsibilities that make it hard for them to clear the time.
This is a hard issue. My experience is that suggesting that people only spend X hours on the exercise doesn't work, because people are nervous and want to spend time to make sure they put their best foot forward, which I completely understand. At Table XI, the explicit policy was that if you felt like you didn't have the time, we would accommodate you, usually by skipping right to the in-person pairing exercise. But again, people are understandably reluctant to ask for exceptions to the rules.
What I eventually settled on was making it a pretty small exercise, not giving people a time limit, and then assuming that the eventual pair programming interview would cover any slack.
I think the reason you don't just skip the take-home exercise and have people do a pair exercise instead is that weeding out poor take-home samples takes a lot less time (one engineer, half an hour) than a pair interview (two engineers, one to two hours).
I am open to the idea that asking people to comment on a PR might be a good substitute for a take-home sample, but there are some logistical problems (a PR requires applicants to know a specific language and set of tools, for example). I'm also open to the idea of paying people for their time in finishing the sample, but that's usually a decision that's not made by the engineering team.
## If you are taking these example problems
A few general pieces of advice if you are doing a take-home sample problem. Just so I don't repeat this for every point: I'm assuming here that the code sample is being reviewed for "quality", whatever that means, and that purely objective metrics like code speed or developer speed are not the main point. Sometimes, though, those very much are the point, and if you are in one of those situations, much of this advice does not apply.
**Be sure you understand the requirements.** You should be clear on exactly what the code you are writing is expected to do: what the inputs are, what the outputs are, any assumptions you should make about scale, what to do in error cases, that kind of thing. If you don't know, ask. If you aren't comfortable asking, then make sure you include with your submission a clear statement of the assumptions you made: "The problem wasn't clear about what to do if the Widget ID had seven digits, so I decided to throw an error." Different places might feel differently about asking questions. At TXI, where we were consultants, somebody pushing back for clearer requirements was a definite positive, because it indicated useful client-facing skills.
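To make that concrete, here's a minimal sketch of what a documented assumption might look like in the code itself. The Widget ID problem, the six-digit rule, and the error class are all made up for illustration:

```ruby
# Hypothetical sketch: the (made-up) problem statement didn't say what to
# do with seven-digit Widget IDs, so this code documents the assumption
# and fails loudly instead of guessing.
class WidgetIdError < StandardError; end

def parse_widget_id(raw_id)
  # Assumption: valid Widget IDs are exactly six digits; anything else,
  # including a seven-digit ID, is an error.
  unless raw_id.match?(/\A\d{6}\z/)
    raise WidgetIdError, "unexpected Widget ID format: #{raw_id.inspect}"
  end
  raw_id.to_i
end
```

The point isn't the specific rule, it's that the evaluator can see the decision you made without having to guess at it.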
Also, per what I said above, try to see if you can find out the criteria or hidden requirements. Do they care about how fast you solve the problem? Do they only care about performance? About code structure? The more you know, the better you can do.
**Make sure your code works, and you can prove it.** This is a pretty close second as rules go. To be clear, not being completely functionally correct was not a total deal-breaker at TXI (it depended on exactly what the issue was), but why take the risk? If it is possible to have a "correct" answer to the problem, make sure you have it. An automated test to that effect is useful.
**Choose your tool wisely.** Do the problem in the language and framework that you know best, rather than trying one you don't know very well because you think the company will be more responsive to it. (Unless the requirements say you need to use a particular language...) A lot of the time companies are looking for skilled coders and figure they can teach you the language, so writing good code in a language you know is better than writing mediocre code in a language they use.
**Be more communicative than you would normally be.** Communication skills are a really important part of most developer jobs, and written communication in particular. This may well be your only chance in the interview process to show off written communication. If you made an assumption about the requirements, document it. Many times, a code problem will be amenable to both a toy solution and a solution that assumes the code is embedded in a larger system. If you choose to go one way or the other, explain the choice you made and why. Document any design decision that you consider interesting. Don't assume that the code evaluator will be able to infer your intent just from reading the code.
**Include run instructions.** This is maybe a subset of the last point, but it's a very important subset, so I'm calling it out specifically. Do not assume that the person evaluating your code will know how to run it. Do not assume that the person evaluating your code will have installed all the tools they need to run your code. Do not make it difficult for the person evaluating your code to figure out how to run it. They will likely stop trying, and nothing good is going to happen to your submission after that. So: "This code was written using Ruby 2.7.2, which you can install via WHATEVER. To run this code you need to run this setup script. And then this command. To run the tests, run this command." Be very clear and make no assumptions.
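As an illustration, the instructions can boil down to a few copy-pasteable commands in the README. This sketch assumes a hypothetical Bundler-based project; the file names are made up:

```
# Sketch of README run instructions (project and file names are placeholders)
Written with Ruby 2.7.2; install it via your version manager of choice.

gem install bundler               # if Bundler isn't already installed
bundle install                    # install dependencies
ruby widget_report.rb input.csv   # run against a sample input
bundle exec rake test             # run the test suite
```

If the evaluator can go from cloning your submission to seeing it run by pasting four lines, you've removed the easiest reason to put your sample down.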
**Test.** Of course, in order to have a command that runs tests, you need to have tests. Different places are going to feel very differently about this, but in Ruby-land, I'd say it is a strong expectation that a code sample will include a good test suite, for whatever value of "good" works for you.
**Assume it'll be attacked.** If the code takes some kind of sample input, assume that the person evaluating it has access to a very tricky set of inputs that exercises all the edge cases. Maybe they do, maybe they don't, but you're almost always better off showing that you've thought of the edge cases in advance and that your code behaves reasonably in exceptional conditions.
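Here is a minimal Minitest sketch against the hypothetical Widget ID parser from earlier, aimed at the edges rather than the happy path (the file name is made up):

```ruby
require "minitest/autorun"
require_relative "widget_id" # hypothetical file defining parse_widget_id

class WidgetIdEdgeCaseTest < Minitest::Test
  def test_accepts_a_valid_six_digit_id
    assert_equal 123_456, parse_widget_id("123456")
  end

  def test_rejects_a_seven_digit_id
    # The documented assumption from the README, proven in a test
    assert_raises(WidgetIdError) { parse_widget_id("1234567") }
  end

  def test_rejects_empty_and_non_numeric_input
    assert_raises(WidgetIdError) { parse_widget_id("") }
    assert_raises(WidgetIdError) { parse_widget_id("12a456") }
  end
end
```

A handful of tests like these does double duty: it proves the code works, and it shows the evaluator you went looking for the tricky inputs before they did.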
If you have something you want to say about interview samples, the permanent home of this post is https://noelrappin.com/blog/2021/04/take-home-interview-code/ -- leave a comment there.
Dynamic Ruby is brought to you by Noel Rappin.
Comments and archive at noelrappin.com, or contact me at noelrap@ruby.social on Mastodon or @noelrappin.com on Bluesky.