27 April 2015 by Daniel Tunkelang

Engineering the Hiring Process

At Karat, we are passionate about improving the effectiveness and efficiency of hiring. From time to time, we’ll post articles from our employees, advisors, and friends so they can share what they’ve learned from their personal hiring experiences.

Daniel Tunkelang is an advisor to Karat. He’s worked at LinkedIn, Google, and Endeca in a variety of technical leadership roles, specializing in relevance engineering and data science.


When I arrived at LinkedIn in 2010 to lead product data science, the hiring process for my team was a bit chaotic. LinkedIn was a pioneer in creating the data scientist role, and the job description was still a work in progress. Our recruiters struggled to screen the high volume of candidates in the pipeline, and we lost some candidates simply because we took too long to respond to them. And our interviewing was haphazard — we had high variance in interview style and interviewer quality.

We had a remarkably strong team. But I realized that, if we were going to grow that team, we had to improve our hiring process. So I made that our top priority the moment I arrived. And, given the quantity and quality of great hires we made in the following two years, I suspect that was one of the best decisions I made as a manager.

Here are the key lessons I learned from that experience.

Step 1: Define the Hiring Criteria

“I know it when I see it” may be a good enough standard for the Supreme Court, but it’s not the way you should decide which candidates are qualified. If you don’t know what you’re looking for, your hiring process will become a random walk. Worse, you’re likely to end up hiring based on conscious or unconscious biases rather than based on the criteria that will add value to your team.

When I came on board, our team lacked clear hiring criteria. So I asked every team member to independently articulate what he or she believed were our top three hiring criteria. I then synthesized their inputs to obtain the following:

  • Technical skills. Fluent in at least one of C++, Java, Python. Experienced with Hadoop or similar framework.
  • Domain knowledge. Able to apply machine learning, information retrieval, etc., to our problem space.
  • Ability to get things done. Motivated to deliver results, even when work is unglamorous. Understands iterative delivery.
  • Creative passion. Figures out right problem to solve. Has taste in problems that balances ambition with realism.
  • Positive attitude. Projects positive energy and pursues personal growth rather than competition with peers.

Your team will come up with its own list that reflects its values and needs. What matters is that you have a list that the entire team buys into.

Step 2: Screen Resumes Together

Armed with this list of criteria, we worked as a team to apply them, starting with resume screening.

I took 20 candidate resumes and distributed them to the whole team. Everyone independently evaluated every candidate, assigning a score of "yes", "no", or "don't know" for each criterion. We then analyzed consistency across the team and discussed every difference in judgment.
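To make the consistency analysis concrete, here is a minimal sketch of how such a tally might look. The code, reviewer names, and criterion keys are purely illustrative, not the tooling we actually used:

    from collections import Counter
    from itertools import combinations

    # Hypothetical scores: reviewer -> candidate -> criterion -> "yes" / "no" / "don't know"
    scores = {
        "reviewer_a": {"candidate_01": {"technical": "yes", "domain": "no"}},
        "reviewer_b": {"candidate_01": {"technical": "yes", "domain": "don't know"}},
    }

    def disagreements(scores):
        """Count pairwise reviewer disagreements for each criterion."""
        tally = Counter()
        for r1, r2 in combinations(scores, 2):
            for candidate in set(scores[r1]) & set(scores[r2]):
                for criterion in set(scores[r1][candidate]) & set(scores[r2][candidate]):
                    if scores[r1][candidate][criterion] != scores[r2][candidate][criterion]:
                        tally[criterion] += 1
        return tally

    print(disagreements(scores))  # e.g. Counter({'domain': 1})

A tally like this makes it obvious which criteria the team interprets differently, which is exactly where the discussion should start.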

This exercise (which required a few iterations) yielded several benefits. It quickly revealed inconsistent evaluation practices across the team. It also helped us reduce those inconsistencies by working through them. Finally, the process helped us make our criteria more crisp and objective.

Best of all, when we were done, every single person on the team was able to contribute to resume screening with the trust of the rest of the team. With everyone taking responsibility for the quality of resume screening, the team became far more efficient and effective at processing the high volume of candidates.

Step 3: Make Phone Screens Count

There's a popular school of thought that phone screens should be softball interviews. I strongly disagree. In fact, phone screens may be the most valuable part of the interview process.

Why? A typical phone screen is an hour of investment for both the candidate and the interviewer. Compare that to a full-day onsite, which also involves travel, solving NP-hard scheduling problems, and so on. Robust phone screens are the only way to ensure that on-site interviews are an efficient use of everyone's time, both your team's and the candidate's.

Want to test candidates on coding ability? Do that in a phone screen with a collaborative editing tool. And don't stop at a FizzBuzz problem; use a real problem that is as representative as possible of the coding you'll expect the candidate to do on the job.
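For reference, FizzBuzz is the canonical weed-out exercise: print the numbers from 1 to 100, substituting "Fizz" for multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for both. A sketch of a typical solution shows how little it tests, which is exactly why you shouldn't stop there:

    # FizzBuzz: the classic minimal screen. Trivial by design, and far easier
    # than anything the candidate will actually do on the job.
    for i in range(1, 101):
        if i % 15 == 0:
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)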

Since phone screening is both difficult and valuable, figure out who on your team is best at it, and have everyone learn from them through shadowing. If you don't have great phone screeners on staff, or you can't afford to spend their time on interviewing, consider using an external assessment partner (like Karat!). Either way, robust phone screening is one of the best investments your team can make in improving its interview process.

Improving our phone screening was crucial. We actually did two phone screens, one for coding and algorithms and one for open-ended problem solving. Our bar was clear: everyone on the team understood that we should only bring someone on site if we thought there was at least a 50% chance we’d make an offer.

Step 4: The On-Site

One of the benefits of robust phone screening was that on-site interviews became something everyone on the team looked forward to. Indeed, since we were making offers to half the candidates we brought on-site, the on-site interviews were as much a sell as an evaluation.

As with phone screening, we identified our best interviewers and used shadowing to have the rest of the team learn from them. Different team members tended to specialize in evaluating particular hiring criteria. As a manager, my only concern, beyond ensuring interview quality, was to make sure the interviews were fairly balanced amongst the team.

We typically would have 4 or 5 interviews to cover all of our hiring criteria. We’d have all of the interviewers coordinate in advance, telling each other which questions they were planning to use. Not only did this avoid duplication, but it ensured that interviewers did their homework on candidates.

Finally, we had a coordinator (not one of the interviewers) collect real-time feedback from interviewers throughout the day. That coordinator had the authority to adapt the schedule, but couldn’t share the feedback with other interviewers and thus taint their judgment.

Summing It Up

Building a great engineering team is hard enough. The least you can do is invest in engineering an effective and efficient interview process.