
Using panels to remove bias from the hiring process



Career.Place was hiring. We wanted someone who was passionate about our mission of promoting and creating equitable hiring practices. Someone who embraced our values of respect, integrity, and genuinely good intentions; who was creative in approach and not afraid of risks; who thrived on discovery and change; who had an eye for detail and the discipline to complete tasks; and who brought a small collection of specific knowledge and abilities.


So naturally, we used our own candidate screening platform to evaluate candidates anonymously – asking them intentionally designed questions complete with defined criteria and a scoring system. More on those in future blogs.


And then… this happened.


I was selecting candidates to move on to the next stage based on the blended results of three reviewers: myself and two of my colleagues. Most of the candidates received consistent results, with ratings either matching or within one star of each other. But one did not.


This candidate received a five out of five-star rating from me (i.e. pass to next step). In my opinion, the response was thorough, well thought out, organized, and creative. It demonstrated our values and showcased the knowledge and experience we were looking for.


This candidate received a three-star rating from one of my colleagues (i.e. don’t pass to next step). The comments indicated that my colleague was concerned with attention to detail and follow-through.


“Why?” I asked. I didn’t see any indication of issues with attention to detail.


“The response had three spelling errors in the first paragraph alone,” my colleague responded.

Sensitivities, biases, and blind spots


Spelling errors – my blind spot. I have a very low sensitivity to spelling errors unless they are egregious. In fact, I’m often completely oblivious to them. I’m dyslexic.


When I do find errors, my first thought is not ‘no attention to detail’, it’s ‘maybe the person is like me’. Maybe they are also so bad at spelling they often must use the thesaurus to back into the word they intend because the spell check can’t figure out their intention. Maybe they also believe homonyms are one of the most evil and ill-humored language inventions, alongside silent letters and exceptions to the vowel rules – “i before e except…” Why except? What’s wrong with a little consistency!

But I digress…


My colleague, on the other hand, has no such blind spot. Instead, they have a sensitivity. A veteran, a developer, and a personality that thrives on detail, my colleague notices deviations, and to them deviations signal weakness. After all, the saying isn’t “leave out every T and dot some of the i’s”.


My blind spot left me oblivious to a potential issue, and my colleague’s sensitivity elevated a potentially small issue into a disqualifier.


Two experts, aligned in intention, steeped in the same mission and vision, bound by the same rules of scoring, and evaluating candidates under the veil of anonymity, had completely different results.


Luckily, we were both evaluating the candidate.

Triangulating the truth with a panel


We all have biases. We all have sensitivities and blind spots, preferences and comfort zones. We all experience the world through our own lenses, colored by our own lives. So when evaluating candidate responses, even with every precaution – keeping the candidates anonymous, ensuring all candidates receive the same questions in the same way, having a pre-defined scoring system – biases can still come into play. But for this, too, there is a solution: use a panel (a team of reviewers).


More than one view of a response, especially when those views are diverse, will expose the blind spots and sensitivities that might otherwise favor one candidate over another for reasons unrelated to being ‘excellent for the job’.


To maximize the value of panels:

  • Target as much diversity as possible within the bounds of relevance. The more diverse the panel, the more likely the panelists bring different blind spots, sensitivities, and biases that neutralize one another. However, the panel must also be qualified to evaluate the candidates. In other words, don’t add diversity just for diversity’s sake; every panelist must also add value to the evaluation.

  • Remove as many unnecessary bias triggers as possible. Since we are all subject to bias (to be biased is to be human), the fewer triggers present, the better. For example, anonymize the candidates to remove identity and demographics, standardize the format (when format doesn’t matter) so there is no preference for style or color choice, remove extraneous information such as alma mater or previous employer, etc.

  • Provide the same (or similar) information. Ideally, all panelists should have access to the same content – such as the same written response to a question, code sample, or recorded interview response. If that’s not possible, or if the organization has a policy of one-on-one interviews, train the interviewers on interviewing, information gathering, and note taking, and ensure that the same people ask the same questions for consistency between candidates.

  • Use both individual and group results. While discussing perspectives and results is valuable for identifying and responding to assumptions and biases, not all perspectives carry equal weight in a group setting. Consider what happens when one panelist is particularly persuasive, persistent, or the manager of the other panelists. Collect the individual results first, then open a discussion to further refine the results if necessary (see the sketch below).
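For illustration only, here is a minimal sketch of that last point. It assumes each panelist independently assigns a 1–5 star rating before any discussion; the individual scores are blended into an average, and any candidate whose ratings spread by more than one star (like the disagreement described above) is flagged for a panel discussion. The candidate names, ratings, and threshold are hypothetical, not part of any particular platform.

```python
from statistics import mean

# Hypothetical individual star ratings (1-5), collected before any group discussion.
panel_ratings = {
    "candidate_A": [5, 4, 5],
    "candidate_B": [5, 3, 4],  # the kind of disagreement described in the story above
}

DISCUSSION_THRESHOLD = 1  # a spread greater than one star triggers a panel discussion

for candidate, ratings in panel_ratings.items():
    blended = mean(ratings)               # simple average of the independent scores
    spread = max(ratings) - min(ratings)  # how far apart the panelists are
    status = "discuss as a panel" if spread > DISCUSSION_THRESHOLD else "consistent"
    print(f"{candidate}: blended={blended:.1f}, spread={spread}, {status}")
```

The order of operations is the point: because the individual ratings are recorded before anyone talks, a persuasive or senior panelist can’t quietly pull the blended score their way – the disagreement stays visible and becomes a prompt for discussion rather than disappearing into it.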



