Closing the Behavioral Interviewing Practice Gap: Part 1

Part 1 of this three-part series defines the gap between the promise and the practice of behavioral interviewing for employment selection. Part 2 describes the design components for a web-based solution to closing the gap. Part 3 walks the reader through the candidate experience and the resulting assessment report for the service called BIO (Behavioral Interview Online).

Most large organizations train their recruiters and hiring managers in one form of behavioral interviewing or another (Janz, Hellervik, and Gilmore, 1987; Janz, 1989). Yet many seasoned practitioners (Wheeler, 2011, “Why Interviews are a Waste of Time”), including myself, have questioned the practical power of behavioral interviewing in the field. Hiring decisions are made under field conditions where (a) interviewers often have varying levels of training, (b) the hirer falls under pressure to reduce time-to-fill while keeping cost-per-hire under control, and (c) the hirer’s primary duties already occupy more than a full-time job; these conditions lead to shortcuts that undermine the power found in research studies.

As a result, I have observed the following five “worst practices” as the norm:

  1. Few field interviewers take more than the most cursory notes, which they themselves find difficult to decipher 20 minutes after taking them.
  2. Field interviewers rarely rate each interview answer on behavioral scales, summing the ratings to form a total score.
  3. Field interviewers often fail to notice when candidates answer a behavioral question with a non-behavioral answer, letting candidates go on giving advice instead of describing specific past performance in response to job-related challenges.
  4. Many interviewers fill silence by either suggesting what they seek (telegraphing the ideal answer) or moving on to the next question.
  5. Field interviewers rarely seek confirmation for the specific examples of past performance that they collect.

Beyond these five limitations, behavioral interviewing is normally applied only to the 4-6 finalist candidates; resume sorts, telephone screening, or Boolean text searches pick the candidates who make it onto the short list. These alternative methods have low decision power, resulting in high screening error rates.

How Big is the Gap?

Hiring decision power is measured by the correlation between interview score and a measure of job performance. A correlation is an abstract statistical number, but here it directly reflects the proportion of potential talent value that the selection strategy (the interview in this case) delivers to the job. A value of .50 thus means the strategy captures 50% of the potential talent value that would have been captured by a strategy that hired ONLY the very best talent from those who applied.
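The claim that validity translates directly into the fraction of potential value captured can be illustrated with a quick simulation. This is a sketch with assumed numbers (a 10% selection ratio and a normally distributed talent pool), not part of the original analysis:

```python
import random

def value_captured(r, n_applicants=200_000, select_ratio=0.10, seed=42):
    """Fraction of 'perfect selection' value captured by top-down hiring
    on a predictor that correlates r with true job performance."""
    rng = random.Random(seed)
    perf = [rng.gauss(0, 1) for _ in range(n_applicants)]
    # Interview score = r * performance + noise, so corr(score, perf) = r.
    score = [r * p + (1 - r * r) ** 0.5 * rng.gauss(0, 1) for p in perf]
    k = int(n_applicants * select_ratio)
    hired = sorted(range(n_applicants), key=score.__getitem__, reverse=True)[:k]
    mean_hired = sum(perf[i] for i in hired) / k
    # Benchmark: hiring ONLY the very best talent from those who applied.
    mean_best = sum(sorted(perf, reverse=True)[:k]) / k
    return mean_hired / mean_best

print(round(value_captured(0.50), 2))  # approximately 0.50
```

With a validity of .50, the simulated hires deliver about half the average performance that perfect selection would have delivered, matching the interpretation above.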

It would be nice if we could directly assess the decision power of interview-based staffing decisions made in the field, but field conditions resist direct measurement. It’s a lot like the uncertainty principle in quantum physics, where the very act of measuring a particle causes it to change. Instead, I will draw on experience and related research to estimate the gap.

Early research on the interview by Ed Webster found interviewers in a hurry to reach quick decisions about candidates. They felt no need to take notes or evaluate every question, because they had already made up their minds. Careful interviewers following their behavioral interview training ask questions from their interview guides, probe to make sure they acquire a specific behavior description (or intention), take readable notes, and rate each answer against the behavioral anchors provided in the guide. When the doors close in the field, however, interviewers often do the minimum: ask a couple of questions from the guide. When they revert to the Ed Webster worst practices, they damage the decision power potential of the behavioral interview by 10 to 20 points. Thus the .53 found in research studies becomes .33 to .43 in the field.

Turning to the fifth item on the worst practices list, how much additional decision power could be gained if there were a way to confirm each of the candidate’s answers? Behavioral interviewing practice does not collect such confirmations. While confirming each answer presents practical problems, the benefits are obvious. I estimate it would add at least 5 to 7 points of decision power, raising the .53 to .58-.60. Thus a best practices field approach that combined great note taking, careful rating of each answer, and confirmation of most answers could easily move the needle on hiring decision power from the mid-.30s up to the high .50s.

Finally, the potential value delivered by a staffing strategy is a function of decision power times funnel power. Funnel power is a direct function of the number of candidates evaluated at each decision point. If there is only one candidate per open position, funnel power = 0. Hiring better talent when you have more candidates to choose from makes intuitive sense. The relationship between funnel power and the number of candidates per decision point is mathematically complex, but well known.

Some common values appear in the following table:

Candidates/Decision    Funnel Power
          2                .81
          3               1.09
          5               1.41
         10               1.75
         20               2.06
         50               2.89
        100               3.5


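If we assume funnel power is the mean standard score of the best candidate chosen from a pool of n (the selection differential phi(z_p)/p with p = 1/n), the table’s smaller entries can be reproduced with a few lines. This is a sketch of one common formulation, not necessarily the exact computation behind the table (its largest entries run somewhat higher):

```python
import math
from statistics import NormalDist

def funnel_power(candidates_per_decision):
    """Mean standard score of the top 1/n slice of a normal talent pool:
    phi(z_p) / p, where p = 1/n and z_p is the cutoff for the top p.
    Valid for n >= 2 (one candidate per decision gives no choice at all)."""
    p = 1.0 / candidates_per_decision
    z = NormalDist().inv_cdf(1.0 - p)                  # cutoff for the top p
    phi = math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)
    return phi / p

for n in (2, 3, 5, 10, 20):
    print(n, round(funnel_power(n), 2))
```

Under this formulation, 3 candidates per decision yields about 1.09 and 10 candidates about 1.75, in line with the table, and the value grows slowly as the pool gets larger.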
Placing the behavioral interview at the end of the hiring funnel, where it applies to just 3-5 short-list finalists, limits the funnel power and thus the value of the staffing strategy. So if there were some feasible way to move the greater decision power of a behavioral interview up the hiring funnel, it could leverage the value of a rigorous behavioral interviewing staffing strategy by a further 30% to 150%.
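To make the leverage concrete, here is the multiplication the "decision power times funnel power" rule implies, using funnel power values from the table above and the interview validity estimated earlier; the specific pool sizes are assumptions for the sketch:

```python
# Value delivered = decision power x funnel power.
# Funnel power from the table: 3 candidates -> 1.09, 10 candidates -> 1.75.
validity = 0.53                      # research-grade behavioral interview

late_funnel = validity * 1.09        # interview applied to 3 finalists
early_funnel = validity * 1.75       # same interview, 10-candidate pool

gain = (early_funnel / late_funnel - 1) * 100
print(f"{gain:.0f}% more value from the same interview")
```

Moving the same .53 interview from a 3-finalist short list to a 10-candidate pool yields roughly 60% more delivered value, comfortably inside the 30%-150% range cited above.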

Summing up: “How big is the gap between what is now and what could be?”

Part 1 of this series examined the gap between the promise and the practice of behavioral interviewing applied to employment selection. A gap of 20-30 points emerged between the decision power of current behavioral interviewing practices in the field and that of behavioral interviews as conducted in research studies. A further opportunity to leverage the predictive power of behavioral interviewing, by placing it further up the selection funnel, was pointed out, provided there were a way to ease the practical constraints of feasibility and cost.

Return on March 14th to read Part 2 of this series, which describes the components and operation of a web-based solution for closing these gaps.


