Closing the Behavioral Interviewing Practice Gap: Part Two

At least two remedies have been developed for closing the defined gap between the accuracy achieved in behavioral interviewing and the results obtained in the field. The first, discussed two weeks ago, is [1] training, followed by testing and feedback. Today we will discuss [2] a web-delivered application that standardizes the behavioral interviewing process around best practices.

The current web application, Behavioral Interview Online (BIO), offers several value-generating components:

  1. An enterprise architecture of performance competencies (80) covering a wide range of jobs from entry to executive levels within a corporation.
  2. A set of behavioral interview questions for each competency covering experienced and inexperienced candidates.
  3. A method for collecting contact information for a credible witness to the performance example that the candidate chooses to describe, along with an automated method for collecting both numerical and literal confirmations from the witness.
  4. A set of probes for each competency that guide the candidate in reviewing the completeness of their behavior description answers.
  5. A method for scanning the candidate’s initial behavioral answers and providing feedback candidates can use to edit their initial answer to make it more behavioral.

An Enterprise Architecture of Performance Competencies

A comprehensive taxonomy of workforce performance competencies was assembled in the journal Human Performance in 1993, drawing upon the dominant competency models practiced by leading I/O psychology consulting firms.

The enterprise model consists of 80 competencies organized into five categories. Configuring the competencies consists of picking the 4-6 most critical competencies that will be featured in the online interview, then ranking them accordingly.

The second step consists of picking an additional 5-15 competencies that will appear in a Personalized Interview Kit used to guide the final, in-person, behavioral interviews of the top candidates.

This author created a taxonomy of competencies based on the scope of impact for the given competency category, including: [1] Personal, [2] Interpersonal, [3] Coaching, [4] Management, and [5] Leadership skills. While these skills apply to different degrees for specific positions and organizational levels, they apply across industries and functions.

Examples of Personal Skills include:

  • Problem Analysis and Decision-Making

Examples of Interpersonal Skills include:

  • Responding to Questions and Listening Actively

Examples of Coaching Skills include:

  • Supportively Confronting Poor Performance and
  • Aligning Team Objectives with Business Objectives

Examples of Management Skills include:

  • Preparing Project/Business Budgets and
  • Defining and Tracking Essential Business/Project Success Indicators

Examples of Leadership Skills include:

  • Defining the Business Mission Statement and
  • Aligning the Mission Statement and Strategies

A Set of Behavioral Interview Questions for each Competency

The science team has written and revised sets of behavior description interview questions, populating each of the 80 competencies with at least two questions: one for candidates who have job experience related to the target job class and one for candidates who have little or no related job experience.

The format of these questions includes a platform statement to facilitate candidate acceptance of the question and recall of the specific example requested.


Platform: The role of {Position, Title} includes analyzing important, complex problems before recommending a course of action.

Question: Please recall a time when your analysis of a complex business or customer/patient problem proved most valuable to your organization.

A Method for Collecting the Contact Information for a Credible Witness

BIO gives candidates the option to provide the name, phone number, and email address of someone who can confirm the key result claimed in their initial response to a behavior description question.

Instead of merely presenting candidates with the platform and the question, the BIO approach asks candidates to first describe, in 25 words or less, the key result they achieved in the situation outlined by the question. Candidates are asked to be as objective and numerical as possible in terms of value delivered. Then they are asked to “make their key result statement more compelling by making it confirmable.”

Of course, not all candidates choose to make their behavior descriptions confirmable, not all provide a valid email address, and not all witnesses choose to respond with a quick confirmation rating of the performance the candidate achieved.

In addition to being asked to provide a one-click rating of the candidate’s performance level, the witness is asked to provide a description of the actions the candidate took to achieve that performance. Interestingly, 84% of those who respond go on to provide their own description of the actions the candidate took to accomplish the task in question.

The candidate’s key result statement, their rating of how often this type of situation has come up, their estimate of how they will be rated by their contact, and their estimate of how confident they are of that rating enter into an automated preliminary composite score for each question.
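The composite-scoring step described above can be sketched in a few lines. BIO's actual formula is not published, so the field names, rating scales, and weights below are illustrative assumptions only:

```python
# Hypothetical sketch of a preliminary composite score for one question.
# The 1-5 scales, the 0.7/0.3 weights, and the blending scheme are
# assumptions for illustration; they are not BIO's published method.

def composite_score(frequency_rating, estimated_witness_rating,
                    confidence_rating, max_rating=5.0):
    """Combine the candidate's self-reported inputs into a 0-1 score.

    frequency_rating          -- how often this situation has come up (1-5)
    estimated_witness_rating  -- candidate's estimate of the witness rating (1-5)
    confidence_rating         -- candidate's confidence in that estimate (1-5)
    """
    # Discount the estimated witness rating by the candidate's confidence,
    # then blend in how frequently the situation occurs.
    weighted_estimate = estimated_witness_rating * (confidence_rating / max_rating)
    return round(0.7 * weighted_estimate / max_rating
                 + 0.3 * frequency_rating / max_rating, 3)

print(composite_score(4, 5, 4))  # a frequent situation, a confident high estimate
```

The point of the sketch is the structure, not the numbers: each self-reported input contributes to a single preliminary score per question, which can later be checked against the witness's actual rating.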

A Set of Probes for Each Competency

Next, candidates describe how they accomplished the key result they documented. To help structure that description, BIO offers a set of structured probes with the advice to “review the first draft of your answer to make sure you have covered some aspect of each of the probes.” BIO has probing questions customized to each competency.

For example, the probes for the sample “Problem Analysis” question are:

  • What was the problem and when did it first show up?
  • How did you decide to analyze this problem?
  • What was your solution?
  • What is the worst that could have happened if you did not solve this problem?
  • What did you do to ensure your recommendation was considered?
  • What did solving this problem teach you? When was that lesson most useful to you?

A Method for Scanning the Candidate’s Initial Behavioral Answers and Providing Feedback

Finally, once candidates enter the first draft of their detailed action description, they are shown a page that analyzes the draft, highlighting three types of words that may indicate the answer does not focus on their own specific, past performance:

The first type of flagged words are adverbs of frequency: usually, sometimes, never, always, often. These words may identify an answer that describes a generality instead of a behavior description.

The second type of flagged words are future or conditional verbs: will, should, could, can, and try. These words may identify an answer that describes a hypothetical or an opinion instead of a behavior description.

The third type of flagged words are plural pronouns: them, they, us, and we. These words may identify an answer that describes the performance of a team or group instead of the performance of the candidate.

BIO explains that the flagged words MAY indicate a departure from a behavior description, and candidates are advised that they can return to their answer to correct any phrases or to add phrases to clarify what they did to create value in the situation at hand.
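The scanning step described above amounts to matching a draft answer against three word lists. The lists below come directly from the article; the matching logic and feedback wording are a minimal sketch, since BIO's actual rules are not published:

```python
import re

# The three word lists are taken from the article; the matching rules
# and category labels below are illustrative assumptions, not BIO's code.
GENERALITY_WORDS = {"usually", "sometimes", "never", "always", "often"}
HYPOTHETICAL_WORDS = {"will", "should", "could", "can", "try"}
GROUP_WORDS = {"them", "they", "us", "we"}

def flag_answer(answer):
    """Return each flagged word with the issue it may indicate."""
    flags = []
    for word in re.findall(r"[a-z']+", answer.lower()):
        if word in GENERALITY_WORDS:
            flags.append((word, "may describe a generality, not a specific past event"))
        elif word in HYPOTHETICAL_WORDS:
            flags.append((word, "may describe a hypothetical or opinion, not past behavior"))
        elif word in GROUP_WORDS:
            flags.append((word, "may describe team performance, not the candidate's own"))
    return flags

draft = "We usually try to escalate issues before they grow."
for word, reason in flag_answer(draft):
    print(f"{word!r}: {reason}")
```

As in BIO, the flags are advisory: the candidate sees which words triggered them and decides whether to revise.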

BIO Assessment Report

Finally, BIO assembles the candidate's answers into an assessment report. For each question, the report includes the platform and question asked; the key result; a rating of how often this type of situation came up; the witness's name, position, contact number, and email address; the date the witness was emailed; the estimated witness performance rating; a confidence rating in that estimate; and the specific actions taken by the candidate. If the witness responds, BIO inserts the date of the response, the witness performance rating, and the witness action description.
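The per-question report described above can be modeled as a simple record. The field names and sample values below are illustrative assumptions, not BIO's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch of one question's entry in the assessment report.
# Field names and the example values are assumptions for illustration.
@dataclass
class QuestionReport:
    platform: str
    question: str
    key_result: str
    frequency_rating: int
    witness_name: str
    witness_position: str
    witness_phone: str
    witness_email: str
    witness_emailed_on: str
    estimated_witness_rating: int
    confidence_rating: int
    candidate_actions: str
    # Filled in only if the witness responds.
    witness_responded_on: Optional[str] = None
    witness_rating: Optional[int] = None
    witness_action_description: Optional[str] = None

report = QuestionReport(
    platform="The role includes analyzing important, complex problems.",
    question="Recall a time when your analysis proved most valuable.",
    key_result="Reduced processing backlog by 40% in one quarter.",
    frequency_rating=4,
    witness_name="J. Smith",
    witness_position="Director of Operations",
    witness_phone="555-0100",
    witness_email="jsmith@example.com",
    witness_emailed_on="2024-03-01",
    estimated_witness_rating=5,
    confidence_rating=4,
    candidate_actions="Mapped the intake workflow and removed two approval steps.",
)
print(report.witness_rating)  # prints None until the witness responds
```

Keeping the witness-response fields optional mirrors the report's behavior: the candidate's side of the record is always complete, while the confirmation side fills in only if and when the witness replies.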

Part 1 of this series introduced the gap between the promise of high-decision accuracy made by advocates of behavioral interviewing, and the more typical level of accuracy found in the field.

Today’s Part 2 outlined the components and methods for a web application that reduces the size of the accuracy gap.

The third and final installment walks the reader through the candidate experience and the decision maker assessment report.

Visit Global Talent Advisors on April 4th to learn more.


Dr. Janz is a seasoned, published thought leader in talent sourcing, assessment, and development who applies the following strengths to creating human asset value: advanced analytical skills, emotional intelligence, realistic optimism, a passion for achievement, and the courage to be new. Tom published the foundational research and wrote the book on behavioral interviewing. He pioneered simulation software for forecasting the business impacts of scientific selection, designed an online behavioral interview platform, and created measures of leadership culture that have a direct link to business unit engagement, burnout, and profitability.
