We know what works when it comes to selecting the best candidates. The classic source of accumulated knowledge on the subject is the 1998 Schmidt and Hunter Psychological Bulletin article titled “The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 85 Years of Research Findings.”
Although widely cited even by self-described non-academics like Lou Adler, the problems with this title leap out. First among them: too many words. Second: the phrase “in Personnel Psychology.” Even in 1998, that label had been dead for decades. Who among the senior leaders of today’s companies knows or cares about “Personnel Psychology”? But wait, there’s more. About a month ago, the findings were refreshed with the release of an updated working paper, now featuring 100 years of research findings (Schmidt, Oh, and Schaffer, 2017).
What do we know?
Age, graphology (handwriting analysis), openness to experience, agreeableness, and extraversion (the latter three from the Big Five personality dimensions) had the lowest prediction power of the 31 screening methods studied. The figures that follow are validity coefficients: correlations between the method and job performance on a 0-to-1 scale. Resume ratings came in just barely above that bottom group at .11. Perfection is not possible when predicting future performance, but we can do better than .11. Reference checks came in at .26 and unstructured interviews at .58.
Measures of General Mental Ability (GMA) occupy the top spot as the best single predictor of job performance and training success, now with a more appropriate adjustment for restriction of range by the authors. This raises the base GMA prediction power to .65, with jobs that require less thinking power (cab driver) falling below that figure and jobs with heavy cognitive demands (software engineer) rising above it. The next highest prediction power comes from interviews at .58. Integrity tests come in third at .46, but they add slightly more than structured interviews when combined with GMA, yielding a top two-method combo prediction power of .78.
Structured interviews plus GMA achieved a population-corrected prediction power of .76. Peer ratings had a reasonably strong single-method prediction power of .46, but failed to add much to GMA. The resulting two-method combo delivered a .65 prediction power, down from the mid-.70s levels already mentioned.
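To see how two methods combine, the standard two-predictor multiple-correlation formula gives numbers in this range. A minimal sketch follows; the intercorrelation values are my illustrative assumptions (integrity tests overlap little with GMA, interviews overlap more), not figures from the paper:

```python
from math import sqrt

def combined_validity(r1, r2, r12):
    """Multiple correlation of two predictors with job performance,
    given each predictor's own validity (r1, r2) and the correlation
    between the two predictors (r12)."""
    return sqrt((r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2))

# GMA (.65) plus an integrity test (.46); integrity tests correlate
# only weakly with GMA, so assume r12 = 0.05 (illustrative value).
print(round(combined_validity(0.65, 0.46, 0.05), 2))  # -> 0.78

# GMA (.65) plus a structured interview (.58); interviews overlap
# more with GMA, so assume r12 = 0.33 (illustrative value).
print(round(combined_validity(0.65, 0.58, 0.33), 2))  # -> 0.76
```

The formula makes the general lesson visible: a second method helps most when it is valid on its own *and* largely uncorrelated with the first, which is why integrity tests edge out structured interviews as a complement to GMA, and why peer ratings, which overlap heavily with GMA, add almost nothing.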
What we do instead.
Hiring consists of sourcing, screening, and decision-making. Sourcing is not covered here. Across the full range of hiring, from startups to international conglomerates, most screening consists of subjective skimming of resumes or application forms, a subjective review of references, and subjective telephone or subjective in-person screening interviews. Note the “subjective” theme. If objective measures enter the field at all, type-based personality measures take the top spot for popularity, followed by trait-based personality measures. Neither of these even shows up on the science radar as working as well as the wholly subjective methods covered first.
Some companies, even some small ones, collect measures that work and send the assessment reports to hiring managers. Some of those companies have a strong enough employer brand to compel candidates to complete those “powerful” assessments. But there is a paradox at work that undermines striking it talent-rich. In the past, delivering consistent prediction power meant lots and lots of repetitive, literal test items remarkably unlike doing the job. They were both endless and tricky, usually starting off with the obvious lie: “There are no wrong answers, just pick the one that first comes to mind as the best fit for you.” So the more powerful assessments are often long, literal, and don’t look much like what makes for success on the job. Now bring that irresistible force to bear on the immovable object of millennial candidates who hate boring, irrelevant, tricky web sessions, answered increasingly on mobile phones. The result: drop-out rates soar, nearing 90% for online screening for high-impact jobs taken on mobile phones. A ninety percent overall drop-out rate means that the drop-out rate among the top-performing 2-10% of respondents you want to progress to the site-visit slate approaches 100%. Oops.
And if that weren’t bad enough, when it comes to final decision-making for professional, high-impact hires, the decision reverts to a subjective final call by the hiring manager or team. One large global bank headquartered in the UK recently shifted its candidate “vetting” process from the HR function to the fraud and risk department in security. It found that the competency-based assessment reports were mostly ignored by hiring managers because they were hopelessly complex, even with the training the managers received. Imagine that: hiring managers found that 32 personality scales and a handful of mental ability scores didn’t help them decide whom they liked best. I suggest that this global bank is more the norm than the exception, at least regarding the use of assessments.
Why we ignore what works.
On long reflection, two principles explain why we are stuck in this costly set of ruts. The first is Darwinian natural selection; the second is “follow the money.”
We haven’t been selected for making objective decisions. Many snake-oil test vendors love the DNA mantra (as in, give us your top ten performers and we will customize our test to hire more people just like them), but the use of DNA in that context is wrong in so many ways. Stubbornly using gut over data in talent decisions is in our DNA, honed over 70,000 generations when people lived in tribes (versus the three generations in which evidence-based talent decisions have been possible at all). Think of “Survivor” versus survival skills. Researchers at Stanford University and elsewhere explore the behavioral genetics of thinking and interpersonal competencies. We have been selected to win arguments over who gets the best part of the cave by appealing to brute strength and, failing that, to fears, doubts, and desires. Data had nothing to do with it. Knowing HOW you know something, HOW SURE you are about it, and how you arrived at that level of confidence is a very recent form of sophistication. It is also the only way to get to the moon, describe the human genome, or cure cancer. I used to get angry at managers who “didn’t want to know how much subjective hiring decisions were costing the company.” Now I know why, but it still hurts both employees and shareholders nonetheless.
Follow the money. It always pays off with understanding. In this case, the primary actors in the drama are the recruiters, staffing firms, and HR managers. They generate a lot of money doing things the way they are done now, and they are not so sure where they would fit into an objective, powerful, efficient, self-correcting selection system. Some of them wouldn’t fit at all, but many would, while doing more highly leveraged work. There is little valuable labor to contribute in the respondent-screening phase of an optimal selection system. Labor that accelerates closing on highly competent, engageable talent, working with the top talent instead of lamely trying to figure out who they are, delivers real value. But it takes senior leadership that has overcome its genetic predisposition toward hiring clones, or buddies least likely to take their jobs away from them. Sam Palmisano, a legendary IBM CEO, never wanted to be the smartest guy in the room. He suggested that to measure the greatness of a leader, you should measure the success of the people that leader coached. He offered that his own success came from becoming more objective and analytical than he ever thought he could be, and then speeding up the process of letting the numbers guide decisions.
The cost of ignorance and what to do about it.
Borrowing from Ben Franklin, President Barack Obama was quoted as saying, “If you think the cost of education is too high, you need to consider the cost of ignorance in the 21st century.” When it comes to selection, the case is even stronger: doing it right often actually costs less, provided you count all the wasted labor and outlay of resume sorts, reference calls, screening interviews, science-free testing, and excessive final-decision site visits.
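One way to put a rough number on that cost is the Brogden-Cronbach-Gleser utility formula, the selection literature’s standard tool for converting validity into dollars. A minimal sketch follows; the inputs are illustrative assumptions of mine, not figures from this article:

```python
def selection_utility(n_hired, tenure_years, validity, sd_y, mean_z):
    """Estimated dollar gain from using a predictor instead of
    hiring at random (Brogden-Cronbach-Gleser utility model).

    n_hired      -- number of people hired
    tenure_years -- average years each hire stays
    validity     -- the predictor's validity coefficient (e.g. .65 for GMA)
    sd_y         -- dollar value of one standard deviation of job performance
    mean_z       -- average predictor score of those hired, in SD units
    """
    return n_hired * tenure_years * validity * sd_y * mean_z

# Illustrative assumptions: 10 hires, 3-year average tenure, GMA
# validity of .65, $20,000 per performance SD, and a selective
# process whose hires average 1.0 SD on the predictor.
print(selection_utility(10, 3, 0.65, 20_000, 1.0))  # -> 390000.0
```

Even with modest assumptions, the gap between a .65 predictor and a .11 resume sort compounds across every hire and every year of tenure, which is the arithmetic behind “doing it right often costs less.”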
Now comes the obnoxious sentence that tells you I am out of words for this posting. But no worries. I will be back with more on calculating business impact, options for optimal objective screening, and evidence-based final decisions. Until then, I recommend Kevin Wheeler’s recent article on Predictive Analytics, Bias, and Interviewing for useful tips on reducing bias in selection.