The previous post in this series warned of nasty talent traps that needlessly limit performance value.
This post describes hiring analytics that overcome those limitations.
The self-evident worst practices in the segments on the left side of the wheel above lead to the specific Talent Traps outlined in the previous post; the Talent Accelerators shown in the slide below are designed to overcome them.
Scenario Modelers are interactive online surveys that dig deep to collect the specific behavior patterns that separate successful performers from struggling ones. What do successful people do that accelerates their value contribution? What do struggling people do instead? What talent characteristics signal whether a person is more likely to act effectively vs. struggle on the job? Our scenario modelers prioritize core competencies and collect action descriptions for one or more facets that combine to paint a clear picture of performance on each competency.
Progressive Self Selection tackles the increasingly serious problem of top-talent abandonment (good people click away from bad tests: long, boring, and irrelevant ones) in three ways:
- First, we offer fast, visual profilers that cut testing from 200 clicks to as few as six. These profilers cover personal work style, quick & clear thinking, abstract reasoning, competitive strategy, integrity, sales, customer service, safety and leadership culture.
- Second, we add gaming elements to the tests with levels, scores, and sounds that make the test more engaging.
- Third, we rethink the traditional hiring funnel, where candidates spend 20 to 90 minutes answering dozens of multiple-choice items, often receiving no feedback at all. Other times, they get the dreaded turn-down email (“unfortunately, due to a large number of very qualified….” you know the routine). Once in a blue moon, they get a request to set up an interview. What’s worse, candidates have to go through this grind again and again to get an interview, and can look forward to several interviews before they get an offer.
Instead, we offer a process where all candidates get some coaching feedback whether they land on the short list or not. Throughout the screening process, we offer feedback on what they really want to know (how likely it is they will be invited for an interview). They also see a clear description of what’s involved in the next step. The first and the last decision in any hiring process is made by the candidate.
Accurate Assessment, applied at the final decision-filter stage, needs to show consistent predictive power in the field with actual candidates, or to sample the kinds of performance challenges hires will face on the job (ideally both).
On the first point: a depressing proportion of assessments on the market today were created using science-free methods (the ever-popular Myers-Briggs Type Indicator, for example). Some begin with a pinch of science but quickly drop it to focus on making money.
Those inventories are based largely on one initial development sample of incumbents who take the test, with their test scores correlated against a measure of job performance. Lo and behold, predictive power emerges (correlations in the .20s and .30s), and the test publisher turns to making money.
Here’s the problem: you can’t hire incumbents (they already work for you). The other thing about incumbents is that they answer test items more openly, since they have the job and have likely been told, “Don’t worry about this testing project. We won’t use the scores for anything but testing the test.” So the decision power looks good, the test publisher sells the test, and the HR manager feels great. Only one problem: when actual candidates take the test and their performance is checked later, the correlations consistently drop from the .30s to the single digits or teens. Oops!
On the second point: many personality and mental ability assessments look nothing like the job, which leaves candidates wondering how the test can be a fair basis for deciding whether they get an interview. Assessments based on performance scenario modeling instead present candidates with challenges like the ones they will face on the job. Typically, candidates rate a series of action options on which are best or worst. Our approach has candidates rate how close each action option comes to their “best behavior” in these realistic work scenarios. That way, candidates who rate an action option as highly similar to their best behavior can be asked to describe their performance in that scenario, should they progress to a behavioral interview.
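A minimal sketch of this kind of scenario scoring, with an invented scenario, expert key, and rating scale (none drawn from any actual product): score the candidate by agreement with the expert key, and flag highly rated options as probes for a later behavioral interview.

```python
# Hypothetical scoring for one work scenario. Experts key each action option's
# effectiveness (1-5); the candidate rates each option's similarity to their
# own best behavior (1-5). Options and values are illustrative only.
expert_key = {"coach the rep": 5, "escalate at once": 2, "ignore it": 1}
candidate_ratings = {"coach the rep": 5, "escalate at once": 3, "ignore it": 1}

def scenario_score(key, ratings):
    """Agreement between candidate ratings and the expert key, scaled to 0-1."""
    diffs = [abs(key[opt] - ratings[opt]) for opt in key]
    return 1 - sum(diffs) / (4 * len(diffs))  # 4 = max possible gap on a 1-5 scale

# Options the candidate rated as highly similar to their best behavior (>= 4)
# become follow-up probes if they advance to a behavioral interview.
probes = [opt for opt, r in candidate_ratings.items() if r >= 4]

print(round(scenario_score(expert_key, candidate_ratings), 3))  # 0.917
print(probes)  # ['coach the rep']
```

The design choice worth noting is the second output: similarity ratings do double duty as scores and as pre-selected interview probes.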
Automated Behavioral Interviewing removes the greatest source of noise in the decision stream leading to candidate offers: the interviewer. Most readers know that traditional interviews, which rely on candidate generalities and opinions, don’t work well. Most readers have heard that interviews focused on specific past achievements do work. I published the first scientific research and wrote the book on behavioral interviewing, so I can confidently say that “behavioral interviewing works, but it is not working.”
How does that make sense? Research published by people not selling anything shows that interviewers who follow behavioral interviewing best practices consistently generate candidate scores that correlate with job performance. Good.
Here’s the thing. Most hiring managers don’t follow those best practices. They go to the workshop and eat the doughnuts. When it comes to hiring interviews, they remember one or two questions from the workshop but largely revert to their old favorites. When they do ask behavioral questions, they mostly fail to notice when candidates wander away from a behavioral answer. They don’t take notes focused on what the candidate actually did (notes that even they can read the next day). They don’t rate each answer against clear behavioral descriptions of terrific vs. terrible performance.
Instead, they form an overall impression of whether to advance, waitlist, or reject the candidate after the first answer or two.
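The best practice the text describes, rating each answer against clear behavioral anchors rather than forming a global impression, can be sketched as a toy rubric. The anchors and trigger keywords below are invented for illustration; they stand in for an interviewer's trained judgment, not any real scoring instrument.

```python
# Hypothetical behaviorally anchored rating scale (BARS) for one interview question.
# Anchor descriptions and keywords are illustrative assumptions.
anchors = {
    1: "blamed others; vague opinions, no specific action described",
    3: "described a plausible action but no measurable outcome",
    5: "described a specific action, an obstacle handled, and a quantified result",
}

def rate_answer(notes: str) -> int:
    """Toy keyword rubric standing in for an interviewer's anchored judgment."""
    score = 3  # default: plausible but unquantified
    if any(w in notes for w in ("blamed", "vague", "opinion only")):
        score = 1
    if any(w in notes for w in ("%", "increased", "reduced", "closed")):
        score = 5
    return score

print(rate_answer("described stalled deal, re-scoped proposal, closed in 30 days"))  # 5
print(rate_answer("gave opinion only, vague generalities"))                          # 1
```

The point of the sketch is the structure, not the keywords: one rating per answer, tied to written anchors, recorded before any overall impression forms.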
Bottom line, decision power gets cut in half, if not more.
We are poised to launch a natural language alternative that administers behavioral interview questions over the internet in video, audio, or text and collects the candidate’s answer via PC, tablet, or smartphone. The stored answer is converted to text and scored using natural language analytics (latent semantic indexing).
Research done at ETS, Gallup, and Purdue University shows strong scoring power for the LSI-scored answer compared with a qualified job expert’s score of the same answer. Correlations range from the .50s to the .70s, very strong indeed for business research.
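Latent semantic indexing itself is a well-documented technique: build a term-document matrix, reduce it with a truncated SVD, and compare documents by cosine similarity in the latent space. The sketch below shows those mechanics on a toy vocabulary; the benchmark answers, the dimensionality k, and the scoring rule are illustrative assumptions, not the system described above.

```python
import numpy as np

def term_doc_matrix(docs, vocab):
    # Raw term counts; a production system would typically use TF-IDF weighting.
    return np.array([[doc.split().count(term) for doc in docs] for term in vocab], dtype=float)

# Hypothetical benchmark interview answers, pre-scored by job experts
# (the first two are specific and behavioral; the third is vague opinion).
benchmarks = [
    "i set a goal met the customer and closed the sale that quarter",
    "i analyzed the data built a plan and delivered the result on time",
    "i think teamwork is generally important and people should communicate",
]
vocab = sorted({w for d in benchmarks for w in d.split()})

A = term_doc_matrix(benchmarks, vocab)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                       # keep the top-k latent semantic dimensions
Uk, sk = U[:, :k], s[:k]

def project(doc):
    """Fold a new document into the k-dimensional latent space."""
    counts = np.array([doc.split().count(t) for t in vocab], dtype=float)
    return counts @ Uk / sk

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Score a candidate answer by its latent-space similarity to each benchmark.
candidate = "i built a plan analyzed the data and delivered on time"
print(cosine(project(candidate), project(benchmarks[1])))  # high: behavioral match
print(cosine(project(candidate), project(benchmarks[2])))  # low: unlike vague answer
```

In a real system the similarity scores would be calibrated against expert ratings of the same answers, which is the comparison the correlations above refer to.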
Those wishing to learn even more can find additional details at www.BiLABs.Science. If you like this post, please share or tweet about it.