Here's My Cheeky "...but!" on Using The Predictive Index in Pre-Hiring
Can a choice of words predict personality?

This article is a review of The Predictive Index (PI), a behavioral assessment tool some companies' HR teams use during the pre-hiring process, based on my personal experience (with opinions mixed in about all sorts of semi-related things). Hope you enjoy! What's your experience been?

Disclaimer: the purpose of this piece is to share a point of view, solicit other opinions, and also to share a touch about myself as shameless self-promotion... but beware! I'm a bit risky, kinda cynical, and perhaps overly bold in my word choices. I gotta be "no holds barred." I mean, I'm a Maverick by nature. If I didn't act that way this would just be public shaming, and, as a Collaborator, I'm just not about that.

THE SITUATION

I was recently crushed to find out, as a result of taking a personality assessment required by a hiring process, that I am a personality type "Degenerate Loser Douchebag", and that I didn't get the job. I'm still trying to figure out which thing sucks worse. Damn. I'm Smh kicking rocks right now, wondering: wtf can I do??? Help!

Shaking my head, kicking rocks, feeling whack!

Sike. Just kidding! Lol. That didn't happen. I mean, that would be really messed up, right?

I was being "extra" there, but the reality is that many people do get anxiety about personality assessments (and other tests) that come up in some companies' hiring practices. It's understandable. They SHOULD have concerns about the possible negative outcomes and judgments they might incur as a result of them... from people who otherwise don't really know them... yet who also may hold the keys to their future livelihood in hand.

In case you don't know, gatekeepers and decision makers in the hiring process are often quite fickle and biased in all sorts of ways, whether intentional or not. That's just a plain fact.

Sorry to lie to you before about kicking rocks and all that, by the way.

The real story is that I did recently take the PI for the first time and, oddly enough, a second time about ten days later. It was an applicant requirement for two different roles I applied to. In both processes, taking the PI was required prior to getting any contact for an initial person-to-person phone screening. After the first PI (turns out, I'm a Collaborator), I got an invitation to a phone interview. After the second (turns out, I'm also a Maverick), I did not.

It's true that I didn't get either job. While I don't think that I was disqualified from either role due to my PI results, I don't know for a fact what role the results really played. I am interested to know how much weight companies that use the PI put on its results during hiring, if you happen to know, but I'm sure there's great disparity in its practical application (another part of the problem), and I also doubt I'll actually get much real feedback about that at all.

I keep hearing about transparency these days, but few organizations actually do it well regarding their hiring practices in my experience. C'est la vie and oh well. I'll never know, so "Who's to say?"

For what it's worth, if my PI results somehow disqualified me from either role, I don't care... because the results of both assessments were fairly accurate in representing A LOT about me, and I, like anyone, want to work on a team that values me for who I am.

Nonetheless, despite the accuracies of the PI, or any other behavioral predictor for that matter, I am NOT a fan of using them as a hiring determinant at all. And, honestly, part of me is wary of working at an organization that employs them during applicant screening. It's myopic... and kinda lazy, quite frankly.

With peace and love, I think it's stupid to rely upon finding the "correct" personality for a role based on the PI, or any similar type of behavioral predictor test... and my overall experience demonstrates some reasons why. I mean that, first, in the sense that throughout my life and career I have seen people transcend limitations and do miraculous things you'd never expect from them many, many times, and I've also seen some big, unexpected letdowns from "perfect" people to boot.

The ability to transcend IS the human condition, just as to err is human as well.

Aside from those maxims, I'll make my case in this article based on my experience with the PI test specifically: that it is, in and of itself, a flawed mechanism which should not be required, relied upon, or even considered by gatekeepers or hiring teams at all in candidates' job application processes.

Plainly, requiring personality assessments that purport to predict candidates' future behaviors based solely on their personality type as a pre-offer (or pre-interview) condition is unfair in terms of social equality and bolsters both process and individual biases.

Even with all the really cool science behind them, and with some credit for accuracy where it exists, I find the use of personality classification and behavioral prediction tests in the hiring process to be shallow... and pedantic.

THE PROBLEM

In case you're unfamiliar with the PI Behavioral Assessment, it is a very simple test to take. In two rounds, participants first choose adjectives from a list that they feel describe the way others expect them to act; then, from a fresh list of adjectives, they select the words that describe how they see themselves.

Based on those inputs, the PI categorizes the test subject's personality into one of 17 profile types. Each type is associated with certain sets of traits, motivations/drives, and overall statistically reliable norms of behavior styles (based on where they range within polarized spectrums) that employers can (allegedly) expect from the test-taking applicant in the future. Each type also fits into a certain higher-level grouping as well.

Restated: the use-case sales pitch is that each ONE person with THAT birthday and that ONE social security number (if applicable) whose MOTHER'S NAME is {your mom's name} fits neatly into one of 17 boxes, well enough that it is the "right move" for employers to estimate how they'll conduct themselves in THE role they're applying to take on. Make sense?

In the context of use during the hiring process, the results and summary are intended to inform hirers' imaginations about how the applicant will behave with the team, in the culture, and, ultimately, in the role they've applied to take on as a whole... or how they may not fit, for that matter, in certain ways. In my case, the PI test reports represented me to the prospective employers before any of their people actually spoke to me at all.

The use of the PI in this way subscribes to the premise that it is a reliable (enough) predictor of ALL human behavior, based solely on personality styles, that it should be used to assess INDIVIDUALS' fitness to perform a specific job with a company.

*pause*

Moment of reflection: I wonder which caveperson invented the wheel... did only one type of personality figure it out and square away everyone else? Hmmm... people have been figuring out how to innovate and problem-solve, simply as a survival mechanism, for a long time, regardless of challenges or innate limitations... my guess is that all kinda cavepeople figured that shit out pretty well, regardless of their self-view or how heavily they felt they needed to conform to others' expectations. My guess is that every kinda personality of caveperson figured out the wheel and fire around the same time as each other. Then again, the world may never know.

*unpause*

The overall insights of the PI and its classification of test takers among the 17 types are derived from a comparison analysis between a person's sense of Self (their natural tendencies), Self-Concept (their view of, and impetus to oblige, conformity standards), and a Synthesis, which represents the overall prediction of how that person reconciles the other two sets in their behaviors.

Based on these concepts, the employer can (supposedly) predict from the PI how the applicant actually behaves in the world, and therefore how they will behave in the job opportunity. It does not account, at all, for the applicant's actual learned skills and experience, nor their actual evidenced behaviors in the world historically.


It's understandable that employers want tools to help them objectively assess candidate-role fit quickly and accurately for the sake of everyone's time and money. However, have you ever seen the movie "Minority Report"? Here's a quote from it:

"There hasn't been a murder in 6 years. There's nothing wrong with the system, it is perfect."

THAT is the pitch to employers. In essence, the results are supposed to be a tool to judge someone's ability to be effective in THAT role with extremely high statistical probability...

...even though the idea of which types of people are a fit for that role are also contrived of someone's imagination.

I mean, the specific role isn't "plugged in" to the test, as far as I know. The way I understand it, the "types" that are a fit for "that role" are predetermined without regard to the actual applicant pool and its individuals at all.

My guess at how the type-fit classification phase of the process normally works for businesses using the PI is that some person (perhaps in a meeting? perhaps in a silo? perhaps under advisement of a customer success person onboarding the tool as part of the services package?) probably figures out which "types" go with which roles before there are any specific people applying at all. If I'm right, it makes me sad to know that some organizations limit the capabilities of their hiring teams so badly in this way.

One thing for certain is that the individual applicants' past performance and real-world LEARNED SKILLS are not known at this phase in the process, when the "right types" are being picked. BIG ASSUMPTIONS are made in these predeterminations, which are inherently impossible to make without bias.

This process is therefore unsound and unfair, and it stifles workforce thought diversity before a single application arrives, even though the eventual candidate pool may contain a plethora of viable, skilled, and talented people who could truly help the overall betterment of the organization.

FURTHER ANALYSIS

Neither time I took the test did I "study" for it, Google it, or anything like that. My going-in position is to take assessments like the PI (and MBTI, McQuaig, et al.) totally fresh and uninfluenced, as much as possible. I only first started studying the details of the PI itself (and how it's being applied toward workforce shaping) after getting copies of my results, upon request, this past week.

By default I'm someone who strives to respect (and enforce as a duty) the integrity of tests in general, because I like to see how things truly measure up against standards. I hope to get value from a better understanding of reality in the results of testing. I kinda always want accurate outputs/results in ALL things, universally, in detail, on different levels, I suppose. I digress.

*pause*

Side Note: Just as some states entitle job seekers to copies of credit reports, background checks, etc., that employers might obtain as part of the job application process (btw, all states should, IMHO), there should absolutely be laws entitling applicants to copies of all pre-employment testing results.

*unpause*

When I first took the PI I read the test introduction before clicking "start" and was quite intrigued, wondering: To be? Or not to be? Do I "see" myself accurately? Do I portray my true self to others clearly? Am I well or misunderstood in this world? I was excited. I was trusting and optimistic that the test would reveal something profound and truthful.

The test itself was anticlimactic, and I was done in 5-10 minutes or so. Both times I was thoughtful, introspective, and honest as I selected my answers. However, based on the results, my answers between the first test and the second test 10 days later must have changed. For whatever reason (Mood differences? Change in thought process? My kids interrupting me while I was doing it? Some kind of overall change in my self view and how I feel the world views me???) the results of my two tests are different in some major ways.

Here's how my results fell, graphically, between the two tests:


As a reminder, both times I took the test I hadn't researched it at all, nor did I approach it with intent to "game the system" in any way. Both times I was a clean slate and answered honestly from my point of view at the time. In the second go round I did not try to repeat or remember my answers from the first one.

Also, in neither case did I (consciously) try to project my answers against the specific roles; rather, my answers were all rooted in my own overall life and real-world experience rather than in some imagined future state. These distinctions are definitely relevant variables for the reliability of the tests' results, I believe.

Perhaps had I wanted to game the system I would have just made answers that aligned with the job descriptions. Seems easy enough to do in hindsight.

I wonder if that works... do you know?

Anyways, if you look at the graphs in and of themselves here's what jumps out at me right away: I came out as two different overall classifications between the two tests.

  • Maverick
  • Collaborator

So what, you say?

If I look at this as an experiment, then based on my sample the test is at best 50% accurate at sorting a given person into 1 of 17 overall boxes (assuming one of my two conflicting result sets is overall "right"). But even that 50% is only "beer math."

That is, there are two different opportunities for a "right" or "wrong" outcome, at least, here...

...so the odds of any one of these overall classifications being "right" are no more than 1 in 4, 25%.

These odds just reaffirm something for me that I already know, which is obvious (duh):

people are more multidimensional than 1 of 17 types, and in that regard the PI test has at least one invalid premise.
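For the quantitatively inclined, here's a quick back-of-envelope simulation of that test-retest point. This is purely my own toy model, not the PI's actual scoring or statistics: I assume a made-up per-administration "reliability" (the chance the test reports a person's "true" type, otherwise reporting a random wrong type out of the 17) and estimate how often two sittings of the same person agree.

```python
import random

N_TYPES = 17  # the PI's 17 profile types

def administer(true_type, reliability):
    """Toy model of one test administration: report the person's 'true'
    type with probability `reliability`, otherwise a random wrong type."""
    if random.random() < reliability:
        return true_type
    return random.choice([t for t in range(N_TYPES) if t != true_type])

def agreement_rate(reliability, trials=100_000):
    """Estimate how often two independent administrations of the test
    assign the same person the same type."""
    agree = 0
    for _ in range(trials):
        t = random.randrange(N_TYPES)  # the person's hypothetical true type
        if administer(t, reliability) == administer(t, reliability):
            agree += 1
    return agree / trials

for r in (0.5, 0.7, 0.9):
    print(f"per-test reliability {r:.0%}: two tests agree "
          f"~{agreement_rate(r):.0%} of the time")
```

Even under this generous toy model, a test that hits the "true" type 90% of the time still disagrees with itself roughly one run in five, and any disagreement means at most one of the two results was right, which is exactly all my two conflicting reports can tell us.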

For all the total business costs (time, $, operational impacts, efficiency, growth, culture, et cetera) associated with hiring and getting it right (or wrong), I advise folks to save themselves the trouble and expense of relying on personality classification behavioral predictors in hiring decision making.


EVERYONE, whether experienced at hiring or not, has at least 50% odds of picking right or wrong about an individual applicant. Plus, the more you make the personality and culture fit assessment on your own through interaction the better you'll get at it, and your odds (and skills!) in doing so will go up.

Between the two tests my Self and Self Concepts are nearly opposite graphically, yet my Syntheses are very similarly shaped.

And so, after looking over the graphs for a moment, I read the report summaries...

While the results of the tests I took were not the same, I was flabbergasted at how accurate both PI result summaries were in describing my personality, mostly. Yet even between the two reports there are conflicts.

What I really noticed, though, is that they both match the way I have described myself in my resume and on my LinkedIn profile A LOT. It's kinda uncanny. It's almost so remarkable it appears contrived, but it's not. Yet there are some very key differences as well.

Regarding those differences, I am confident that I have objectively represented myself, and the behaviors and skills anyone can expect from me, better on my resume than the PI has in either of the attempts it's had to get it totally right.

Additionally, the recommendations from others who have experience with me (on my LinkedIn page, for example) directly challenge some of the PI tests' findings, and align with MY resume and LinkedIn / online "brand" transparency. I can assure you, the assertions from my colleagues' informed recommendations are MUCH more reliable on ANY point that conflicts with the PI.

Not trying to quibble about it... I just want us to have the facts straight.

Within this evidenced shortcoming of the PI lies another reaffirmation of basic common sense:

If you want to make the right decisions about people there's no shortcuts around dealing directly with people to do it best.

Based on my PI experience, I credit both reports as mostly reliable about many elements of my personality, to a degree. However, people should not put faith in the PI, in any capacity, as an adequate tool for predicting actual behavior in hiring.

Using the PI for adjudging what behaviors a candidate will exhibit in the specific role and organizational environments at play without accounting for what ACTUAL SKILLS and experience they have is akin, basically, to judging them by their astrological sign. Lol.

Hope you appreciate the depth of that movie reference against this discussion. Apropos to these matters, no? Made me chuckle to think about, lol.

(Self Promotion: If you're interested in examining and comparing for yourself in detail, both PIs and my resume are uploaded to my LinkedIn page under my most recent experience, working at "To Be Determined...". To Be Determined is the shittiest place I've ever worked, by the way. Okokok, no more self-promotion from here on out.)

I was quite skeptical before taking the test, yet I am pretty impressed overall with the accuracy of the results on the whole. BUT, ironically in at least two ways, the PI results' summaries convey certain things about me in a way that suffers from poor word choice, and lacks perspective.

If you follow me on this you'll be convinced the same, even if we've never met.

Comparing these documents (my resume vs. the PI results) is what made me realize the point I made earlier: the PI reports are mostly accurate, but my test result summaries' word choice and perspective differed from how I described myself in some very curiously inaccurate ways.

Does my reference to irony earlier make sense yet? Get it? Because you choose words for the test, and it's supposed to be so deep, ya know? Lol.

I'll explain the perspective part more in a sec.

Before divulging more, I'm curious: if you're inclined to look at my documents, do you notice the differences in word selection and tone? Are they important distinctions, or am I just quibbling over semantics?

Given the (most often) impersonal nature of hiring processes these days (compartmentalized from networking), the limitations facing users of the systems that enable them (on both sides, for recruiters and applicants alike, the systemic bottlenecks and cumbersome vampiring of precious time are constant), and considering the widespread use of ATSs, "skills" searching, and keyword scanning with AI, et cetera, as part of that whole equation, the differences stick out to me as important.

If the importance of word choice in the job hunt game seems subtle to you ya might not be as woke as you thought.

It takes time for job seekers to make their resumes (and LinkedIn profiles too, I guess) ATS-centric... or compliant... and optimized. One way PI's product could be improved is to use language that is more ATS-centric, although, as we've already pointed out, it is not based on actual skills measurement at all.

In actuality, the PI report summary language works against skill-based assessment techniques completely.

My PI summaries take some big leaps in how they convey personality traits as if those aligned with the tested individual's skills, which is toxic, especially in the hands of an undiscerning eye, and they neglect to acknowledge a couple of CRITICAL universal human truths:

  • Everybody ain't born with the same tools.
  • People will always do what they want to do, you can't MAKE people do anything.

Using my results to explain what I mean: both of my PIs put me in the "Social" grouping as someone who's extroverted. I agree. I have a fairly extroverted and open personality naturally.

However, both test results also assert that I am someone who is unconcerned with details, which is terribly misleading and lacks the perspective of actually being familiar with me. In reality, I'm someone who likes to delegate and empower, and I have a very well-developed capacity for keen attention to detail, which has been a huge factor in my success throughout my career!

My point in sharing those examples is that how well someone might perform in a role depends on so much more than their innate psychological nature. There's a difference between personality and behaviors, and the PI is not a behavior predictor that should be used in a hiring context. People's potential effectiveness in a job must be considered against a total candidate concept, weighed against the specific company, situation, and role individually.

By the way, I like the concept of BE, KNOW, DO.

A person might have a personality that wants to be able to do certain things, yet not have the aptitude for a role that may otherwise be a "personality fit". A person might hate a job but perform it at record breaking levels out of motivation to earn, or for some other purpose that has nothing to do with their Self or Self concept at all.

Every applicant's actual abilities and motivations are unique to them and extend well beyond the PI's considered factors.

What is also clear regarding the PI summary's language choices is that it leans toward exclusion. Back to the example above: the PI's language "spin" reports in my summary that I'm unconcerned with details. The way I read it, that statement is a bit of a dis and misleads (hiring) people away from reality. Objectively, my judgement, ability to delegate, and preference to empower rather than micromanage are very strong leadership attributes I hold, and I can needle down into the weeds of detail as well as anyone. Yet the PI doesn't provide any of THAT perspective.

Distinctions of this kind, with informed perspective, are completely lost in the PI results. A fickle, time-starved, or shortsighted hiring entity might read the PI's comment on my level of detail orientation and throw away my resume, because the PI is there to help them weed people out and make their process faster. That is its function, more than actual matching of an individual to a role, and it can be too effective at misleading businesses away from outstanding candidates as a result.

So, if your hiring isn't going great and you're having a tough time (especially if that's happening alongside using the PI or some similar predictor for hiring decisions), here's an idea that could be a game changer for you: check your mentality.

Meaning: re-focus the energy you're spending on eagerness to disqualify people toward learning how to harness the power of diversity and better understand people's value outside of your preconceived notions and biases.

If you look at it objectively and logically, there is a high probability that these kinds of testing systems are actually working against you... and you may also be spending money on it. Those aren't the kinda business decisions I make... it doesn't pass the smell test as good for business at all, in my book.


BETTER SOLUTIONS

Ok, I know I'm long winded. So, I'll stop just talking crap and get to some key points focused on helping solve the problems that opened the door for these kinda tests being used during hiring in the first place...

I do think there's a good use case for the PI, just not for hiring. I think it could be useful for deliberately planned team and process building exercises, to aid in organizational learning and development, and to help co-workers better understand each other's innate personalities.

Depending on costs, organizational maturity, organization type, culture, etc., I'd likely only spend on it in surplus times or at certain waypoints in organizational transformation, growth, and maturity, toward the end of bolstering team cohesion, hopefully.

The positive aspects of the PI tool have to be employed properly by skilled leaders for this to work, though, toward the right goals, and NOT in hiring decision contexts.

If you're someone (or dealing with someone) who absolutely insists on using some type of testing mechanism as part of hiring, or are hiring for the types of roles or sizes of candidate pools that really require it, I ultimately recommend some type of legitimately proctored aptitude and/or knowledge/skills-based test.

Some may argue with me, but in my experience the ASVAB functionally works very well, although a test of that scope is excessive to employ commercially. I recently took the Management Battery as part of an application process with the City of San Francisco. It was a solid testing mechanism, and I hold it in high regard as a viable tool for assessing management concept skills. Google it!

There are other types of tests out there, I'm sure, that are just as effective as those I've just mentioned, and as I was researching the PI a bit for this article I found that Lou Adler has a lot to share on this subject that looks pretty spot-on to me. I recommend checking out his messaging on this topic if this stuff interests you.

Anyways, to reiterate, if you are going to require pre-hire aptitude/skills testing it must be proctored and standardized to truly be effective and fair. Technical phone tests, for example, are biased. Also, non-proctored "click whenever" computer tests are unsound and unfair as there's ENORMOUS variation in people's test environments and access to cheating resources.

If you think the concept of proctoring is impractical you're just being a pessimist.

Here's a #freechicken example for when you've constantly got large pools of candidates to test: rather than spending bits and pieces of your hiring workforce's time all over the place, chasing every candidate's testing requirements and each set of results across time and space just to get unreliable outcomes, funnel your applicants to YOUR scheduled times and places and batch test them at once.

Control the testing! You want LEGIT results, right?

Perhaps you could have multiple testing windows, locations, or even software that would allow you to at least mitigate cheating through remote proctoring control mechanisms. If you employ this method smartly, it can help synchronize your hiring operationally and create cost and time efficiencies for the personnel on your team engaged in facilitating the hiring process.

A routine of testing applicants like this can be a mechanism to iteratively align the phases of hiring processes for many different roles throughout a large company (which might require different tests for different roles). Multiple pools of candidates can be tested together across the whole of your people ops!

This method can also enable integration of your hiring process routines into a full on operational battle rhythm, but that's a whole other subject. #freechicken.

I recently read an HBR article about a study that found interesting results testing for cognitive diversity as a factor in team performance. I recommend reading it. I think that the use case potential of the findings is very promising, but haven't learned much else about it.

While I appreciate that it (the AEM Cube) is more focused on inclusiveness, and deliberately aims to eliminate bias in its findings in a way the PI (which I view as seeking to exclude) does not, my going-in position is that it also is unlikely to be the right approach to any sort of hiring assessment.

Why??? F***ing simple: human behavior is unpredictable.

I concede that there are overall averages in human behavior, with SOME predictability that usually aligns with "reasonable person standards."

However, as a staple of American values, every person should be entitled to equal opportunity on their individual merits rather than be required to conform to biased personality projections that propagate systemic disadvantage to those who are exceptions to the average.

Sticking up for values in this way should be a guiding principle to all companies' hiring process requirements decisions.

And, while some level of basic human predictability can be asserted as generally reliable, the unpredictability of human behavior cannot be statistically modeled to the point of truly reliable future prediction, even by AI.

If we could actually predict the future by simply testing personalities, we'd have achieved world peace by now and be able to stop random terrorist acts. I enjoyed "Minority Report" and "A Beautiful Mind"; they're very entertaining movies, yet both, in the end, support my point of view.

THE MORAL OF THE STORY IS...

Things are tough all over. Work is a grind, job seeking is a grind, hiring is a pain that never goes away, the pressure is on, and business is fast with high stakes.

Nonetheless, there's no shortcuts to treating people right.

Treating people right, from start to finish from first introduction, is how you get and keep great people.

There's a lot of snake oil salespeople (unfit applicants) out there who are better at getting laid (hired) than they are at maintaining relationships (doing a great job with earnestness, performance, reliability, competence), and there's a whole lotta hiring heartbreak on all sides of the equation...

No one wants to get burned (make a bad hire)

...but, c'mon, don't treat the potential next love of your life (unicorn candidates) wrong due to the biases you project after dealing with that last no good cheating playa' you dated (toxic hire).

Every individual must be judged on their professional merits.

As explained in the movie "The Last of the Mohicans":

"Magua's heart is twisted, he would turn himself into what twisted him."

Don't get it twisted, it's important to want to vet people, I agree, but rely on indicators of aptitude, proven knowledge and skill, past performance, and experiential observation to assess potential fit instead of any test that claims to be able to predict human behavior.

If you get test results that conflict with others' reliable recommendations and real experiential proof, put faith in people by default ahead of predictive theories in hiring.

People are multidimensional and unpredictable, and I'll stake my cheeky butt on that every time.

Thanks for sharing. Reminds me of Terence's Latin quote: as many opinions as there are men. Thanks again. I am considering a remote hybrid entry-level insurance sales gig.

Stephanie Buerer (Coello)

High School Counselor

4 months

As someone who studied Psychology and Counseling, we learned in our program that personality tests are basically bogus. They’re subjective and people answer them the way they want to be perceived, not the way they truly are. People get different results every time they take them. I don’t think anyone should use these for hiring purposes. I’ve seen super amazing and successful employees score in a way that HR didn’t like and I’ve seen some horrible people who get a favorable score, but then they get fired for being horrible to others.

DeAnne Felt

Sales Specialist, customer consultation,service

9 months

The new world! What happened to common sense? I am 70, looking for work, and am disappointed and in disbelief at this testing! I liken it to a dating site: you tell them what you think they want to hear. It's not a win-win for either side. I think with this PI you're leaving a lot of your best candidates at the table.

Christopher Picarde

Help Companies Build & Engage Top Teams Through Science

10 months

I highly recommend reaching out to an experienced PI certified consultant. These assessments are not designed to be used by untrained analysts, for this exact reason. People make decisions with half the information. As a Certified Master Trainer and long-time user: nobody should ever use the PI to determine whether somebody can or cannot do something. This is why the knowledge transfer piece is so critical. With that said, it is an extremely effective hiring tool when used the right way by trained analysts. I wish you would have consulted with a PI certified consultant prior to making these judgements.

More articles by Joshua C. Maynard
