Sunday, September 30, 2012

Research update: September, 2012

It's been a while since I've posted a research update.  I was waiting for a critical mass but honestly it was taking a while!  But no more waiting, let's see what's out there...

First up, a topic near and dear to my heart: HR certification.   In the September/October issue of HRM, Lyons et al. took a look at a sample of web-based HR job ads to study the prevalence of requiring the PHR/SPHR certifications offered by SHRM, er, the Human Resource Certification Institute.  Back in 2005, a similar study found that less than 2 percent of jobs listed the certifications as preferred or required.  This time?  15.6 percent.  Boom!  Quite an increase (although obviously still far from the majority).  Now the important question: does possession of said certificates actually predict job performance?  I gotta be totally honest, I've seen some blind admiration of these certifications without any indication that this question was addressed.

Anyhoo, what else is in that issue of HRM?  For one, a study of perceived supervisor support and team-level performance.  Subjects: 75 gas stations in Norway (I just wanted to say that).  Results?  Link between the two.  Implication?  Another competency to consider when hiring supervisors.  Oh, and btw, there's another study in the same issue about HR practices in the organization and links to OCB and customer service (hint: participation is good).

Like soccer (or, should I say, football)?  How about team research?  Either way you'll be interested in this fascinating study, which found that players transferred to another team improved the performance of their new team against their old one.

Finally, this small study found that having applicants complete self-affirming written self-guidance statements prior to interviewing improved their performance.

Okay, next up is the September issue of JAP.

How about another leadership study?  Why not.  In this one, researchers found supervisor consideration behaviors were positively related to employee attitudes (and apparently the more the merrier) while there was an ideal level of initiating structure behavior.  There's a lot more with this one, particularly on P-E fit, so check it out.

Have you ever found yourself wondering, "I wonder what the power of cross-level interaction effects is when conducting tests of multilevel contingency and interactionism"?  Well today's your lucky day, because check this out.

Promoting and fostering diversity in organizations is hypothesized to have various positive outcomes, one of which is creativity.  But previous results are mixed.  In this lab study, the authors propose an explanation: it depends on the extent to which team members take each other's perspectives.  Another reminder that simply recruiting and hiring people with different backgrounds does not ensure successful performance.

Anyone performing differential item functioning (DIF) analysis should check out this study, which recommends a particular procedure for identifying optimal anchor items.

Finally, and this doesn't directly relate to assessment but is just darn interesting, a study on "cyberloafing" and the impact that moving to daylight saving time has on it (hint: it's not good).

Okay, back to the topic at hand and the October JOB.

Need another example of the complexity of the relationship between personality factors and organizational behavior?  Check this out (personality factor: conscientiousness).

Speaking of conscientiousness, this study found that adaptive performance (acquiring new competencies as a result of organizational change) was related to task performance but the relationship was impacted by employee conscientiousness (and organizational politics!).

Did someone say conscientiousness?  No, seriously, another study on it.  But it's another interesting one that looks at the importance of group perceptions as well as team composition when analyzing the relationship between individual personality measures and performance.  And it's another sports team study, this time university football players (U.S. football that is).

Last but not least, the October issue of JPSP.  And the first study may just be the most interesting thing I've seen in a while.

In it, the authors demonstrate via lab and field experiments with a variety of subjects and decisions (including graduate school admission) that people seem to have a preference for potential rather than achievement.  What are the implications for recruiting and assessment?  That decision makers may be unduly influenced (at the cost of validity) by the promise of an applicant rather than their demonstrated accomplishments.  However, this warning must be moderated with an acknowledgement that sometimes indicators of potential (e.g., ability scores) can be just as predictive as accomplishments.  So, bottom line: another bias to watch out for.

Next, an elucidating if slightly depressing study of stereotype threat.  The authors demonstrate that the experience of stereotype threat among African Americans and Hispanic/Latino(a)s resulted in scientific disidentification and lowered intentions to pursue a scientific career.

Okay, this next one is tricky because it looks interesting but I wasn't able to find an in-press version (maybe a helpful reader can point one out?).  It's a refinement of a theory of basic individual values, and looks like it has implications for career and applicant selection.

Finally, very last but most definitely not least, a study of the accuracy of personality judgments, a particularly timely topic given recent research that suggests these judgments have significant value in personnel assessment.  Specifically, the authors were looking at hindsight effects (essentially how your perception changes after time and additional information).  Good stuff and implications for anyone wanting to use observer measures of personality.

Saturday, September 01, 2012

Is "big data" relevant for recruitment and selection?

Recently I was contacted by a reporter from the Wall Street Journal about the application of the "big data" movement to predictive analytics and HR topics like recruitment and assessment.  I wanted to share a little about my reaction and my thoughts afterward.

For the uninitiated, the big data movement is all about storing, mining, and extracting information from large datasets.  In the modern HR world, this data is typically found in human resource management systems--systems that contain information like time to fill, assessment results, compensation, and performance management measures.

Why has this become a hot topic?  It's hard to say.  Could be the buzz generated by Moneyball, the book and movie about the Oakland A's GM Billy Beane and his use of data to uncover statistics (e.g., on-base percentage) that outperformed traditional predictive measures such as batting average.  Or it could be Google's study of what predicts supervisory success (shocker: it's not technical skill).  I suspect it's also been fueled not insignificantly by software and consulting companies hoping to capitalize on the interest.

When I was chatting with the reporter, I found it difficult to answer some of the questions about the application of this movement to recruitment and assessment.  It was only later that I realized why: for us, this is an old idea. 

Asking if the analysis of large data sets is applicable to selection is like asking if sunshine is applicable to farming: it's a foundation upon which the practice exists.  Not only is the research that surrounds our field grounded in data analysis, the major impactful discoveries have been in large part based on the analysis of large data sets -- things like the predictive power of cognitive ability and conscientiousness, and the U.S. Army's Project A.  The entire profession of personnel psychology is founded upon the idea that through analyzing data we can help answer big questions like what predicts job success, leadership, and organizational attraction.

So I'm guessing I'm not the only one who is observing this trend and thinking: what took you so long?

Now, does all this mean the big data movement is pointless?  Absolutely not.  To the extent that organizations are renewing their interest in using data analysis to guide decisions, booyah.  But here are my big four concerns:

1) The results are only as good as the data.  Let's say you sic your analytical software on your HR data and find out that candidates recruited from the Northeast don't perform as well as those recruited from the South.  Easy enough, looks like we need to shift our recruiting resources.  But not so fast.  What is your performance measure?  What if I told you that the majority of your supervisors come from the South--might that impact the results?  What if one of the success criteria is knowledge of customers in the South--but you're interested in expanding into other territories?  Without first looking deeply at what our measures are, we risk coming to some very misleading conclusions.  (Some of you will recognize this as "the criterion problem".)
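The supervisor confound above is easy to see in a quick simulation.  In this sketch (all numbers invented for illustration), every candidate has the exact same true performance distribution regardless of region, but Southern supervisors rate half a point more generously and mostly rate Southern candidates--so the naive regional comparison shows a gap that vanishes once you compare candidates within each supervisor group:

```python
import random

random.seed(42)

def simulate(n=2000):
    """Generate (candidate_region, supervisor_region, rating) triples.

    True performance is identical across regions; Southern supervisors
    simply rate everyone 0.5 points higher, and 80% of Southern
    candidates (vs. 20% of Northeastern ones) report to them.
    """
    rows = []
    for region, p_south_boss in (("South", 0.8), ("Northeast", 0.2)):
        for _ in range(n):
            boss = "South" if random.random() < p_south_boss else "Northeast"
            true_perf = random.gauss(0, 1)  # same distribution in both regions
            rating = true_perf + (0.5 if boss == "South" else 0.0)
            rows.append((region, boss, rating))
    return rows

rows = simulate()

def mean(xs):
    return sum(xs) / len(xs)

# Naive comparison: ignore who did the rating.
naive_gap = (mean([r for c, b, r in rows if c == "South"])
             - mean([r for c, b, r in rows if c == "Northeast"]))

# Adjusted comparison: compare candidates within each supervisor group.
adj_gaps = []
for boss in ("South", "Northeast"):
    south = [r for c, b, r in rows if c == "South" and b == boss]
    northeast = [r for c, b, r in rows if c == "Northeast" and b == boss]
    adj_gaps.append(mean(south) - mean(northeast))
adjusted_gap = mean(adj_gaps)

print(f"naive gap:    {naive_gap:+.2f}")    # looks like a real regional difference
print(f"adjusted gap: {adjusted_gap:+.2f}")  # close to zero: it was the raters
```

The naive gap comes out around +0.3 even though no true difference exists--exactly the kind of misleading conclusion a "shift the recruiting resources" decision would be built on.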

2) There's analysis, and then there's analysis.  Anybody can run a correlation.  But do you know about power?  Statistical significance versus practical significance?  Multivariate analysis?  Collinearity?  If that sounds like gibberish, please seek professional help.
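To make the statistical-versus-practical distinction concrete, here's a back-of-the-envelope sketch (the numbers are invented): in a big enough HR dataset, even a trivially small correlation sails past the p < .05 bar while explaining essentially none of the variance.

```python
import math

def t_statistic(r, n):
    """t statistic for testing whether a Pearson correlation differs from zero."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r**2)

r, n = 0.02, 100_000  # a tiny correlation in a huge dataset

t = t_statistic(r, n)

print(f"t = {t:.1f}")                       # far beyond the ~1.96 cutoff for p < .05
print(f"variance explained = {r**2:.2%}")    # yet only 0.04% of the variance
```

"Statistically significant" and "worth acting on" are two very different claims.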

3) With apologies to Kurt Lewin, nothing is as practical as a good theory.  What if you find out that answers to "what's your favorite color?" predict success as a senior manager?  What does this mean?  We can make all kinds of guesses, but without a theoretical framework in place, we're letting results drive "the truth" rather than logically positing a relationship and seeing if the data support it.  Basically what I'm saying is: beware fishing expeditions.  All you have is a correlation; don't make the mistake of inferring causation.
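The fishing-expedition risk is easy to demonstrate: correlate enough pure-noise "predictors" with an outcome and some will look "significant" by chance alone.  A quick sketch (every variable here is random by construction, so any hit is spurious):

```python
import math
import random

random.seed(1)

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

n_employees = 50
outcome = [random.gauss(0, 1) for _ in range(n_employees)]

# Approximate two-tailed p < .05 cutoff for |r| at this sample size.
cutoff = 1.96 / math.sqrt(n_employees)

# 100 "predictors" that are pure noise -- no real relationship exists.
hits = 0
for _ in range(100):
    predictor = [random.gauss(0, 1) for _ in range(n_employees)]
    if abs(pearson_r(predictor, outcome)) > cutoff:
        hits += 1

print(f"'significant' noise predictors: {hits} of 100")  # expect roughly 5 by chance
```

Test enough favorite-color questions against a managerial success measure and a few will "predict" it--which is exactly why you posit the relationship first and test it second.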

4) What do you do with the results?  So employees that drive to work outperform those that take public transit.  Does that mean you force all your employees to drive?  What if your analysis uncovers something uncomfortable about your current executive leaders--what the heck do you do with that?  (a: bury it, b: post and pray, c: use consultantspeak to obfuscate results, d: re-run analysis).

Without first thinking through these and other important questions, the "big data" movement, when applied to big questions like what predicts organizational behavior, risks producing all kinds of erroneous, wasteful, and potentially harmful conclusions.

On the other hand, if this ends up creating an additional sense of energy around evidence-based management, I may end up looking back at this as an extremely positive development in helping organizations succeed.