Monday, June 25, 2007

June 2007 IJSA

Yep, it's journal time again...the June 2007 issue of the International Journal of Selection and Assessment is out...

This time the articles fall into two main camps: those focused on applicant reactions to selection procedures, and those focused on personality testing.


Applicant reactions

First up, Landon and Arvey looked at ratings of test fairness by 57 individuals with graduate education in HRM. The test characteristics included the validity coefficient, the mean test score difference between minority and majority candidates, and test score adjustment for the minority group. Results? Fairness ratings depended almost as much on who was doing the rating as on the test characteristics themselves! Implications? "Extensively cross-validated with a global random sample of over 3 million with criterion-related validities exceeding .90!" may not be worth a hill of beans to some folk.

Next, Bertolino and Steiner report a similar study of test fairness using responses from 137 university students (ahh, where would our research be without students?). Students rated 10 selection methods on 8 procedural justice dimensions. Results? Work sample tests were rated highest (as they usually are), followed by resumes, written ability tests, interviews, and personal references--all things most job seekers expect. What wasn't perceived as well? Graphology--thankfully. The most important predictors of procedural justice were opportunity to perform and perceived face validity. Nothing earth-shatteringly new, but some international confirmation of previous, mostly U.S., findings.

Speaking of international support for the importance of opportunity to perform and face validity among an international sample judging the fairness of selection methods (say that five times fast), Nikolaou and Judge analyzed responses from 158 employees and 181 students in Greece. Methods rated highest (drumroll...): interviews, resumes, and work samples, across both groups. However, students reported more positive reactions to "psychometric tests" (i.e., ability, personality, and honesty tests) than did employees--an important distinction. Also, although there do appear to be individual differences in ratings (see the previous study), core self-evaluations didn't appear to explain much.

In summary: Work samples and interviews--high fairness ratings and (potentially) high validity. A great combination.


Personality testing

First up in the personality section is Kuncel and Borneman's study of a new method for detecting "faking" during personality tests by looking at the particular way "fakers" respond. The authors found support for this method (the sample was not described), with 20-37% of fakers identified. That may not seem like a lot, but they had a false positive rate (incorrectly labeling someone a faker when they're not) of only 1%, given a base rate of 56% honest responding. Not too shabby. Interestingly, the "faker" pattern did not correlate with personality or cognitive ability test results.
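To see why those numbers are better than they sound, here's a quick back-of-the-envelope calculation (a hypothetical illustration using the figures above, not an analysis from the paper). With a 56% honest base rate, a 20-37% hit rate, and a 1% false positive rate, almost everyone the method flags really is a faker:

```python
# Hypothetical illustration: how many flagged applicants are true fakers?
# Figures from the study summary: 56% honest (so 44% fakers),
# hit rate of 0.20-0.37, false positive rate of 0.01.

def flagging_stats(n_applicants, p_honest, hit_rate, false_positive_rate):
    """Return (true flags, false flags, precision) for a faking detector."""
    fakers = n_applicants * (1 - p_honest)
    honest = n_applicants * p_honest
    true_flags = fakers * hit_rate               # fakers correctly flagged
    false_flags = honest * false_positive_rate   # honest applicants mislabeled
    precision = true_flags / (true_flags + false_flags)
    return true_flags, false_flags, precision

for hit_rate in (0.20, 0.37):
    tf, ff, prec = flagging_stats(1000, 0.56, hit_rate, 0.01)
    print(f"hit rate {hit_rate:.2f}: {tf:.0f} fakers flagged, "
          f"{ff:.0f} honest applicants mislabeled, precision {prec:.0%}")
```

Out of 1,000 applicants, roughly 88-163 fakers get flagged while only about 6 honest applicants are mislabeled, so around 94-97% of flags are correct.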

Second, a very interesting study of the "dark side" of personality by Benson and Campbell. Using two independent samples of managers/leaders (Ns = 1,306 and 290), the authors found support for a non-linear (inverted-U) relationship between "dark side" scores and assessment center ratings as well as performance ratings. So, for example, having a moderate amount of skepticism or caution is good--but too little or too much creates a problem. The instruments used were the Global Personality Inventory and the Hogan Development Survey.
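For the statistically curious, curvilinear effects like this are typically tested by adding a squared term to the regression and checking whether its coefficient is negative. Here's a minimal sketch with simulated data (not the authors' actual analysis; the variable names and numbers are made up for illustration):

```python
# Minimal sketch of testing an inverted-U relationship with a quadratic term.
# Simulated data: performance peaks at a moderate "dark side" score.
import numpy as np

rng = np.random.default_rng(0)
skepticism = rng.uniform(0, 10, 300)              # hypothetical dark-side scale
performance = (5 + 1.2 * skepticism - 0.12 * skepticism**2
               + rng.normal(0, 1, 300))           # true peak at a score of 5

# Least-squares fit of performance on score and score^2
X = np.column_stack([np.ones_like(skepticism), skepticism, skepticism**2])
b = np.linalg.lstsq(X, performance, rcond=None)[0]
print(f"quadratic coefficient: {b[2]:.3f}")       # negative => inverted U
print(f"estimated peak at score: {-b[1] / (2 * b[2]):.1f}")
```

A reliably negative squared coefficient means performance rises, peaks at a moderate score, and falls off at both extremes--exactly the "too little or too much is a problem" pattern.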


Okay, there's a third category

Okay, one more study. de Meijer et al. analyzed results from over 5,000 applicants to Dutch police officer positions. The researchers were interested in rating differences when comparing ethnic minority and non-minority applicants. Results? Assessors used a similar amount of (if not more) assessment information when judging the two groups, but relied on a larger number of "irrelevant cues" when judging minority applicants. One other difference: when rating minority candidates, assessors relied more on the ratings of others. Another argument for a standardized rating process!
