Friday, January 25, 2008

January '08 issue of J.A.P.

The January 2008 issue of the Journal of Applied Psychology is out. Unfortunately, there are only three articles directly related to recruitment and assessment, but they're pretty good ones, so let's dive in.

First up, a Monte Carlo investigation of the impact of faking on personality tests by Komar et al. "What is a Monte Carlo investigation?" you may ask. Essentially, it's when researchers use computers to simulate data scenarios rather than collecting actual data from participants/subjects/victims. Anyway, the researchers looked at how various "faking" scenarios affect the criterion-related validity of conscientiousness scores (with supervisory ratings of performance as the criterion). They found that validity is affected by several factors, most notably the proportion of fakers, the magnitude of faking, and the relationship between faking and performance. Another shot across the bow of self-report personality inventories, methinks, although the debate will no doubt continue!
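
For the curious, here's a minimal sketch of the Monte Carlo idea in Python. To be clear, this is not the authors' actual model; the function name, parameter values, and faking mechanism (a random positive bump added to some proportion of applicants' self-reports) are all illustrative assumptions. It just shows how simulated faking can attenuate the correlation between reported scores and performance:

```python
# Toy Monte Carlo sketch: simulate applicant pools, let some proportion of
# applicants inflate ("fake") their conscientiousness self-reports, and see
# what happens to the correlation between reported scores and performance
# (the criterion-related validity). All parameter values are made up.

import numpy as np

rng = np.random.default_rng(42)

def simulated_validity(n_applicants=1000, true_validity=0.25,
                       prop_fakers=0.3, faking_sd=1.0, n_reps=500):
    """Average correlation between reported scores and performance
    across n_reps simulated applicant pools."""
    validities = []
    for _ in range(n_reps):
        true_score = rng.standard_normal(n_applicants)
        # Performance correlates with the *true* trait at true_validity.
        performance = (true_validity * true_score +
                       np.sqrt(1 - true_validity**2) *
                       rng.standard_normal(n_applicants))
        reported = true_score.copy()
        fakers = rng.random(n_applicants) < prop_fakers
        # Fakers inflate their self-reports by a random positive amount.
        reported[fakers] += np.abs(rng.normal(0, faking_sd, fakers.sum()))
        validities.append(np.corrcoef(reported, performance)[0, 1])
    return np.mean(validities)

print("No faking: ", round(simulated_validity(prop_fakers=0.0), 3))
print("30% fakers:", round(simulated_validity(prop_fakers=0.3), 3))
print("60% fakers:", round(simulated_validity(prop_fakers=0.6), 3))
```

Crank up the proportion of fakers or the magnitude of faking and the average validity drops further, which is essentially the pattern (in far more sophisticated form) that the article explores.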

Next, a fascinating study of motherhood bias in both expectations and screening decisions by Heilman and Okimoto. The researchers found a bias against both male and female parents in anticipated job commitment, achievement striving, and dependability, although anticipated competence was uniquely low for mothers and appeared to be the major contributor to lowered expectations and screening recommendations. An unfortunate reminder that these biases do matter and something to watch out for. The results are reminiscent of the negative behavior toward "pregnant" women found in a previous study.

Finally, Zyphur, Chaturvedi, and Arvey present a discussion of job performance. They address two subjects: the impact of past performance on future performance and individual differences in performance trajectories. Analyzing past literature, the authors note that performance feedback directly influences future performance and that different individuals do have different latent performance trajectories, which has big implications for selection. Why? Because many assessment techniques (e.g., T&Es, behavioral interviews) rely on the general assumption that more experience equals better performance. This study adds ammunition for those who argue that assumption has serious flaws (or is at least overly simplistic).

In addition to these three, you may find the following interesting as well:

Challenging conventional wisdom about who quits: Revelations from corporate America. (great stuff for those of you interested in retention)

Effectiveness of error management training: A meta-analysis. (for all you trainers out there)

Effects of task performance, helping, voice, and organizational loyalty on performance appraisal ratings. (for those interested in performance ratings)
