The big buzz this month is over a research article in JASP on using Facebook profiles to judge personality and predict job performance, so let's tackle that one first:
Kluemper et al. had three trained university students judge Big Five personality factors (using the IPIP) based on over 200 public Facebook profiles of other students. A much smaller subsample (56) was used to determine links between evaluator judgments and job performance as rated by supervisors.
So what did they find? Well, among other things:
(1) inter-rater reliability ranged from .48 to .72, which seems low to me but apparently is typical for other-ratings of personality;
(2) two of the other-ratings (emotional stability and agreeableness) significantly correlated with supervisory ratings (around .30), as did one of the self-ratings (extraversion), suggesting to me the two methods might vary in constructs being measured;
(3) the same two other-ratings added incremental validity beyond self-ratings, whereas the opposite was not true.
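For readers who want to see how those two statistics work mechanically, here's a minimal sketch using entirely made-up data (nothing below reproduces Kluemper et al.'s dataset or results): average pairwise inter-rater correlation across three hypothetical judges, and incremental validity as the gain in R-squared when an other-rating is added on top of a self-rating in a hierarchical regression.

```python
# Hypothetical illustration only -- simulated data, not Kluemper et al.'s.
import numpy as np

rng = np.random.default_rng(0)
n = 56  # size of the job-performance subsample in the study

# Simulate a latent trait plus independent rater noise for three judges.
trait = rng.normal(size=n)
raters = np.column_stack([trait + rng.normal(scale=1.0, size=n) for _ in range(3)])

def average_interrater_r(ratings):
    """Mean pairwise Pearson correlation across rater columns."""
    r = np.corrcoef(ratings, rowvar=False)
    upper = np.triu_indices_from(r, k=1)
    return r[upper].mean()

# Hypothetical self-rating, other-rating (mean of the judges), and criterion.
self_rating = trait + rng.normal(scale=1.2, size=n)
other_rating = raters.mean(axis=1)
performance = 0.3 * trait + rng.normal(scale=1.0, size=n)

def r_squared(X, y):
    """R^2 from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_self = r_squared(self_rating[:, None], performance)
r2_both = r_squared(np.column_stack([self_rating, other_rating]), performance)
delta_r2 = r2_both - r2_self  # incremental validity of the other-rating
```

The incremental-validity question is just whether `delta_r2` is meaningfully (and significantly) greater than zero; the study's claim is that it was for the other-ratings over self-ratings, but not the reverse.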
To their credit, the authors caution readers (those who actually go beyond the mainstream press articles) about using their results to support hiring decisions based on Facebook. In fact, I'd like to quote them:
"Our findings should not be used by organizations as unbridled support for using SNWs [Social Networking Websites] in employment selection. Without more evidence of criterion-related validity and comparability with established employment selection methods, the use of SNW information for hiring purposes is tenuous. In addition to the potential for employment discrimination, there are privacy rights and ethical issues associated with accessing personal information. Clearly, research investigations of such issues lag current informal HR practices."
So bravo to them for researching this issue, and double-bravo (that's a technical term) for cautioning those with Facebook fever.
Let's move to other research out there...
The March issue of IJSA is chock-full of great research, so let's take a look:
- Much hay is made over the "type" of validity exhibited by cognitive ability tests (don't get me started). To the extent this distinction makes sense to you, you might enjoy Frank Schmidt's argument that ability tests in certain instances can demonstrate content validity. The article is followed by several commentaries and a response.
- Next up is O'Neill et al. with a critical review of Stevens and Campion's Teamwork Knowledge, Skills, and Ability (KSA) Test.
- Reeder et al. investigate individual differences as they relate to the perception of cognitive ability tests.
- Now here's somethin': Edwards et al. argue that the three-option multiple choice item is underutilized. This should be read by everyone who has nightmares about writing distractors.
- Hoffman and Meade argue that score differences across assessment center exercises reflect true differences rather than measurement artifact.
- Got hope? Zysberg shows that, through problem-solving-oriented coping, hope is related to success in a selection process.
We already know that people generally aren't very good at accurately describing their skills and abilities, but in the March Psychological Bulletin, Freund and Kasten provide an illuminating meta-analysis indicating that while the relationship between self-estimated and psychometrically measured cognitive ability is modest (.33), it varies depending upon scales and dimensions.
For anyone measuring contextual performance, consider the relationship between role expectations and OCBs as described by Dierdorff et al.
Speaking of OCBs, Nielsen et al.'s study suggests when measuring OCB expression in a group setting, consider the level of task interdependence.
On the other hand, if you are interested in job performance ratings, be aware that there may be gender bias, and that it differs in direction between performance ratings and promotability ratings (with the latter favoring men), according to a recent meta-analysis by Roth et al.
Last but definitely not least, research by Hodson and Busseri suggests that individuals lower in cognitive ability may be predisposed to exhibit more racism. This implies that using ability tests may not only increase the validity of your selection process but also lower your chances of discriminatory behavior.
On a side/editorial note, I find it fascinating and somewhat frustrating that research energy still seems focused on teasing out major constructs such as cognitive ability and personality. As a practitioner, I gotta tell ya I'm happy when hiring supervisors use any sort of structured assessment beyond their standard interview. I'd love to see more energy put into increasing the validity of the entire selection process. I doubt I'm the only one who feels that way.