Friday, July 13, 2007

New issue of Human Performance: Typical v. maximum performance

There's a new issue of Human Performance out (Volume 20, Issue 3) and it's devoted to a very worthy topic--typical versus maximum performance.

What is the distinction, you say? Well, it's pretty much what it sounds like. From Smith-Jentsch's article in this issue:

"Typical performance in its purest sense reflects what a person "will do" on the job over a sustained period of time, when they are unaware that their performance is being evaluated. By contrast, maximum performance reflects what a person "can do" when they are explicitly aware that they are being evaluated, accept instructions to maximize effort, and are required to perform for a short-enough period of time that their attention remains focused on the task."

The recent interest in this area stems largely from Sackett, Zedeck, & Fogli's 1988 article in the Journal of Applied Psychology. Previous research suggested that measures of ability (e.g., cognitive ability tests) more accurately predict maximum performance, whereas non-ability measures (e.g., personality tests) correlate more strongly with typical performance. This of course has implications for who we recruit and how we assess: Are we trying to predict what people can do or what they will do? The answer, I think, depends on the job--for an aircraft pilot or police officer, you want to know what people can do when they're exerting maximum effort. For customer service representatives, you may be more interested in their day-to-day performance.

This topic is mentioned often in I/O textbooks but (as the authors point out) hasn't been researched nearly enough. The authors of this volume attempt to remedy that, at least in part. Let's look at the articles in the recruitment/assessment area:

First, Kimberly Smith-Jentsch opens with a study of the transparency of instructions in a simulation. Using data from two samples of undergraduates, she corroborates previous findings: Making assessment dimensions transparent (i.e., telling candidates what they're being measured on) allows for better measurement of the abilities necessary for maximum performance, while withholding this information appears to result in better measurement of the traits that motivate typical performance. So if the question is, "Do we tell people what we're measuring?" the answer is: It depends on what your goal is!

Next, Marcus, Goffin, Johnston, and Rothstein tackle the personality-cognitive ability test issue with a sample of candidates for managerial positions in a large Canadian forestry products organization. The results underline how important it is to recognize that "personality" (measured here by the 16PF and PRF) has several components. While cognitive ability scores (measured by the EAS) consistently outperformed personality scores in predicting maximum performance, measures of extraversion, conscientiousness, dominance, and rule-consciousness substantially outperformed cognitive ability when predicting typical performance.

Third, Ones and Viswesvaran investigate whether integrity tests can predict maximal, in addition to typical, performance. The answer? Yes--at least with this sample of 110 applicants to skilled manufacturing jobs. Integrity test scores (measured using the Personnel Reaction Blank) correlated .27 with maximal performance (measured, as is typical, with a work sample test). The caveat here, IMHO, is that job knowledge scores correlated .36 with maximal performance. So yes, integrity test scores (a "non-ability" test) can predict maximal performance, but perhaps still not as well as cognitively loaded tests.
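A quick aside for readers who like to tinker: the comparison above (.27 vs. .36) is just a pair of Pearson correlations computed against the same criterion. Here's a minimal Python sketch of that kind of comparison, using simulated data and hypothetical variable names (integrity, job_knowledge, work_sample)--none of it comes from the actual study.

# Minimal illustration with simulated data (NOT the study's data):
# compare how strongly two predictors correlate with a maximal-performance criterion.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 110  # roughly the sample size mentioned above

true_perf = rng.normal(size=n)                         # latent maximal performance
integrity = 0.3 * true_perf + rng.normal(size=n)       # weaker hypothetical signal
job_knowledge = 0.45 * true_perf + rng.normal(size=n)  # stronger hypothetical signal
work_sample = true_perf + 0.5 * rng.normal(size=n)     # work sample criterion

r_int, _ = pearsonr(integrity, work_sample)
r_jk, _ = pearsonr(job_knowledge, work_sample)
print("integrity vs. work sample: r = %.2f" % r_int)
print("job knowledge vs. work sample: r = %.2f" % r_jk)

In a real validation study you'd also want a significance test for the difference between two dependent correlations (e.g., Steiger's test), but the point here is just the mechanics.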

Last but not least, Witt & Spitzmuller look at how cognitive ability and perceived organizational support (POS) relate to typical and maximum performance. Results from two samples (programmers and cash vault employees) reinforce the other results we've seen: Cognitive ability (measured by the Wonderlic Personnel Test) was correlated with maximum performance but not typical performance, while POS was related to two of three measures of typical performance but not to maximum performance.

Overall, the results reported here support previous findings: maximum performance is predicted well by ability tests, while typical performance correlates more strongly with non-ability tests. But Ones & Viswesvaran are correct when they state (about their own study): "Future research is needed with larger samples, different jobs, and different organizations to test the generalizability of the findings." Let's hope these articles motivate others to follow their lead.
