Friday, October 13, 2006

New issue of Human Performance


The latest issue of Human Performance (vol. 19, no. 3) is out and has some articles relevant to assessment.

Before I get into 'em, allow me to express some disappointment that all but one of these was conducted using college students. That said...

The first, by Nicholas Vasilopoulos and others, focuses on forced-choice personality tests. The authors found, among other things, that individuals with high cognitive ability scores were able to inflate their scores (applicant vs. honest condition) on forced-choice scales, while individuals scoring lower on cognitive ability were not. Also, only in the forced-choice/applicant condition were personality scores (using Goldberg's 300-item measure) predictive of GPA. Lastly, personality scores did not add much to the prediction of GPA after accounting for cognitive ability. Bottom line? Smart folks can figure out forced-choice personality tests. These personality tests may not get you a whole lot if you are already measuring cognitive ability. Also, they may result in higher adverse impact than you might anticipate.
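That "didn't add much beyond cognitive ability" finding is the usual incremental-validity question, i.e., how much R-squared you gain by adding personality to a model that already includes cognitive ability. Here's a minimal sketch of that comparison; the data and variable names are made up for illustration, not taken from the article:

```python
# Minimal incremental-validity (delta R^2) sketch with hypothetical data.
# Step 1: regress GPA on cognitive ability alone; Step 2: add personality.
# The R^2 gain in Step 2 estimates what personality adds beyond cognitive ability.
import numpy as np

rng = np.random.default_rng(0)
n = 200
cognitive = rng.normal(size=n)
personality = rng.normal(size=n)
gpa = 0.5 * cognitive + 0.1 * personality + rng.normal(scale=0.8, size=n)

def r_squared(X, y):
    """R^2 from an ordinary least-squares fit (intercept included)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_cog = r_squared(cognitive[:, None], gpa)
r2_both = r_squared(np.column_stack([cognitive, personality]), gpa)
print(f"R^2, cognitive only: {r2_cog:.3f}")
print(f"Delta R^2 from adding personality: {r2_both - r2_cog:.3f}")
```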

The second, by Todd Thorsteinson, looked at the effect of framing when judging item difficulty. Using a task similar to the Angoff procedure, he had participants in one condition rate the likelihood that an average test taker would get each item correct; those in the other condition rated the likelihood that the item would be answered INcorrectly. (Note that he used "average," not "minimally qualified.") Result? Those in the first condition (most similar to the traditional Angoff method) produced lower critical scores (i.e., judged the test to be more difficult). Is this good or bad? Probably good, since courts generally favor lower pass points when there is any doubt. But it's a reminder that how we frame tasks for our stakeholders matters.
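For readers who haven't run an Angoff study: the critical (cut) score is basically the sum, across items, of the judges' average probability ratings, so the framing of those ratings feeds straight into the pass point. A toy sketch of the arithmetic (the ratings below are invented, not Thorsteinson's data):

```python
# Toy Angoff arithmetic with invented ratings.
# Each row = one judge; each column = one item; values = judged probability
# that the reference test taker answers that item correctly.
import numpy as np

p_correct = np.array([
    [0.70, 0.55, 0.80, 0.40],   # judge 1
    [0.65, 0.60, 0.75, 0.45],   # judge 2
    [0.75, 0.50, 0.85, 0.35],   # judge 3
])

# Traditional framing: cut score = sum of the item-level mean ratings.
cut_score = p_correct.mean(axis=0).sum()
print(f"Cut score under the 'correct' framing: {cut_score:.2f} of {p_correct.shape[1]} items")

# In the reversed framing, judges rate P(incorrect); converting back is 1 - rating,
# and Thorsteinson's point is that the two framings don't land in the same place.
```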

The third, by De Meijer et al., examined ethnic score differences on a cognitive ability test, personality test, assessment center, interview, and final employment recommendation among a sample of applicants to a police academy in the Netherlands. Results? Score differences between the majority group and first-generation minority groups were comparable to previous findings (e.g., d around 1.0 for cognitive ability), but score differences between the majority group and second-generation minority groups were much smaller (about half). On the cognitive ability and personality tests, most of the difference was explained by language proficiency. Implications? Efforts to increase language proficiency may pay huge dividends in reducing the adverse impact traditionally associated with some of the most valid forms of assessment. Whether that proficiency is developed by our educational system or by employers is probably one of the biggest issues facing society.
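To put those d values in perspective: a difference of d = 1.0 versus roughly 0.5 translates into very different passing rates at the same cut score. A back-of-the-envelope sketch, assuming normal score distributions; the cut score of +0.5 SD and the exact d values are my own illustrative choices, not numbers from the article:

```python
# Back-of-the-envelope: how a subgroup difference of d translates into
# passing rates at a common cut score, assuming normal distributions.
from statistics import NormalDist

def pass_rate(group_mean, cut):
    """Proportion of a unit-normal group centered at group_mean scoring above the cut."""
    return 1 - NormalDist(mu=group_mean, sigma=1).cdf(cut)

cut = 0.5  # cut score in majority-group SD units (arbitrary, for illustration)
for label, d in [("d ~ 1.0 (first-generation)", 1.0),
                 ("d ~ 0.5 (second-generation)", 0.5)]:
    majority = pass_rate(0.0, cut)
    minority = pass_rate(-d, cut)
    print(f"{label}: majority {majority:.0%} pass, minority {minority:.0%} pass, "
          f"impact ratio {minority / majority:.2f}")
```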

The fourth, by Buhner et al., provides additional support for the idea of using working memory tests, particularly in predicting multitasking performance. Interestingly, the authors found that if the goal is to predict speed, measures that target "coordination" (building relationships between pieces of information) work best, whereas if the goal is to predict errors, measuring "storage in the context of processing" (what we usually think of as short-term memory) works best.

Heady stuff for a Friday! Have a great weekend!
