Monday, December 31, 2007

Content of the year

At the end of this, the first full year of this blog's existence, I decided to take a look back at 2007 and give you my Top 5 most popular posts of the year:

1. Jobfox plays matchmaker (there continues to be significant interest in Jobfox and their non-traditional approach to matching applicants with employers)

2. Reliability and validity--it's okay to Despair(.com). Whether it's the statistics words or Despair, I'll never know. But people sure like those little posters (and remember, you can make your own).

3. Personality testing basics (Part 2). As you can see from the sidebar survey, folks continue to be very interested in personality testing.

4. Wonderlic revises their cognitive ability test. Wonderlic, one of the oldest and most famous testing companies, continues to generate interest.

5. Checkster and SkillSurvey automate reference checking. There's further development to be had, but I do believe these tools could be a boon to HR and supervisors alike.

Okay, so enough about me. What about what everyone else is writing about? Here are my nominations for content of the year:

1. Morgeson et al. fired a shot across the bow of personality testing with their piece in Personnel Psychology that resulted in multiple, shall we say, not so thrilled responses. I don't know where this debate is going (although I suspect alternate measurement methods will play a part) but it sure is fun to watch!

2. There were some great books I came across this year. Particular props for Understanding statistics, Evidence-based management, and Personality and the fate of organizations. Yes, they were all published in 2006...are you saying I'm behind?

3. Dineen et al.'s great little piece of research on P-O fit and website design in the March issue of J.A.P. that I wrote about here. Take a look at your career website with these results in mind.

4. The Talent Unconference was a big success, and I'm very thankful that many of the presentations were videotaped; I put up links to some of them here.

5. McDaniel et al.'s meta-analysis of situational judgment test instructions. Not only is this a great piece of research, it's (still) free!

So what about my New Year's wish from last year? I'm still waiting. Although if people-search databases like Spock eventually get up enough steam...perhaps I'll get my wish?

Here's to hoping 2008 is filled with interesting and useful things!

Friday, December 28, 2007

Monday, December 17, 2007

Call for Proposals for 2008 IPMAAC Conference

You're invited to submit a proposal for the 2008 IPMAAC Conference, to be held June 8-11. The theme of next year's conference is Building Bridges: Recruitment, Selection, and Assessment, and the conference will be held in Oakland, California, in the beautiful San Francisco Bay Area.

IPMAAC (IPMA-HR Assessment Council) is the leading organization for HR assessment and selection professionals interested in such diverse topics as job analysis, personality testing, recruiting, and web-based assessment.

IPMAAC members are psychologists, attorneys, consultants, academic faculty and students, and other professionals who share a passion for finding and hiring the right person for the job. Whether you're from the public, private, or non-profit sector, everyone is welcome.

Please consider submitting a proposal for the 2008 conference--it could be a presentation, a tutorial, a panel discussion, a symposium, or a pre-conference workshop. If you have something you'd like to share, don't be shy.

Even if you don't submit a proposal, plan on attending the conference. It's a great opportunity to meet knowledgeable and passionate people and learn what everyone else is doing out there. Some presentations at last year's conference, held in New Orleans, included:

- Succession planning and talent management

- Using personality assessments in community service organizations

- Legal update

- Potholes on the road to Internet Applicant compliance

See you there!

Wednesday, December 12, 2007

HR.com's Virtual Conference

If you want to see something creative, head on over to HR.com's virtual conference called VIEW, which is taking place today and tomorrow.

HR.com says this conference, which is very Second Life-ish, will have 40+ speakers, 1000+ attendees and 70+ vendors.

Right now I'm watching Carly Fiorina talk about leadership. Later presentations include:

- Managing a Talent Pool for Succession Planning

- The Federal E-Verify Program and Electronic I-9 Compliance

- Quality of Hire

- Creating Value Exchange in the Candidate Experience

...and a lot more. Creative stuff! And oh yeah, it's free.

Monday, December 10, 2007

November '07 Issue of J.A.P.

The November 2007 issue of the Journal of Applied Psychology is full of interesting articles, including several relating to recruiting and assessment. Let's take a look:

First, a field study by Hebl et al. on pregnancy discrimination. Female confederates posed as job applicants or customers at retail stores, sometimes wearing a pregnancy prosthesis. As "pregnant" customers, they received more "benevolent" behavior (e.g., touching and over-friendliness), but as job applicants they received more hostile behavior (e.g., rudeness). The latter effect was particularly noticeable when confederates applied for stereotypically male jobs. This isn't a form of discrimination that gets as much play as others, but may be much more common than we think. My guess is a lot of people associate pregnancy with impending time off and don't focus as much on the competencies these women bring to the job.

Second, a study on faking. But wait, not faking on personality tests--faking during interviews. Levashina and Campion developed an interview faking behavior scale and then tested it with actual interviews. Guess what? Scores on the scale correlated with getting a second interview. (Looks like those classes you took on answering vaguely are going to pay off!) But wait, there's more. The authors also found that behavioral questions were more resistant to faking than situational questions (another reason to use 'em!), and follow-up questions INCREASED faking (another reason NOT to use 'em!). Other goodies in this article: over 90% of undergraduate job candidates fake during employment interviews (I assume that's just this sample), BUT the percentage that were actually lying, or close to it, was lower (28-75%).

Third, Brockner et al. provide research results that underline how important procedural fairness (justice) is. Three empirical studies demonstrated that employees judge organizations as being more responsible for negative outcomes when they experience low procedural fairness. So when applicants or employees get bad news, they'll blame the organization even more if they feel the process used was unfair. Why do we care? Because perceptions of procedural fairness impact all kinds of things, including recruiting (e.g., how someone reacts to not getting a job) and the likelihood of filing a lawsuit (for, say, discrimination).

Fourth, Lievens, Reeve, and Heggestad take a look at the impact of people re-taking cognitive ability tests. Using a sample of 941 medical school candidates who took an admissions exam with a cognitive component, the authors found that retesting introduced both measurement and predictive bias: the retest scores appeared to measure memory rather than g, and predictive validity (of GPA) was eliminated. More evidence that re-testing effects are non-trivial. Pre-publication version here.

Last but definitely not least, one of my favorite topics--web-based recruitment. Allen, Mahto, & Otondo present results from 814 students searching real websites. When controlling for a student's image of the employer, job and organizational information correlated with their intention to pursue employment. When controlling for information search, a student's image of the employer was related to the intention to pursue employment, but familiarity with the employer was not. Finally, attitudes about recruitment source influenced attraction and partially mediated the effects of organizational information. What does all this mean? Don't put all your eggs in one basket--organizational image is important, but so is the specific information you have on your website about your organization and the specific job.

There's a lot of other good stuff in this volume, including articles on the financial impact of specific HRM practices, a meta-analysis of telecommuting impacts, engaging older workers, and daily mood.

Wednesday, December 05, 2007

EEOC Issues Fact Sheet on Employment Testing

On Monday, the U.S. Equal Employment Opportunity Commission (EEOC) issued a fact sheet on employment testing.

The fact sheet covers, at a high level, various topics including:

- types of tests

- related EEO laws (Title VII, ADA, ADEA) and burden-shifting scenarios

- the Uniform Guidelines on Employee Selection Procedures

- recent related EEOC litigation and settlements

- employer best practices

For those of you familiar with the legal context of employment testing, this isn't new information. But it is a nice quick summary of some of the major points, and could be very useful for someone not as familiar with this area.

For a more thorough treatment, I recommend the U.S. Department of Labor's guide for employers on testing and selection.

Monday, December 03, 2007

Winter '07 Personnel Psychology

Things are starting to heat up in the journal Personnel Psychology. The shot across the bow of personality testing that happened in the last issue turns into a full-blown brawl in this one. But first, let's not forget another article worth our attention...

First up, Berry, Sackett, and Landers revisit the issue of the correlation between interview and cognitive ability scores. Previous meta-analyses have found this value to be somewhere between .30 and .40. Using an updated data set, excluding samples in which interviewers likely had access to ability scores, and more accurately calculating range restriction, the authors calculate a corrected r of .29 based on the entire applicant pool. This correlation is even smaller when interview structure is high, when the interview is behavioral description rather than situational or composite, and when job complexity is high. Why is this important? Because it impacts what other tests you might want to use--the authors point out that using their updated numbers they obtained a multiple correlation of .66 for a high structure interview combined with a cognitive ability test (using Schmidt & Hunter's methods and numbers). Pretty darn impressive.
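That .66 figure comes from the standard formula for the multiple correlation of two correlated predictors--and it shows why a *lower* interview-ability correlation is good news for incremental validity. Here's a minimal sketch; the .51 validity estimates are the familiar Schmidt & Hunter figures and the .29 is the corrected correlation from this article, but the exact inputs Berry et al. used may differ slightly, which is why this sketch lands near (not exactly on) .66:

```python
from math import sqrt

def multiple_r(r1: float, r2: float, r12: float) -> float:
    """Multiple correlation R for two predictors with criterion
    validities r1 and r2 and predictor intercorrelation r12."""
    r_squared = (r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2)
    return sqrt(r_squared)

# Illustrative values: .51 validity for both a cognitive ability test
# and a high-structure interview, with the .29 interview-ability
# correlation reported in this article.
print(round(multiple_r(0.51, 0.51, 0.29), 2))  # → 0.64
```

Plug in the older .30-.40 intercorrelations and R drops noticeably--the less the two predictors overlap, the more each one adds.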


Now that we have that under our belt, ready for the main event? As I said, in last issue Morgeson et al. came out quite strongly against the use of self-report personality tests in selection contexts--primarily because they claim the uncorrected criterion-related validity coefficients are so small. So it's not surprising that this edition contains two articles by personality researcher heavyweights defending their turf...

First, Tett & Christiansen raise several points; more than I have space for here. Some points include: considering the conditions under which personality tests are used and validity coefficients aggregated; that there are occupational differences to consider; that coefficients found so far aren't as high as they could be if we used more sophisticated approaches like personality-oriented job analysis; and that coefficients increase when multiple trait measures are used. This sums their points up nicely: "Overall mean validity ignores situational specificity and can seriously underestimate validity possible under theoretically and professionally prescribed conditions."

Second, Ones, Dilchert, Viswesvaran, and Judge come out swinging and make several arguments, including: conclusions should be based on corrected coefficients; coefficients are on par with other frequently used predictors, some of which are much more costly to develop (e.g., assessment centers, biodata); different combinations of Big 5 factors are optimal depending upon the occupation; and compound personality variables should be considered (e.g., integrity). Suggestions include developing more other-ratings instruments and investigating non-linear effects (hallelujah), dark side traits, and interactions. They sum up: "Any selection decision that does not take the key personality characteristics of job applicants into account would be deficient."
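Much of this disagreement turns on what "corrected" actually does to a coefficient. The two standard psychometric corrections--Spearman's correction for criterion unreliability and Thorndike's Case II correction for direct range restriction--can move a modest observed validity quite a bit. A sketch with illustrative numbers (the .52 criterion reliability and .8 range-restriction ratio are textbook-style values, not figures from these articles):

```python
from math import sqrt

def correct_for_unreliability(r_obs: float, r_yy: float) -> float:
    """Spearman's correction: disattenuate r for criterion
    unreliability r_yy (e.g., supervisor rating reliability)."""
    return r_obs / sqrt(r_yy)

def correct_for_range_restriction(r: float, u: float) -> float:
    """Thorndike Case II correction for direct range restriction,
    where u = restricted SD / unrestricted SD of the predictor."""
    return (r / u) / sqrt(1 - r**2 + (r / u)**2)

# An observed validity of .20 against supervisor ratings...
r = 0.20
r = correct_for_unreliability(r, 0.52)      # ≈ .28
r = correct_for_range_restriction(r, 0.8)   # ≈ .34
print(round(r, 2))  # → 0.34
```

So the same data yield .20 or .34 depending on which side's convention you adopt--which is exactly why Morgeson et al. and Ones et al. can look at the same literature and reach different conclusions.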

Not to be out-pulpited (yes, you can use that phrase), Morgeson et al. come back with a response to the above two articles, reiterating how correct they were the first time around. They state that much of what the authors of the above articles wrote was "tangential, if not irrelevant", that with respect to the ideas for increasing coefficients, "the cumulative data on these 'improvements' is not great", and that corrected Rs presented by Ones et al. aren't impressive when compared to other predictors. They point out some flaws of personality tests (applicants can find them confusing and offensive) but fail to mention that ability tests aren't everyone's favorite test either. They claim that job performance is the primary criterion we should be interested in (which IMHO is a bit short-sighted), and that corrections of coefficients are controversial.

So where are we? Honestly I think these fine folks are talking past each other in some respects. Some issues (e.g., adverse impact) don't even come up, while other issues (e.g., faking) are given way too much attention. It's difficult to compare the arguments side by side because each article is organized differently. It doesn't help that the people on both sides are some of the researchers with the most invested (and most to lose) by arguing their particular side.

I'm thinking what's needed here is an outside perspective. Here's my two cents: this isn't an easy issue. Criteria are never "objective." Job performance is not a singular construct. Job complexity has a huge impact on the appropriate selection device(s). And organizations are, frankly, not using cognitive ability tests nearly as much as they are conducting interviews. So let's stop focusing on which type of test is "better" than the others. Honestly, that's cognitive laziness.

So is this just sound and fury, signifying nothing? No, because people are interested in personality testing. Hiring supervisors are convinced that it takes more than raw ability to do a job. We shouldn't ignore the issue. Instead we should be focusing on providing sound advice for practitioners and treating other researchers with respect and attention.

Should you use personality tests? I'll answer that question with more questions: what does the job analysis say? What does your applicant pool look like? What are your resources like? It's not something you want to apply cookie-cutter, but it's also not something you should write off completely.

Okay, I'm off my soap box. Last but not least, there are some good book reviews in this issue. One is Bob Hogan's book (which I enjoyed immensely and actually finished, which is rare for me), Personality and the Fate of Organizations, which the reviewer recommends; another is Alternative Validation Strategies, which the reviewer highly recommends; and the third is Recruiting, Interviewing, Selecting, & Orienting New Employees by Diane Arthur, which the reviewer...well...sort of recommends--for broad HR practitioners.

That's all folks!