Wednesday, December 30, 2009

Outback settlement contains interesting requirements


You may have heard that Outback Steakhouse, a restaurant chain based in Tampa, Florida, has agreed to settle a gender discrimination lawsuit for $19M. What's interesting about this isn't the size of the settlement, but rather the conditions attached.

Background: The EEOC sued Outback in 2006, claiming it systematically discriminated against its female employees by denying them promotion opportunities to the more lucrative profit-sharing management positions. In addition, they claimed that female employees were denied promotional job assignments such as kitchen management, which were required for employees to be considered for top management positions.

The settlement: Outback agreed to a four-year consent decree and $19M in monetary relief. So far, pretty standard. But there were additional settlement requirements, and here's where it gets interesting. In addition to the monetary relief, Outback has agreed to:

1. Create an online application system for employees interested in management positions. This is the first time I've seen this in a settlement (which isn't to say it hasn't happened), and it seems to indicate that the EEOC views this as a more "objective" screening mechanism.

2. Create a new "human resources executive" position, titled Vice President of People, and hire someone to fill it. Again, this is a new one for me.

3. Hire an outside consultant for at least two years who will monitor the online application system to ensure women are being provided equal opportunities for promotion and provide reports to the EEOC every 6 months.

The main thing that strikes me about this settlement is the faith that is being placed in an online application system to somehow ensure equal opportunity. Sure, having a standardized application system may cut down on some of the subjectivity of individual hiring supervisors, but it leaves me wondering:

- What will the screening criteria for management positions be?

- How will the outside consultant define "equal opportunities"?

- How will access to the online system be controlled, and who will be making screening/hiring decisions?

- What happens if there continues to be adverse impact, which you would expect if applicants continue to be screened on experience?

- What will be the duties of the Vice President of People, how will they be hired, and how will they interact with the consultant?

This will be interesting to watch.

Sunday, December 20, 2009

Validity: An elusive (unitary?) concept

What makes a test "valid"? What is the best way to develop a selection system? These are two of the most fundamental questions we try to answer as personnel assessment professionals, yet the answers are strangely elusive.

First of all, let's get two myths out of the way: (1) a test is valid or invalid, and (2) there is a single approach to "validating" a test. It is the conclusions drawn from test results that are ultimately judged on their validity, not simply the instruments themselves. You may have the best test of color vision in the world--that doesn't mean it's useful for hiring computer programmers. And many sources of evidence can be used when making the validity determination; this is the so-called "unitary" view of validity described in references like the APA Standards and the SIOP Principles. Unitary in this case refers to validity being a single, multi-faceted concept, not that psychologists agree on the concept of validity--a point we'll come back to shortly.

Although we can debate test validation concepts ad infinitum, the bottom line is we create tests to do one primary thing: help us determine who will perform the best on the job. The validation concept that most closely matches this goal is criterion-related validity: statistical evidence that test scores predict job performance. So we should gather this evidence to show our tests work, right? Here's where things get complicated.
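Before we get to the complications, it may help to see what that statistical evidence looks like in its simplest form: a correlation between test scores and some measure of job performance. Here's a bare-bones sketch with made-up numbers (a real study would need a far larger sample and corrections for things like range restriction and criterion unreliability):

```python
import numpy as np

# Hypothetical data: test scores and later performance ratings for eight hires
test_scores = np.array([62, 75, 81, 55, 90, 70, 68, 85])
performance = np.array([3.1, 3.8, 4.0, 2.9, 4.5, 3.5, 3.2, 4.2])

# Criterion-related validity is simply the correlation between the
# predictor (test score) and the criterion (job performance)
validity = np.corrcoef(test_scores, performance)[0, 1]
print(f"Observed validity coefficient: r = {validity:.2f}")
```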

It's likely that many organizations can't, for various reasons, conduct criterion-related validity studies (although some baseline data on how often this is actually the case would be helpful). Most of the time, it's because they lack the statistical know-how or high-quality criterion measures (a 3-point appraisal scale won't do it). So in a strange twist of fate, the evidence we are most interested in is the evidence we are least likely to obtain.

So what are organizations to do? Historically the answer is to study the requirements of the job and select/create exams that target the KSAs/competencies required; this matching of test and job is often referred to as "content validity" evidence. But Kevin Murphy, in a recent article in SIOP's journal Industrial and Organizational Psychology, makes an important point: this is good practice, but not a guarantee that our tests will be predictive of job performance. Why not? For a number of reasons, including poor item writing and applicant frame of reference. Murphy makes a passionate argument that we rely way too heavily on unproven content validation approaches when we should focus more on criterion-related validation evidence. Instead of focusing on job-test match, we should focus on selecting proven, high quality exams.

Not surprisingly, the article is accompanied by 12 separate commentaries that argue with various points he makes. It's also interesting to compare this piece with Charley Sproule's recent IPAC monograph where he makes an impassioned defense of content validity.

A complete discussion of the pros and cons of different forms of validation evidence is obviously beyond a simple blog post. My main issues with Murphy's emphasis on criterion-related validation are threefold. First, as stated above, most organizations likely don't have the expertise to gather criterion-related validation evidence for every selection decision (maybe this is his way of creating a need for more I/O psychologists?). Perhaps "insufficient resources" is a poor excuse, particularly for an issue as important as employment, but it is a reality we face.

Second, even if we were to shift our focus to individual test performance, following a content validation approach for development enhances job relatedness (which Murphy acknowledges). Should your selection system face an adverse impact challenge, the ability to show job relatedness will be essential.

Finally, let's not forget that high test-job match gives candidates a realistic job preview--hardly an unimportant consideration. RJPs help candidates decide whether the job would be a good match for their skills and interests. And no employer that I know of enjoys answering this question from candidates: "What does this have to do with the job?"

The approach advocated by Murphy, taken to its extreme, would result in employers focusing exclusively on the performance of particular exams rather than on their content in relation to the job. This seems unwise from a legal as well as face validity perspective.

In the end, as a practitioner, my concern is more with answering the second question I posed at the beginning of this post: What is the best way to develop a selection system? Given everything we know--technically, legally, psychologically--I return to the same advice I've been giving for years: know the job, select or create good tests that relate to KSAs/competencies required on the job, and base your selection decision on the accumulation of test score evidence.

Should researchers work harder to show that job-test content "works" in terms of predicting job performance? Sure. Should employers take criterion-related validation evidence into consideration and work to collect it whenever possible? Absolutely. Will job-test match guarantee a perfect match between test score and job performance? No. But I would argue this approach will work for the vast majority of organizations.

By the way, if you are interested in learning more about the different ways to conceptualize validity--"content validity" in particular--Murphy's focal article as well as the accompanying commentaries are highly recommended. He acknowledges that he is purposely being provocative, and it certainly worked. It's also obvious that our profession has a ways to go before we all agree on what content validity means.

Last point: the first focal article in this issue--about identifying potential--looks to be good as well. Hopefully I'll get around to posting about it, but if not, check it out.

Sunday, December 13, 2009

R/A Predictions for 2010

With 2010 right around the corner, here are some predictions for what the new year will bring in the area of recruitment and assessment:

1) More personality testing. Year after year personality testing continues to be one of the hottest topics. Look for more research, more online personality testing, and new measurement methods.

2) More boring job ads. Even though we know better, don't expect to see any big leaps in readability for 80% of job ads. Same old job descriptions. Maybe we'll see some pictures. On the plus side, more organizations focus on making their career portals attractive.

3) A slow trickle of research on recruiting. The amount of large-scale, sophisticated research on recruiting methods remains a shadow of that found in the assessment literature. Don't expect this to change.

4) More focus on simulations. 2010 sees more focus on simulations, particularly those delivered on-line, as highly predictive assessments as well as realistic job previews. Oh, and they likely have low adverse impact (research, anyone?).

5) Leadership assessment gets even hotter. With the economy improving and more boomers deciding the time is right to retire, finding and placing the right people in leadership positions becomes an even more important strategic objective.

6) Federal oversight agencies get more aggressive. With more funding and backing from the Obama administration, expect to see the EEOC and OFCCP go after employers with renewed vigor. By the way, have you seen the EEOC's new webpage? It's actually quite well done.

7) More fire departments get sued. In the wake of the Ricci decision, fire dept. candidates feel emboldened when they fail a test or fail to get hired/promoted. Look for departments to try to get out ahead of this one by revamping their selection systems.

8) More age discrimination lawsuits. With so many boomers, expect to see more claims of discrimination, particularly over terminations. Keep words like "energetic" and "fresh" out of your job ads.

9) Automation providers slowly focus on simplicity. Whether we're talking applicant tracking or talent management systems, vendors slowly realize that they need to make their applications simpler to increase usability and buy-in. No, simpler than that. Keep going...

10) Employers get more sophisticated about social networking sites. Many realize that rather than jumping on the latest Twitter-wagon, it's best to figure out where these sites fit with their recruitment/assessment strategy. Watch for more positions whose sole role is managing social media.

11) Online candidate-employer matching continues to be a jumbled mess. Without a clear winner in terms of a provider, job seekers are forced to maintain 400 profiles on different sites and may give up altogether and focus more on social networking. Meanwhile, employers continue to try to figure out how to reach passives; LinkedIn continues to look good here but needs to expand its reach a la Facebook.

12) More employers face the disappointing results of online training and experience questionnaires. Will they go back to the drawing board and try to improve them (hint: don't use the same scale throughout), or abandon them for more valid methods, such as biodata, SJT, and simulations? More research on T&Es is badly needed, even if we are just putting lipstick on a pig.

13) Decentralized HR shops centralize. Centralized ones decentralize. Particularly in the public sector, these decisions unfortunately continue to be made based on budgets rather than best practice. Hiring supervisors wonder why HR still can't get it right.

14) Fortunately, HR continues to professionalize. With much of the historical knowledge walking out the door and the job market improving, HR leaders are forced to re-conceptualize how they recruit and train recruitment and assessment professionals. This is a good thing, as it means more focus on analytical and consultative skills.

Keep up the good work everybody. And Happy Holidays!

Sunday, December 06, 2009

Setting cutoff scores on personality tests


What's the best way to set a cutoff score for a personality test, knowing that some candidates inflate their score? It all depends on your goal. Are you trying to maximize validity or minimize the impact of inflation?

According to a research study by Berry & Sackett published in the Winter '09 issue of Personnel Psychology, if your goal is to maximize validity, your best bet is to wait until applicants have taken the exam, then set your cut-score (e.g., the top two-thirds); this was particularly true when the selection ratio was small (i.e., the organization is very selective).

If your goal is to minimize the number of deserving applicants who are displaced by "fakers", you're better off establishing the cut point ahead of time, by using a non-applicant derived sample (e.g., job incumbents, research group). The results were generated using a Monte Carlo simulation.
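To make the Monte Carlo logic concrete, here's a stripped-down toy version of the idea (my own sketch, not the authors' code; the 30% faker rate and d of .75 are just illustrative values pulled from the ranges discussed below):

```python
import numpy as np

rng = np.random.default_rng(42)
n_applicants, prop_fakers, d = 10_000, 0.30, 0.75   # illustrative values only

true_scores = rng.normal(0, 1, n_applicants)
fakers = rng.random(n_applicants) < prop_fakers
observed = true_scores + fakers * d                  # fakers inflate scores by d SDs

# Strategy 1: cut score fixed in advance from a non-applicant (honest) sample,
# set so that roughly the top two-thirds would pass
calibration = rng.normal(0, 1, 10_000)
preset_cut = np.quantile(calibration, 1 / 3)

# Strategy 2: cut score set after the fact as the top two-thirds of the applicant pool
posthoc_cut = np.quantile(observed, 1 / 3)

for label, cut in [("pre-set cut", preset_cut), ("post hoc cut", posthoc_cut)]:
    passers = observed >= cut
    honest_passers = (~fakers & passers).sum()
    print(f"{label}: {passers.sum()} pass, {honest_passers} of them honest")
```

With the pre-set cut, fakers who jump over the line don't push anyone else out; with the post hoc top-two-thirds cut, every faker who climbs above the line displaces an honest applicant below it.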

Interestingly, the authors also replicated the work of other researchers who have shown that the impact of faking on the criterion-related validity of personality measures is relatively low. There are a few other very good points made in this article:

- Expert judgment methods of establishing pass points (e.g., Angoff method) may be difficult to use for personality tests since experts may find it difficult to judge individual items. Methods used to select a certain number of applicants or methods based on a criterion-related validity study (both used as variables in this study) are more appropriate for personality tests.

- There is no consensus on how prevalent faking on personality exams is; estimates range from 5% to 71%. It likely depends on the situation and how motivated test takers are to engage in impression management.

- Some recommend setting a very low cutoff score for personality tests, which would exclude only those likely not suitable for the position (and not faking), while others prefer a more stringent cutoff to maximize utility.

- A reasonable range of d-values for score inflation on personality inventories is .5-1.0 (used in this study).

- There exists very little research on the skewness of faking score increases. A positively-skewed distribution (meaning most people faked a small amount) was used in this study. (I would think this would also vary with the situation.)

So bottom line: where--and how--you set your cutoff score on personality inventories depends on whether you want to maximize the predictive validity or minimize the number of deserving applicants that get left out of the process.

Other good reads in this issue:

- Police officer applicants' reactions to promotional assessment methods

- The impact of diversity climate on retail store sales

- The construct validity of multisource performance ratings

- Labor market influences on CEO compensation

Tuesday, November 24, 2009

Want better prediction? Gather more data.


That's the bottom line from a study in the November 2009 issue of the Journal of Applied Psychology.

Oh & Berry looked at how adding personality ratings from peers and supervisors provided incremental validity over self-ratings on a five-factor model measure. What were the results? Increases of 50-74% in operational validity across personality facets. They also looked at differential prediction of task and contextual performance (unfortunately those results weren't reported in the abstract). Bottom line? If you're using a personality assessment for promotions, strongly consider gathering data from co-workers.
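If you want to see what "incremental validity" means mechanically, it's just the gain in prediction when the observer ratings are added to a regression that already contains the self-ratings. A quick sketch with simulated (entirely made-up) data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Made-up standardized scores: self-ratings, peer ratings, and job performance
self_rating = rng.normal(size=n)
peer_rating = 0.4 * self_rating + rng.normal(scale=0.9, size=n)
performance = 0.20 * self_rating + 0.30 * peer_rating + rng.normal(scale=0.9, size=n)

def multiple_r(predictors, y):
    """Multiple correlation (R) from an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.corrcoef(X @ beta, y)[0, 1]

r_self = multiple_r([self_rating], performance)
r_both = multiple_r([self_rating, peer_rating], performance)
print(f"Self only: R = {r_self:.2f}; self + peer: R = {r_both:.2f} "
      f"({(r_both - r_self) / r_self:.0%} gain)")
```

The numbers are arbitrary; the point is simply that the comparison is between the multiple R with and without the added source of ratings.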

Speaking of self-presentation, in the same issue Barrick et al. report the results of a meta-analysis of how self-presentation tactics (e.g., appearance, non-verbal behavior) impact interview ratings and later job performance. Results? "What you see in the interview may not be what you get on the job and...the unstructured interview is particularly impacted by these self-presentation tactics." An important reminder of how who the candidate seems to be impacts your assessment, and another reason to collect multiple sources of data.

There are a number of other great articles in this issue, such as:

How Major League Baseball CEO personalities impact important outcomes (like, um, winning).

How SJT and biodata measures add to the prediction of college student performance.

How personality scale validities change over time among a group of medical students.

Differences among letters of recommendation in academia between genders.

Saturday, November 21, 2009

HR: C-level or AAA?

Just when we thought we were C-level, Dogbert reminds us how old-school CEOs view HR.

p.s. I think I like CPO better than HCO.

Friday, November 13, 2009

Explain away

Of all the low-hanging fruit in recruitment and selection, perhaps none are easier to implement than explaining your process. It's shocking how few selection processes are fully (and coherently) explained, not just in terms of getting from point A to point B, but why the points exist in the first place.

Turns out explaining the process to candidates matters. A lot. And while we already knew that applicant perceptions were important, a recent meta-analysis by Truxillo, et al. published in the International Journal of Selection and Assessment clarifies the impact that explanations have. Specifically, explanations were related to:

- Fairness perceptions (important in their own right)

- Perceptions of the hiring organization

- Test-taking motivation

- Performance on cognitive ability tests

Furthermore, the fairness effects were greater when explanations accompanied personality tests rather than cognitive ability exams.

What does all this mean? It's pretty simple, really--communicate, communicate, communicate. Explain in clear terms to all applicants what the full selection process is, and why. Imagine something like this:

"Thank you for your interest in applying for the Blog Reader position. The selection process will consist of the following:

Step 1. You submit your application and work sample by December 5.

Step 2. Your application is reviewed to ensure you meet our minimum qualifications found on the job posting.

Step 3. If so, your work sample is scored by internal subject matter experts. It will be judged on relevance to the position, complexity, and contribution to the profession. The top scorers move on to Step 4.

Step 4. You are contacted by Human Resources to set up an interview. The interview will last approximately 2 hours and will take place at our San Jose campus. Interviews will be held in mid-January.

Step 5. You interview with the hiring supervisor and 2-3 potential co-workers. You will be asked a series of questions designed to measure your knowledge of Blogs. You will also be asked to complete a writing sample which will be judged for style as well as content. Top performers in these steps will be asked for a final interview with the head of the Blogger Division.

Should you fail to proceed to any Step, you will be contacted and told why."

That wasn't so hard, was it?

For more about applicant perspectives in selection, check out the entire December '09 issue.

And while I'm on the subject of research, check out the latest issue of the International Journal of Testing for good stuff on test compromise, DIF, and P-O fit. Oh, and don't forget about the entire issue devoted to test adaptation.

Last but not least, Practical Assessment, Research, and Evaluation (PARE) has put out several things lately worth reading, check 'em out.

Tuesday, November 10, 2009

What employers can learn from Twilight


Not since Harry Potter have I seen such obsessed fans. The buzz started several months ago in anticipation. And November 20th is almost here.

Don't know what happens on November 20th? Ask your daughter. Or granddaughter. Or, heck, pretty much any woman between the age of 16 and 30. That's the day that New Moon, the second installment in the wildly popular Twilight series, hits theaters. Why should employers care about this, other than anticipating that certain staff members will be out of the office that day? Read on.

Like Harry Potter, the Twilight books (written by Stephenie Meyer) are enormously popular, and the movies are too. Also like Harry Potter, fans range the demographic spectrum, although the most rabid fans seem to be women (not surprising given the protagonist and the love triangle she's in the middle of).

Most employers would kill to have the kind of brand devotion that Twilight fans have. If Twilight were an employer, it would be competing with Google for top talent. So what can we learn from this phenomenon that can help us with branding our organization?

1. People like good stories. Twilight was a phenomenon way before Robert Pattinson and Taylor Lautner (the male lead actors). Whether you're in banking, IT, public utilities, or a flower shop, you have stories to tell about how your organization has impacted others--or how your employees have impacted each other. Are you telling these stories, or letting stories be told about you?

2. People still read. Related to #1, the Twilight phenomenon, like Harry Potter, began with the books. There's a lot of hype about video these days, but given something interesting, people have no problem spending time reading it. What does your recruitment material look like--is it entertaining? Educational? Would you read it even if you weren't interested in a job there?

3. People like fantasy. There's an awful lot of reality out there right now--the recession, H1N1, wars--and people like to take a mental break. Don't be afraid to break out of the mold and try telling a story that takes people away from their day-to-day lives.

4. People like contests and "sides". One of the biggest dramas within the world of Twilight--possibly the biggest--is the competition between the two male leads. Fans identify themselves as being on "Team Edward" or "Team Jacob". This isn't something you see employers do very often, and it requires a bit of built-in loyalty, but it's something that can engage fans even more.

5. People like being fans. There's something primal about being part of a group of people who share the same interest. If you give people something to be a fan of, they'll enjoy connecting with others who share their passion. This is what the Facebook fan pages are all about. Google has 300k+ Facebook fans. Twilight has over 4 million.

6. You can brand almost anything. Branding is about more than your website, or your recruitment fliers. It can become part of everything your organization does, if it's strong enough. And it's more than just a logo, it's about the organization's philosophy and accomplishments. Look around and you can probably see Twilight branded on almost everything. I'm surprised they don't have Twilight adhesive bandages. Oh wait, they do.

As November 20th approaches, be prepared for a media onslaught about Twilight. Whether you're a fan or not, use it as an opportunity to think about how your organization could garner that kind of excitement. After all, that's what leads high potentials to want to apply.

Saturday, October 31, 2009

New Job Simulations Report

The U.S. Merit Systems Protection Board (MSPB) just released a great, easily digestible report on job simulations.

The report includes several things, such as:

- Job simulations defined and advantages/disadvantages

- Types of job simulations (SJT, work samples, etc.) and concrete examples

- Benchmark data on satisfaction with candidate quality as well as how federal agencies currently use simulations (just don't look at GPA compared to job knowledge tests in Figure 2)

- Survey data on why simulations aren't used more often in the federal government (time and expertise, sadly, were the top reasons)

- A 5-step strategy for using job simulations

- References to support the use of simulations (and good selection in general)

A great, free resource for anyone wanting to learn more about one of the best selection mechanisms you can use. And particularly relevant as more and more organizations move to using training and experience (T&E) questionnaires as their first (quick but not particularly valid) hurdle.

Thursday, October 29, 2009

Personality tests: Situation matters


Is a personality test the right selection mechanism for your needs? In trying to answer that question, an important consideration is: Does the job allow for the expression of personality facets?

There's a concept in psychology called situation strength. It refers to how the "strength" of a situation impacts the display of personality via behavior, and it's something important to remember when using measures of personality to predict job performance.

A situation's "strength" refers to the environment under which personality aspects are displayed; think of it like a rulebook. If Job A is described perfectly in exacting detail with very little room for deviation ("Place container A over part B..."), does one's personality really matter when it comes to successfully performing the job? Or thought of another way, if the rulebook repeatedly emphasizes using an aspect of personality (e.g., extraversion), how will people without the ability to express that behavior consistently fare?

If you're a software programmer, it probably matters little in the grand scheme of things how extraverted you are; analytical ability is likely much more important. Similarly, if you work in a call center and follow a script, differences in openness to experience probably don't mean much; extraversion is likely more important.

But what about something like conscientiousness--could the strength of a situation impact the relationship between this aspect of personality and job behavior? That's the question Meyer and colleagues set out to answer, and their answer appears in a recent issue of the Journal of Organizational Behavior.

The way they went about trying to solve this puzzle was to conduct a meta-analysis at the occupation (rather than job) level. What did they find? A few things:

1) Uncorrected correlations between conscientiousness and performance varied widely between .06 and .23.

2) Correlations appear slightly stronger when using overall performance as the criterion rather than task performance (not surprising given previous research).

3) Stronger correlations were found in "weak" occupations.

What does this mean? Well, for one it reinforces the fact that the answer to "Do personality tests work?" varies greatly depending on what the job is and how you measure performance. But perhaps more interestingly, it suggests that at an occupation level we can expect that for jobs that come with built in rules regarding behavior ("strong" occupations), measures of personality aspects such as conscientiousness may not predict performance as well as for jobs with more flexibility ("weak" occupations).
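If the moderation idea feels abstract, here's a toy simulation of why validity shrinks in "strong" situations (my own illustration, not the authors' model; the weights are arbitrary but chosen to roughly mirror the correlation range above):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2_000
conscientiousness = rng.normal(size=n)

# Toy model of situation strength: in a "strong" occupation the rulebook dictates
# most behavior, so the trait contributes little to performance; in a "weak"
# occupation the trait gets more room to show up.
def simulate_performance(trait, trait_weight):
    noise = np.sqrt(1 - trait_weight ** 2) * rng.normal(size=len(trait))
    return trait_weight * trait + noise

for label, weight in [("strong occupation", 0.10), ("weak occupation", 0.30)]:
    perf = simulate_performance(conscientiousness, weight)
    print(f"{label}: r = {np.corrcoef(conscientiousness, perf)[0, 1]:.2f}")
```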

So the next time you're thinking about using a personality inventory for selection purposes, consider: To what extent will incumbents be allowed to express their personality?

Monday, October 19, 2009

Is recruiting using SNS discriminatory?

I keep reading/hearing about how recruiting using social networking sites (SNS) opens employers up to discrimination lawsuits because of who uses the sites. For the most part, this just plain isn't true.

A recent Pew study is the latest to show that when it comes to using SNS like Facebook, MySpace, and LinkedIn, you really should have one primary demographic concern when it comes to ensuring a diverse candidate pool: age.

Not gender, at least not in the traditional sense. While four years ago SNS users tilted slightly male (55%), the balance has essentially flipped today (54% female).

Not race: there simply do not appear to be generalizable differences across racial groups when it comes to these sites (in fact I've seen some data suggesting the user base on these sites is more diverse)--but things change, and this may vary with particular sites, so keep an eye on this one.

But when it comes to age, SNS users are disproportionately younger than the overall Internet population. In the words of the Pew report, "[this] doesn't mean that more older adults aren't flocking to SNS--they are--but younger adults are ALSO flocking to the sites, so the overall representation of the age cohorts in the SNS user population has actually gotten younger."

One demographic difference I don't see a whole lot about: disability status. Are individuals with disabilities more/less likely to use SNS? I think that's an important question we need to address if we're truly trying to diversify our candidate pools.

Tuesday, October 13, 2009

Myths about assessment


Despite plenty of evidence and commentary otherwise, several myths persist about personnel assessment:

1) Some tests are "objective", others are "subjective." This is a myth reinforced by no less than the U.S. Supreme Court on a regular basis. The reality is even the choice to use certain selection methods is a judgment. Sure, certain methods involve more ongoing judgment, but a multiple-choice test can be highly "subjective" and an interview highly "objective" depending on how they are made and used.

2) Only certain selection methods are legally considered "tests" and therefore vulnerable to legal scrutiny. Wrong. Anything you do to narrow down your candidate pool is technically fair game. This includes how you advertise, screen, and interview.

3) Good hiring is an art more than a science. Actually we have decades of research showing the opposite. Human judgment is full of flaws. Combine this with the fact that most people think they are experts, and you have a perfect storm of personal overconfidence. The time and effort spent creating standardized instruments targeting competencies relevant for a particular position will be well spent.

4) Even the best assessments can predict only a small fraction of job performance. It's true it's a fraction, but it's not small. Research indicates that close to 40% of the variation in performance can be predicted with assessments. That's nothing to sneeze at when you consider all the other things that impact performance (organizational climate, quality of supervision, reward structures, team composition, role clarity, resources, mood, etc.).
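The arithmetic behind that figure is just the squared validity coefficient. A composite validity somewhere around .63 (roughly what the meta-analytic literature suggests you can reach by combining a couple of top predictors; treat it as a ballpark, not a citation) works out to about 40% of the variance:

```python
# Proportion of performance variance accounted for = validity coefficient squared
for r in (0.30, 0.50, 0.63):
    print(f"validity r = {r:.2f}  ->  about {r**2:.0%} of performance variance")
```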

5) Good assessment will solve all your people problems. Yes, this is sort of the flip side of the above. Consultants like to pretend that with the right assessment instrument every person you hire will be the most productive, friendly, team-oriented person ever. The reality is that performance depends not only on what someone brings to the job, but leadership, organizational norms...all that stuff I mentioned in #4.

Do we have all the answers when it comes to hiring the right person? Nope. Is there enough best practice out there so that any hiring supervisor should be able to get the expertise they need to do significantly better than chance? Yep.

Tuesday, October 06, 2009

Latest EEO Insight


EEO Insight is quickly becoming a great resource for anyone interested in issues related broadly to equal employment opportunity. And this isn't just affirmative action plans--it includes anyone interested in recruitment and assessment.

In the latest issue (v1, #3), you'll read about:

- Alternatives to RIFs such as wage freezes and job sharing and the EEO implications

- Analyzing layoff decisions for statistical evidence of adverse impact

- Using multiple regression to detect race and gender differences in compensation

- Ricci in retrospect and lessons learned

- Reaching out to veterans and individuals with disabilities

- Results of the EEO best practices survey and (very good) recommendations

By the way, if you're interested in EEO issues and you're not already reading OFCCP Blog Spot, I highly recommend starting.

Wednesday, September 30, 2009

SIOP Name Change: Will They or Won't They?

The Society for Industrial and Organizational Psychology (SIOP) is considering a name change. For those that don't know, SIOP is a division of the American Psychological Association and has thousands of members devoted to research into a variety of phenomena, including recruitment and assessment.

SIOP was established in 1982 and a significant number of members have been asking on and off for years whether the name continues to accurately describe what they do. The problem is mainly with the "Industrial" part--it's just not a word that gets as much attention as it used to (remember when everything was "industrial strength"?).

The name change is something they've tossed around for years, but beginning in October they're going to survey their members and ask them to consider three alternatives:

1. The Society for Organizational Psychology (TSOP) - name's okay but continues implication that focus is on cleaning our offices; acronym sounds like a rapper; oh, and URL is taken.

2. Society for Work Psychology (SWP) - a little bland but definitely simpler; how do you say the acronym? Is it like "swap"? Or maybe "swip"? Oh, and restricts the field to "work", which is a little narrow.

3. Society for Work and Organizational Psychology (SWOP) - the most complete name in terms of description; acronym easy to use but has some interesting brethren. Too bad we couldn't come up with SWOT (ya know, as in strengths, weaknesses, opportunities, threats). Oh, and URL is taken. Sort of.

The winner between these will take on SIOP for the final determination.

My bet? Members will split among the three, none will receive a passionate endorsement, and the winner will lose to SIOP.

Any takers?

Wednesday, September 23, 2009

Screening on personality: Legal loophole or pothole?

In a recent post over at ERE, the author mentions a website that provides employers with the ability to screen candidates based on a measure of personality that applicants complete online.

My reading of this article prompted the following internal debate:

Me #1: Boy oh boy, is that ever a bad idea. The Uniform Guidelines clearly state (Q&A #75) that a measure of a trait or construct cannot be validated based on content validity, which is what most employers are likely to rely on in this situation.

Me #2: Ah yes, but because personality tests typically result in much less adverse impact than traditional cognitive tests, are the Uniform Guidelines even likely to come into play?

Me #1: Maybe not, but you never know until you go through a selection process, so why take the risk?

Me #2: What about in cases where the numbers being screened are so small that adverse impact analysis is likely to be wonky? (that's a technical term)

Me #1: Well that may well be different, but you're missing the point.

Me #2: What is the point?

Me #1: That employers should use caution before screening based on constructs such as personality. They need to take validation seriously.

Me #2: But isn't it good that they're using an instrument that's at least based on an evidence-based theory of personality (the Big 5)?

Me #1: Absolutely, and props to them. But it is still incumbent on employers to realize the legal risks as well as the implications of using a self-report personality screen as a first hurdle.

Me #2: Fine, but aren't you being a little hypocritical? Haven't you said one of your goals is a giant database where applicant information can be matched with employer needs?

Me #1: True, but I was thinking more along the lines of verifiable skills testing, not self-report inventories.

Me #2: Actually in that post you specifically refer to Big 5 assessments.

Me #1: Hey! This isn't about me. This is about warning employers to make sure they know what they're doing when they screen based on personality measures.

Me #2: Aren't you making an awful lot of assumptions about the website's process without having actually used it or talked to the owners?

Me #1: Well, yes, but I'm a blogger. That's what we do.

Me #2: Ugh.

Thursday, September 17, 2009

2009 IPMA/IPAC Conference Material


Just got back from Nashville after having attended the 2009 IPMA-HR/IPAC conference. Great information, great people, and really enjoyed Nashville.

For me, the conference was largely about one thing: leadership. Makes sense in times like these; people are focusing on how leaders can help guide organizations through rough waters. People like Bob Hogan made great arguments for the importance of leadership and for how far we have to go in doing a good job of selecting leaders.

If you weren't able to attend, you can see many of the presentations here. Here's a sampling of topics:

- The selection interview

- Assessment centers for supervisors and managers

- Employee engagement

- Online testing

Can't wait for next year's IPAC conference in Newport Beach!

Friday, September 11, 2009

Considering employee testimonials? Go video.


In the September 2009 issue of the Journal of Applied Psychology, one study stood out: the authors studied employee testimonials shown on recruitment websites. Results strongly suggest that:

(1) Including some type of testimonials increases your attractiveness as an employer; and

(2) Using richer multimedia (video with audio) is clearly superior to pictures and text alone in terms of both attractiveness and credibility. This also helps mitigate any perceptual differences that occur when you increase the number of testimonials from minorities.

This is great validation for organizations that have put the time and effort into putting quality videos on their site.

Check out these other studies while you're at it:

Does recruitment method impact turnover? (short answer: yes, in the short run)

Interested in P-E fit? Check out this review and model development.

Like vocational interest inventories and statistics? You'll like this.

Monday, September 07, 2009

R&A Software Failures Hurt Taxpayers, Too

We tend to think of successes and failures of applicant tracking systems and other recruitment- and assessment-related technologies as impacting businesses--much of what's written is about large organizations such as Microsoft and Google and what software they decide to adopt.

But public sector organizations are using these technologies as well. And when they fail, it hurts not only the organization but taxpayers as well.

Case in point: the state of Washington recently decided to abandon its efforts to implement SAP E-Recruiting after nearly three years and millions of dollars. The state will now go with a hosted solution that is estimated to be $700,000 to $800,000 a year cheaper (and hopefully much easier) to maintain.

Having been ringside for some of this, I can tell you the problem was not with motivation or energy, or even IT knowledge. I suspect the lion's share of the problem was related to the complexity of the program. This would match reports I've read that a significant number of organizations are moving away from single-vendor HR solutions and going with simpler, targeted products. It's also possible that businesses find it easier to implement these programs because resources (particularly internal experts) are easier to move around and buy-in is easier to obtain.

I wish them luck on their next purchase, and hope they do their due diligence in their research (you can often find others who have had problems). Some type of audit may help them determine exactly what went wrong and how to prevent it the next time around. It's not just a matter of time, energy, and expense on the part of the organization; these failures impact applicants, hiring supervisors, HR staff, and ultimately taxpayers.

Wednesday, September 02, 2009

SNWs for R&A professionals


In the August 2009 issue of IPAC's Assessment Council News I write about how recruitment and assessment professionals can take advantage of social networking websites (SNWs) such as LinkedIn, Facebook, and Ning, and offer some cautionary notes about using them.

The article covers what will be familiar ground to many of you, but I tried to also talk about how we can use these sites for professional development, not just for sourcing or selection.

The article ends with a very Web 1.0 idea--a solicitation of letters to the editor. I'll make the same request here: what do YOU think about these websites--flash in the pan or here to stay? Approach with kid gloves or jump right in?

Saturday, August 15, 2009

Ricci presentation

On August 13th I gave a presentation at a PTC-NC luncheon about the Ricci decision. We had a great discussion about the implications (which remain to be seen) and dissected several passages from the decision.

One of the questions that came up had to do with the pass point. I didn't know the answer at the time, but looking back at the case it turns out that the City charter mandated a 70% pass point for these exams. Which is funny, because I made a joke about how 70% is the magic cutoff score given its ubiquity, particularly in the public sector.

In a perfect testing world would the pass point be based on an analysis of the minimum competency level required for the job? Yep. Did the 5-member majority in this case care? Nope.

You can view the (mostly visible) slides below.

Monday, August 10, 2009

September '09 IJSA

The September 2009 issue of IJSA (International Journal of Selection and Assessment) is chock-full of good stuff. Let's dive in.

1) An important update of the "guidelines and ethical considerations for assessment center operations"--a must for anyone interested in the appropriate use of assessment centers.

2) Speaking of assessment centers, here's a meta-analysis of how they correlate with cognitive ability and personality, as well as the proper way to weight the results.

3) Speaking of cognitive ability, curious about the correlation between ability and faking? Check out this large-sample study of faking on a biodata measure.

4) Worried about what your applicants think of your selection method? Frame it as select in (accept) rather than select out (reject).

5) Want to make sure your raters are rating accurately? You may want to re-think stocking your panel with agreeable people (sounds like a lot of fun for the exam analyst!).

6) Before you put the finishing touches on your new online job application system, make sure you pay attention to its features, user friendliness, and efficiency. I like to think of this as "Googley."

7) Looking for a measure of person-job fit that relates equity of contribution to reward? Check this out.

That's all for now!

Sunday, July 26, 2009

July 2009 J.A.P.: SJTs and more


Situational judgment tests (SJTs) have a long tradition of successful use in employment testing. These (typically multiple-choice) items describe a job-related scenario, then ask the test-taker to endorse the proper response. The question itself usually takes one of two forms:

1) What SHOULD be done in this situation? ("knowledge instruction")

2) What WOULD you do in this situation? ("behavioral tendency instruction")

What are the practical differences between the two? Previous meta-analytic research, specifically McDaniel et al.'s 2007 study, revealed that knowledge instruction items tend to be more highly correlated with cognitive ability, while behavioral tendency items show higher correlations with personality constructs. In terms of criterion-related validity, there appeared to be no significant difference between the two.

But there were limitations to that study, and two of them are addressed in a study found in the July 2009 issue of the Journal of Applied Psychology. Specifically, Lievens et al. addressed the inconsistency in stem content by keeping the stem constant while altering the response instruction, and they also looked at a large population of applicants, rather than the incumbents who dominated McDaniel et al.'s 2007 sample.

Results? Consistent with the 2007 study, knowledge instructions were again more highly correlated with cognitive ability, and there was no meaningful difference in criterion-related validity (the criterion being grades in interpersonally-oriented courses in medical school). Contrary to some research in low-stakes settings, there was no mean score difference between the two response instructions.

Practical implications? The authors suggest knowledge instruction items may be superior due to their resistance to faking. My only concern is that these items are likely to result in adverse impact in many applied settings. Like all assessment situations, the decision will involve a variety of factors, including the KSAs required on the job, the size and nature of the applicant pool, the legal environment, etc. But at least this type of research supports the fact that both response instructions seem to WORK. By the way, you can see an in-press version of this article here.

Other content in this journal? There's quite a bit, but here's a sample:

Content validity ≠ criterion-related validity

More evidence that selection procedures can impact unit as well as organizational performance

Self-ratings appear to be culturally bound

Tuesday, July 21, 2009

Ricci webcast on August 12

The Ricci v. DeStefano decision continues to generate a lot of interest. To help sort it all out, the Personnel Testing Council of Metropolitan Washington, D.C. (PTC-MW) will host Dr. James Outtz, renowned I/O psychologist and co-author of an amicus brief in the case, on August 12.

Not in D.C.? Not a problem. The luncheon presentation will be webcast at an extremely low price. Check out the website for details.

Coincidentally, a much less well known individual (yours truly) will also be presenting on the Ricci decision at PTC-Northern California (PTC-NC) at their August 13th luncheon.

By the way, check out some great commentary about the decision by several SIOP members here. I find it fascinating that SIOP came out strongly against the validity of the exam, to which the majority of the Supreme Court responded, "yawn."

Sunday, July 12, 2009

HR, comic book style


You've always suspected that HR would make a great comic (graphic novel), right? Well turns out you're right.

Check out Super Human Resources

(first issue here)

(video preview is also here)

Tuesday, July 07, 2009

How can we improve executive selection?


Many of us would agree in the wake of recent financial meltdowns that much of the problem stemmed from poor decision making--presumably from the top down. We know a lot about how to select the right people, yet our best estimates peg leadership failures at around 50%. Are there ways we can use I/O expertise to improve this statistic?

This is the topic of the first focal article in the June 2009 issue of Industrial and Organizational Psychology, written by George Hollenbeck.

The author makes several excellent points, among them:

- The process of selecting executives is significantly different from how we select, say, entry-level hires. The decisions tend to be based more on "character"--essentially personality aspects with a little morality tossed in--than on standardized testing of competencies.

- I/O psychologists are rarely brought into the executive selection process, in large part because they don't "get" how selection decisions at this level are made. We tend to have an assessment or behavioral bent, whereas these decisions more often are holistic and highly subjective.

The author argues that we need to change our mindset to match more closely that of executives--we need to focus on character rather than competencies. The authors who provide the subsequent commentaries agree that the focus on executive selection is timely, but some question the emphasis on character, and others point out that predicting performance at this level is incredibly difficult given all of the environmental factors.

Yet after all this, I can't help but wonder (as do some of the commentary authors)...is it selection professionals that need to change their mindset, or should how we select executives look more like how we select entry-level hires? Maybe we'd all benefit from largely taking the judgment component out and relying more on standardized methods such as ability tests. But is that realistic? Are people at the top willing to admit that their judgment may be inferior to standardized tests?

How can we marry assessment expertise with the political and organizational realities inherent in executive selection? My bet is it lies with establishing quality relationships with the high-level decision makers. Become a trusted adviser, demonstrate the bottom-line value of sound assessment, and be flexible about applying our best practices. This is the kind of partnership that works with first-line supervisors; there's a good chance it will work all the way up the chain.

Wednesday, July 01, 2009

Ricci case: Full of sound and fury...


There's been a lot of hoopla over the last several days over the U.S. Supreme Court's decision in Ricci v. DeStefano. It's been described as a win for "reverse discrimination" cases, a rebuke of written tests, and judicial activism. The way I read it, the decision is completely unsurprising and will likely change absolutely nothing about employment testing.

For anyone who isn't familiar with the case, here's a very brief rundown: the City of New Haven, CT gave promotional tests for Lieutenant and Captain firefighter positions using written multiple choice tests and interviews. When they crunched the results it turned out--not surprisingly--that there was statistical evidence of adverse impact against the Black candidates. The City decided not to use the list, and the White and Hispanic candidates sued, claiming disparate treatment. The Supreme Court ruled in their favor.
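For anyone wondering what "statistical evidence of adverse impact" typically means in practice, the usual first screen is the EEOC's four-fifths (80%) rule: compare each group's pass rate to the highest group's pass rate. A quick sketch with hypothetical numbers (not the actual New Haven figures):

```python
# Hypothetical pass counts by group (illustrative only, not the New Haven data)
results = {"White": (25, 40), "Black": (6, 25), "Hispanic": (5, 15)}  # (passed, took)

rates = {group: passed / took for group, (passed, took) in results.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "adverse impact indicated" if ratio < 0.8 else "ok under the 4/5ths rule"
    print(f"{group}: pass rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```

In practice you'd supplement this with statistical significance tests, but the four-fifths ratio is the rule of thumb most people start with.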

A bit of an unusual case in terms of who's on which side, and there's a lot of good reading in the decision for anyone wanting to know more about test validation. But the decision itself is totally consistent with three main themes from previous decisions:

(1) There really isn't "reverse discrimination"--there's just discrimination based on a protected classification, such as race, color, or sex. Majority groups are protected just like minority groups.

(2) Employers do not have to go to irrational lengths to validate their selection methods. Although the tests had flaws, the court continued to demonstrate that employers simply need to follow a logical process for developing the exam to show job relatedness; the exams don't have to win any awards.

(3) Disparate treatment by a government entity in order to avoid liability for adverse impact is legal only in certain very specific instances (when there is a "strong basis in evidence"). The court has been trending for years toward "color-blind" selection decisions.

About the only thing this case really points out is employers need to be ready to use the results from whatever test they administer, barring some enormous irregularities. That, and part of a defense against an adverse impact case might be that choosing not to use the exam would have been evidence of disparate treatment (I'll grant you that one's a little confusing).

All in all--and I'm certainly not the only one who feels this way--it doesn't appear to be anything to get excited about.

Want to know more? Check out the scotuswiki page.

Wednesday, June 24, 2009

SIOP Leading Edge Consortium


Feel like going to Denver in October?

That's when SIOP is going to have its annual Leading Edge Consortium. Previous consortiums have focused on executive coaching, innovation, talent, and leadership. This time we're fortunate that they've chosen to focus on Selection and Assessment in a Global Setting.

Speakers include individuals from companies like Cisco, Google, and Merck as well as consulting firms like SHL, Previsor, HumRRO, DDI, and Valtera.

Here are some of the session titles:

- "Global trends in HR"

- "Interviewing across cultures"

- "Cross border hiring"

- "Computerized adaptive testing"

Sure to be interesting stuff, particularly for anyone interested in attracting individuals from other countries and cultures.

Saturday, June 20, 2009

Turn off qualified applicants in one easy step

Looking for a way to turn off qualified applicants in one easy step? The City of Bozeman, MT may have put its finger on it.

Turns out they have had--for several years--a requirement that all applicants seeking a position with the City must, after receiving a conditional job offer that required a background check, turn over their IDs and passwords for all social networks they're on, including Facebook and Twitter. After a firestorm of criticism, they decided to suspend the policy pending "a more comprehensive evaluation."

With all due respect to city officials...what were they thinking?

Put aside for the moment the potential problems of violating the terms and conditions of the social networking sites (which generally prohibit sharing passwords) and the potential legal issues inherent in finding information you shouldn't: what high-potential applicant worth her/his salt is going to hand over their password information? It's akin to asking someone for their diary--and about as valid and relevant to job performance.

According to the City Manager, "choosing not to disclose log-in information did not hurt candidates’ chances of getting the job." Somehow I find that hard to believe.

I can appreciate wanting to perform your due diligence as part of the hiring process, and gathering as much information as you can, but there are tried and true methods of doing this, including detailed reference checks for every hire.

Maybe the proximity to great fishing interfered with their judgment.

Wednesday, June 17, 2009

Summer '09 Personnel Psychology

The Summer 2009 issue of Personnel Psychology covers a lot of ground. Take a look:

Kuncel & Tellegen demonstrate (with undergrads) that when inflating on personality inventories, people don't always max out their self-presentation; in fact for some traits a moderate level of endorsement is seen as more desirable.

Bledow & Frese describe how a situational judgment test can be used to predict not only overall job performance, but a particular construct--in this case, initiative. Participants were employees and supervisors at six banks in Germany.

This one particularly caught my eye. Yang & Diefendorff discovered (using ~200 employees in Hong Kong), among other things, that agreeableness and conscientiousness seem to moderate the relationship between negative emotions and counterproductive work behaviors (CWBs). Implication? If you're hiring for a job prone to negative emotions (e.g., customer service), consider adding a personality inventory to your screening process to prevent CWBs.

De Pater, et al. studied both students and employees to determine that challenging job experiences reported by participants predicted promotability ratings above and beyond current job performance and job tenure. This has implications for both career development and performance management.

Want to know more about what executive coaches do? Then check out Bono et al.'s study of similarities and differences between practicing coaches that are also I/O psychologists versus those that aren't. (Turns out they do a lot of the same things)

Last but definitely not least, Aguinis et al. describe a web-based frame-of-reference training they used to decrease the amount of bias inherent in personality-based job analysis. The article describes in detail how the training was implemented, and it had quite dramatic effects. Useful stuff for anyone looking to add this tool to their assessment procedure. In this case the authors used Raymark et al.'s personality-related personnel requirements form, which they describe as superior to Hogan & Rybicki's performance improvement characteristics tool (which I've actually used and found quite user friendly).

Friday, June 12, 2009

Fast Company disses interviews


Those of you who know about research in personnel selection know that while interviews have been shown to be predictive of job success, several other types of selection mechanisms often out-perform them. Cognitive ability is often mentioned as the holy grail of predictors, but in terms of overall utility and defensibility, I recommend work sample exams. So do the authors of a recent article in Fast Company.

As the authors (who also penned Made to Stick) point out, interviewers are often snowed by candidate interview skills. Often only when you make them demonstrate their skills do their true strengths and weaknesses reveal themselves. (Of course if you're going to interview--and almost everyone does--make sure it's structured)

A couple of strengths the authors leave out: work sample (sometimes called "performance") tests are easier to defend legally, since you're measuring an observable KSA rather than a construct like intelligence, and they give candidates a more realistic preview of the job. Heck, after doing a work sample a candidate may decide the job's not for him/her. Finally, they tend to be well received by candidates--more so than many other types of assessment.

This is my favorite quote:

"...figure out whether candidates can do the job. Research has consistently shown that one of the best predictors of job performance is a work sample. If you're hiring a graphic designer, get them to design something. If you're hiring a salesperson, ask them to sell you something. If you're hiring a chief executive, ask them to say nothing -- but reassuringly."

Wednesday, June 10, 2009

Enthusiasm? I'd rather see cautious optimism.


"I'm really excited about this job. Hurry up and pick me."

How would you feel if a candidate said that to you? A bit...confused? Well, that's essentially what Paul Westphal told his new bosses in winning his bid to become the Sacramento Kings' new head coach.

Granted, Westphal had several things going for him:

1) He has been an NBA coach (Suns, SuperSonics).

2) He's led a team to a winning season, something the Kings sure could use.

3) He's coached at the college level and also been an assistant coach; this should add some depth to his experience.

4) To his credit, he seemed to know what his new bosses wanted--i.e., enthusiasm and a reasonable salary. This could pay off in terms of his ability to get along with management (more about this below).

So what's the problem? Several, potentially. Here's how the article summed up the selection: "Westphal won the job largely on his NBA experience and enthusiasm for the job itself."

Here are my concerns:

1) Enthusiasm is not a proven predictor of job performance, yet his active pursuit of the job seems to have been a deciding factor. We know that pure interest in the job is a poor predictor of performance. Pure experience isn't a great predictor either.

2) The search, according to the article, took only 47 days (which sounds quick to me). Yet apparently, "Westphal had grown impatient enough that sources say he was close to pulling out of the race." What does this say about an applicant? Maybe nothing. But it could signal something about personality (or desperation).

3) The screening seems to have relied primarily on interviews and "reputation." Is this the best way to pick a coach? What else might we do? (simulations, role plays, talking to previous players, etc.)

4) There's a big assumption being made here: that he was solely (or primarily) responsible for the wins of the previous teams he coached. As we know, team performance doth not lie with the leader alone. As one article commenter noted, the General Manager may be the common denominator behind the Kings' less-than-stellar stats in recent years. Will a new coach solve the real problem?

To be honest, I don't really want an applicant to be off-the-charts enthusiastic. It suggests overconfidence, a frightening lack of self-insight, or an attempt to snow me. Are there times when the enthusiastic candidate is the right one? Absolutely. All I'm suggesting is that we be wary. Personally, I'd rather see cautious optimism, which indicates an understanding that what they bring to the job is only part of the equation.

But heck, enough with the negativity. Here's hoping the Kings make it to the finals next year!

Saturday, June 06, 2009

SIOP offers multimedia presentations

As part of its learning center, SIOP is now offering audio and video content from its conferences and events. Prices range from $100 to $150 depending on membership status and whether or not you attended the event.

You can hear/see samples here, including presentations on personality in the workplace, reducing turnover using selection, and global talent management.

Good stuff. Hope other professional organizations follow their lead.

Wednesday, June 03, 2009

Emotional competence


I don't write a whole lot about emotional intelligence (EI), mostly because I still haven't seen a consensus around its conceptualization and measurement, but there continues to be significant interest in it. And on that note, there's an excellent article in a recent issue of JOB that I think is worth discussing.

In a nutshell, Kim et al. studied nearly 200 matched subordinate-supervisor pairs in four South Korean hotels. The employees worked either at the front desk or as waiters--folks who would likely benefit from emotional competence.

Emotional competence, you say, not emotional intelligence? Yes, the authors prefer the term competence for several reasons:

1) Self-report inventories such as those used in this study may not be appropriate for measuring abilities.

2) Self-report measures usually capture typical behavior rather than maximal behavior, which is what an ability test hypothetically measures.

3) Self-report measures of EC have low correlations with tests of cognitive ability.

I commend the authors for distinguishing between these concepts. In fact, it leads me to wonder whether we should go a step further and say emotional confidence or emotional report. However, this does raise some troubling issues with respect to the similarities and differences between the concepts, and it's a good illustration of why many I/O types shy away from this topic (it may also have something to do with the sheer number of instruments that claim to measure EI).

Anyway, back to the study. The measure they used for EC was a 16-item scale with 7-point Likert-type response options. An example item: "I am sensitive to the feelings and emotions of others."

The authors found uncorrected correlations of .15 between EC and the two work performance measures, task effectiveness and social integration (both p<.05). In addition, they found support for their hypothesis that the relationship between EC and job performance was mediated by "interpersonal proactive behaviors," measured here by supervisor ratings of the extent to which the employee engaged in feedback-seeking and relationship development with the supervisor. So not huge correlations, but useful--and the strength of the correlation is in line with what we often see for uncorrected self-report measures such as personality inventories.
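
For the curious, here's roughly what that mediation logic looks like as an analysis. This is a minimal sketch in Python with simulated data and illustrative variable names--not the authors' data or model:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
ec = rng.normal(size=n)                                         # emotional competence (self-report)
proactive = 0.3 * ec + rng.normal(size=n)                       # supervisor-rated proactive behaviors
performance = 0.4 * proactive + 0.05 * ec + rng.normal(size=n)  # task effectiveness

df = pd.DataFrame({"ec": ec, "proactive": proactive, "performance": performance})

total = smf.ols("performance ~ ec", data=df).fit()               # total effect (c)
a_path = smf.ols("proactive ~ ec", data=df).fit()                # a path
b_path = smf.ols("performance ~ ec + proactive", data=df).fit()  # b and direct (c') paths

indirect = a_path.params["ec"] * b_path.params["proactive"]
print(f"total: {total.params['ec']:.2f}  indirect (a*b): {indirect:.2f}  direct (c'): {b_path.params['ec']:.2f}")

Full mediation would show up as a meaningful indirect effect with the direct effect shrinking toward zero; in practice you'd want a bootstrapped confidence interval around a*b rather than eyeballing the point estimates.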

To their credit, the authors chose employees who would likely need some type of emotional awareness on the job. This, of course, would be one of the big questions if you were considering this type of selection tool, and the decision, as always, would rest on the results of a detailed study of the job. What the Uniform Guidelines would have to say about supporting this measure using content validation is another story for another day!

Thursday, May 21, 2009

Hybrid tests and the June '09 IJSA


What do you get when you combine a structured interview with a performance assessment? Perhaps some sort of hybrid with pieces from both sides. In the June 2009 issue of the International Journal of Selection and Assessment we find out more.

Morgeson et al. describe the development of a "performance interview" that combines a structured interview with an on-site performance demonstration. Essentially this involved going to the relevant work area (this study was for parts manufacturers) and asking a series of questions to determine promotability, such as "How do you set up this machine?" It's fascinating stuff, and it worked (using concurrent measures), although it might be challenging to use for less observable aspects of performance. For more details, check out the in-press version here; the recipe book starts on page 12, and there's a nice example on page 41.

What else is in the issue? Take a look:

Predicting managerial readiness in Chinese workers

Is inflation in personality inventories necessarily a bad thing?

Do occupations tend to have their own personality? (yep)

Leadership effectiveness: Self- versus other-ratings (check out who tends to inflate)

CWBs: The organization plays a role

Biodata continues to shine (this time in healthcare organizations)

Is handwriting analysis popular among European employers? Not so much.

Can you predict military performance using personality inventories? Seems so.

Job experience v. personality measures in a small sample