Friday, August 31, 2007

More games

I've posted before (here and here) about how Google and other companies are literally using boardgames as part of their applicant screening process, and how I'm not a big fan of this technique.

The September, 2007 issue of Business 2.0 has an article titled "Job Interview Brainteasers" that highlights another type of game employers play--this time, it's asking "creative" questions during the interview.

Let's take a look at some interview questions from the article and who's asked them:

How much does a 747 weigh? (Microsoft)

Why are manhole covers round and not, say, square? (Microsoft)

How many gas stations are there in the United States?

How much would you charge for washing all the windows in Seattle?

You have 5 pirates, ranked from 5 to 1 in descending order. The top pirate has the right to propose how 100 gold coins should be divided among them. But the others get to vote on his plan, and if fewer than half agree with him, he gets killed. How should he allocate the gold in order to maximize his share but live to enjoy it? (eBay, and, similarly, Pirate Master)

You are shrunk to the height of a nickel and your mass is proportionally reduced so as to maintain your original density. You are then thrown into an empty glass blender. The blades will start moving in 60 seconds. What do you do? (Google)
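As an aside, the pirate question has a well-known game-theory answer: work backwards from the smallest crew. Under the classic reading of the rule (the proposer's own vote counts, and he survives with at least half of all votes), the top pirate keeps 98 coins and gives one coin each to the two pirates who would get nothing if he died. A minimal backward-induction sketch in Python (the function name and list convention are mine, not from the article):

```python
def pirate_split(n_pirates=5, coins=100):
    """Classic pirate-gold puzzle via backward induction.

    Returns the proposer-optimal allocation, ordered from the most
    senior pirate (the proposer) down to the most junior.
    """
    # Base case: a lone pirate keeps everything.
    alloc = [coins]
    for n in range(2, n_pirates + 1):
        prev = alloc  # what each junior pirate gets if the proposer dies
        # Proposer survives with at least half the votes, his own included,
        # so he must buy ceil(n/2) - 1 extra yes-votes.
        needed = (n + 1) // 2 - 1
        # The cheapest voters to buy are those worst off under `prev`;
        # each needs one coin more than their fallback share.
        bribes = [0] * (n - 1)
        for i in sorted(range(n - 1), key=lambda i: prev[i])[:needed]:
            bribes[i] = prev[i] + 1
        alloc = [coins - sum(bribes)] + bribes
    return alloc
```

So `pirate_split()` returns `[98, 0, 1, 0, 1]`: pirates 3 and 1 each accept a single coin, because one coin beats the nothing they would get in the four-pirate scenario.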

These questions have been around for quite a while and are used to measure things like creativity and estimation ability. The question is: Are they any better than board games? Probably. But they're still a bad idea.

Why do I say that? Well, first of all, a lot of people find these questions plain silly. And this says something about your organization. Sure, some people think they're fun or different. But many more will scratch their head and wonder what you're thinking. And then they'll wonder if they really want to work with you. Particularly folks with a lot of experience who aren't into playing games--they want to have a serious conversation.

Second, there are simply better ways of assessing people. If you want to know how creative someone is, ask them a question that actually mirrors the job they're applying for.

Want to know how they would tackle a programming question? Ask them. In fact, you can combine assessment with recruitment, as Spock recently did.

Want them to estimate something? Think about what they'll actually be estimating on the job and ask them that question. And so on...

Another advantage of these types of questions? The answers give you information you can actually use. (Hey, you've got them in front of you--why not use their brains?)

If you don't really care about the assessment side of things, and in reality are just using these questions as a way to communicate "we're cool and different" (as I suspect many of these companies are doing) there are better ways of doing this. Like communicating in interesting and personal ways (e.g., having the CEO/Director call the person). Like talking about exciting projects on the horizon. Like asking candidates what THEY think of the recruitment and assessment process (gasp!).

My advice? Treat candidates with respect and try your darnedest to make the entire recruitment and assessment process easy, informative, and as painless as possible. Now THAT'S cool and different.

Wednesday, August 29, 2007

Georgia-Pacific fined by OFCCP for using literacy test

In a display of "See? It's not just the EEOC you need to worry about", the U.S. Department of Labor's Office of Federal Contract Compliance Programs (OFCCP) has fined the Georgia-Pacific Corp. nearly $750,000.

Why? During a "routine audit of the company's hiring practices", the OFCCP discovered that one of Georgia-Pacific's paper mills was giving job applicants a literacy test that resulted in adverse impact against African-American applicants (saw that one coming a mile away). The $749,076 will be distributed to the 399 applicants who applied for a job while the mill was using the test.

The test required applicants to read "bus schedules, product labels, and other 'real-life' stimuli." The OFCCP determined that the test was not backed by sufficient evidence of validation for the particular jobs it was being used for.

The company defended itself by saying it promotes heavily from within and wanted workers to be able to move around easily.

A sensible policy, but completely irrelevant in terms of defending the legality of a test. In fact it works against an employer, since (as one of the attorneys points out) you're in effect testing people for higher-level positions, which is a no-no.

Several attorneys are quoted in the article, and they mention the importance of the Uniform Guidelines, which really only apply when a test has adverse impact, as in this case. It does make me wonder what sort of validation evidence G-P collected (if any)...
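For readers unfamiliar with the term, the Uniform Guidelines' usual first-pass screen for adverse impact is the "four-fifths rule": if a group's selection rate falls below 80% of the highest group's rate, that is generally regarded as evidence of adverse impact. A minimal sketch (the group labels and rates below are made up for illustration):

```python
def four_fifths_flags(selection_rates):
    """Apply the Uniform Guidelines' four-fifths (80%) rule.

    selection_rates maps group label -> selection rate (hired / applied).
    Returns a dict flagging each group whose rate falls below 80% of
    the highest group's rate.
    """
    top = max(selection_rates.values())
    return {group: rate / top < 0.8 for group, rate in selection_rates.items()}
```

For example, `four_fifths_flags({"A": 0.50, "B": 0.30})` flags group B, since 0.30 / 0.50 = 0.60 is below the 0.8 threshold.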

Note: the article states incorrectly that "all federal contractors" are subject to OFCCP rules. Actually only certain ones are, and details can be found here.

Hat tip.

Tuesday, August 28, 2007

A funny employment lawyer

Of course they exist. If you don't know one, you do now.

Mark Toth is the Chief Legal Officer at Manpower and he's just started a blog on employment law that so far is highly amusing.

For example, he sings a song about employment law.

A song.

About employment law.

I mean, you gotta be into this stuff to go that far.

He's also got a REALLY BAD hiring interview up for you to watch, along with his top 10 "employment law greatest hits."

My personal favorite? #6: "Communicate, communicate, communicate (unless you communicate stupidly)"

One of the more creative blogs I've seen. Here's to hoping it lasts.

And no, I won't be singing a song about assessment. Unless you really want me to (and trust me, you don't want me to).

Hat tip.

Monday, August 27, 2007

National Work Readiness Credential

Have you heard about the National Work Readiness Credential?

It's a 3-hour pass-or-fail assessment delivered over the web that is designed to measure competencies critical for entry-level workers, and consists of four modules:

1. Oral language (oral language comprehension and speech)
2. Situational judgment (cooperating, solving problems, etc.)
3. Reading
4. Math

I love the idea of a transferable skills test; kinda like the SAT of the work world. I think this approach and assess-and-select-yourself notions are two of the truly creative directions we're going in. A few caveats, though:

(1) Right now it's not available in all areas of the country.
(2) A searchable database (either as a recruiting tool or as a verification) would be great.
(3) Last but not least, employers have to be cautious that the position they're hiring for truly requires the competencies measured by this exam.

But all that aside, a promising idea. It will be interesting to see where this goes.

Here are links to some of the many resources available:

Training guide
Candidate handbook
Assessment sites
Appropriate uses

Thursday, August 23, 2007

Big Disability Discrimination Decision for California Employers

On August 23, 2007, the California Supreme Court published an important decision in the case of Green v. State of California. The decision should be reviewed by any employer covered by California's Fair Employment and Housing Act (FEHA), which like the Americans with Disabilities Act (ADA) prohibits discrimination against individuals with disabilities.

What'd they say? Rather than muddy the waters, I'll quote directly from the case:

"The FEHA prohibits discrimination against any person with a disability but, like the ADA, provides that the law allows the employer to discharge an employee with a physical disability when that employee is unable to perform the essential duties of the job even with reasonable accommodation. (§ 12940, subd. (a)(1); 42 U.S.C. § 12112(a).) After reviewing the statute's language, legislative intent, and well-settled law, we conclude the FEHA requires employees to prove that they are qualified individuals under the statute just as the federal ADA requires." (pp. 1-2)

"...we conclude that the Legislature has placed the burden on a plaintiff to show that he or she is a qualified individual under the FEHA (i.e., that he or she can perform the essential functions of the job with or without reasonable accommodation)." (p. 5)

What does this mean? It means employers covered by FEHA can breathe a little easier, and employees bringing suit under FEHA for a disability claim may have a slightly more uphill battle. The court has now made clear that in these cases it is the plaintiff/employee who has the burden of showing they are "qualified" under FEHA, not the defendant/employer. And if the plaintiff can't satisfy this "prong" of their case, they won't win.

...unless this case is appealed to the U.S. Supreme Court...

Stop playing games

First, Google and PricewaterhouseCoopers have prospective candidates playing with Lego blocks.

Now, another company has candidates playing Monopoly (see minute 1:50) to judge multi-tasking ability.

C'mon people. You don't need to play games. Spend just a little time putting together a good assessment. Just follow these simple steps:

1. Study the job. Figure out the key KSAs/competencies needed on day one. And spend more than 5 minutes doing it.

2. Think about what JOB TASK you could re-create in a simulation that would measure the required competencies.

3. Spend some time putting together the exercise and how you will rate it. Spend some more time on it. Practice it. Then spend some more time preparing.

4. Give it. Rate it. Treat candidates with respect throughout the process.

5. Gather performance data once people are on the job and see if it predicts job performance.

6. Hire a professional to fix your mistakes. No, I'm kidding. If you've done the other steps right, you should be golden.

Stop playing games and stop making candidates play them. If you want to know how well an Office Manager candidate multi-tasks, put them in a scenario that matches what they would really face on the job. Phones ringing, Inbox filling up, managers at your door. Not playing with phony money.

Tuesday, August 21, 2007

August ACN

The August, 2007 issue of the Assessment Council News is out, and Dr. Mike Aamodt provides his usual great writing, this time in an article titled "A Test! A Test! My Kingdom for a Valid Test!" where he goes over what you need to look for when selecting a commercially available test--in two easy steps!

Some of my favorite quotes:

"Previously, [the] clients had their supervisors create their own tests, and we advised them that this was not a good idea." (I just like the idea of saying that to clients, aside from the fact that it's true 99% of the time)

"Creating a reliable, valid, and fair measure of a competency is difficult, time consuming, frustrating, costly, and just about any other negative adjective you can conjure up. Think of the frustration that accompanies building or remodeling a home and you will have the appropriate picture." (So it ISN'T a coincidence that I enjoy testing and home remodeling. Whew.)

" is essential to remember that no test is valid across all jobs and that criterion validity is established by occupation, and depending on who you talk (argue) with, perhaps by individual location." (Just don't tell this to Schmidt and Hunter.)

More info about ACN, including links to past issues, here.

And by the way...major kudos to Dr. Aamodt for offering so much of his work online. This is rare and to be commended.

Monday, August 20, 2007

OPM Has New Assessment Website

The U.S. Office of Personnel Management (OPM) continues to show what a professional assessment shop should be doing with its new personnel assessment page.

There's some great stuff here, including:

- A very detailed decision guide, including a great overview of pretty much all the major topics

- Reference documents

- Assessment resources

There's even a survey built in to gather feedback on the guide, as well as a technical support form.

Major tip 'o the hat.

Saturday, August 18, 2007

September 2007 issue of IJSA

The September, 2007 issue (vol. 15, #3) of the International Journal of Selection and Assessment is out, with the usual cornucopia of good reading for us, particularly if you're into rating formats and personality assessment. Let's skim the highlights...

First, Dave Bartram presents a study of forced choice v. rating scales in performance ratings. No, not as predictors--as the criterion of interest. Using a meta-analytic database he found that prediction of supervisor ratings of competencies increased 50% when using forced choice--from a correlation of .25 to .38. That's nothing to sneeze at. Round one for forced choice scales--but see Roch et al.'s study below...

Next up, Gamliel and Cahan take a look at group differences with cognitive ability measures v. performance measures (e.g., supervisory ratings). Using recent meta-analytic findings, the authors find group differences to be much higher on cognitive ability measures than on ratings of performance. The authors suggest this may be due to the test being more objective and standardized, which I'm not sure I buy (not that they asked me). Not super surprising findings here, but it does reinforce the idea that we need to pay attention to group differences for both the test we're using and how we're measuring job performance.

Third, Konig et al. set out to learn more about whether candidates can identify what they are being tested on. Using data from 95 participants who took both an assessment center and a structured interview, the authors found results consistent with previous research--namely, someone's ability to determine what they're being tested on contributes to their performance on the test. Moreover, it's not just someone's cognitive ability (which they controlled for). So what is going on? Perhaps it's job knowledge?

Roch et al. analyzed data from 601 participants and found that absolute performance rating scales were perceived as more fair than relative formats. Not only that, but fairness perceptions varied within each of the two format types. In addition, rating format influenced ratings of procedural justice. The researchers focus on implications for performance appraisals, but we know how important procedural justice is for applicants too.

Okay, now on to the section on personality testing. First up, a study by Carless et al. of criterion-related validity of PDI's employment inventory (EI), a popular measure of reliability/conscientiousness. Participants included over 300 blue-collar workers in Australia. Results? A mixed bag. EI performance scores were "reasonable" predictors of some supervisory ratings but turnover scores were "weakly related" to turnover intentions and actual turnover. (Side note: I'm not sure, but I think the EI is now purchased through "getting bigger all the time" PreVisor. I'm a little fuzzy on that point. What I do know is you can get a great, if a few years old, review of it for $15 here).

Next, Byrne et al. present a study of the Emotional Competence Inventory (ECI), an instrument designed to measure emotional intelligence. Data from over 300 students from three universities showed no relationship between ECI scores and academic performance or general mental ability. ECI scores did have small but significant correlations (generally in the low .20s) with a variety of criteria. However, relationships with all but one of the criteria (coworkers' ratings of managerial skill) disappeared after controlling for age and personality (as measured by the NEO-FFI). On the plus side, the factor structure of the ECI appeared distinct from the personality measure. More details on the study here.

Last but not least, Viswesvaran, Deller, and Ones summarize some of the major issues presented in this special section on personality and offer some ideas for future research.


Wednesday, August 15, 2007


Humor break.

I've posted before about those "de-motivational" posters. They're a (funny) version of the ubiquitous "motivational" posters you see all over the place that mostly make you roll your eyes.

Well, now they have Do It Yourself posters. Here are the three that I've done so far:

The only thing I don't get is why they don't offer printing of these. Seems like a natural money maker.

Anyhoo, hope you enjoy!

Tuesday, August 14, 2007

Great July 2007 Issues of Merit

The U.S. Merit Systems Protections Board (MSPB) puts out a great newsletter focused on staffing called Issues of Merit.

The July 2007 edition has some great stuff in it, including:

- Risks inherent with using self-assessment for high-stakes decisions, such as hiring (hint: people are horrible at it)

- Tips for workforce planning

- How to write good questions

- Analyzing entry hires into the federal workforce

- An introduction to work sample tests

Good stuff!

Saturday, August 11, 2007

Class certified in Novartis gender discrimination suit

Bad news for Novartis Pharmaceuticals.

On July 31, 2007 Judge Gerald Lynch of the Southern District of New York granted class certification status to "[a]ll women who are currently holding, or have held, a sales-related job position with [Novartis] during the time period July 15, 2002 through the present."

The plaintiffs are seeking $200 million in compensatory, nominal, and punitive damages, claiming that Novartis discriminates against women in a variety of ways, including compensation, promotions, performance appraisals, and adverse treatment of women who take pregnancy leave.

The case is instructive for us because of how the judge viewed expert opinion in this case. One of the plaintiffs' experts noted that Novartis' performance evaluation system was flawed because ratings were subject to modification by higher-level supervisors and because ratings had to fit into a forced distribution. In addition, appeals by employees went to either the manager who made the original rating or an HR person with no real authority to change ratings.

Another plaintiffs' expert noted that male sales employees are 4.9 times more likely to get promoted to first-line manager than female sales employees. In addition, 15.2% of male employees were selected to be in the management development program compared to only 9.1% of eligible female employees--a difference of 6.0 standard deviations.
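In case you're wondering where a figure like "6.0 standard deviations" comes from: it is typically a pooled two-proportion z statistic computed from the two selection rates and the group sizes. The article doesn't report Novartis' group sizes, so the numbers in the example below are purely illustrative:

```python
from math import sqrt

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z statistic: the usual basis for
    'a difference of N standard deviations' claims.

    x1, n1: selections and total applicants for group 1.
    x2, n2: the same for group 2.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # overall selection rate
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se
```

With hypothetical groups of 100 each and selection rates of 50% versus 25%, `two_proportion_z(50, 100, 25, 100)` comes out to about 3.65 standard deviations; differences beyond roughly two to three standard deviations are generally treated as statistically meaningful in these cases.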

What these statistics really signify and whether the plaintiffs end up ultimately winning the suit is anyone's guess. The important thing here is to keep in mind that what you may think is a logical way to make promotion decisions may look "subjective" to others and riddled with potential for bias to enter the equation.

Bias (and risk) can be reduced by implementing practices such as:

1 - Having raters undergo intensive training, including a discussion of potential biases and several "dry runs" of the process.

2 - Having a standardized rating form with clear benchmarks based on an analysis of job requirements.

3 - Considering carefully the use of a "forced distribution" system. If you do use one, make sure raters and ratees alike understand why--and how--this is being done.

4 - Making performance in the current job only part of the promotional criteria--give applicants a chance to show their stuff through ability tests, work sample tests, personality tests, and the like.

5 - Taking complaints seriously. If someone believes there is an opportunity for abuse of the system, investigate.

6 - Track, track, track those applicant flow statistics, including selection into training programs. Uncover discrepancies before they uncover you.

7 - Get HR involved--not just as gatekeepers but as partners. Hold HR accountable for providing best practices.

8 - If you have something like a management academy, make the criteria for entry transparent and have a point person for questions.

You can read the order here, and read more analysis of the case here.

Thursday, August 09, 2007

Spock launches

I've posted a couple times about Spock, a new people search engine. I'll be honest, I'm pretty excited about it.

I won't go into (again) why I'm excited, but suffice to say a search engine that gives us rich data about folks that we can use for recruitment and (potentially) assessment is pretty promising.

Yesterday they had their official public beta launch and you can now check it out, although it's so popular that it looks like their servers are struggling.

And no, they're not the only game in town. They compete directly with other sites like Wink and PeekYou, and indirectly with sites including LinkedIn, ZoomInfo, and Xing. Oh yeah, and WikiYou (although that's user-generated).

As I said, I'm pretty excited about it. Maybe it's just the name. And keep in mind I bought Webvan stock, so take my opinions with a grain of salt.

Tuesday, August 07, 2007

New feed: IPMAAC website updates

Can't get enough news about assessment?

Wish there were more feeds you could track?

Well, your wish has been granted. Now you can keep track of major changes to the IPMAAC website via the new RSS feed. This includes:

- Job openings

- New conference presentations available

- New items added to the library

- Announcements of new issues of the Assessment Council News (ACN)

Not familiar with feeds? Check out Google Reader or Feedreader. There are a ton of applications out there you can use to track feeds (including most web browsers), but these are two I've found to be darn easy to use.

Maybe this will encourage SIOP and SHRM to do the same...

Monday, August 06, 2007

2007 Academy of Management Conference

There have been some news stories about one of the presentations at this year's Academy of Management (AOM) conference--about an online survey where a majority of respondents said that bad bosses either get promoted or have nothing happen to them. But there's a heck of a LOT of other good stuff at this year's conference. So take a deep breath and let's take a look...

First up, a whole set of presentations devoted to selection, including:

- Hiring for Retention and Performance
- Work Sample Test Ethnic Group Differences in Personnel Selection: A Meta-analysis
- Stigmatizing Effects of Race-Based Preferential Selection
- Longitudinal Changes in Testing Applicants and Labor Productivity Growth

Next, a session devoted to recruitment and selection, including:

- The Role of Sociolinguistic Cues in the Evaluation of Job Candidates
- Recruitment as Information Search: The Role of Need for Cognition in Employee Application Decisions
- A House Divided: Cooperative and Competitive Recruitment in Vital Industries
- The Practice of Sense-Making and Repair during Recruitment Interviews
- Overqualified Employees: Too Good to Hire or Too Good to Be True?

Next up, a session devoted to recruitment. Included topics:

- Customizing Web-Based Recruiting: Theoretical Development and Empirical Examination
- Network-based Recruiting and Applicant Attraction: Perspective from Employer and Applicants
- Fancy Job Titles: Effects on Applicants' Job Perceptions and Intentions to Apply
- Recruitment and National Culture: A Value-Based Model of Recruitment

Next, a set devoted to person-organization (P-O) fit, including:

- Going Beyond Current Conceptualizations of P-E Fit and Presenting a Status Report on the Literature
- Outcomes of Multidimensional Misfit: An Empirical Test of a Theoretical Model
- FIT: Scale Development and Initial Validation of a New Measure
- Considering the Contextualized Person: A Person-In-Context Approach to Goal Commitment

Next, a set on predictors of individual performance, including:

- An Examination of Ability-based Emotional Intelligence and Job Performance
- Predicting NFL Performance: The Role of Can-do and Will-do Factors
- A Fresh Perspective on Extraversion and Automobile Sales Success
- Auditor Effectiveness and Efficiency in Workpaper Review: The Impact of Regulatory Focus

Last but not least, one of my favorite topics, how organizations and individuals perceive selection. Topics include:

- Understanding Job Applicant Reactions: Test of Applicant Attribution Reaction Theory
- Effects of Ingratiation and Similarity on Judgments of P-O Fit, Hiring Recommendations and Job Offer
- The Effects of Resume Contents on Hiring Recommendations: The Roles of Recruiter Fit Perceptions
- Organization Personality Perceptions and Attraction: The Role of PO Fit and Recruitment Information

This is just a sample of what the conference has to offer; if you went, or otherwise know of other presentations we should know about, please share with us.

And no, most of the presentations aren't available on-line but the presenters' e-mail addresses are provided and most folks are willing to share.

Thursday, August 02, 2007

Is USAJobs enough?

Check out this article that came out recently. It's about how the federal government may need to branch out and start using other advertising venues besides USAJobs, which it relies on heavily.

Some individuals quoted in the article--who happen to include a manager at CareerBuilder--point out that:

- Opportunities are not automatically posted on other career sites, like CareerBuilder, Monster, and HotJobs.

- Job openings are not "typically" searchable through search engines like Google. (Although look what happens when I search for an engineering job with the federal government).

- You can't expect people to automatically look for jobs on USAjobs.

The Office of Personnel Management (OPM), the fed's HR shop, fires back:

- USAJobs gets 8 million hits a month. This compares to CareerBuilder's 1.2 million searches a month for government jobs.

- USAJobs is well known and marketing efforts have been ramped up (e.g., last year's television commercials, which unfortunately didn't work with my version of Firefox).

So who wins the argument? I don't think the feds need to panic just yet. But it can't hurt them to investigate other posting opportunities, particularly given how much traffic the heavy hitters like Monster and CareerBuilder get compared to USAJobs.

By the way, don't overlook the comments on that page; in some ways they are more telling than the article. Readers point out that the application process is overly complicated--to the point that one of the readers makes his/her living guiding people through the process (reminds me of a guy who does the same thing for the State of California). My bet is that the application process is at least as important as how the feds are marketing their opportunities.

I would also be willing to bet that it isn't just the feds that have this issue. As more organizations implement automated application and screening programs, they risk falling in love with the technology at the expense of the user experience. I may love the look of your job, but if it takes me 2 hours to apply, well...I may just look elsewhere.