Friday, June 29, 2007

New SIOP Journal in 2008

In 2008 SIOP will begin publishing its own scholarly journal, Industrial and Organizational Psychology: Perspectives on Science and Practice.

Here are the vitals:

Cost: Free to members (unknown fee for nonmembers)

Editor: Dr. Paul Sackett

Focus: Articles will cover basic science, applied science, practice, public policy, and (most likely) a blend.

Format: Focal article-peer response. Each article will be followed by 8-10 responses. From the website: "[Peer responses] could challenge or critique the original article, expand on issues not addressed in the focal article, or draw out implications not developed in the focal article. The goal is to include commentaries from various perspectives, including science, practice, and international perspectives. These commentaries will be followed by a response from the original author of the focal paper."

Publisher: Blackwell (which hopefully means we'll have online access)


The first two article abstracts are available here, with members having full access:

The meaning of employee engagement - Macey and Schneider (a particularly hot topic)

Why assessment centers don't work the way they're supposed to - Lance

Wednesday, June 27, 2007

EEOC issues FY 2006 Federal Workforce Report

The EEOC has released its Annual Report on the Federal Work Force for FY 2006 (10/05-09/06).

For federal agencies it's a treasure trove of benchmark information covering everything from EEO policies to ADR statistics.

Not in the federal workforce, or otherwise find this a yawner? Check it out anyway. The report is filled with tips on topics such as:

- Reasonable accommodation procedures
- Sexual harassment policies
- Barrier analysis

- Improving participation rate of individuals with disabilities

And if you like tables and graphs...Whoa, Nelly, you're in for a treat.

The report is available in HTML or PDF.

Tuesday, June 26, 2007

New blog to watch

Michael Harris, previously of EASI-HR Blog, has started a new blog titled HRMplus.

Dr. Harris is a professor at the University of Missouri-St. Louis where he teaches HRM. He has also served as an expert witness on discrimination issues, as a trainer, and a consultant. He presented at the most recent IPMAAC conference on Disparate Impact and Employment Testing: A Legal Update.

Check it out!

Monday, June 25, 2007

June 2007 IJSA

Yep, it's journal time again...the June 2007 issue of the International Journal of Selection and Assessment is out...

This time the articles fall into two main camps: those focused on applicant reactions to selection procedures, and those focused on personality testing.


Applicant reactions

First up, Landon and Arvey looked at ratings of test fairness by 57 individuals with graduate education in HRM. The test characteristics included the validity coefficient, the mean test score difference between minority and majority candidates, and test score adjustment for the minority group. Results? Fairness ratings depended almost as much on who was doing the rating as they did on the test characteristics! Implications? "Extensively cross-validated with a global random sample of over 3 million with criterion-related validities exceeding .90!" may not be worth a hill of beans to some folk.

Next, Bertolino and Steiner conducted a similar study of test fairness using responses from 137 university students (ah, where would our research be without students?). Students rated 10 selection methods on 8 procedural justice dimensions. Results? Work sample tests were rated highest (as they usually are), followed by resumes, written ability tests, interviews, and personal references--all things most job seekers expect. What wasn't perceived as well? Graphology--thankfully. The most important predictors of procedural justice were opportunity to perform and perceived face validity. Nothing earth-shatteringly new, but some international confirmation of previous, mostly U.S., findings.

Speaking of international support for the importance of opportunity to perform and face validity among an international sample judging the fairness of selection methods (say that five times fast), Nikolaou and Judge analyzed responses from 158 employees and 181 students in Greece. Methods rated highest (drumroll...): interviews, resumes, and work samples, across both groups. However, students reported more positive reactions to "psychometric tests" (i.e., ability, personality, and honesty tests) than did employees--an important distinction. Also, although there do appear to be individual differences in ratings (see the previous study), core self-evaluations didn't appear to explain much.

In summary: Work samples and interviews--high fairness ratings and (potentially) high validity. A great combination.


Personality testing

First up in the personality section is Kuncel and Borneman's study of a new method for detecting "faking" on personality tests by looking at the particular way "fakers" respond. The authors found support for the method (the sample was not described), with 20-37% of fakers identified. That may not seem like a lot, but the false positive rate (incorrectly labeling someone a faker when they're not) was only 1%, with a base rate of 56% honest respondents. Not too shabby. Interestingly, the "faker" pattern did not correlate with personality or cognitive ability test results.
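To put those rates in perspective, here's a quick back-of-the-envelope calculation (my own illustration, not from the article) showing what they would mean for a hypothetical pool of 100 applicants:

```python
# Hypothetical pool of 100 applicants, using the rates reported above:
# 56% honest base rate, 20-37% of fakers detected, 1% false positive rate.
pool_size = 100
honest = int(pool_size * 0.56)  # 56 honest respondents
fakers = pool_size - honest     # 44 fakers

for detection_rate in (0.20, 0.37):
    flagged_fakers = fakers * detection_rate  # fakers correctly identified
    flagged_honest = honest * 0.01            # honest applicants wrongly flagged
    print(f"At {detection_rate:.0%} detection: {flagged_fakers:.0f} fakers caught, "
          f"{flagged_honest:.1f} honest applicants mislabeled")
```

In other words, the method catches roughly 9 to 16 of the 44 fakers while mislabeling less than one honest applicant--a trade-off many employers would happily accept.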

Second, a very interesting study of the "dark side" of personality by Benson and Campbell. Using two independent samples of managers/leaders (N=1306 and 290), the authors found support for a non-linear (inverted U) relationship between "dark side" scores and assessment center ratings as well as performance ratings. So, for example, having a moderate amount of skepticism or caution is good--but too little or too much creates a problem. The instruments used were the Global Personality Inventory and the Hogan Development Survey.
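The article has the actual analyses, but for the curious, a curvilinear relationship like this is typically tested by adding a squared term to a regression model. A minimal sketch with made-up data:

```python
import numpy as np

# Made-up data: moderate "dark side" scores pair with the best performance ratings.
rng = np.random.default_rng(0)
dark_side = rng.uniform(0, 10, 200)
performance = -0.3 * (dark_side - 5) ** 2 + 7 + rng.normal(0, 1, 200)

# Fit performance = b2*x^2 + b1*x + b0; a negative b2 signals an inverted U.
b2, b1, b0 = np.polyfit(dark_side, performance, deg=2)
peak = -b1 / (2 * b2)  # score at which predicted performance is highest
print(f"Quadratic coefficient: {b2:.2f} (negative = inverted U)")
print(f"Predicted optimum around a score of {peak:.1f}")
```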


Okay, there's a third category

Okay, one more study. de Meijer et al. analyzed results from over 5,000 applicants to Dutch police officer positions. The researchers were interested in rating differences when comparing ethnic minority and non-minority applicants. Results? A similar amount of (if not more) assessment information was used to judge the two groups, but a large number of "irrelevant cues" were used to judge minority applicants. One other difference: when rating minority candidates, assessors relied more on the ratings of others. Another argument for a standardized rating process!

Saturday, June 23, 2007

JobAdWords Update

Update on the JobAdWords Survey

In a previous post I introduced a survey I'm using to answer some questions about the words used in job advertisements. The survey can be taken here, or if that's full, here or here.

Results so far? Based on responses from a mixture of HR professionals and job seekers (I'll break them out when I have a large enough sample):

The words most often seen in job advertisements:

- Motivated
- Flexible
- Customer-focused

The words least often seen in job advertisements:

- Independent
- Smart
- Friendly

The words with the most positive emotional response:

- Conscientious
- Reliable
- Strong work ethic

The words with the least positive emotional response:

- Flexible
- High-energy
- Creative

The words associated with highest probability of applying:

- Professional
- Detail-oriented
- Reliable
- Friendly
- Conscientious

The words associated with the lowest probability of applying:

- High-energy
- Flexible

So here's an interesting question (worthy of further research): do the words in job advertisements cause a reaction because they say something about what the job would be like, or because they cause people to self-assess? Or both? For example, "strong work ethic" received a high emotional response, but was not one of the highest-rated words when it came to applying. Hmmm...

Friday, June 22, 2007

June '07 JOOP

The June 2007 Journal of Occupational and Organizational Psychology is out, and while it has several articles of interest, there's really only one directly related to recruitment/assessment.

In the study, Piasentin and Chapman looked at how perceptions of person-organization (P-O) fit come about--whether they stem from feeling like the organization is similar to you, complements you, or some combination of both.

Using a sample of data from 209 employees "of various occupational and organizational backgrounds", the authors found support for both the similarity and complementarity effects. In addition, perceptions of fit were found to correlate with--and mediate the relationship between perceived similarity/complementarity and--several other important attitudes, including:

- job satisfaction
- organizational commitment
- turnover intention

So what are the implications? How people perceive the match between their own skills, values, and goals and those of the organization matters--and not just to current employees but to applicants as well. Organizations have to make sure they give applicants enough information to make this judgment, however. Too often job seekers are provided with minimal, or irrelevant, information about the position and the organization, such as long lists of tasks. Yes, people want to know what the salary is and where the job is located, but they also want to know who they'll be working with, what their career growth opportunities will be, and what the organization's take on work-life balance is.

This is low-hanging fruit from a staffing perspective, and organizations that get it are providing job seekers with this rich form of information.

...speaking of fit...there's a new book out on the topic, called Perspectives on Organizational Fit edited by Ostroff and Judge. It includes recruitment and selection as topics, but also covers others, such as leadership and teamwork.

Thursday, June 21, 2007

Police depts relax hiring standards

In response to serious recruiting challenges, many U.S. police departments are "lowering" their standards for hiring.

The reasons behind the shortage are many, including a strong job market, the Iraq war, and a high number of retirements.

Departments are using whatever means they have at their disposal, including upping their advertising. Case in point: while driving down 880 the other day in Oakland, CA, I noticed a sign promoting the $69,000 starting salary for Oakland Police Officers (and people wonder why it's hard to hire in the Bay Area).

The article cited above describes many steps departments are taking, some of which may initially seem like cause for concern. Let's take a look at them:

1. Forgiving minor criminal convictions, particularly old ones. If someone got busted 10 years ago for doing Ecstasy in college, and hasn't been in trouble since, is that still relevant?

2. Relaxing the 2-year college degree requirement, or allowing experience substitutions. I'm familiar with some research indicating a relationship between college education and officer performance, but if an officer has relevant experience (and performed well), this seems like a wash.

3. Raising the age limit. The relationship between age and job performance has been a hot topic in I/O psychology for a long time. While there are some declines with age (e.g., working memory), my reading is that they aren't practically significant in most situations. And we're talking about raising the limit to 40 or 44, not 85.

4. Relaxing fitness requirements. To me this comes back to plain ol' validation. Granted, it's not always easy to determine where a pass point should be set (do they have to run 300 meters in 55 seconds or 56 seconds?), but do the study. Find out where a reasonable point would be. Run the numbers (see the sketch below). See if it makes sense.
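For the curious, here's a rough sketch of what "running the numbers" could look like--entirely made-up data, just to illustrate checking validity and then examining pass rates at candidate cutoffs:

```python
import numpy as np

# Made-up validation data: 300-meter run times (seconds) and performance ratings.
rng = np.random.default_rng(1)
run_time = rng.normal(57, 4, 500)
performance = -0.15 * run_time + rng.normal(12, 1, 500)  # faster runners do better

# Criterion-related validity of the fitness measure (negative: lower time = better).
validity = np.corrcoef(run_time, performance)[0, 1]
print(f"Validity (r between run time and performance): {validity:.2f}")

# Pass rate and mean performance of passers at several candidate pass points.
for cutoff in (53, 55, 57, 59):
    passers = run_time <= cutoff
    print(f"Cutoff {cutoff}s: {passers.mean():.0%} pass, "
          f"mean performance of passers = {performance[passers].mean():.2f}")
```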

A lot of the concerns that go along with these changes--hiring people with low integrity, hiring people physically or mentally unable to perform the job--can be mitigated with good assessment, such as memory tests, physical ability testing, integrity testing, and reference and background checks.

Overall, I think this is a good thing--minimum qualifications (MQs) are often barriers to employment for certain ethnicities, women, and individuals with disabilities. And the situation is even worse when the MQs aren't based on any rigorous study of their necessity to begin with.

On the other hand, I have heard anecdotally that similar changes in standards for U.S. Army recruits have resulted in more challenges for training.

What do you think--big deal or not?

Monday, June 18, 2007

2007 IPMAAC Conference Presentations

The 2007 IPMAAC Conference in St. Louis was great this year, with tons of informative and entertaining presentations, including great talks by Wayne Cascio, Bob Hogan, and Nancy Tippins.

The presentations are starting to roll in. Right now there's only one up--Michael Harris' great presentation titled Disparate Impact and Employment Testing: A Legal Update--but I expect more to show up this week and the next, so keep checking.

Next year I am co-chairing the program committee along with Carl Swander of Ergometrics. We're going to attempt to match the quality of this year's conference and plan on putting together an outstanding program. It's never too early to start thinking about presenting at next year's conference, to be held on June 8-11 in beautiful Oakland, California. Mark it on your calendars!

Saturday, June 09, 2007

Project JobAdWords

I'll be in St. Louis at the IPMAAC conference for most of the week, then taking a little time off...In the meantime I thought I would gather a little data...

I know there's research out there that looks at applicant reactions to various aspects of a job advertisement, such as information on selection procedures, pictures, and clear descriptions of work climate. But I'm not aware of any that specifically looks at the effect of WORDS commonly contained in job ads (although there is a recent study about word frequency).

You know the ones I'm talking about:
- "motivated"
- "creative"
- "works well under pressure"
etc.

But what exactly do these words mean to a reader? What reaction do they cause?

Today I'm starting Project JobAdWords, an attempt to answer these questions.

I've created a very brief survey designed to shed a little light on what these words mean. I would greatly appreciate your participation in this project. Simply go to one of the following websites--if survey #1 is full, go to survey #2, if survey #2 is full, please try survey #3:

Survey 1
Survey 2
Survey 3

As soon as I have enough respondents--judged completely arbitrarily and in no way using statistical sophistication--I will post the results. If I get enough results, I may continue this project by looking at other similar issues.

Thank you! Please pass the word.

Thursday, June 07, 2007

March '07 issue of Journal of Business and Psychology

The March issue of the Journal of Business and Psychology has two articles we should take a look at...

1) A study by Topor, Colarelli, and Han of 277 HR professionals found that evaluations of hypothetical job applicants were influenced by the traits used to describe the applicant as well as the assessment method used to measure those traits. Specifically, candidates described as conscientious were rated highest, as were those assessed using an interview--and the combination of the two rated highest of all. (Other traits included intelligence and agreeableness; other assessment methods were a paper-and-pencil test and an assessment center.)

How does this stack up against known evidence? Could be better. In general (and there are variations depending on the job), intelligence has been shown to be the best predictor of job performance. The value of a particular assessment method, on the other hand, depends not only on the job, but on the nature of the instrument as well as the construct being measured. But the research on measuring personality using interviews is relatively new, and certainly not as established as evidence supporting paper-and-pencil measures of cognitive ability (or personality, for that matter). Another example of HR folk not knowing the evidence as well as they should.

2) A meta-analysis by Williams, McDaniel, and Ford of compensation satisfaction research. The authors looked at 213 samples from 182 studies and came up with some interesting results.

- First, measures of satisfaction with direct pay (e.g., pay level, pay raises) were highly related to one another, while satisfaction with benefits showed only modest correlations with direct pay (suggesting people see direct pay and benefits as distinct compensation categories).

- Second, both the perception of the pay raise and the objective amount of the raise were important in predicting pay raise satisfaction. This fits well with what we know about the importance of procedural justice.

- Third, a modest negative relationship was found between employees' share of benefit costs and their benefit satisfaction. This suggests benefit costs may not be a particularly important factor when trying to lure candidates. It also suggests that asking employees to pick up more of their benefit costs may be a viable strategy, as long as attention is paid to other forms of compensation (which apparently have a larger impact on overall satisfaction).

Wednesday, June 06, 2007

Recruiting metrics

One of my readers asked me to address some recruiting metrics, which I think is a good topic to visit.

Although there are some strong feelings out there about the usefulness of metrics, the fact is that traditional measures of recruiting activity are still out there being used, both as measures of success and for planning purposes.

The two benchmarks that were specifically asked about were:

- Number of fills per recruiter
- Number of hours per recruitment

Unfortunately a lot of the benchmark recruiting metrics out there are not free. Several organizations offer detailed benchmarking data, but ya gotta pay for it. Examples include:

- Staffing.org's Recruiting Metrics and Performance Benchmark Report ($425)

- Society for Human Resource Management (SHRM)'s Human Capital Benchmarking Service ($425 for non-members)

- The Saratoga Institute publishes a Human Capital Effectiveness Report

Even so, I've gathered what data I could get my hands on, so let's take them one at a time.


Fills per recruiter

How many fills, or placements, does a typical recruiter make? Of course it depends on the time frame and what type of placement we're talking about, which makes this statistic challenging to nail down. But here's what I found:

- IPMA-HR's 2006 Recruiting and Selection Benchmarking Report: The median number of recruitments per HR staffer was in the 4-6 category, closely bordering on the 7-10 category, so six is probably a good overall estimate given this data.

The other sources I found were more anecdotal. For example:

- JBN Consulting notes a typical recruiter makes .75 placements per month

- Cluff & Associates state: "A dedicated recruiter in a corporate environment can average 8 fills per month if activity levels warrant. Agency recruiters, however, are likely to generate 2 or 3 placements per month."

- A Google search on the topic brings up quite a few recruiter job advertisements, which gives us some idea of the expected workload. In general, 1-3 fills per month seems a standard expectation.

Conclusion? The typical number of fills per recruiter seems to vary quite a bit depending upon the type of job, job market, etc. For example, Dave Lefkow noted a few years back that 100 fills a month for customer service jobs was considered successful, while 5-10 analyst/developer fills a month might be acceptable. If I had to pick an "average" number, I'd go with the IPMA-HR figure of six (although keep in mind that figure comes from public sector organizations).


Hours per recruitment

Fortunately there's a lot more information about benchmarks for how long it takes to conduct a recruitment (here used synonymously with "time to fill"), although here too the figure varies with the type of recruitment:

- Surveys conducted by the Employment Management Association (affiliated with SHRM) consistently show an average time to fill of 42-45 days. Source

- 2004 Workforce Diagnostic System Benchmark by the Saratoga Institute indicated 48 days.

- 2006 SHRM metric for same-industry hires: 37 days (from opening of requisition to offer acceptance). Source. SHRM also offers customized reports (for a fee), available here. Samples indicate median times to fill of 35 and 40 days, depending on industry.

- 2006 Corporate Executive Board survey indicated 51 days. Source: The Economist

- 2006 IPMA-HR benchmark report: average of 49 days between vacancy announcement and first day on the job, with a low of 44 for labor/maintenance and a high of 57 for public safety

- Staffing.org and HR Metrics Consortium's 2003 Recruiting Metrics and Performance Benchmark: 70 days (this includes time between offer acceptance and showing up to work). Source

Conclusion? Again, depends on the job (and how you define the metric), but we can estimate around 40 days between vacancy announcement and offer acceptance, with another 10-20 days between offer acceptance and first day on the job.
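If you want to track this in your own shop, the math is simple once you log the key dates for each requisition. A minimal sketch with hypothetical data (the field names are my own invention):

```python
from datetime import date
from statistics import median

# Hypothetical requisition log: announcement, offer acceptance, and first day.
requisitions = [
    {"announced": date(2007, 1, 8), "accepted": date(2007, 2, 16), "first_day": date(2007, 3, 5)},
    {"announced": date(2007, 2, 1), "accepted": date(2007, 3, 9), "first_day": date(2007, 3, 26)},
    {"announced": date(2007, 3, 12), "accepted": date(2007, 4, 27), "first_day": date(2007, 5, 14)},
]

days_to_accept = [(r["accepted"] - r["announced"]).days for r in requisitions]
accept_to_start = [(r["first_day"] - r["accepted"]).days for r in requisitions]

print(f"Median days, announcement to offer acceptance: {median(days_to_accept)}")
print(f"Median days, acceptance to first day: {median(accept_to_start)}")
```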

If you have additional data, please share! Inquiring minds want to know...

Tuesday, June 05, 2007

Feds host virtual career fair

The U.S. Office of Personnel Management (OPM), in conjunction with the Partnership for Public Service, will host a "virtual career conference" to highlight opportunities with the federal government today, Wednesday, and Thursday. You can view the live webcast here.

The event will kick off at 11 a.m. on June 5th with an overview of federal employment by the Director of OPM, Linda Springer. Subsequent panel sessions will focus on finding and applying for jobs, insight from new federal employees, student programs, IT jobs, opportunities in medicine and public health, how to host a career fair, and several other topics.

According to the news release, videos of all twelve panels will be available through both OPM and the Partnership for Public Service for the remainder of the year. The event is specifically targeted at the 600 colleges and universities around the U.S. that participate in the Call to Serve initiative.

Kudos to these organizations for being creative about highlighting opportunities in the public sector and taking advantage of technology to get the word out.

Monday, June 04, 2007

Summer 2007 Personnel Psychology + free content

The Summer 2007 issue of Personnel Psychology (v. 60, #2) is here and it's got some good stuff, so let's jump right in!

First off is the aptly titled A review of recent developments in integrity test research by Berry, Sackett, and Wiemann, the fifth in a series of articles on the topic. This is an extensive review of research on integrity tests since the last review, which was done in 1996. There's a lot here, so I'll just hit some of the many highlights:

- It appears that integrity tests can vary in their cognitive load depending on which facets are emphasized in the overall score.

- It is likely that aspects of the situation impact test scores (in addition to individual differences); more research is needed in this area.

- Although there have been no significant legal developments in this area since the last review, concerns have been raised over integrity tests being used to identify mental disorders. The authors do not seem concerned, as these tests (e.g., Reid Report, Employee Reliability Index) were not designed for that purpose and thus likely do not violate EEOC Guidelines.

- Research on subgroup scores (e.g., Ones & Viswesvaran, 1998) indicates no substantial differences on overt integrity tests; no research has addressed personality-based tests.

- Test-takers do not seem to have particularly positive reactions to integrity tests, although this appears to depend upon the type of test, items on the test, and response format.

Next, Raymond, Neustel, and Anderson investigate certification exams and whether re-taking the same exam or a parallel form results in different score increases. Using a sample of examinees taking ARRT certification exams in computed tomography (N=79) and radiography (N=765), the authors found no significant difference in score gains between the two types of tests, suggesting exam administrators may wish to re-think the importance of alternate forms for certification, particularly given the cost of development (estimated by the authors at between $50K and $150K). The authors do point out that the generalizability of these results is likely limited by test type and examinee characteristics.

Third, Henderson, Berry, and Matic investigate the usefulness of strength and endurance measures for predicting firefighter performance on physically demanding suppression and rescue tasks. Using a sample of 287 male and 19 female fire recruits hired by the city of Milwaukee, the authors found that both types of measures (particularly strength measures such as the lat pull-down and bench press) predicted a variety of criteria, including a roof ladder placement exercise, axe chopping, and a "combat" test. The authors suggest continued gathering of data to support the use of these types of tests (while acknowledging the ever-present gender differences), and discuss several problems with simulated suppression and rescue tasks, now used by many municipalities in light of previous legal challenges to pure strength and endurance measures.

Lastly, LeBreton et al. discuss an alternate way of demonstrating the value of variables in I/O research. Traditionally, researchers have focused on incremental validity, essentially the amount of "usefulness" a variable adds over the variables already in the equation. (This lets you determine, for example, whether a personality test would help you predict job performance above and beyond the test(s) you already use.) Instead, the authors present the idea of relative importance, which shifts the focus to the importance of each variable in the equation. Fascinating stuff (and far more than I can describe here), and something I'd like to see more of. I believe the authors are correct in stating that it would be much easier to talk to managers about how useful each test in a battery is than about the fact that, overall, the battery predicts 35% of performance. The article includes a fascinating re-analysis of Mount, Witt, and Barrick's 2000 study of the use of biodata with clerical staff.
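For contrast, here's what the traditional incremental validity approach looks like in practice: a hierarchical regression asking how much R-squared a new predictor adds. This is a sketch with made-up data, not the authors' method--relative importance techniques would instead partition the total R-squared among all of the predictors:

```python
import numpy as np

# Made-up data: a cognitive test, a personality test, and job performance.
rng = np.random.default_rng(2)
n = 400
cognitive = rng.normal(size=n)
personality = 0.3 * cognitive + rng.normal(size=n)  # correlated predictors
performance = 0.5 * cognitive + 0.3 * personality + rng.normal(size=n)

def r_squared(X, y):
    """R-squared from an ordinary least squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return 1 - residuals.var() / y.var()

r2_cognitive = r_squared(cognitive.reshape(-1, 1), performance)
r2_both = r_squared(np.column_stack([cognitive, personality]), performance)
print(f"R^2, cognitive test alone: {r2_cognitive:.3f}")
print(f"Delta R^2 from adding personality: {r2_both - r2_cognitive:.3f}")
```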

----

This issue also includes reviews of several books, including the third edition of Ployhart, Schneider, and Schmitt's Staffing Organizations (conclusion: good but not great), Weekley and Ployhart's Situational Judgment Tests (conclusion: good as long as you already know what you're doing), and Griffith and Peterson's A Closer Examination of Applicant Faking Behavior (conclusion: good for researchers, not so good for managers).

---

But wait, there's more...the Spring 2007 issue, which had some interesting stuff as well, is free right now! So get those articles while you can. Hey, it's worth surfing over there just for McDaniel et al.'s meta-analysis of situational judgment tests!

Saturday, June 02, 2007

Google buys FeedBurner; introduces Street View

A couple news items today about Google.

First, they continued their buying spree and purchased FeedBurner, a popular feed syndication service (and the one I use). This will allow Google to advertise in new ways, and it also gives them something of an end-to-end blogging platform now that they own both Blogger and FeedBurner. More information about the purchase is available on FeedBurner's blog, and they have this FAQ. Presumably this means a marriage of their analytic tools, which hopefully is good news for anyone who publishes content.

Implication for us? If you already publish a blog (or other type of feed) or are thinking about it, you'll want to keep close tabs on what this purchase will mean for publishers (e.g., ad distribution, fees, etc.).

Second, Google has introduced Street View, an add-on to Google Maps that allows for 360-degree viewing at the street level. You can even follow arrows that will lead you down the street. Right now it's available only in San Francisco, Las Vegas, Denver, New York, and Miami.

Why do we care? Seems to me a good way to preview the local area for potential applicants. I'm all about the realistic job previews. Also, probably a good opportunity for us to see how our area is being presented--what do candidates see when plopped down in front of our buildings? Would you want to be there?

Friday, June 01, 2007

Becoming passive employers

Let's take a moment and think about what job search could be.

Right now, job search is static. Someone searches for a job, and either a vacancy exists or it doesn't. But what if we were a little more creative?

What if instead of getting "zero results for your search", the candidate received something like:

There are no current openings that match your search. However, the following positions exist that may have openings in the future.

What followed would be a detailed description of current positions in the organization that matched the search criteria--jobs people actually hold. And you would allow people to submit a job interest request so they would be notified when that job (or a similar job) became open. Yes, some systems already have job interest requests, but too often they're based on broad job titles and fail to provide the rich information a job seeker needs (e.g., who they will work with, learning opportunities).
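Here's a hypothetical sketch of what that search fallback might look like (the function, names, and fields are all invented for illustration):

```python
# Hypothetical "passive employer" search: if no current openings match, return
# real (but filled) positions that may open later, and record the seeker's
# interest so they can be notified when one does.
def search_jobs(keyword, positions, interest_requests, seeker_email):
    matches = [p for p in positions if keyword.lower() in p["title"].lower()]
    open_now = [p for p in matches if p["is_open"]]
    if open_now:
        return open_now
    for p in matches:
        # Job interest request: notify this seeker if the position opens up.
        interest_requests.setdefault(p["title"], []).append(seeker_email)
    return [dict(p, note="no current opening; may have openings in the future")
            for p in matches]

positions = [
    {"title": "Staff Attorney", "is_open": False},
    {"title": "Paralegal", "is_open": True},
]
requests = {}
print(search_jobs("attorney", positions, requests, "seeker@example.com"))
print(requests)  # {'Staff Attorney': ['seeker@example.com']}
```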

What else could we do with this feature? We could profile the individuals that are in the current job. Okay, maybe not everyone, but a sample. At the very least we could provide a basic job description (and not a boring one).

This idea fits with a concept I think we all need to focus more on. In addition to seeking passive candidates, we should be passive employers. Passive job seekers aren't looking for a job, but they could be. Passive employers don't have that particular opening--but they could. But unless you tell candidates that, how will they know? How do they know that a perfect match exists in your organization, and that if they had just waited another week to search, they would have seen it?

Why do we make applicants the servants of the ATS, not the other way around?

Let's take this a step further.
Let's say I'm an attorney in Seattle looking to relocate to Boston. I know I'd like to work for a smallish firm with decent billable hours, and co-workers who know their stuff and are good at their jobs but value work-life balance.

How the HECK am I supposed to find that firm? Sure, I can look for current vacancies on job boards. Or maybe I just happen to know someone who works for such a firm and they have an opening. Or I might be able to find some information through a Google search or services such as Vault or Hoover's (although that information is very limited, you still have to know the company name, and information on public sector agencies is anemic). But that'll only get me so far. Then what?

There is no general database of employer qualities to search through (sites like Jobfox are trying a similar idea, but it's still based on vacancies). No easy way to punch in the above criteria and have a system spit out, "Here are all the firms that meet your criteria. Here are the ones that currently have openings, and here are the ones that don't currently but may in the future."

People search is getting more and more sophisticated. What about employer search? If we expect applicants to take an active role in managing their career, we should give them the information they need to do it. We can, and should, do better.