Tuesday, December 28, 2010

Relax, researchers


Researchers make a lot of choices: What to study. How to measure. What statistic(s) to use. Researchers conducting meta-analyses (essentially a study of studies) face similar choices. So it's nice to know that any agonizing that meta-analysts go through may be largely unimportant in terms of the results.

In the January 2011 issue of the Journal of Management, Aguinis et al. report the results of their study of 196 meta-analyses published in places like Journal of Applied Psychology, Academy of Management Journal, and Personnel Psychology. Specifically, they looked at the impact that various methodological choices had on the overall effect sizes. Things like:

- Should we eliminate studies from the database?
- Should we include a list of the primary-level studies used in the meta-analysis?
- What meta-analytic procedure should we use?

Here, in their words, were the results:

"Our results, based on 5,581 effect sizes reported in 196 published meta-analyses, show that the 21 methodological choices and judgment calls we investigated do not explain a substantial amount of variance in the meta-analytically derived effect sizes. In fact, the mean effect size...for the impact of methodological choices and judgment calls on the magnitude of the meta-analytically derived effect sizes is only .007. The median [effect size] value is an even smaller value of .004."

So not only does this suggest researchers should spend less time worrying about methodological choices, it raises a question about the value of including too much of this history in published articles--if it doesn't make any difference, do I really need to read about why you chose to eliminate studies due to independence issues or missing information?

The article has another, perhaps even more interesting, finding. And it's something that we personnel research nerds like to argue over: corrections. Things like correcting for range restriction (e.g., because you only hire people that get the top scores) and criterion unreliability (e.g., because supervisor performance ratings are less than perfect). For every "corrections were necessary to understand the conceptual-level relationships" you'll get a "that's great, but in the real world the data's the data."

So that's why it's fairly amusing to note the differences these types of corrections tended to make. Again, in the authors' own words:

"across 138 meta-analyses reporting approximately 3,000 effect sizes, the use of corrections improves the resulting meta-analytically derived effect size by about .05 correlation coefficient units (i.e., from about .21 to about .25). Stated differently, implementing corrections for statistical and methodological artifacts improves our knowledge of variance in outcomes by about only 25%."

So are we all going to stop arguing and hold hands? Not likely, for a variety of reasons, including the fact that opinions are hard to change, and social science researchers are not immune to that. You could also argue that in some cases an increase from .21 to .25 has significant implications--and to be sure that's true. But I agree with the authors that the number of cases where this greatly increases the practical usefulness of a theory is small.

Does this mean we should forget about refining our results to make them more accurate? Absolutely not. It just means that our current approaches may not warrant a lot of hand-wringing. It also means we should focus on high-quality primary studies to avoid the garbage-in, garbage-out problem.

So let's take a deep breath, congratulate ourselves for a productive year, and look forward to 2011, which I'm sure will bring with it all kinds of exciting and thought-provoking developments. Thanks for reading.

By the way, you can access the article (at least right now) here.

Sunday, December 12, 2010

Haste makes waste


"Here you go, way too fast
don't slow down you're gonna crash
you should watch -- watch your step
if you don't look out you're gonna break your neck"
- The Primitives

There are many pieces of advice I could give the average employer when it comes to recruiting and hiring the right way. Make your job ads more attractive and concise. Use realistic job preview technology. Conduct thorough job analyses. Reduce your over-reliance on interviews.

But I'd be hard pressed to come up with a more important piece of advice than this: SLOW DOWN.

Too often organizations rush through the recruitment and selection process, relying on past practice and not giving it the attention it deserves. The result is often poor applicant pools and disappointing final selection choices.

Here are some classic warning signs things are going the wrong way:

"We don't have time to re-do the advertisement"
"Let's just do interviews like we did last time"
"The questions we used last time will be fine"

When you hear these types of comments, your reaction should be: "Because they worked so well last time?" Okay, maybe that's what you think. What you say is: "What information do we have about their prior success?" Hiring decisions are too important to be left to hunches and cognitive laziness--we all know this. Yet it's surprising how often folks fail to put in the effort they should.

Why do people fall into this trap? Mostly because it's easier that way (although naivete and lack of organizational processes play a role). Decision makers naturally gravitate toward the path of least resistance (we do love our heuristics), and it takes resilience to put in the effort each time. But it's not just because humans are lazy. It's because we're busy, and because other factors tend to overshadow sound selection--like organizational politics or feeling that someone is "owed" the job.

Decision making as a field of study tends to be overlooked when it comes to hiring, and that's a shame (ever heard of escalation of commitment?). Fortunately there is a large body of research we can learn from, and this cross-fertilization is the subject of the first focal article by Dalal, et al. in the December 2010 issue of Industrial and Organizational Psychology. They point out that the field of I/O psychology is hurting itself by not taking advantage of the theories, methods, and findings from the field of judgment and decision-making.

One of their main recommendations is to make "a concerted effort to consider the benefits of adopting courses of action other than the favored one." This could mean things like making devil's advocates and group decision making an automatic part of your hiring plan. It could even be as simple as a checklist that hiring supervisors fill out to ensure they're not rushing it. Or--heaven forbid--we could hold supervisors accountable for the quality of their hiring decisions.

There are several interesting commentaries following the article, making many different points. One of my favorites is from Kristine Kuhn, who I'll now quote from liberally:

"...evidence-based recommendations to use statistical models to select employees rather than more holistic, subjective assessments meet substantial resistance. The ambiguous criterion of "fit" is advanced by many experienced practitioners as a reason for not relying solely on validated predictive indices."

"Despite considerable evidence that typical interviews do not add predictive validity, managers often resist attempts to impose even minimal structure." (Consider this the next time you follow-up a structured interview with a more casual unstructured one)

"Some managers may be receptive to training and even willing to implement structural changes in selection procedures. But this will only be the case if the primary goal is in fact to hire the people most to likely to perform well and not those with whom they will be most comfortable interacting."

Amen.

As for me, I'm going to make a New Year's resolution to take more deep breaths and slow down. Most involved in hiring would do well to do the same.

I should point out there is another focal article in this issue by Drasgow, et al. that you methodology folks will like. In it, the authors argue that rating scale methods derived from Likert's approach (the 5-point response scale) are inferior to ones that have evolved from the (older) ideas of Thurstone.

In a nutshell, the authors describe how the latter approach focuses on an "ideal point" that describes an individual's standing on a particular trait. It involves, as part of the rating scale design, asking people to provide ratings that might seem unnaturally forced or incongruous (do you like waffles or Toyotas?). But the authors argue strongly that this approach offers tangible improvements for things like personality inventories.

The commentators are...shall we say...skeptical. But the back-and-forth makes for some interesting reading if this is your cup of tea.

Saturday, December 04, 2010

More on personality: Empathy and genetic links

The research on personality inventories continues unabated with two new studies.

The first, by Taylor et al. in the December 2010 JOOP, found that empathy plays an important role in explaining the relationship between Big 5 traits and organizational citizenship behaviors.

The second, by McCrae, et al. in the December 2010 Journal of Personality and Social Psychology, was an attempt to better explain the genetic underpinnings of personality. The effects found in this study were small but significant, suggesting further research is needed to better understand this relationship.

Speaking of personality, don't miss Bob Hogan's most recent post to his blog; it features a wonderfully simple explanation of the value of even "small" correlations between assessment instruments and job performance.

Sunday, November 28, 2010

Research potpourri


Here's a very quick round-up of several recent pieces of research:

Bott et al. on how individual differences and motivation impact score elevation on non-cognitive measures (e.g., personality tests)

Ackerman, et al. on cognitive fatigue during testing

Robie, et al. on the impact of coaching and speeding on Big Five and IM scores

Evers, et al. on the Dutch review process for evaluating test quality (pretty cool; in press version here)

And now, back to figuring out why I eat too much on Thanksgiving. Every year.

Saturday, November 20, 2010

New issues of Personnel Psych and IJSA: Muslim applicants, selection perceptions, and more

Two of the big journals have come out with new issues, so let's take a look at the highlights:

First, in the Winter 2010 issue of Personnel Psychology:

King and Ahmad describe the results of several experiments in which interviewers and raters altered their behavior toward confederates exhibiting obvious Muslim identification (e.g., clothing), depending on whether the applicants exhibited stereotype-inconsistent behavior. For those who didn't, interactions were shorter and more negative. On the other hand, no difference in offers was found between those dressed in Muslim-identified clothing and those who weren't. So behavior--specifically its stereotypicality--and not simply something obvious like dress, may be key in predicting/preventing discriminatory behavior.

Do you consistently read the "Limitations" section of journal articles? Brutus et al. did, for three major I/O journals from 1995 to 2008 and found that threats to internal validity were the most commonly reported limitation. Interestingly, they also found that the nature of limitations reported changed over time (e.g., more sampling issues due to volunteers, variance issues). You can see an in press version here.

Next up, Henderson reports impressive criterion-related validity (combined operational = .86) for a test battery consisting of a g-saturated exam and a strength/endurance exam after following a firefighter academy class for 23 years. He suggests that employers have considerable latitude in choosing exams as long as they are highly loaded on these two factors, and also suggests approximately equal weighting.

Struggling to communicate the utility of sound assessment? Last but not least, Winkler et al. describe the results of sharing utility information with a sample of managers and found that using a causal chain analysis--rather than simply a single attribute--increased understanding, perceived usefulness, and intent to use.


Let's switch now to the December 2010 issue of the International Journal of Selection and Assessment (IJSA). There's a lot of great content in this issue, so take a deep breath:

Interested in unproctored internet testing? You'll want to check out Guo and Drasgow's piece on verification testing. The authors recommend using a Z-test over the likelihood ratio test for detecting cheating, although both did very well.
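For readers wondering what that looks like in practice, here's a minimal sketch of the general logic of a verification Z-test. This is my own simplification, not the authors' exact statistic, and the ability estimates and standard errors below are made up: if the candidate responded honestly both times, the unproctored and verification scores should differ only by measurement error.

```python
from math import sqrt

def verification_z(theta_unproctored, theta_verified, se_unproctored, se_verified):
    """Standardized difference between the unproctored score and the proctored
    verification score; approximately standard normal under honest responding."""
    return (theta_unproctored - theta_verified) / sqrt(se_unproctored**2 + se_verified**2)

# Hypothetical candidate who scored much higher at home than at the verification session
z = verification_z(1.10, 0.20, 0.30, 0.35)
print(round(z, 2))  # 1.95 -- a one-tailed test at alpha = .05 (z > 1.645) would flag this score
```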

Walsh et al. discuss the moderating effect that cultural practices (performance orientation, uncertainty avoidance) have on selection fairness perceptions.

Speaking of selection perceptions, Oostrom et al. found that individual differences play a role in determining how applicants respond--particularly openness to experience. The authors recommend considering the nature of your applicant pool before implementing programs to improve perceptions of the assessment process.

Those of you interested in cut scores (and hey, who isn't) should check out Hoffman et al.'s piece on using a difficulty-anchored rating scale and the impact it has on SME judgments.

Back to perceptions for a second, Furnham and Chamorro-Premuzic asked a sample of students to rate seventeen different assessment methods for their accuracy and fairness. Not surprisingly, panel interviews and references came out on top in terms of fairness, while those that looked the most like a traditional test (e.g., drug, job knowledge, intelligence) were judged least accurate and fair. Interestingly, self-assessed intelligence moderated the perceptions (hey, if I think I'm smart I might not mind intelligence tests!).

And now for something completely different (those of you that get that reference click here for a trip down memory lane): a study by Garcia-Izquierdo et al. of the information contained in online job application forms from a sample of companies listed on the Spanish Stock Exchange. A surprisingly high percentage of firms asked for information on their applications that at best would be off-putting and at worst could lead to lawsuits, such as age/DOB, nationality, and marital status. The authors suggest this area of e-recruitment is ripe for scientist-practitioner collaboration.

Last but not least, a piece that ties the major topics of this post together: selection perceptions and recruitment. Schreurs et al. gathered data from 340 entry-level applicants to a large financial services firm and found that applicant perceptions, particularly of warmth/respect, mediated the relationship between expectations and attraction/pursuit intentions. This reinforces other research underlining the importance of making sure organizational representatives put the organization's best foot (and face) forward.

Friday, November 12, 2010

Observer ratings of personality: An exciting possibility?

Using measures of personality to predict job performance continues to be one of the most active areas in I/O psychology. This is due to several things, including research showing that personality measures can be used to usefully predict job performance, and a persistent interest on the part of managers in tools that go beyond cognitive measures.

Historically the bulk of research on personality measures used for personnel assessment has been done using self-report measures—i.e., questionnaires that individuals fill out themselves. But there are other ways of measuring personality, and a prominent method involves having other people rate one’s personality. This is known as “other-rating” or “observer rating.”

Observer ratings are in some sense similar to self-report measures that ask the rater to describe their reputation (such as the Hogan Personality Inventory) in that the focus is on visible behavior rather than internal processes; in the case of observer ratings this takes on even greater fidelity since there is no need to “guess” at how behavior is perceived.

Research on observer ratings exists, but historically has not received nearly the attention given to self-ratings. Fortunately, in the November 2010 issue of Psychological Bulletin, Connelly and Ones present the results of several meta-analyses with the intent of clarifying some of the questions surrounding the utility of observer ratings and, with some caveats, largely succeed. The study was organized around the five-factor model of personality, namely Openness to Experience, Conscientiousness, Extraversion, Agreeableness, and Emotional Stability.

The results of three separate meta-analyses with a combined 263 independent samples of nearly 45,000 individuals yielded several interesting results, including:

- Self- and other-ratings correlate strongly, but not perfectly, and the correlations are stronger for the more visible traits, such as Extraversion and Conscientiousness, than for less visible ones (e.g., Emotional Stability).

- Each source of rating seems to contribute unique variance, which makes sense given that you’re in large part measuring two different things—how someone sees themselves, their thoughts and feelings, compared to how they behave. Which is more important for recruitment and assessment? Arguably other-ratings, since we are concerned primarily not with inner processes but with behavioral results. And it certainly is not uncommon to find someone who rates themselves low on a trait (e.g., Extraversion) but is able to “turn it on” when need be.

- Self/other agreement is impacted by the intimacy between rater and ratee, primarily the quality of observations one has had, not simply the quantity. This is especially true for traits low in visibility, such as Emotional Stability.

- Inter-rater reliability is not (even close to) perfect, so multiple observers are necessary to overcome individual idiosyncrasies. The authors suggest gathering data from five observers to achieve a reliability of .80 (see the sketch following this list).

- Observer ratings showed an increased ability to predict job performance compared to self-ratings. This was particularly true for Openness, but for all traits except Extraversion, observer ratings out-predicted self-ratings.

- Just as interesting, observer ratings added incremental validity to self-ratings, but the opposite did not hold true. This was especially true for Conscientiousness, Agreeableness, and Openness.

- Not only were observer ratings better at predicting job performance, the corrected validities were considerably higher than have been documented for self-report measures, and in one case—Conscientiousness—the validity value (.55) was higher than that reported previously for cognitive ability (.50), although it should be noted that the corrections were slightly different. Values before correcting for unreliability in the predictor were significantly lower.
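On the reliability point a few bullets up, the five-observer suggestion follows from the Spearman-Brown prophecy formula. Here's a quick sketch; the single-observer reliability of roughly .45 is a value I'm plugging in for illustration rather than one quoted from the article:

```python
def spearman_brown(single_rater_r, k):
    """Spearman-Brown prophecy: reliability of the average of k parallel raters."""
    return (k * single_rater_r) / (1 + (k - 1) * single_rater_r)

# Assuming a single-observer reliability of about .45 (illustrative value only)
for k in (1, 2, 5, 10):
    print(k, round(spearman_brown(0.45, k), 2))
# prints: 1 0.45 / 2 0.62 / 5 0.8 / 10 0.89
```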

Why do observer ratings out-perform self-ratings? Perhaps other-ratings are more related to observable job performance. Perhaps they do not contain as much error in the form of bias (such as self-management tendencies). Or perhaps because the measure is completely contextualized in terms of work behavior, whereas the questions in most self-report measures are not restricted to the workplace.

Despite these exciting results, some caution is in order before getting too carried away. First, the authors were unable to obtain a very large sample for observer ratings—sample sizes were around 1,000 for each trait. Second, there are currently a limited number of assessment tools that offer an observer rating option—the NEO-PI being a prominent example. Finally, there is an obvious logistical hurdle in obtaining observer ratings of personality. It is difficult to see how in most cases an employer would obtain this type of data to use in assisting with selection decisions.

However, even with that said the results are promising and we know of at least one way that technology may enable us to take advantage of this technique. What we need is a system where trait information can be gathered from a large number of raters relatively quickly and easily and the information is readily available to employers. One technology is an obvious solution, and it’s one of my favorite topics: social networking websites. Sites like Honestly (in particular), LinkedIn, and (to a lesser extent) Facebook have enormous potential to allow for trait ratings, although the assessment method would have to be carefully designed and some of the important moderators (such as degree of intimacy) would have to be built in.

This is an intriguing possibility, and raises hope that employers could soon have easy access to data that would allow them to add substantial validity to their selection decisions. But remember: let's not put the cart before the horse. The question of what assessment method to use should always come after an analysis of what the job requires. Don't jump headlong into other-ratings of personality when the job primarily requires analytical and writing skill.

This is just another tool to put in your belt, although it’s one that has a lot of exciting potential.

Saturday, November 06, 2010

Multimedia on the cheap: Xtranormal and Toondoo

When it comes to recruiting, one of the most important things an organization can do is make itself stand out from other employers. And one of the best ways to do this is by using multimedia technologies that demonstrate both creativity and an openness to innovative approaches.

I've written before about some of the exciting technologies out there that employers can use for this purpose, such as simulations and realistic videos. But today I'd like to talk about two more abstract--but perhaps more fun--websites that are easy to use and have a lot of potential.

Oh yes, and they're free (for the basic version). Free is good.

The first is Xtranormal.com. You may have seen some of these videos on YouTube or elsewhere, as it seems to rapidly have become the technology of choice for creating quick animated videos. They have both a web-based design option ("text-to-movie") as well as a more fully featured downloadable version called State.

As the designer, you simply choose a setting, the number of actors, and type in the script. The website adds computer-generated voices for you. You also have control over other features, such as the camera angles for each scene, emotional expressions, and sounds.

Now I'm no designer--as you will quickly see--but I was able to make this video in all of about 10 minutes*.



If you're intrigued, also try GoAnimate.com, which is similar but which I found to be slightly more cumbersome to use.

The other technology is slightly more old school--panel cartoons. Yes, like the ones you see in the paper.

Here's an example, which took me about 10 minutes using Toondoo:

[Toondoo cartoon: "Recruit1"]


Neither of these websites is perfect. You'll find there's a small learning curve, and you'll wonder why they made certain design decisions. But there really is no reason not to at least try some of these tools.

*Note: In my day job I'm an HR Manager for the California Attorney General's office, but this blog is neither sanctioned nor supported by my employer. For better or worse, these animations were entirely my creation. But what kind of recruiter would I be if I didn't use this opportunity to promote, promote, promote!

Tuesday, November 02, 2010

November '10 J.A.P.

The November 2010 issue of the Journal of Applied Psychology is out, so let's take a look at the relevant articles:

Woods & Hampson report the results of a fascinating study looking at the relationship between childhood personality and adult occupational choice. The participants (N ~ 600) were given a 5-factor personality inventory when they were between 6 and 12 years old, then reported their occupation 40 years later using Holland's vocational types. Results? Openness/Intellect and Conscientiousness scores were correlated with occupational choice as adults. Implication? Career choice may be determined fairly early on in some cases and be influenced by how we're hard-wired as well as through friends, family, etc.

Perhaps even more interesting, the authors found that for the most strongly sex-typed work environments (think construction, nursing), the results related to Openness/Intellect were moderated by gender, supporting the idea that gender stereotyping in these jobs is impacted by individual differences as well as other factors (e.g., cultural norms).


Next up, a study by Maltarich, et al. on the relationship between cognitive ability and voluntary turnover, with some (what I thought were) counter-intuitive results. The authors focused on cognitive demands as the moderating factor, so think for a second about what you would expect to find for a job with high cognitive demands (think lawyer)--who would be most likely to leave, those with high cognitive ability or those with low? I naturally assumed the latter.

Turns out it was neither. The authors found a curvilinear relationship, such that those low and high in cognitive ability were more likely to leave than those in the middle.

What about jobs lower in cognitive demands? Who would you expect to have higher voluntary turnover--those with high cognitive ability or lower cognitive ability? I assumed high, and again I was wrong. Turns out the relationship the authors found in that case was more of a straight negative linear relationship: the higher the cognitive ability, the less likely to leave.

What might explain these relationships? Check out I/O at Work's post about this article for more details including potential explanations from the authors. It certainly has implications for selection decisions based on cognitive ability scores (and reminds me of Jordan v. New London).


Do initial impressions of candidates matter during an interview? The next article, by Barrick et al., helps us tease out the answer. The authors found that candidates who made a better impression in the opening minutes of an interview received higher interview scores (r=.44) and were more likely to receive an internship offer (r=.22). Evaluations of initial competence impacted interview outcomes not only with the same interviewer, but with separate interviewers, and even separate interviewers who skipped rapport building.

But perhaps more interestingly, the authors found that assessments of candidate liking and similarity were not significantly related to other judgments made by separate interviewers. Thus, while these results support the idea that initial impressions matter, they also provide strong support for using a panel interview with a diverse makeup, so that bias unrelated to competence is less likely to influence selection decisions.


Finally, Dierdorff et al. describe the results of a field study that looked at who might benefit most from frame-of-reference training (FOR). As a reminder, FOR is used to provide raters with a context for their rating and typically involves discussing the multi-dimensional nature of performance and rating anchors, and conducting practice sessions with feedback to increase accuracy and effectiveness (Landy & Conte, 2009).

In this case, the authors were interested in finding out whether individual differences might impact the effectiveness of FOR. What they found was that the negative impact of a performance-avoidance motivational tendency can be mitigated by higher levels of learning self-efficacy. In other words, FOR training may be particularly effective for individuals who believe they are capable of learning and applying training, and overall results may be enhanced by encouraging this belief among all the raters.


References:

Landy, F. and Conte, J. (2009). Work in the 21st century: An introduction to industrial and organizational psychology (Third Edition). Hoboken, NJ: Wiley-Blackwell.

Monday, October 25, 2010

Don't ask candidates to judge themselves


Imagine you're buying a car. The salesperson throws out a price on the car you're interested in. And here are the questions you ask to try to determine whether it's a good deal:

- Is this a good price?
- How good of a salesperson are you?
- Compared to other sales you've made, how good is this one?

Think this is silly? Well it's essentially what many employers are doing when they interview people or otherwise rely on descriptions of experience when screening. They rely way too heavily on self-descriptions when they should be taking a more rigorous approach. Think of questions like these, which should be stricken from your inventory:

"What's something you're particularly good at?"

"How would you describe your skills compared to other people?"

There are two main problems with asking these types of questions in a high-stakes situation like a job interview (or buying a car):

1) People are motivated to inflate their answers, or just plain lie, in these situations. You know that. I know that. But it's surprising how many people forget it.

2) People are bad at accurately describing themselves. We know this from years of research, but if you're interested, check out a recent study published in the Journal of Personality and Social Psychology that compared Big 5 personality ratings from four countries and found that people generally hold more favorable opinions of themselves compared to how others see them.

But it's even worse than it appears. It's not just that people inflate themselves, it's that some of your best candidates deflate themselves. Think about your star performers: if you asked them how good they were in a particular area, what do you think they'd say?


You essentially want to know two things about candidates:
1) What they've done
2) What they're capable of doing


To answer the first issue, you have several options, including:

1a) Asking them to describe what they've done--the so-called "behavioral interviewing" technique. Research shows that these types of questions generally contribute a significant amount of validity to the process. But they're not perfect by any means, particularly with candidates who have poor memories of their own behavior. And keep in mind that at that point you're taking their word for it.

1b) Asking them for examples of what they've done. Best used as a follow-up to a claim, but tricky in any situation where there's even a remote possibility that someone else did it or did most of it (so practically everything outside of the person being videotaped).

1c) Asking others (e.g., co-workers, supervisors) what the candidate's done. Probably the most promising but most difficult data to accurately capture. Hypothetically if the person has any job history at all they've left a trail of accomplishments and failures, as well as a reliable pattern of responding to situations. This is the promise of reference checks that so often is either squandered ("I don't have time") or stymied ("They just gave me name, title, and employment dates"). Don't use these excuses, investigate.

As for the second issue, you have several options as well, including:

2a) Asking knowledge-based questions in an interview. For whatever reason these seem to have fallen out of favor, but if there is a body of knowledge that is critical to know prior to employment, ask about it. At the worst you'll weed out those who have absolutely no idea.

2b) Using another type of assessment, such as a performance test/work sample, on-site writing exercise, role play, simulation, or written multiple choice test (to name a few). Properly developed and administered, these will give you a great sense of what people are capable of--just make sure the tests are tied back to true job requirements.

2c) Using the initial period of employment (some call it probation) to throw things at the person and see what they're capable of. It's important not to test their ability to deal with overload (unless that's critical to the job), but to get them involved in a diverse set of projects. Ask for their input. Ask them to do some research. See what they are capable of delivering, even if it's a little rough.


Whatever you do, triangulate on candidate knowledge, skills, and abilities. Use multiple measures to get an accurate picture of what they bring. Consider using an interview for a two-way job preview as much as an assessment device.

But above all, don't take one person's word for things. Unless you like being sold a lemon.

Wednesday, October 20, 2010

Unvarnished now Honestly.com, opens up, rates Meg and Carly


Unvarnished, the Web 2.0 answer to reference checks that I've written about before, has changed its name to the more web-friendly Honestly.com.

But that's not all. They also unveiled two other big changes:

1) $1.2m in seed funding to hire more engineers and to do more product development.

2) The site is now available to anyone 21 years or older with a Facebook account (due to the focus on professional achievements, not inappropriate content). Previously it was invitation-only via existing users.


The other interesting development is that they've waded into politics. You can now see what previous co-workers are saying about California gubernatorial candidate (and former eBay CEO) Meg Whitman as well as senatorial candidate (and former head of HP) Carly Fiorina.

So how are they doing? Last time I checked, Meg had an overall rating of 3 out of 5 stars, with ratings of 7, 6, 6, and 5 (out of 10) for Skill, Relationships, Productivity, and Integrity, respectively. Carly has an overall rating of 2.5 out of 5, with ratings on the dimensions of...all ones. That could be because she only has 3 raters so far compared to Meg's 20. No word yet on whether Jerry Brown (Whitman's opponent) and Barbara Boxer (Fiorina's opponent), both with long careers in public service, will have pages.

Don't (presumably irrelevant) political opinions taint the ratings? It's a strong possibility. But the reviews I've read were surprisingly balanced, which is what the site owners are seeing as a general pattern. Will it stay this way? Only time will tell. I suspect the ratings have much to do with the current user community.

Overall I'm a fan of the name change, although there was something refreshingly complex about Unvarnished. I can't see or hear the word "honestly" without thinking of Austin Powers. Mostly that's a good thing, and I think these changes will be too.

Saturday, October 16, 2010

Q&A with Piers Steel: Part 2

Last time I posted the first part of my Q&A with Piers Steel, co-author of a recent piece in Industrial and Organizational Psychology (that I wrote about here) on synthetic validity and a fascinating proposition to create a system that would greatly benefit both employers and candidates. Read on for the conclusion.

Q4) Describe the system/product--what does it look like? For applicants? Employers? Governments?

A4) How do we do it? Well, that’s what our focal article in Perspectives on Science and Practice was about. Essentially, we break overall performance into the smallest piece people can reliably discern, like people, data, things (note: our ability to do this got some push back from one reviewer – that is, he was arguing we can’t tell the difference if people are good at math but not good at sales and vice-versa – it is a viewpoint that became popular because researchers assumed that “if it ain’t trait, it’s error”). We get a good job analysis tool that assesses every relevant aspect of the job, such as job complexity. We get a good performance battery, naturally including GMA and personality. We then have lots of people in about 300 different jobs take the performance battery, have their performance on every dimension as well as overall assessed to a gold standard (i.e., train those managers!), and have their jobs analyzed with equal care with that job analysis tool. From that, we can create validity coefficients for any new job simply by running the math. It is basically like validity generalization plus a moderator search, where once we know the work, we can figure out the worker. Again, read the article for more details, but this was basically it.

Once built, all employers need to do to get a top-notch selection system is describe their job using the job analysis tool, and then, as fast as electrons fly through a CPU, you get your selection system, essentially instantly. It is several orders of magnitude better than what we have now on almost every criterion.

Q5) What are the benefits--to candidates, employers, and society?

A5) Everyone has had a friend who struggled through life before finding out what they should have been doing in the first place. Or changed jobs for a new company only to find they hated it there. Or never found anything they truly excelled at and just tried to live their lives through recreational activities. Everyone has experienced lousy service or botched jobs because the employee wasn't in a profession they were capable of excelling in. Everyone has heard of talented people who were down and out because no one recognized how good they really were.

Synthetic validity is all about making this happen less. How much less is the real question. If we match people to jobs and jobs to people wonderfully now, then perhaps not at all. But of course, we know that presently it's pretty terrible.

Now synthetic validity won’t be able to predict people’s work future perfectly, but it will do a damn sight better job than what we have now. Also, the best thing about synthetic validity is that it is going to start off good and then get better every year. Because it is a consolidated system, incremental improvements, “technical economies,” are cost effective to pursue and once discovered and developed, they are locked in every time synthetic validity is used.

Right now, we have a system that can only detect the largest and most obvious of predictors (e.g., GMA) because of sample size issues, but can’t pursue other incremental predictors because they aren’t cost effective for just one job site. By the very nature of selection today, we are never going to get much better. As I mentioned, nothing major has changed in 50 years and nothing major will change in the next 50 if we continue with the same methodology. Synthetic validity is a way forward. With synthetic validity, the costs are dispersed across all users, potentially tens of millions, making every inch of progress matter.

So, what will we get? Higher productivity. If synthetic validity results in just a few thousand dollars of extra productivity per employee each year, multiply that by 130 million, the US work force. Take a second to work out the number – it's a big number.

Also, people should be happier in their jobs too, creating greater life satisfaction. They will stay in their jobs longer, creating real value and expertise. Similarly, unemployment will go down as people more rapidly find new work appropriate to their skills. In fact, I can only think of one group that won’t like this – the truly bad performer. They are the only group that wouldn’t want better selection.

Q6) Finally, what do you need to move forward?

A6) So far, no one I know is doing this. There are some organizations that think they are doing synthetic validity, though it is really just transportability, and they aren't interested in pursuing the real thing. Partly, I think it is because the real decision makers don't know about synthetic validity or don't understand it. I could do more to communicate synthetic validity, though I have done quite a bit already. I have sent a few press releases, received a dozen or two newspaper interviews (Google it), contacted a few government officials on both sides of our border, and pursued a dozen or so private organizations. Part of my reason to do this interview here is to try to get the word out. So far, all I got back was a few "interesting" replies but no actual action.

I used to think this lack of pursuit was because synthetic validity was so hard to build, requiring 30,000 people -- but we know a lot more now. In the Perspectives article, McCloy pointed out we could allow ourselves to use subject matter experts to estimate some of the relationships. That won’t be as good as if we gathered the data ourselves, but we could get something running real quick, though later we would upgrade with empirical figures. Consequently, the reason why this isn’t built isn’t because it is too difficult. Also, the payoff would eventually be cornering the worldwide selection and vocational counseling market. I am not sure what that is worth but I imagine you could buy Facebook with change left over for MySpace if you wanted to. The value of it then isn’t the problem either.

I am coming to the conclusion that despite the evidence, to most people it is just my word as an individual. I’m a good scientist, winner of the Killam award for best professor at my entire University, but it still isn’t enough. You need the backing of a professional association and so far ours [SIOP] hasn’t yet taken a stand. As a professional organization, we should be promoting this, using the full resources of our association. I admit that I am “a true believer,” but this seems to be one of the bigger breakthroughs in all the social sciences in the last 100 years. Alternatively to the backing of a professional association, we need a groundswell where hundreds of voices repeat the message. I will do my bit but hopefully I will have a lot of company.

If you think I am overstating the case regarding synthetic validity, show me where I’m wrong. We handled all the technical critiques and issues in the Perspectives article. Right now, you have to make the argument that “human capital” doesn’t matter, that being good or bad at your work doesn’t matter. And if you try to make that case, I don’t think you are the type of person who would be even worth arguing with.


I'd like to thank Dr. Steel for his time and energy. I truly hope this idea sees the light of day. If you are interested in moving this forward, leave a comment and I can put you in touch with him.

Monday, October 11, 2010

Q&A with Piers Steel: Part 1

A few weeks ago I wrote about a research article that I think proposes a revolutionary idea: the creation of a synthetic validity database that would generate ready-made selection systems that would rival or exceed the results generated through a traditional criterion validation study.

I had the opportunity to connect with one of the article's authors, Piers Steel, Associate Professor of Human Resources and Organizational Dynamics at the University of Calgary. Piers is passionate about the proposal and believes strongly that the science of selection has reached a ceiling. I wanted to dig deeper and get some details, so I posed some follow-up questions to him. Read on for the first set of questions, and I'll post the rest next time:

Q1) What is the typical state of today's selection system--what do we do well, and what don't we?

A1) Here is a quote from a well-respected selection journal, Personnel Psychology: “Psychological services are being offered for sale in all parts of the United States. Some of these are bona fide services by competent, well-trained people. Others are marketing nothing but glittering generalities having no practical value.... The old Roman saying runs, Caveat emptor--let the buyer beware. This holds for personnel testing devices, especially as regard to personality tests.”

Care to try and date it? It is from the article “The Gullibility of Personnel Managers,” published in 1958. Did you guess a different date? You might have, as the observation is as relevant today as yesterday -- nothing fundamental has changed. Just compare that with this more recent 2005 excerpt from HR Magazine, Personality Counts: “Personality has a long, rich tradition in business assessment,” says David Pfenninger, CEO of the Performance Assessment Network Inc. “It’s safe, logical and time-honored. But there has been a proliferation of pseudo tests on the market: Caveat emptor.”

Selection is typically terrible, with good being the exception. The biggest reason is that top-notch selection systems are financially viable only for large companies with a high-volume position. Large companies can justify the $75,000 cost and months to develop and validate and perhaps, if they are lucky, have the in-house expertise to identify a good product. Most other employers don’t have the skill to differentiate the good from the bad, as both look the same when confronted with nearly identical glossy brochures and slick websites. And then the majority of hires are done with a regular unstructured job interview – it is the only thing employers have the time and resources to implement. Interviews alone are better than nothing but not much better – candidates are typically better at deceiving the interviewer than the interviewer is at revealing the candidate.

The system we have right now can’t even be described as being broken. That implies it once worked or could be fixed. Though ideally we could do good selection, typically it is next to useless, right up there with graphology, which about a fifth of professional recruiters still use during their selection process. For example, Nick Corcodilos reviews how effective internet job sites are at getting people a position. He asks us to consider “is it a fraud?”

Q2) What's keeping us from getting better?

A2) Well, there are a lot of things. First, sales and marketing works, even if the product doesn’t. When you have a technical product and an untechnical employer or HR office, you have a lot of room for abuse. I keep hearing calls for more education and that management should care more. You are right, they should care more and know more. People should also care and know more about their retirement funds as well. Neither is going to change much.

Second, the unstructured job interview has a lot of “truthiness” to it. Every professional selection expert I know includes a job interview component in the process even when it doesn’t do much, as the employer simply won’t accept the results of the selection system without it. There are some cases where people “have the touch” and are value added, but this is the exception. Still, everyone thinks they are gifted, discerning, and thorough. This is the classic competition between clinical and statistical prediction, with evidence massively favoring the superiority of the latter over the former but people still preferring the former over the latter (here are a few cites to show I’m not lying, as if you are like everyone else, you won’t believe me: Grove, 2005; Kuncel, Klieger, Connelly, & Ones, 2008).

Third, it just costs too much and takes too much time to do it right. Also, most jobs aren’t really large enough to do any criterion validation.

Q3) What might the future look like if we used the promise of synthetic validity?

A3) Well, to quote an article John Kammeyer-Mueller and I wrote, our selection systems would be "inexpensive, fast, high-quality, legally defensible, and easily administered.” Furthermore, every year they would noticeably improve, just like computers and cars. A person would have their profile taken and updated whenever they want, with initial assessments done online and more involved ones conducted in assessment centers. Once they have the profile, they would get a list of jobs they would likely be good at, ones that they would be likely good at and enjoy, and ones they would be likely good at, enjoy and that are in demand.

Furthermore, using the magic of person-organization fit, you inform them what type of organization they would like to work for. If someone submitted their profile to a job database, every day job positions would come to them automatically, with the likelihood of them succeeding at it. These jobs would come in their morning email if they wanted it. Organizations would also automatically receive appropriate job applicants and a ready built selection system to confirm that the profile submitted by the applicant was accurate.

Essentially, we would efficiently match people to jobs and jobs to people. I would recommend people update their profile as they get older or go through a major life change to improve the accuracy of the system, but even initially it would be far more accurate than anything available today -- a true game changer.

Follow-up: Some might see a contradiction here. You cite an article that bashes internet-based job matching, yet this is what you're suggesting. Would your system be more effective or simply supplement traditional recruiting methods (e.g., referrals)?

A: Yup, we can do better. The internet is just a delivery mechanism and no matter how high-speed and video enabled, it is just delivering the same crap. This would provide any attempt to match people to jobs or jobs to people with the highest possible predictiveness.


Next time: Q&A Part 2

References:
Grove, W. M. (2005). Clinical versus statistical prediction: The contribution of Paul E. Meehl. Journal of Clinical Psychology, 61(10), 1233-1243. doi: 10.1002/jclp.20179

Kuncel, N. R., Klieger, D., Connelly, B., & Ones, D. S. (2008, April). Mechanical versus clinical data combination in I/O psychology. In I. H. Kwaske (Chair), Individual Assessment: Does the research support the practice? Symposium conducted at the annual meeting of the Society for Industrial and Organizational Psychology, San Francisco, CA.

Stagner, R. (1958). The Gullibility of Personnel Managers. Personnel Psychology, 11(3), 347-352.

Sunday, October 03, 2010

How to hire an attorney


What's the best way for an organization to hire an attorney with little job experience? What should they look for? LSAT scores? Law school grades? Interviewing ability? A multi-year project that issued its final report in 2008 gives us some guidance. And while the study focused on ways law schools should select among applicants, it's also instructive for the hiring process. (By the way, individuals looking for personal representation may find the following interesting as well.)

Recall that the formalization of the "accomplishment record" approach occurred in 1984 with a publication by Leaetta Hough. She showed, using a sample of attorneys, that scores using this behavioral consistency technique correlated with job performance but not with aptitude tests or grades, and showed smaller ethnic and gender differences.

But in my (limited) experience, many hiring processes for attorneys have consisted of a resume/application, writing sample, and interview. Is that the best way to predict how well someone will perform on the job?

Assessment research would strongly point to cognitive ability tests being high predictors of performance for cognitively complex jobs. This is at least part of the logic of hurdles like the Law School Admissions Test (LSAT), a very cognitively-loaded assessment. When you're at the point of hire, however, LSAT scores are relatively pointless. Applicants have--at the very least--been through law school, and may have previous experience (such as an internship) you can use to determine their qualifications.

So what we appear to have at the point of hire is a mish-mash of assessment tools, relying heavily on unproven filters (e.g., resume review) followed by a measure of questionable value (the writing sample) and the interview, which in many cases isn't conducted in a structured way that would maximize validity.

So what should we do to improve the selection of attorneys (besides using better interviews)? Some research done by a psychology professor and law school dean at UC Berkeley may offer some answers.

The investigators took a multi-phase approach to the study. The first part resulted in 26 factors of lawyer effectiveness--things like analysis and reasoning, writing, and integrity/honesty. In the second phase they identified several off-the-shelf assessments they wanted to investigate for usefulness and developed new assessments of their own--a situational judgment test (SJT), a biodata measure (BIO), and other measures, including optimism and a measure of emotional intelligence (facial recognition). In the final phase, they administered the assessments online to over 1,000 current and former law students and looked at the relationship between predictors and job performance (N of about 700 for that part, using self, peer, and supervisor ratings).

Okay, so enough with the preamble--what did they find?

1) LSAT scores and undergraduate GPA (UGPA) predicted only a few of the 26 performance factors, mainly ones that overlapped with LSAT factors such as analysis and reasoning, and rarely higher than r=.1. Results using first-year law school GPA (1L GPA) were similar.

2) The scores from the BIO, SJT, and several scales of the Hogan Personality Inventory predicted many more dimensions of job performance compared to LSAT scores, UGPA, and 1L GPA.

3) The correlations between the BIO and SJT and job performance were substantially higher--in the .2-.3 range--compared to LSAT, UGPA, and 1L GPA. The BIO measure was particularly effective in predicting a large number of performance dimensions using multiple rating sources.

4) In general, there were no race or gender subgroup differences on the new predictors.

These results strongly suggest that when it comes to hiring attorneys with limited work experience, organizations would be well advised to use professionally developed assessments, such as biodata measures, situational judgment tests, and personality inventories, rather than rely exclusively on "quick and dirty" measures such as grades and LSAT scores. Yet another proof of the rule that the more time spent developing a measure, the better the results.


On a final note, several years back I did a small exploratory study looking at the correlation between law school quality and job performance. I found two small to moderate results: law school quality was positively correlated with job knowledge, but negatively correlated with "relationships with people."

References:
Here is the project homepage.
You can see an executive summary of the final report here.
A listing of the reports and biographies is here.
The final report is here.

Friday, September 24, 2010

September mega research update


I'm behind in posting updates of the September journals. Really behind. So instead of posting a series describing the detailed contents of each issue, I'm going to give you links to what's come out in the last month or so and let you explore. But I'll try to hit some high points:

Journal of Applied Psychology, v95(5)

Highlights include this piece from Ramesh and Gelfand who studied call center employees in the U.S. and India and found that while person-job fit predicted turnover in the U.S., person-organization fit predicted turnover in India.


Human Performance, v23(4)

This issue has some good stuff, including Converse et al.'s piece on different forced-choice formats for reducing faking on personality measures, Perry et al.'s piece on better predicting task performance using the achievement facet of conscientiousness, and Zimmerman et al.'s article on observer ratings of performance.


Journal of Business and Psychology, v25(3)

Another issue filled with lots of good stuff, but I'm almost 100% positive abstract views are session-based, so use the title links for articles on response rates in organizational research (full text here), the importance of using multiple recruiting activities, and the importance of communicating benefit information in job ads.


Journal of Applied Social Psychology, v40(9)

The article to check out here is by Proost et al. and deals with different self-promotion techniques during an interview and their effect on interviewer judgments.


Journal of Occupational and Organizational Psychology, v83(3)

Check this issue out for articles on organizational attraction, communication apprehension in assessment centers, and the impact of interviewer affectivity on ratings.

Saturday, September 18, 2010

Every once in a while, an idea comes along...


Once in a while a research article comes along that revolutionizes or galvanizes the field of personnel assessment. Barrick & Mount's 1991 meta-analysis of personality testing. Schmidt & Hunter's 1998 meta-analysis of selection methods. Sometimes a publication is immediately recognized for its importance. Sometimes the impact of the study or article isn't recognized until years after its publication.

The September 2010 issue of Industrial and Organizational Psychology contains an article that I believe has the potential to have a resounding, critical impact for years to come. Will it? Only time will tell.

The article in question is by Johnson, et al. and is, on its face, a summary of the concept of synthetic validation and an argument for its use. As a refresher, synthetic validation is the process of inferring or estimating validity based on the relationship between components of a job and tests of the KSAs needed to perform those components. It differs from traditional criterion-related validation in that the statistics generated are not based on a local study of the relationship between test scores and job performance. Studies have shown that estimates based on synthetic validity closely correspond to those from local validation studies as well as to meta-analytic validity generalization (VG) estimates. Hence it has the potential to be as useful as criterion-related validation in generating estimates of, for example, cost savings, without requiring the organization to gather sometimes elusive data.
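
To make the underlying logic concrete, here's a minimal sketch in Python of the simplest form of the idea: component-level validities get combined according to job analysis importance ratings for the target job. Everything here--the numbers, the test and component names, and the bare-bones weighting scheme--is invented for illustration; real synthetic validity work (e.g., J-coefficient approaches) also deals with predictor intercorrelations and statistical corrections that this ignores.

```python
import numpy as np

# Hypothetical component-level validities (rows = tests, columns = job components),
# e.g., drawn from prior research or a transportability database. All values invented.
component_validities = np.array([
    [0.40, 0.10, 0.25],   # cognitive ability test
    [0.15, 0.35, 0.20],   # conscientiousness scale
    [0.20, 0.30, 0.35],   # situational judgment test
])

# Job analysis importance ratings for the target job, one per component
# (say, analysis/research, client interaction, case management).
importance = np.array([0.5, 0.2, 0.3])
weights = importance / importance.sum()        # normalize to sum to 1

# Simplest synthetic estimate: weight each test's component validities
# by how important those components are for the target job.
synthetic_validity = component_validities @ weights

for test, r in zip(["cognitive", "conscientiousness", "SJT"], synthetic_validity):
    print(f"{test}: estimated validity ~ {r:.2f}")
```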

But the impact of the article, if I'm right, will not be felt based on its summary of the concept, but on what it proposes: a giant database containing performance ratings, scores from selection tests, and job analysis information. This database has the potential to radically change how tests are developed and administered. I'll let the authors explain:

"Once the synthetic validity system is fully operational, new selection systems will be significantly easier to create than with a traditional validation approach. It would take approximately 1-2 hours in total; employers or trained job analysts just need to describe the target job using the job analysis questionnaire. After this point, the synthetic validity algorithms take over and automatically generate a ready-made full selection system, more accurately than can be achieved with most traditional criterion-related validation studies."

Sound like a mission to Mars? Maybe. But the authors are incredibly optimistic about the chances for such a system, and it appears that it is already in the beginning stages of development. The commentaries following this focal article are generally very positive about the idea, some authors even committing resources to the project. The authors respond by suggesting that SIOP initiate the database and link it to O*NET. They point out, correctly, that this project has the potential to radically improve the macro-level efficiency of matching jobs to people; imagine how much more productive a society would be if the people with the right skills were systematically matched with jobs requiring those skills.

So as you can probably tell, I think this is pretty exciting, and I'm looking forward to seeing where it goes.

I should mention there is another focal article and subsequent commentaries in this issue of IOP, but it's (in my humble opinion) not nearly as significant. Ryan & Ford provide an interesting discussion of the ongoing identity crisis being experienced by the field of I/O psychology, demonstrated most recently by the practically tied vote over SIOP's name. I found two things of particular interest: first, the fact that they come out of the gate using the term "organizational psychology," with the "industrial" piece relegated to a footnote (a fact pointed out by several commentary authors). Second, they take an interesting approach to presenting several possible futures for the field, from the strengthening of historic identity to "identicide."

Finally, I want to make sure everyone knows about the published results of a major task force that looked at adverse impact. It too has the potential to have a significant impact on the study and legal judgment of this sticky (and persistent) issue.

Friday, September 10, 2010

Personnel Psychology, August 2010

The August 2010 issue of Personnel Psychology came out a while ago, so I'm overdue in taking a look at some of the content:

Greguras and Diefendorff write about their study of how "proactive personality" predicts work and life outcomes. Using data from 165 employees and their supervisors across three time periods, the authors found that proactive individuals were more likely to set and attain goals, which itself predicted psychological need satisfaction. It was the latter that then predicted job performance and OCBs as well as life satisfaction.

Speaking of personality, next is an interesting study by Ferris et al. that attempts to clarify the relationship between self-esteem and job performance. Using multisource ratings across two samples of working adults, the authors found that the importance participants placed on work performance for their self-esteem moderated this relationship. In other words, whether self-esteem predicts job performance depends on the extent to which people's self-esteem is independent of their job performance. Interesting.

Lang et al. describe the results of a relative importance analysis of GMA compared to seven narrower cognitive abilities (using Thurstone's primary mental abilities). Using meta-analysis data, the authors found that while GMA accounted for between 10 and 28% of the variance in job performance, it was not consistently the strongest predictor. Add this study to a number of previous ones suggesting that one solution to the validity-adverse impact dilemma may be in part to use narrower cognitive abilities (e.g., verbal comprehension, reasoning).
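
If you're curious what a relative importance analysis actually does, here's a rough sketch of one common technique (Johnson's relative weights), which partitions R-squared among correlated predictors. The correlations and predictor labels are invented, and this is not necessarily the exact procedure or data Lang et al. used:

```python
import numpy as np

# Invented intercorrelations and validities for three ability measures
R_xx = np.array([[1.00, 0.55, 0.50],
                 [0.55, 1.00, 0.45],
                 [0.50, 0.45, 1.00]])          # predictor intercorrelations
r_xy = np.array([0.30, 0.25, 0.28])            # predictor-performance correlations

# Relative weights: work through an orthogonal approximation of the predictors,
# then partition the model R^2 back among the original (correlated) predictors.
evals, evecs = np.linalg.eigh(R_xx)
sqrt_Rxx = evecs @ np.diag(np.sqrt(evals)) @ evecs.T   # R_xx^(1/2)
beta_star = np.linalg.solve(sqrt_Rxx, r_xy)            # weights on orthogonalized predictors

raw_weights = (sqrt_Rxx ** 2) @ (beta_star ** 2)       # one relative weight per predictor
r_squared = raw_weights.sum()                          # sums to the model R^2

for name, w in zip(["verbal comprehension", "reasoning", "numerical ability"], raw_weights):
    print(f"{name}: {100 * w / r_squared:.1f}% of explained variance")
print(f"Total R^2 = {r_squared:.3f}")
```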

Last but definitely not least, Johnson and Carter write about a large study of synthetic validity (a topic Johnson writes more about in the September issue of IOP). For those who need a reminder, synthetic validity is the process of inferring validity rather than directly analyzing predictor-criterion relationships. After analyzing a fairly large sample, the authors found that synthetic validity coefficients were very close to traditional validity coefficients--in fact, within the bounds of sampling error for all eleven job families studied. Validity coefficients were highest when both predictors and criterion measures were weighted appropriately.

So what the heck does that mean? Essentially this provides support for employers (or researchers) who lack the resources to conduct a full-blown criterion validation study but are looking for either (a) a logical way to create selection processes that do a good job predicting performance, or (b) support for said tests. Good stuff.
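
As a back-of-the-envelope illustration of what "within the bounds of sampling error" means in practice, here's a small sketch (invented numbers, not the authors' actual procedure) that checks whether a synthetic estimate falls inside a Fisher-z confidence interval around a locally observed validity coefficient:

```python
import numpy as np

# Invented numbers: a locally observed validity coefficient and its sample size,
# plus a synthetic estimate for the same job family.
r_local, n_local = 0.24, 180
r_synthetic = 0.27

# 95% confidence interval around the local coefficient via the Fisher z transformation
z = np.arctanh(r_local)
se = 1 / np.sqrt(n_local - 3)
lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)

print(f"95% CI for local validity: ({lo:.3f}, {hi:.3f})")
print("Synthetic estimate within sampling error bounds:", lo <= r_synthetic <= hi)
```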

Monday, September 06, 2010

Catbert tackles HR initiatives

In honor of Labor Day in the U.S., let's take a humor break from research and high-tech developments. In case you're not a regular Dilbert reader, Catbert (Evil Director of Human Resources) has recently gotten involved in three popular HR initiatives, with varying levels of success:

Workforce skill assessment ("strengths" fans take note)

Internal promotions

Employee surveys

Saturday, August 28, 2010

September 2010 IJSA (those considering SHRM certification, read on)

The September issue of the International Journal of Selection and Assessment (IJSA) is out with a boatload of content. Let's check out some of the highlights:

First up, a piece by Gentry, et al. that has implications for self-rating instruments. The authors studied self-observer ratings among managers in Southern Asia and Confucian Asia and found an important difference: the discrepancy between self and observer ratings was greater in Southern Asia. Specifically, the difference appeared in the self-ratings rather than the observer ratings, indicating differences in how managers in the two regions perceived themselves. Implication? Differences in self-ratings may be due to cultural differences in addition to things like personality and instrument type.

The second article is a fascinating one by Saul Fine in which the author analyzed differences in integrity test scores across 27 countries. Fine found two important things: first, there were significant differences in test scores across countries. Second, test results were significantly correlated (r = -.48) with country-level measures of corruption as well as with several aspects of Hofstede's cultural dimensions.

Next, an article by De Corte, et al. that describes a method for creating Pareto-optimal selection systems that balance validity, adverse impact, and predictor constraints. This article continues the quest for balancing utility and subgroup differences. A link to the article is here but it wasn't functional at the time I wrote this; hopefully it will be soon.
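
To give a feel for what "Pareto-optimal" means in this context, here's a toy sketch--invented numbers, and not De Corte et al.'s algorithm--that grid-searches weightings of two predictors, estimates composite validity and the composite subgroup difference with textbook (approximate) composite formulas, and keeps only the weightings where you can't gain validity without also increasing the subgroup difference:

```python
import numpy as np

# Invented inputs for two predictors (say, a cognitive test and a structured interview)
r_xy = np.array([0.50, 0.35])            # criterion validities
d_sub = np.array([0.80, 0.20])           # standardized subgroup differences
R_xx = np.array([[1.0, 0.3],
                 [0.3, 1.0]])            # predictor intercorrelation

def composite(w, v, R):
    """Validity (or subgroup d) of a weighted composite of standardized predictors."""
    return (w @ v) / np.sqrt(w @ R @ w)

# Evaluate a grid of weightings of the two predictors
candidates = []
for w1 in np.linspace(0.0, 1.0, 21):
    w = np.array([w1, 1.0 - w1])
    candidates.append((composite(w, r_xy, R_xx),    # higher = more valid
                       composite(w, d_sub, R_xx),   # lower = less adverse impact
                       w1))

# Keep the Pareto-optimal weightings: nothing else is at least as valid AND less adverse
pareto = [c for c in candidates
          if not any(o[0] >= c[0] and o[1] <= c[1] and (o[0] > c[0] or o[1] < c[1])
                     for o in candidates)]

for validity, d, w1 in sorted(pareto):
    print(f"weight on cognitive test = {w1:.2f}: validity ~ {validity:.3f}, composite d ~ {d:.3f}")
```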

Next, in an article that SHRM will probably place on their homepage if they haven't already, Lester et al. studied alumni from three U.S. universities to analyze the relationship between attainment of the Professional in Human Resources (PHR) certification offered by SHRM and early career success. Results? Those with a PHR were significantly more likely to obtain a job in HR (versus another field) BUT possession was not associated with starting salary or early career promotions. I'll let you decide if you think it's worth the time (and expense).

If you need another reason to focus on work samples and structured interviews, here ya go. Anderson, et al. provide us with the results of a meta-analysis of applicant reactions to selection instruments. Drawing from data from 17 countries, the authors found results similar to what we've seen in the past: work samples and interviews were most preferred, while honesty testing, personal contacts, and graphology were the least preferred. In the middle (favorably evaluated) were resumes, cognitive tests, references, biodata, and personality inventories.

Fans of biodata and personality testing may find the article by Sisco & Reilly reassuring. Using results from over 700 participants, the authors found that the factor structures of a personality inventory and biodata measure were not significantly impacted by social desirability at the item level. Implication? The measures seemed to hold together and retain at least an aspect of their construct validity even in the face of items that beg inflation.

Speaking of personality tests, Whetzel et al. investigated the linearity of the relationship between OPQ scores and job performance. Results? Very little departure from linearity, and where departures existed they were small. This suggests that utility gains may be obtained across the full range of personality test scores.
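
If you want to run this kind of check on your own data, one common approach (not necessarily Whetzel et al.'s exact method) is hierarchical regression: add a squared term and look at the increment in variance explained. Here's a small simulated sketch:

```python
import numpy as np

# Simulated data (not the study's): personality scores and performance ratings
# generated with a purely linear underlying relationship.
rng = np.random.default_rng(0)
n = 500
score = rng.standard_normal(n)
performance = 0.25 * score + rng.standard_normal(n)

def r_squared(predictors, y):
    """R^2 from an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_linear = r_squared(score, performance)
r2_quadratic = r_squared(np.column_stack([score, score ** 2]), performance)

print(f"R^2, linear term only:      {r2_linear:.4f}")
print(f"R^2, adding a squared term: {r2_quadratic:.4f}")
print(f"Increment attributable to nonlinearity: {r2_quadratic - r2_linear:.4f}")
```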

Are you overloading your assessment center raters? Melchers et al. present the results of a study that strongly suggests that if you are using group discussions as an assessment tool, you need to be sensitive to the number of participants that raters are simultaneously observing.

There are other articles in here you may be interested in, including ones on organizational attractiveness, range shrinkage in cognitive ability test scores, and staffing services related to innovation.

Thursday, August 19, 2010

The personality echo


Psychologists have known for a while about something called perceiver effects, which refer to general tendencies to judge others in a particular way. For example, you may tend to see people as generally self-serving or selfless, open-minded or closed-minded, and so on.

It turns out that these perceiver effects say something about you. For example, one of the ways Machiavellianism is measured is by asking whether you generally see a lack of sincerity or integrity in others. In a sense, the judgments you make about others echo back and, when interpreted properly, can say something about your personality.

In the July 2010 issue of the Journal of Personality and Social Psychology, Wood, et al. describe the results of several studies of this phenomenon. Most previous studies have used an "assumed similarity" paradigm, in which researchers examine whether the way people view themselves carries over to how they view others on the same traits.

This study, on the other hand, made no such assumptions. The researchers were interested in what impact self-ratings of personality had on perceptions of others, regardless of whether the same trait was involved. The primary relationship they looked at was the correlation between people's scores on a personality inventory and how they tended to rate others.
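
Here's a tiny, hypothetical illustration of the basic calculation--the data, ratings, and column names are all made up: each rater's "perceiver effect" is simply the average rating they give to others on a trait, which you then correlate with their self-report score.

```python
import numpy as np
import pandas as pd

# Made-up data: each rater rates several other people on agreeableness (1-5)
# and also completes a self-report agreeableness scale.
ratings = pd.DataFrame({
    "rater":  ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "target": ["B", "C", "D", "A", "C", "D", "A", "B", "D"],
    "rated_agreeableness": [4, 5, 4, 2, 3, 2, 4, 4, 5],
})
self_report = pd.Series({"A": 4.5, "B": 2.0, "C": 4.0})

# Perceiver effect: the average rating each person gives to others on the trait
perceiver = ratings.groupby("rater")["rated_agreeableness"].mean()

# The "echo": correlate how people tend to rate others with how they rate themselves
r = np.corrcoef(perceiver.loc[self_report.index], self_report)[0, 1]
print(f"r(perceiver effect, self-rating) = {r:.2f}")
```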

In three different studies of college students, the strongest trend was related to agreeableness: those that rated others high in agreeableness tended to rate themselves high in the same trait (r's of .19 to .29 depending on the sample).

The second highest was conscientiousness, in the same direction but for a different trait: those that rated others high in conscientiousness tended to rate themselves high in agreeableness (r's from .10 to .25). Rating others high in openness was also associated with higher self-ratings of agreeableness (r's from .12 to .27).

Interestingly, in the third study the authors also identified moderate positive correlations between perceiving others in a positive light and several individual characteristics--including self-rated agreeableness, fit with peers, and fit with organizational goals--and negative correlations with need for power, social dominance orientation, and depression.

So what does this mean? Essentially this study suggests that if someone (say, a job applicant) tends to describe others in a positive light--specifically, as agreeable, conscientious, and open--there is a good chance they will rate themselves highly on agreeableness. Anyone who has ever interviewed a candidate who made stray negative remarks about previous co-workers has likely intuited this.

There are several important caveats:

1) The effect sizes (correlations) were modest

2) The sample was restricted to university students

3) There are likely important differences between jobs, impacting both the value of agreeableness as well as how these attitudes are formed.

By the way, an in press version is available here.

Saturday, August 07, 2010

Generational differences in work values: Fact or fiction?


There's been a lot written over the years about so-called "generational differences" among different age groups--e.g., Baby Boomers, Gen X, and GenMe (i.e., Gen Y, Millennials, Net Generation). Various authors have claimed important differences, such as GenMe valuing altruistic employers and social experiences much more than, say, Boomers. Take a look at the business section of your local bookstore and you're sure to find examples.

The problem isn't with the issue--if there truly are differences, they would have important implications for attracting and retaining different segments of the workforce. The problem is with the data.

Turns out most of the previous research is either qualitative (anecdotes) or based on cross-sectional studies done at a single point in time. The problem with these studies is that they make it impossible to separate generational differences from age and career-stage differences. In other words, younger applicants/employees may indeed differ from older ones at any given point in time--but that could be due purely to factors tied to their age, not to having been born into or come of age during a particular era.

Luckily for us, in the September issue of the Journal of Management, Twenge, et al. report the results of a longitudinal study that allows us to answer these generational questions with some authority.

The authors used data collected from a nationally representative sample of over 16,000 U.S. high school seniors taken in 1976, 1991, and 2006 (from the Monitoring The Future project).

The results may surprise you. Let's look at each of the work values studied by the authors:

Leisure (e.g., vacation, work-life balance): This became progressively more important over the generations, with GenMe valuing it the most. The difference between GenMe and Boomers was the largest reported in the study (d > .50; see the quick note on interpreting d values after these descriptions).

Intrinsic (e.g., interesting and challenging work): While GenX did not differ significantly from Boomers, GenMe were significantly less likely to value this compared to either GenX or Boomers.

Altruistic (e.g., ability to help others and society): While it's commonly reported that GenMe values this more highly than previous generations, results did not support this. No significant differences were found between the three groups.

Social (e.g., job gives feeling of belonging and being connected): This is another area where some have suggested that with the skyrocketing success of sites like Facebook, the younger generations more highly value--indeed, insist on--a workplace that allows social interaction. The results? Not so much. In fact, GenMe placed less value on this compared to both GenX and Boomers.

Extrinsic (e.g., job pays highly or is prestigious): This is an interesting example of non-linearity. Turns out this value peaked with GenX. While GenMe valued this more than Boomers, the difference was more pronounced between Boomers and GenX.
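
Since the findings above are reported as d values (standardized mean differences), here's a quick, purely illustrative reminder of the arithmetic--the summary statistics below are invented, not taken from the study:

```python
import numpy as np

# Invented summary statistics for one work-value item, rated 1-5,
# by two cohorts (e.g., GenMe vs. Boomers surveyed at the same age).
mean_genme, sd_genme, n_genme = 3.9, 1.0, 8000
mean_boomer, sd_boomer, n_boomer = 3.3, 1.1, 8000

# Cohen's d: mean difference divided by the pooled standard deviation
pooled_sd = np.sqrt(((n_genme - 1) * sd_genme**2 + (n_boomer - 1) * sd_boomer**2)
                    / (n_genme + n_boomer - 2))
d = (mean_genme - mean_boomer) / pooled_sd
print(f"d = {d:.2f}")   # ~0.57 with these invented numbers
```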

Among the items with the biggest differences were:

- GenMe valuing having 2+ weeks of vacation compared to Boomers
- Boomers valuing a job that allows you to make friends compared to GenMe
- GenX valuing having a job with prestige/status compared to Boomers
- GenMe reporting that work is just a way to make a living compared to Boomers
- GenX valuing being able to participate in decision making compared to Boomers

Overall, intrinsic reward items had the highest means across the generations, with a job that is "interesting" having the highest item mean. Altruistic and social rewards were also rated relatively highly, while extrinsic rewards had lower means and leisure rewards the lowest.

The authors summarize the results by saying the data suggest "small to moderate generational differences." If you aren't surprised, kudos to your observational skills. At the very least this is important data to consider when evaluating your recruiting and retention efforts. And it certainly calls into question some of the conclusions being drawn in the popular press.

By the way, a full version of the article is available (at least it was at the time I published this) here.