
Informed Decisions: Research-Based Practice Notes

Steven G. Rogelberg
Bowling Green State University 

Welcome to this encore edition of Informed Decisions. This past April, I, along with Allan Church, Janine Waclawski, and Jeff Stanton, wrote an Informed Decisions column examining two survey practices that have become quite commonplace: data interpretation through normative comparisons and data reporting via percent favorables. The informal feedback on the article has ranged from "I agree with what was said; we will implement changes in the next survey cycle" to "I agree in spirit with what was written, but it is not feasible in my organization." I also received a more formal letter of response from Larry Eldridge. In the spirit of discussion, Larry has agreed to allow his letter to be published here. After Larry's letter, I have inserted a short response to the response. If you have any comments or questions concerning this column, please contact me at rogelbe@bgnet.bgsu.edu.

Response to Steven Rogelberg et al.'s article Problems with
and Potential Alternatives to Two Common Survey Practices:
Data Reporting Via Percent Favorables and Normative Comparisons

Larry D. Eldridge
Genesee Survey Services, Inc.

I was surprised when I read Steven Rogelberg et al.'s article in the April 2001 issue of TIP, for I have been doing employee surveys since the mid- to late 1970s, and with regard to normative comparisons, I have reached almost the opposite conclusion. I have concluded that norms are extremely desirable, if not imperative, for the effective interpretation and use of employee survey data. This is not to say that all norms are created equal, or that sound judgment in their use isn't warranted, but they provide a point of reference that is invaluable in deciding where an organization should put its limited resources for improvement.

Across hundreds of surveys, the pattern that people express high satisfaction with job content and tend to be least positive about pay, opportunities, and recognition is repeated over and over. At the same time, some organizations score relatively higher on these dimensions while others score relatively lower. I would argue that for the purposes of improving the overall work environment, the relative standing of the organization against norms is much more important than the absolute score. An example stands out in my mind to illustrate the case. An organization conducted a survey and, like many others, found that the employees were the least positive about recognition compared to the other dimensions that they had measured. Without referring to norms, they selected this as their primary area for improvement. Had they looked at the normative comparison, they would have known that they scored well above average in this area. To the extent that they investigated other companies' recognition systems and reinvented their own based on this work, they would have created a system that moved them more toward the norm, a decline from where they were before. This is not to say that their employees, like most employees, are satisfied with the level of recognition in their work situation; they are not. But it does show that recognition is a difficult area to tackle and that if the organization is to score still higher, they will need to take a creative approach that builds on the strengths of their existing program rather than turning to other organizations and seeking to find better practices to copy.

This case is made even stronger by the relatively narrow range of scores observed across organizations. Using percent favorable as the metric, we have calculated the observed range of scores on the same item across many organizations with at least 100 respondents and found that, in general, 80% of organizations fall within 10 or 15 percentage points of the norm. That means that the upper 10% of organizations are usually only about 10 to 15 points above the mean, and the bottom 10% of organizations are only 10 to 15 points below. While it is still possible for an organization to score anywhere along the theoretical limits of the metric (from 0% to 100% favorable), the vast majority of organizations fall in a much more limited range. Consider two questions that deal with very different dimensions. From our National Work Opinion Survey, 83% agree or strongly agree with the statement "I like the kind of work I do." On the other hand, when asked "How satisfied are you with the recognition you receive for doing a good job?" 49% say they are satisfied or very satisfied. Now assume that in a given organization, 75% said they liked their work and 55% said they were satisfied with the recognition they received. Where should the organization put its effort? Without referring to the norms, they would probably work on recognition, putting time and resources into improving an area where they were already strong. With the comparison to the norms, they would realize that there is more opportunity in enhancing the work itself.
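The comparison described here reduces to simple arithmetic on percent-favorable scores. As a purely illustrative sketch (the helper names are ours, and the numbers merely echo the letter's hypothetical example):

```python
# Illustrative sketch only: helper names are invented, and the data
# reuse the hypothetical figures from the letter above.

def percent_favorable(responses, favorable=("agree", "strongly agree")):
    """Share of respondents choosing a favorable option, in percent."""
    hits = sum(1 for r in responses if r in favorable)
    return 100.0 * hits / len(responses)

def gap_to_norm(org_score, norm_score):
    """Positive means above the norm, negative means below."""
    return org_score - norm_score

# Norms cited in the letter: 83% favorable on liking the work,
# 49% favorable on recognition.
norms = {"work_content": 83.0, "recognition": 49.0}

# The letter's hypothetical organization.
org = {"work_content": 75.0, "recognition": 55.0}

for item in norms:
    gap = gap_to_norm(org[item], norms[item])
    print(f"{item}: {gap:+.1f} points vs. norm")
```

Run on these figures, the organization sits 8 points below the norm on work content and 6 points above it on recognition, which is exactly why the letter argues the effort belongs on the work itself.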

Many of the cautions noted in the Rogelberg article are warranted, for the comparison to normative data is highly dependent on the quality of measurement, both in the organization and in the normative database used for reference, and often requires informed judgment. The authors make the point that the questions should be worded the same. This is imperative, for small wording changes can shift the result by 5 to 10 percentage points, and as noted above, a shift of 5 points is a big shift. They also argue that organizational composition may differ from one organization to another. This is certainly true, and for certain items it can play an important role in selecting the right norm for comparison or using the norm correctly. For example, it is clear that people in production types of jobs are commonly less positive about their work than people in supervisory or managerial positions. It does make sense to keep the appropriate norms in mind when analyzing an organization that is composed predominantly of one job type, perhaps even weighting the norm to match the make-up of the target organization. Of course, this raises important questions about the norms themselves. Unfortunately, norms are often no more than the cumulative results of data collected from clients doing business with the firm providing the norms. It is possible for these norms to shift when a large client surveys with the firm, for the norms to go out of date by reaching too far back in history, or for the norms to reflect only the particular clientele that the firm works with.
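Weighting a norm to match the target organization's composition, as suggested above, amounts to a weighted average of per-group norms. A minimal sketch, with invented figures:

```python
# Minimal sketch of composition-weighted norming.
# All per-group norms and the composition mix are invented for illustration.

def weighted_norm(norms_by_group, mix):
    """Weight per-group norms by the organization's job-type mix.

    `mix` maps each group to its share of the workforce (shares sum to 1).
    """
    assert abs(sum(mix.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return sum(norms_by_group[g] * share for g, share in mix.items())

# Invented norms reflecting the pattern noted above: production jobs
# tend to score less positively than supervisory or managerial ones.
norms_by_group = {"production": 60.0, "supervisory": 75.0, "managerial": 80.0}

# A predominantly production workforce.
mix = {"production": 0.80, "supervisory": 0.15, "managerial": 0.05}

print(weighted_norm(norms_by_group, mix))  # a norm pulled toward the production figure
```

The point of the exercise is that comparing such an organization to an unweighted all-jobs norm would make it look artificially low.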

Rogelberg et al. also raise some concerns that should be kept in mind, but they seem to deal with issues other than the use of norms. The issue of item context and its influence on survey results is well documented, though the degree of shift is normally in the 2 to 3 percentage point range rather than the 37% shift cited (p. 101). Clearly, shifts in measurement that are due entirely to the measurement instrument itself are threats to the value of surveys. It is true that these context effects could influence your conclusions when making comparisons to norms, but they would influence your conclusions whether norms are used or not. As we move to electronic administration through Web-based approaches, we begin to have the opportunity to present questions in random order, thus eliminating item-context effects.

The suggested alternatives to norms are of interest, but they fall short of what empirical norms can provide. The notion of "expectation norming" was presented, where a segment of the organization (e.g., management) is asked how they think the employees will respond. This type of comparison can reflect the extent to which management is in touch with the employees and may indeed be a valuable piece of information, but it is an entirely different question from the one you seek to answer by making comparisons to empirical norms. The idea of "goal norming," asking a segment of the organization (e.g., management) to respond as they hope the employees will answer, looks very helpful as an exercise for management to wrestle with the relationship between the survey results and their vision of the organization. Realistically, management could rank the categories measured in terms of their importance to achieving the mission/vision, but once they have done that, there is no substitute for empirical norms to help them gauge what a good score is. Having management estimate what a good score is amounts to providing pseudo-norms, and it is hard to argue that someone's estimate of a good score is better than knowing what a good score is. In this arena, I think we also have very good tools, like structural equation modeling, to help management identify the key drivers of important organizational outcomes (like retention and performance).
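If expectation norming and goal norming are both read as gap scores against the observed results (our reading of the exchange, not a prescribed method), they can be sketched as follows; every figure is invented for illustration:

```python
# Hypothetical sketch: both comparisons treated as simple gap scores.
# All names and numbers below are invented.

def gaps(reference, observed):
    """Observed minus reference score for each surveyed category."""
    return {item: observed[item] - reference[item] for item in reference}

observed = {"recognition": 49.0, "work_content": 83.0}  # survey results
expected = {"recognition": 60.0, "work_content": 80.0}  # management's predictions
goals = {"recognition": 70.0, "work_content": 85.0}     # management's hoped-for scores

# Expectation norming: how in touch management is with the employees.
print(gaps(expected, observed))

# Goal norming: how far the results sit from management's vision.
print(gaps(goals, observed))
```

The two dictionaries answer different questions, which is precisely Eldridge's point: neither tells you whether a score is good relative to other organizations.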

Based on nearly 30 years' experience working with employee surveys in organizations, I have concluded that a normative comparison is extremely valuable, if not essential, in survey interpretation, but it is critical that the norms be of high quality and that the user be aware of the strengths and limitations of the norm base they use. It is perhaps unfortunate that the available norms are seen primarily as a source of competitive advantage for the firms that work in this field. A challenge that SIOP might undertake would be to sponsor the collection of high-quality norms and make them available to I-O practitioners. Doing so could have a dramatic influence on our field and, in the process, provide excellent data for other research projects as well.

Response to Larry Eldridge's response to Steven Rogelberg
et al.'s article Problems with and Potential Alternatives to
Two Common Survey Practices: Data Reporting Via Percent
Favorables and Normative Comparisons

Steven G. Rogelberg
Bowling Green State University

As we wrote in the April 2001 TIP Informed Decisions column, "rather than calling for the discontinuation of norming we point the reader instead to some factors to consider which can impact the validity and utility of such efforts." We then go on to point out various empirically studied methodological issues (data equivalence, item-wording effects, and item-context effects) that, if ignored, can easily and substantially compromise the validity of normative database comparisons. In speaking with Larry, I am quite confident that he would agree that these are important methodological issues to consider when choosing a normative database.

In a nutshell, I believe that normative comparisons, if done correctly and appropriately (which in my opinion is not the majority of the time), can facilitate data interpretation. They can indeed provide some contextual insight into the data; however, I would also argue that an organization should compare observed data not only to what others have obtained, but also to what is theoretically desired and plausible. After all, dissatisfied employees are still dissatisfied, regardless of whether their dissatisfaction is consistent with external satisfaction norms. Similarly, just because an organization's poor ratings of senior leadership may be higher than the benchmark for senior leaders in the same industry, this does not mean that leadership is not a significant issue for the organization conducting the survey. Furthermore, if an organization merely writes off a low-satisfaction area (e.g., recognition) as not being worthy of action because it is consistent with a normative database, it will have failed to address the employees' perceptual reality and, as a result, runs the risk of alienating respondents ("my organization did not really listen to my opinion"). Taken together, norms do not define reality for the employees who completed the surveys; why, then, should they solely define the reality of those evaluating the observed data?

I further believe that external normative comparisons are sometimes used as an easy and convenient replacement for good hard thinking about survey data. Consequently, external norm comparisons should be viewed as one of many possible interpretative tools at a survey practitioner's disposal. In our April column we introduced some additional interpretative tools (e.g., expectation norming and goal norming). We positioned these tools not as replacements for database norming, but as additional options for the survey practitioner when trying to interpret the data and engender commitment and acceptance toward it and its implications. Reliance on any one tool (especially if used incorrectly) leads to potential misinterpretation, denial, wasted employee effort, and perhaps the development of an inappropriate action plan. These outcomes obviously have a negative impact on the client organization, but they also damage the reputation and credibility of our field.

July 2001 Table of Contents | TIP Home | SIOP Home