
An E-Mail Letter to a Friend [1]

Allen I. Kraut
Baruch College, CUNY and Kraut Associates

[1] Reprinted by permission. Kraut, A. I. (2001). MetroNews, The Metropolitan New York Association For Applied Psychology, New York, New York.

Dear Terry, 

Your voice-mail message said you are thinking of using a Web-based technology for your firm's next survey of employees. Well, you would be part of a fast-growing trend. I recently did a benchmarking study of 20 large companies that regularly use surveys and got some surprising results. About 77% still use paper and pencil for their surveys, but 83% do them electronically, and the majority do both.

Your message also asked what the published research shows about differences between Web-based and more traditional paper-and-pencil-based employee surveys. I thought this note might be a good way to leave you with a record of what I know. Although there is not a lot of research on the topic, the findings so far seem to be reasonably clear-cut. I'll comment on what seem to be the most important findings, in terms of practical applications.

General apprehensions. First, I recognize that you seemed a bit uncomfortable in considering this change. Most survey practitioners, both internal and external, admit to grappling with a "don't mess with success" form of resistance. But when proper groundwork is done, these concerns turn out to be unwarranted. A clear account of this is given by Scott Spera's (2000) experience at NCR, involving 30,000 employees worldwide in 24 languages. Like many others, he found the Web survey to be quicker, cheaper, and preferred for the next survey by 99% of those who used it. Similar results are indicated by Michelle Donovan's (2000) account of a survey done in a university. (Their reports, like the other studies noted below, were presented at SIOP's annual conferences over the last 3 years.)

Measurement equivalence. Do you know this term? It is the question of whether the two survey modes (paper versus Web) measure the same underlying concepts. That is, do the two modes generate or capture different constructs even when the questions themselves are identical?

The answer is pretty clearly no difference, even though the analytic techniques used to establish this are highly specialized. Hezlett's (2000) research shows this is true across four different countries. Fenlason's (2000) study says the same, using data from a 360-degree feedback survey. A study by Magnan, Lundby, and Fenlason (2000) also shows measurement equivalence.
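A real equivalence test relies on those specialized techniques (typically multi-group confirmatory factor analysis), but the basic intuition can be sketched with toy, hypothetical numbers: if two items hang together about equally strongly in each mode, that is one crude symptom of equivalence. A minimal sketch in Python, using entirely made-up data:

```python
# Toy, hypothetical data (not from any of the studies cited): two related survey
# items answered by five paper respondents and five Web respondents.
paper_item1 = [1, 2, 3, 4, 5]
paper_item2 = [2, 2, 3, 4, 4]
web_item1 = [1, 2, 3, 4, 5]
web_item2 = [1, 3, 3, 3, 5]

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

# Similar item intercorrelations across modes are consistent with (though far
# from proof of) the two modes measuring the same underlying concept.
print(f"paper: r = {pearson(paper_item1, paper_item2):.2f}")  # 0.95
print(f"web:   r = {pearson(web_item1, web_item2):.2f}")      # 0.89
```

A formal test would compare factor loadings and intercepts across the two groups, not just a single correlation; this sketch is only meant to make the concept concrete.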

Differences in mean ratings. This issue is critical to most practitioners. It asks whether we get more or less positive answers just by using a different mode to gather responses. If that were true, we could not compare responses from the two methods, or even track trends properly if another mode was used at a later time.

Yost and Homer (1998) did the best study I have seen on this issue, using a large sample at Boeing Aircraft. Their conclusion is that mode makes no difference if you correct for job level. But they do note an artifactual difference you get when you compare answers only by mode.

As is well known, lower-level employees are less likely to have access to the Web and also more likely to have lower satisfaction scores. Thus, people who are less likely to have access to the Web are also going to have lower scores, but the lower satisfaction is not produced by the mode used to collect their viewpoints!

The study by Magnan, Lundby, and Fenlason (2000; the one showing measurement equivalence) also seems to show lower item mean scores for those not responding by Web. But if one backs out (or adjusts) the lower-level employees' data to take job level into account, this clearly eliminates any mean differences in favorability on account of mode. (I did this in my role as discussant in the symposium where their paper was presented.) Among those respondents who had a choice of survey mode, there is virtually no difference between Web and paper respondents.
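The arithmetic behind this artifact is easy to see with a small made-up example (the numbers below are hypothetical, not data from Yost and Homer or Magnan et al.): within each job level the two modes agree exactly, yet the raw by-mode means differ, simply because Web access skews toward higher job levels.

```python
# Hypothetical illustration of the job-level artifact. Within each job level,
# Web and paper respondents report identical satisfaction, but Web access is
# concentrated at higher job levels, so the raw by-mode means still differ.

# (mode, job_level, mean_satisfaction, n_respondents)
groups = [
    ("web",   "manager", 4.0, 800),
    ("web",   "hourly",  3.0, 200),
    ("paper", "manager", 4.0, 200),
    ("paper", "hourly",  3.0, 800),
]

def raw_mean(mode):
    """Overall weighted mean for a mode, ignoring job level."""
    rows = [g for g in groups if g[0] == mode]
    return sum(m * n for _, _, m, n in rows) / sum(n for _, _, _, n in rows)

def within_level_gap(level):
    """Web-minus-paper difference within one job level."""
    means = {g[0]: g[2] for g in groups if g[1] == level}
    return means["web"] - means["paper"]

print(f"raw Web mean:   {raw_mean('web'):.2f}")    # 3.80
print(f"raw paper mean: {raw_mean('paper'):.2f}")  # 3.20
print(f"gap within 'manager': {within_level_gap('manager'):.2f}")  # 0.00
print(f"gap within 'hourly':  {within_level_gap('hourly'):.2f}")   # 0.00
```

The 0.60-point raw gap is entirely a composition effect: comparing within job level, or reweighting the strata to match, makes the mode difference vanish.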

Missing data. Yost and Homer (1998), as well as Fenlason (2000), report that the number of missing responses or incomplete answers to fixed-response items is a trifle higher for the Web but inconsequential from any practical point of view. Church and Waclawski (2000) had a similar finding about missing responses in a study of over 1,600 government employees given a choice of responding by paper or Web. (They also found little difference in the mean response and a preference for responding online.)

Write-in comments. This is one area where the evidence clearly points to considerably fuller replies to open-ended questions. Fenlason (2000) found the Web yields replies that are 50% longer. Yost and Homer's (1998) study found increases of 150%.

One must note that in addition to longer, presumably fuller and more useful replies, the technology gives you comments already keyed in (by the respondent) and thus all ready for processing, compilation, and analysis. And when text-mining programs are more readily available to help us manipulate this data, watch out!

Participation rates. My own experience finds that the participation rates are pretty much the same. Yost and Homer (1998) report that as well. On the other hand, Spera (2000) found the NCR response rates went up from the low 70s to a record participation of 77%. Other practitioners have told me that the Web has a slightly positive effect.

I suspect that participation rates are more likely to go up but not just because the method is perceived as easier and quicker. The introduction of a new mode is a wonderful communications opportunity. It is a great time to add some ballyhoo and excitement and to remind people of the positive reasons to take part. When this is done properly, it can surely reinvigorate a lackluster program. Of course, ultimately this has to be backed up by a high-level management commitment and responsive action.

Summary. Well, I hope that answers your concerns based on what we know so far. Overall, Web-based surveys seem to produce the same results as paper-based surveys in terms of average favorability of the responses and in their underlying measurement characteristics. The Web also seems to result in fuller write-in replies and slightly higher participation rates.

It is common among practitioners to worry about what could go wrong with the introduction of a new data collection technology. Indeed, this is usually healthy and appropriate, especially if these concerns are consciously dealt with in the implementation process. But the evaluation of those who have gone through the experience of using Web-based technology seems to be quite positive.

So, good luck to you! And let me know how it works out. 

Best Regards, 



References

Church, A. H., & Waclawski, J. (2000, April). Is there a method to our madness? Survey and feedback method effects across five different settings. Presented at the 15th annual conference of the Society for Industrial and Organizational Psychology, New Orleans, LA.

Donovan, M. A. (2000, April). Web-based attitude surveys: Data and lessons learned. Presented at the 15th annual conference of the Society for Industrial and Organizational Psychology, New Orleans, LA.

Fenlason, K. J. (2000, April). Multiple data collection methods in 360-feedback programs: Implication for use and interpretation. Presented at the 15th annual conference of the Society for Industrial and Organizational Psychology, New Orleans, LA.

Hezlett, S. A. (2000, April). Employee attitude surveys in multi-national organizations: An investigation of measurement equivalence. Presented at the 15th annual conference of the Society for Industrial and Organizational Psychology, New Orleans, LA.

Magnan, S. M., Lundby, K. M., & Fenlason, K. J. (2000, April). Dual media: The art and science of paper and internet employee survey implementation. Presented at the 15th annual conference of the Society for Industrial and Organizational Psychology, New Orleans, LA.

Spera, S. D. (2000, April). Transitioning to Web survey methods: Lessons from a cautious adopter. Presented at the 15th annual conference of the Society for Industrial and Organizational Psychology, New Orleans, LA.

Yost, P. R., & Homer, L. E. (1998, April). Electronic versus paper surveys: Does the medium affect the response? Presented at the 13th annual conference of the Society for Industrial and Organizational Psychology, Dallas, TX.
