
Announcing a Special Issue of Personnel Psychology on Quasi-Experimentation

John R. Hollenbeck
Michigan State University

John Stuart Mill established three criteria for inferring causality: (a) covariation, (b) temporal precedence, and (c) elimination of alternative explanations. Although philosophers of science still debate many aspects of what is meant by the term "cause," there is widespread consensus among working scientists regarding the appropriateness of these three rules. When attempting to infer cause within this framework, Mill emphasized the need for a scientist's active control over the independent variable. Active manipulation is critical in science because it allows one to clearly establish temporal precedence, and it is instrumental in eliminating alternative explanations for empirically documented covariation.

Although not necessarily familiar with Mill or the classic works in the philosophy of science regarding causation, practicing managers and professionals within contemporary organizations are also concerned with causal relationships. Publications aimed at practitioners note the need for "learning organizations" or "knowledge-creating companies," where knowledge is operationalized in terms that would be very familiar to Mill. That is, the competitive advantage that accrues from knowledge is manifested in the ability to control certain outcomes (e.g., customer satisfaction, employee satisfaction, or shareholder satisfaction) via the manipulation of policies and programs in a manner that is superior to one's rivals (Garvin, 1993; Nonaka, 1991).

Applied psychologists help bridge the gap between psychological science and organizationally based psychological practice, and much of applied psychological research takes place within organizations. Even though these settings rarely afford the luxury of manipulating variables and then randomly assigning participants to conditions, there is a whole host of formal quasi-experimental designs that do not require random assignment. In addition, many of these designs make excellent use of "naturally occurring" manipulations for inferring causal relationships, and hence point directly to potential applied interventions. Given the boundary-spanning role of applied psychologists, and the joint concern for establishing causal relationships among scientists and practitioners, one might think that the use of quasi-experimental research designs that involve active manipulation (or exploit naturally occurring manipulations) would be widespread in this discipline. This, however, is not generally the case.

Cook, Campbell, and Peracchio (1990) noted that "in reviewing the major journals devoted to industrial and organizational psychology, we have been struck by the relative paucity of field experiments." My experience over the last 3 years as editor of Personnel Psychology has led me to the same conclusion. Rather than using structural features of research design to eliminate alternative explanations for results, applied psychologists rely more heavily on statistical adjustments and modeling to perform the same function. In some cases, this is as simple as partialling the effects of demographic variables prior to examining the effects for purported causes, and in other cases, this involves highly sophisticated approaches based upon structural equation modeling.

Although there is real value in these passive approaches to control, it is easy to forget that partial correlations and weights derived from structural equation modeling use covariance evidence to estimate relationships that are presumed to be causal; they do not directly test causality. Even the most recent versions of LISREL, while powerful, do not permit one to go back in time and establish temporal precedence from cross-sectional data. Thus, even while acknowledging the value of statistical control procedures, a case can be made that we should supplement our science with more advanced use of formally structured aspects of research design.
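As a rough illustration of this limitation (the variables and data below are simulated and purely hypothetical, not drawn from any study), partialling a shared covariate can remove a spurious zero-order correlation, but the resulting partial correlation is still only covariance evidence; nothing in the arithmetic establishes temporal precedence or rules out unmeasured causes:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after removing (partialling out)
    the linear effect of covariate z from both variables."""
    # Residualize x and y on z via least-squares regression
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical scenario: a demographic variable z drives both the
# purported cause x and the outcome y, producing spurious covariation.
rng = np.random.default_rng(0)
z = rng.normal(size=500)
x = 0.8 * z + rng.normal(size=500)
y = 0.8 * z + rng.normal(size=500)

print(np.corrcoef(x, y)[0, 1])   # sizable zero-order correlation
print(partial_corr(x, y, z))     # near zero once z is partialled out
```

The partialled estimate is informative, but it answers only the covariation question; which variable came first, and whether some unmeasured third variable is at work, remain matters of design, not computation.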

In order to help stimulate and promote the more frequent use of formal quasi-experimental research designs, we are devoting a special issue of Personnel Psychology to publishing research that uses these approaches to infer cause in applied settings. The substantive area of the research can deal with any topic that falls under the broad heading of applied psychology, but should focus on manipulations or interventions that are evaluated via:

1. Formal two-group designs, such as the "untreated control group design," the "untreated control group design with proxy pretests," the "untreated control group design with separate pretests and posttests," or the "untreated control group design with reversed treatment."

2. Formal single-group designs, such as the "nonequivalent dependent variable design," the "removed-treatment design with pretests and posttests," the "repeated-treatment design," and the "regression discontinuity design."

3. Original or hybrid quasi-experimental designs that may differ in structure from those described above, but are similar in their spirit of deriving rigorous causal inferences based upon active manipulation of variables in field settings that are not conducive to random assignment. This would exclude, however, studies based upon the "one-group pretest-posttest design" and the "posttest-only design with nonequivalent groups."
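To give a concrete sense of how one design from the list above can support causal inference without random assignment, the following sketch simulates a regression discontinuity analysis (the data, cutoff, and effect size are assumptions made up for illustration, not a prescribed method). When treatment assignment is fully determined by a known cutoff on an observed pretest, the treatment effect can be estimated as the jump in the outcome at that cutoff:

```python
import numpy as np

# Simulated regression discontinuity: units scoring below a known cutoff
# on a pretest receive the intervention, so assignment is completely
# determined by the observed assignment variable (no hidden selection).
rng = np.random.default_rng(1)
pretest = rng.uniform(0, 100, size=800)
cutoff = 50.0
treated = (pretest < cutoff).astype(float)  # deterministic assignment rule
true_effect = 5.0                           # hypothetical treatment effect
posttest = 0.6 * pretest + true_effect * treated + rng.normal(0, 3, size=800)

# Estimate the discontinuity: regress the outcome on the centered
# assignment variable plus a treatment indicator; the indicator's
# coefficient is the estimated jump at the cutoff.
centered = pretest - cutoff
X = np.column_stack([np.ones_like(centered), centered, treated])
coef, *_ = np.linalg.lstsq(X, posttest, rcond=None)
print(coef[2])  # estimated jump at the cutoff, close to the true effect
```

The design's leverage comes from knowing the assignment rule exactly: because selection into treatment depends only on the measured pretest, controlling for that variable removes the selection artifact that would otherwise cloud the comparison.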

All of these designs are described in detail by Cook and Campbell (1979), as well as Cook, Campbell, and Peracchio (1990). When these formal designs are supplemented with the type of rich contextual knowledge held by practitioners, and sound substantive theory possessed by scientists, it is often possible to establish covariation and temporal precedence, and to eliminate alternative explanations for results (such as selection or history artifacts), despite the lack of random assignment. It is our belief that many scientists and practitioners have access to (or can generate) data that are structured in this fashion, and that framing these data in quasi-experimental terms will allow meaningful contributions to the discipline's knowledge base regarding causal relationships among the phenomena we study and manage.

The submission deadline for the papers that will be published as part of this special issue is March 31, 2001.

References

Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Boston, MA: Houghton Mifflin.

Cook, T. D., Campbell, D. T., & Peracchio, L. (1990). Quasi-experimentation. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial and organizational psychology. Palo Alto, CA: Consulting Psychologists Press.

Garvin, D. A. (1993). Building a learning organization. Harvard Business Review, 81, 78-91.

Nonaka, I. (1991). The knowledge-creating company. Harvard Business Review, 79, 97-109.

April 2000