
“It Was Science at Its Best”: A Look Back at a Path-Defining Study in Open Science From…I-O Psychologists‽

Christopher M. Castille, Nicholls State University

In this entry of Opening Up—TIP’s column for all things open science—I look back at an older study from our field that stands out as an exemplar of open science despite being published nearly 30 years before discussions of open science took off. But before I tell you more about this exemplar, I must begin in the spirit of, well, transparency.

When TIP Editor Adriane Sanders announced the theme for this particular TIP issue—From Hugo to AI: Memorial Moments in SIOP—I admittedly doubted whether I could produce something relevant because many open science innovations are widely seen as “current,” popularized in the wake of replication failures emerging across the sciences. Indeed, the open science movement has given rise to a buffet of practices that represent drastic changes in the way our science is conducted (e.g., preregistration of hypotheses; Uhlmann et al., 2019), communicated (Nosek & Bar-Anan, 2012), and incentivized (Nosek et al., 2012; see Castille et al., 2022). Several of these practices seem beneficial for our science but require thoughtful application (Guzzo et al., 2022). Indeed, questions about the value and relevance of open science practices are evident in our field (see Torka et al., 2023).

This brings me to this particular entry in Opening Up. I often look outside of I-O psychology for open science practices that might benefit our science. Recently, while surveying the literature regarding the prospect of big team science for I-O psychology (see also Castille et al., 2022), I came across an interesting book chapter on adversarial collaborations (see Rakow, 2022). Adversarial collaborations are research initiatives carried out by two or more individuals or groups who hold conflicting theories, predictions, or hypotheses and who seek consensus through empirical testing (Rakow, 2022). Such collaborations have become quite popular over the past few years, particularly in social psychology (e.g., the Many Smiles collaboration; Coles et al., 2022), cognitive psychology (Oberauer et al., 2018), and other areas of interest to psychology (e.g., gender bias in academic science; see Ceci et al., 2023). This chapter described a path-defining study of critical importance for furthering psychological science. Surprisingly, it was conducted by scholars we all know. Clearly, this paper has been influential, inspiring scholars to embark on adversarial collaborations that continue to this day.

The paper I’m referring to was published in none other than the Journal of Applied Psychology by Gary Latham, Miriam Erez, and Edwin Locke in 1988. As Latham and colleagues note, their work appears to be the first published adversarial collaboration in the psychological sciences. It both outlines and illustrates how to execute such a collaborative exercise, and it subsequently guided studies in other areas of psychology (see Rakow, 2022). I immediately saw its relevance for this column. So in this entry, I’ll briefly describe the study and its place in the ongoing discussion of open science. I close with a call for you to share more bright spots in our science that have been overlooked.

A Brief Look Back at Latham et al. (1988)

In 1988, Latham et al. outlined the adversarial collaboration method, which involved identifying methodological differences and then creating experiments designed to resolve any significant disagreements. They then published a series of experiments illustrating how scientific disagreements may be resolved using this method.

Latham et al.’s (1988) substantive focus was on the effect of participation on goal commitment and performance. Whereas Latham argued that active participation in goal setting did not substantially impact goal commitment or task performance, Erez argued the opposite: that goal acceptance following group discussion and goal commitment both predict performance. Both had evidence supporting their views obtained from prior studies.

The scientists decided to set their reputations aside for the benefit of science and resolve their dispute via crucial experiments, with a third party (Edwin Locke) acting as a mediator. Each antagonist systematically reviewed the other’s studies. Latham and Erez, with Locke present, then brainstormed differences in their experiments that might account for the different results. Five hypotheses were generated, several of which had not been considered prior to this discussion and could only have been uncovered through collaboration. They were:

  1. Task importance. Whereas Latham’s experiments were consistently framed as important (e.g., brainstorming, real-life jobs), Erez’s experiments were judged by Latham as involving less important tasks (e.g., simulated scheduling, evaluating job descriptions). Latham hypothesized that participation may have had greater effects in Erez’s experiments because the tasks—on their own—were not particularly important.
  2. Group discussion. Whereas Latham’s participative goal-setting experiments involved a supervisor or experimenter and a single subject, Erez’s participative conditions always involved group discussions with five or six people. At the time, research had established that group-set goals led to higher goal commitment and performance than self-set goals.
  3. Instructions. In their brainstorming, it became apparent that not all methodological details for carrying out experiments make their way into the published article. In reviewing each other’s published method sections, Erez and Latham discovered differences in their typical instructions. Whereas Latham’s instructions were given in a polite, friendly manner to engender supportiveness, Erez’s instructions did not possess these elements. The two instruction sets are below.
    1. Latham et al.: Thank you for agreeing to participate in this study. Weyerhaeuser Company has employed us to _____. You are now familiar with the task. I would like you to do the following _____. This goal is difficult but attainable.
    2. Erez et al.: Now that you have already had a practice session to get familiar with the task, you are asked to next attain a score of _____. You will have _____ minutes.
  4. Setting self-set goals prior to experimental manipulations. In Erez’s work, half of the subjects set their own goals before the assigned or participative manipulation occurred. Erez and colleagues found that commitment was higher when goals had not been set beforehand. It was surmised that subjects who had set their own goals might have been upset about being misled, particularly when the new goals were very high.
  5. Cultural value differences. Erez’s experiments were conducted in Israel, a more collectivistic society, whereas Latham’s studies—many of which were field experiments—were conducted in the United States and Canada.

The authors went on to explore other factors that emerged as potentially important: (a) a two-phase design used by Erez to manipulate goal difficulty, (b) self-efficacy instructions used by Erez in the participative condition only, and (c) instructions given to subjects to reject goals with which they did not agree.

Having identified the factors that might explain the divergent findings, Latham and Erez jointly designed experimental procedures to resolve their scientific dispute. Each designed two studies that were directed by the antagonist and run by research assistants. They initially designed just two experiments but agreed to execute two additional experiments if needed; a total of four experiments were ultimately run. The authors conclude by agreeing that tell-and-sell goals are as effective as participatively set goals, as both affect goal commitment and performance. The other factors (e.g., task importance, group decision, values, two-phase design) either had little or no effect on either outcome or affected commitment but not performance (e.g., goal difficulty, setting vs. not setting a goal, offering self-efficacy instructions, or instructions to reject unacceptable goals).

Several uncommon features of this paper must be noted. Uniquely for a journal article—indeed, for most papers published in our field—mistakes made by the experimenters were openly disclosed in the publication. We can all identify with something not going quite right when we try something new. What else is research if not trying something new? The paper also closes with commentary in which each author offers their perspective on the findings. Although disagreements were not entirely resolved, progress was evident.

To quote Latham: “Conducting the present series of studies was as exciting as it was illuminating. It was science at its best.” Erez agreed, noting that resolving the dispute empirically was as important as the outcome they arrived at because the adversarial collaboration process is broadly replicable. Locke was struck by how many procedural details can vary among scholars ostensibly studying the same phenomenon.

Fostering More Adversarial Collaborations

Whereas disagreements in the sciences often simply fade away over time, Latham et al.’s study is noteworthy for formalizing a process designed to generate consensus so we can move forward as a science. Consensus is important for a number of reasons, not least of which is promoting evidence-based practice (Rynes-Weller, 2012). As I reflect on Latham et al., I wonder what we can do to spur more of these kinds of collaborations in our science. Latham et al. identify some essential conditions for adversarial collaborations. I will share a few and add commentary.

First, collaborators must be willing to admit that they could be wrong. Epistemic humility—being humble about our assumptions and understanding—is essential. As Richard Feynman (1974) once noted: “The first principle is that you must not fool yourself, and you are the easiest person to fool.” Adversarial collaborations help us to avoid fooling ourselves and to act with the best information on hand while doubting that which we would like to be true (Erez & Grant, 2014). Such collaborations may be particularly helpful for developing PhD researchers (see also Schwab et al., 2023). As the great observer James Randi once noted in a public lecture:

When people get a PhD…there is a magical moment…as the paper hits the hand…a genetically engineered chemical goes into the flesh, into the bloodstream, directly into the brain, and paralyzes the part of the brain in the speech center…the part that enables that person up until that moment to pronounce two sentences: I was wrong and I don’t know. (see https://tinyurl.com/2c99n7z9)

Second, collaborators must not dislike each other personally. This helps the two parties work together for the benefit of our science. The point is striking at a time when many scholars believe that the open science movement, for all the meaningful challenges it has identified, has perhaps made us more skeptical and less trusting of one another. Although furthering our science benefits from organized skepticism (Castille et al., 2022), we must not let such skepticism prevent us from collaborating effectively so that we may jointly understand the world of work from a scientific perspective.

Third, collaborators must remain curious about the reasons for contradictory findings. Curiosity is important because studies are rarely reported in sufficient detail to build replications that are constructive for the field (Köhler & Cortina, 2021), and these details may often only be unearthed via joint discussion with a trusted mediator.

This brings me to a fourth notable factor: the presence of a trusted colleague who can act as a mediator of the conflict. Such a mediator can help the two leaders resolve their differences through discussion and help design consensus-generating experiments. Leaders in our field can and (perhaps) should play a role in promoting such consensus-generating work so that our research can be put to wider use. Indeed, such work would answer a call for more adversarial collaborations in our science (Edwards, 2008). These collaborations move us closer to the Popperian ideal of critical testing of falsifiable hypotheses via severe tests (Mayo, 2018), and they would look much more like a progressive research program that identifies main effects, mechanisms, and boundary conditions (Lakatos, 1976).

What Bright Spots Should I Highlight in a Future TIP Entry of Opening Up?

As a proponent of open science who often looks outside our field for inspiration, I wonder what open science gems I am overlooking within it. What other exemplars exist where we clearly are doing “science at its best”? Please share your colleagues’ work—or openly brag about your own contribution to illustrating our science at its best. If there are any that you think deserve mention, please share them with me (christopher.castille@nicholls.edu) so I can shine a bright light on this work.


References

Castille, C. M., Kreamer, L. M., Albritton, B. H., Banks, G. C., & Rogelberg, S. G. (2022). The open science challenge: Adopt one practice that enacts widely shared values. Journal of Business and Psychology, 37(3), 459–467. https://doi.org/10.1007/s10869-022-09806-2

Ceci, S. J., Kahn, S., & Williams, W. M. (2023). Exploring gender bias in six key domains of academic science: An adversarial collaboration. Psychological Science in the Public Interest. Advance online publication. https://doi.org/10.1177/15291006231163179

Coles, N. A., March, D. S., Marmolejo-Ramos, F., Larsen, J. T., Arinze, N. C., Ndukaihe, I. L. G., Willis, M. L., Foroni, F., Reggev, N., Mokady, A., Forscher, P. S., Hunter, J. F., Kaminski, G., Yüvrük, E., Kapucu, A., Nagy, T., Hajdu, N., Tejada, J., Freitag, R. M. K., … Liuzza, M. T. (2022). A multi-lab test of the facial feedback hypothesis by the Many Smiles Collaboration. Nature Human Behaviour, 6(12), 1731–1742. https://doi.org/10.1038/s41562-022-01458-9

Edwards, J. R. (2008). To prosper, organizational psychology should … overcome methodological barriers to progress. Journal of Organizational Behavior, 29(4), 469–491. https://doi.org/10.1002/job.529

Erez, A., & Grant, A. M. (2014). Separating data from intuition: Bringing evidence into the management classroom. Academy of Management Learning & Education, 13(1), 104–119. https://doi.org/10.5465/amle.2013.0098

Feynman, R. (1974, June 14). Cargo-cult science speech, Caltech. Speakola. https://speakola.com/grad/richard-feynman-caltech-1974

Guzzo, R., Schneider, B., & Nalbantian, H. (2022). Open science, closed doors: The perils and potential of open science for research in practice. Industrial and Organizational Psychology: Perspectives on Science and Practice. https://doi.org/10.1017/iop.2022.61

Köhler, T., & Cortina, J. M. (2021). Play it again, Sam! An analysis of constructive replication in the organizational sciences. Journal of Management, 47(2), 488–518. https://doi.org/10.1177/0149206319843985

Lakatos, I. (1976). Can theories be refuted? In S. G. Harding (Ed.), Falsification and the methodology of scientific research programmes (pp. 205–259). https://doi.org/10.1007/978-94-010-1863-0_14

Latham, G. P., Erez, M., & Locke, E. A. (1988). Resolving scientific disputes by the joint design of crucial experiments by the antagonists: Application to the Erez–Latham dispute regarding participation in goal setting. Journal of Applied Psychology, 73(4), 753–772. https://doi.org/10.1037/0021-9010.73.4.753

Mayo, D. G. (2018). Statistical inference as severe testing: How to get beyond the statistics wars. Cambridge University Press.

Nosek, B. A., & Bar-Anan, Y. (2012). Scientific utopia: I. Opening scientific communication. Psychological Inquiry, 23(3), 217–243. https://doi.org/10.1080/1047840X.2012.692215

Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science, 7(6), 615–631. https://doi.org/10.1177/1745691612459058

Oberauer, K., Lewandowsky, S., Awh, E., Brown, G. D. A., Conway, A., Cowan, N., Donkin, C., Farrell, S., Hitch, G. J., Hurlstone, M. J., Ma, W. J., Morey, C. C., Nee, D. E., Schweppe, J., Vergauwe, E., & Ward, G. (2018). Benchmarks for models of short-term and working memory. Psychological Bulletin, 144(9), 885–958. https://doi.org/10.1037/bul0000153

Rakow, T. (2022). Adversarial collaboration. In W. O’Donohue, A. Masuda, & S. Lilienfeld (Eds.), Avoiding questionable research practices in applied psychology (pp. 359–378). Springer Nature.

Rynes-Weller, S. L. (2012). The research–practice gap in I/O psychology and related fields: Challenges and potential solutions. In S. W. J. Kozlowski (Ed.), The Oxford handbook of organizational psychology (Vol. 1, pp. 409–452). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199928309.013.0013

Schwab, A., Aguinis, H., Bamberger, P., Hodgkinson, G. P., Shapiro, D. L., Starbuck, W. H., & Tsui, A. S. (2023). How replication studies can improve doctoral student education. Journal of Management Scientific Reports, 1(1), 18–41. https://doi.org/10.1177/27550311231156880

Torka, A.-K., Mazei, J., Bosco, F. A., Cortina, J. M., Götz, M., Kepes, S., O’Boyle, E. H., & Hüffmeier, J. (2023). How well are open science practices implemented in industrial and organizational psychology and management? European Journal of Work and Organizational Psychology, 32(4), 461–475. https://doi.org/10.1080/1359432X.2023.2206571

Uhlmann, E. L., Ebersole, C. R., Chartier, C. R., Errington, T. M., Kidwell, M. C., Lai, C. K., McCarthy, R. J., Riegelman, A., Silberzahn, R., & Nosek, B. A. (2019). Scientific utopia III: Crowdsourcing science. Perspectives on Psychological Science, 14(5), 711–733. https://doi.org/10.1177/1745691619850561


