
Opening Up: The Low-Hanging Fruit of a Big Team Open Science Collaboration in I-O Psychology

Christopher M. Castille, Nicholls State University

In the fall 2022 installment of Opening Up, TIP's column for all things open science, I asked TIP's readers to consider the idea that a big team science initiative might have a place within I-O psychology. Big team science allows researchers to pool their resources to solve larger, field-wide challenges (e.g., low replicability and low generalizability; see Forscher et al., 2022). Such challenges are pervasive across both the supposedly soft and the harder sciences (see Uhlmann et al., 2019), including applied social sciences such as I-O psychology and management (see Banks et al., 2016). Others have argued that big team science promotes career development and mentoring and may even enhance diversity, equity, and inclusivity within our discipline (Moshontz et al., 2018). These are benefits and opportunities that my colleagues and I considered deeply in a recent proposal we called “ManyOrgs” (see Castille et al., 2022), which outlined one such potential big team science collaboration for our discipline.

One reason I think a big team science initiative may be helpful concerns the thoughtful uptake of open science practices in applied disciplines such as ours and management. Although there is a need for openness and transparency in our research (see Banks et al., 2016), uptake of open science practices in articles published in our academic journals has been uneven (Hensel, 2021). Additionally, there are many challenges to opening up our research, particularly for work conducted in applied settings. For instance, field researchers do not want to (wittingly or unwittingly) disclose competitive advantages (see Guzzo et al., 2022) or compromise the confidentiality of employees (Pratt et al., 2020). Of course, journals have played a key role in improving uptake, whether by encouraging scholars to use the tactics that fit the purpose of their studies (e.g., Journal of Business and Psychology) or by requiring a methods checklist that reports which open science tactics were used, as the Journal of Applied Psychology does (see Eby et al., 2020). Such activities make salient the broader challenge that we face as a field: how to apply open science practices thoughtfully without unintentionally harming our discipline’s connection to practice (Guzzo et al., 2022).

Although these points are well taken, consider the notion that our science is, as my colleague Rick Guzzo put it to me, normatively open in practice, albeit locally (Guzzo, under review). Organizationally based research encourages our practitioners to maintain open data, collaborate transparently, and ensure insights are replicable. Importantly, such insights may not be accessible or reproducible globally (that is, shared widely in the field) for a variety of understandable reasons (e.g., giving away competitive advantages, violating confidentiality agreements). In my view, Guzzo’s point about local open science reveals how important it can be for early career scholars, particularly those who wish to enter practice, to build up their open science skillset. How, then, might we as educators (I teach at a business school) spur greater engagement with the open science skillset within undergraduate and graduate training programs?

I want to propose an initiative that is (perhaps) the “low-hanging fruit” of spurring greater—and, I think, more thoughtful—use of open science practices in our field: a big team science initiative that serves undergraduate and graduate training in I-O psychology (and related disciplines, such as management).1 As with many things in the open science movement, there are precedents for such an initiative. Consider the Collaborative Replications and Education Project (CREP; pronounced “crayp”; see Wagge et al., 2019). This initiative exists to provide training, support, and professional growth opportunities for students and instructors who are engaged in replication projects (see crep-psych.org). It draws on resources from the Framework for Open and Reproducible Research Training (FORRT), which provides pedagogical infrastructure and resources for supporting the teaching and mentoring of open and reproducible science (see Pownall et al., 2021). One such resource I wish to highlight here is Hawkins et al. (2018), who provide a framework for improving the replicability of psychological science through pedagogy; that is, embedding replication efforts into both undergraduate and graduate course requirements.2 A similar initiative, the Advancement of Replications Initiative in Management, led by Andreas Schwab, has recently taken off in management (see arimweb.org). There are even publicly available resources for getting a big team science initiative up and running (see https://debruine.github.io/big-team-setup/). Why not give a big team science initiative a try in I-O psychology, focusing our efforts on graduate and undergraduate students, with the aim of facilitating the thoughtful uptake of open science practices?

How Big Team Science Facilitates the Uptake of Open Science Practices

This particular big team science initiative would primarily expose undergraduate and graduate students to I-O psychology theory and methods via replication research. I-O psychology has a rich set of theories that are widely seen as important, scientifically valid, and practically useful, such as goal setting theory, job characteristics theory, and transformational leadership (see Miner, 2003). These domains seem ripe for identifying robust, replicable phenomena that are relevant for organizational settings, and they provide ample fodder for replication. Alternatively, as a crowdsourcing initiative, ideas could be sourced from instructors who wish to contribute their class time to the broader effort or from practitioners who want to support student training, to name just two possibilities (for other ideas for, well, crowdsourcing effects to be replicated, see Uhlmann et al., 2019). There are also several commonly relied-upon measurement theories that can, and perhaps should, be applied for the purpose of building a robust methodological skillset (e.g., classical test theory, item response theory, generalizability theory, and psychometric network theory).

How might students contribute to such a collaborative effort, and why might their participation spur the uptake of open science practices? Students can contribute through activities that include (but are not limited to) writing analytic code,3 designing surveys, conducting power analyses, facilitating preregistration, assisting with the publication of a registered report (if applicable), gathering data according to agreed-upon protocols, or executing any other open science tactic from the broader buffet of tactics (see Castille et al., 2022). The specific contribution should match the aims of the course to which students are assigned. For instance, students completing a course on psychometrics could contribute by applying different latent variable models for test-scoring purposes or by assisting with the planning and execution of a meta-analysis (e.g., controlling for different sources of measurement error). In contributing to the replication effort, students would have the opportunity to learn about several key concepts in our field, such as how to execute a study that contributes to a subsequent meta-analysis, sampling error, methodological moderators, the importance of replication, and the value of the open science skillset (to name just a few). Perhaps most importantly, the initiative would embed open science practices firmly into undergraduate and graduate training in I-O psychology (and adjacent disciplines that wish to contribute to the effort, such as management, organizational behavior, data analytics, and human resources), promoting their use in later professional capacities (e.g., research or applied).4,5
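To make one of these course-embedded tasks concrete, here is a minimal sketch, my own illustration rather than part of any cited proposal, of an a priori power analysis for a simple two-group replication. It is written in Python with the statsmodels library; the effect size, alpha, and power targets are assumed purely for illustration.

```python
# A minimal, hypothetical sketch of one student task named above: an a priori
# power analysis for a two-group (treatment vs. control) replication.
# The numbers are illustrative assumptions, not values drawn from any cited study.
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()

# Suppose the original study reported d = 0.40; plan around a smaller effect to
# hedge against publication-bias inflation, a common constructive-replication move.
planned_effect = 0.25

n_per_group = power_analysis.solve_power(
    effect_size=planned_effect,   # assumed Cohen's d for the planned replication
    alpha=0.05,                   # conventional Type I error rate
    power=0.90,                   # target power, above the typical .80
    alternative="two-sided",
)

print(f"Estimated participants needed per group: {n_per_group:.0f}")
```

Even a small exercise like this one brings students face to face with effect-size attenuation, Type I and Type II error rates, and the sample size implications of studying small effects, all concepts the paragraph above highlights.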

It is worth taking some time to define the shape that these replication efforts may take, as replication comes in many forms. I will discuss three that are top of mind. “Direct” or “exact” replications can be thought of as “same materials, new observations” and have received a great deal of attention in open science discussions (see Open Science Collaboration, 2015). By contrast, “conceptual” replications vary some feature of a study’s variables, design, or population in ways that test a prior theory or proposition anew (see Guzzo et al., 2022). Last, “constructive” replications range from those that make incremental advancements (e.g., increasing statistical power) to those that make comprehensive advancements (e.g., addressing all key methodological shortcomings of the prior literature; Köhler & Cortina, 2021). It should be noted that scholars have argued that conceptual replications are more valuable for applied disciplines such as ours (see Guzzo et al., 2022, for a brief review). However, conceptual replications may be constructive only when their designs explicitly aim to overcome the methodological shortcomings of prior attempts (e.g., by enhancing statistical power; see Köhler & Cortina, 2021). Perhaps focusing more explicitly on how a replication effort is constructive for the field will be a more fruitful form of big team replication research for our students to engage in. If you, as a reader, disagree, then please respond by sharing your perspective.

On the “Paradox of Replication” and Big Team Science Collaborations (Guzzo et al., 2022)

Guzzo and colleagues (2022) recently argued that replication research, such as the kind I have promoted in this article, may unwittingly promote less robust findings via simple hypothesis testing on small samples with few variables in lab (not applied) settings, which they term the paradox of replication. These concerns are illustrated by certain big team science efforts within psychology (e.g., the Many Labs studies; see Ebersole et al., 2016; Klein et al., 2014), several of which leveraged research designs that are relatively easy to execute across labs (e.g., studies that could be run entirely online), allow data to be gathered quickly and efficiently via a single survey (single source, single time point), and test simple treatment effects (treatment vs. control designs).6

As I-O psychologists, we should rightly be concerned that easy-to-execute studies can bring about biased findings (see Podsakoff et al., 2012). We should also be mindful of whether a big team science initiative will be useful for our field. I certainly agree that we need to carefully consider whether to adopt or encourage particular open science practices (e.g., making data available in its rawest form, so-called “born open” data; see Rouder, 2016), and likewise whether to start a big team open science initiative focused on training the next generation of psychologists. We do not want to unintentionally harm our discipline’s connection to practice. Additionally, there are several barriers and risks associated with carrying out a big team science initiative, a few of which are mentioned by Guzzo and colleagues and others of which are discussed by big team science advocates themselves (e.g., being overly conservative in testing our theories; see Forscher et al., 2022).

I simply wonder whether the paradox of replication is necessarily a feature of big team science or a bug in its application to certain areas of the social sciences (e.g., Many Labs). Let’s consider a positive case from a different research context: behavioral genetics and tests of the candidate gene hypothesis (for an overview, see Ritchie, 2020). Behavioral geneticists once vigorously searched for specific genes that might explain variation in human traits (e.g., personality, intelligence). The idea that a specific gene may be linked causally to a certain behavioral trait is broadly referred to as “the candidate gene hypothesis.” In the 2000s, several studies were published that supposedly identified effects linking specific genes to outcomes (e.g., cognitive test scores, depression, psychological resilience). Samples in these published studies were often quite small, typically involving no more than a few hundred individuals. However, by the mid-2010s, almost all of the candidate gene effects had been discredited as the genome-wide association study (GWAS) methodology emerged. GWAS examines large numbers of genetic variant–behavioral trait linkages in very large samples (e.g., tens to hundreds of thousands of individuals). Crucially, in addition to generating new insights for the field, GWAS studies could not have been executed without the collaboration of many researchers and organizations (Forscher et al., 2022; Uffelmann et al., 2021). Collaborations such as these lead me to wonder if there is value in creating a big team science initiative for I-O psychologists.

Conclusion

When Wilhelm Wundt founded experimental psychology in 1879, his students, including Hugo Münsterberg (our subfield’s founder), James McKeen Cattell, Charles Spearman, and G. Stanley Hall (early influencers in our discipline), were required to engage in replication research (Mülberger, 2022). More recently, such replication efforts have continued but at a much larger scale (see Uhlmann et al., 2019; e.g., the Psychological Science Accelerator, see Moshontz et al., 2018). Can we—or should we—create such an initiative for our field that serves our purposes as an applied discipline? Should we start with our students? If you have thoughts, please share them with me at christopher.castille@nicholls.edu.

Notes

1 I do not have a name for this initiative, so if you have ideas, please share them.

2 I wish to credit Don Zhang, who shared this pedagogy with me in a previous entry in TIP’s Opening Up column. He has also taught a similar course at the graduate level and can speak to the pros and cons of doing so.

3 Indeed, the University of Glasgow’s School of Psychology and Neuroscience created the PsyTeachR series for helping both undergraduate and graduate students in psychology learn how to code (see psyteachr.github.io).

4 I must credit Steven Rogelberg with the idea of creating a consortium of I-O psychology undergraduate and graduate students devoted to replication research. He shared this idea with me years ago as a participant in a panel on open science that I cochaired with Michael Morrison (Morrison & Castille, 2019). Such an initiative may be most useful for training master’s students, who often must complete a thesis to demonstrate a basic level of competence before attaining the degree. Doctoral students may also benefit from contributing to such a multisite collaboration in order to appreciate both our theory and our methods, but they may have to consider something unique for the dissertation requirement.

5 I include management, organizational behavior, data analytics, and human resource management contexts because I am currently working in a business school setting. I am exploring ways to grow a culture of replication and open science in this context and welcome ideas and suggestions.

6 Thanks to Rick Guzzo for pointing out this issue with the Many Labs studies.

References

Banks, G. C., O’Boyle, E. H., Jr., Pollack, J. M., White, C. D., Batchelor, J. H., Whelpley, C. E., Abston, K. A., Bennett, A. A., & Adkins, C. L. (2016). Questions about questionable research practices in the field of management: A guest commentary. Journal of Management, 42(1), 5–20. https://doi.org/10.1177/0149206315619011

Castille, C. M., Köhler, T., & O’Boyle, E. H. (2022). A brighter vision of the potential of open science for benefiting practice: A ManyOrgs proposal. Industrial and Organizational Psychology: Perspectives on Science and Practice. OSFPreprints. https://doi.org/10.31219/osf.io/q4r97

Ebersole, C. R., Atherton, O. E., Belanger, A. L., Skulborstad, H. M., Allen, J. M., Banks, J. B., Baranski, E., Bernstein, M. J., Bonfiglio, D. B. V., Boucher, L., Brown, E. R., Budiman, N. I., Cairo, A. H., Capaldi, C. A., Chartier, C. R., Chung, J. M., Cicero, D. C., Coleman, J. A., Conway, J. G., … Nosek, B. A. (2016). Many Labs 3: Evaluating participant pool quality across the academic semester via replication. Journal of Experimental Social Psychology, 67, 68–82. https://doi.org/10.1016/j.jesp.2015.10.012

Eby, L. T., Shockley, K. M., Bauer, T. N., Edwards, B., Homan, A. C., Johnson, R., Lang, J. W. B., Morris, S. B., & Oswald, F. L. (2020). Methodological checklists for improving research quality and reporting consistency. Industrial and Organizational Psychology, 13(1), 76–83. https://doi.org/10.1017/iop.2020.14

Forscher, P. S., Wagenmakers, E.-J., Coles, N. A., Silan, M. A., Dutra, N., Basnight-Brown, D., & IJzerman, H. (2022). The benefits, barriers, and risks of big-team science. Perspectives on Psychological Science. Advance online publication. https://doi.org/10.1177/17456916221082970

Guzzo, R. (under review). Open science is normative in field settings (albeit locally). Society for Industrial and Organizational Psychology, Boston, MA.

Guzzo, R., Schneider, B., & Nalbantian, H. (2022). Open science, closed doors: The perils and potential of open science for research in practice. Industrial and Organizational Psychology: Perspectives on Science and Practice.

Hawkins, R. X. D., Smith, E. N., Au, C., Arias, J. M., Catapano, R., Hermann, E., Keil, M., Lampinen, A., Raposo, S., Reynolds, J., Salehi, S., Salloum, J., Tan, J., & Frank, M. C. (2018). Improving the replicability of psychological science through pedagogy. Advances in Methods and Practices in Psychological Science, 1(1), 7–18. https://doi.org/10.1177/2515245917740427

Hensel, P. G. (2021). Dissecting the tension of open science standards implementation in management and organization journals. Accountability in Research, 1–26. https://doi.org/10.1080/08989621.2021.1981870

Klein, R. A., Ratliff, K. A., Vianello, M., Adams, R. B., Bahník, Š., Bernstein, M. J., Bocian, K., Brandt, M. J., Brooks, B., Brumbaugh, C. C., Cemalcilar, Z., Chandler, J., Cheong, W., Davis, W. E., Devos, T., Eisner, M., Frankowska, N., Furrow, D., Galliani, E. M., … Nosek, B. A. (2014). Investigating variation in replicability: A “Many Labs” replication project. Social Psychology, 45(3), 142–152. https://doi.org/10.1027/1864-9335/a000178

Köhler, T., & Cortina, J. M. (2021). Play it again, Sam! An analysis of constructive replication in the organizational sciences. Journal of Management, 47(2), 488–518. https://doi.org/10.1177/0149206319843985

Miner, J. B. (2003). The rated importance, scientific validity, and practical usefulness of organizational behavior theories: A quantitative review. Academy of Management Learning & Education, 2(3), 250–268. https://doi.org/10.5465/amle.2003.10932132

Morrison, M., & Castille, C. M. (2019, April). Open science, open practice: Future reality or pipedream [Panel]. Society for Industrial and Organizational Psychology, Inc., National Harbor, MD. https://www.siop.org/Annual-Conference/Registration-and-Resources/Past-Conference-Programs/ProgramSearch19

Moshontz, H., Campbell, L., Ebersole, C. R., IJzerman, H., Urry, H. L., Forscher, P. S., Grahe, J. E., McCarthy, R. J., Musser, E. D., Antfolk, J., Castille, C. M., Evans, T. R., Fiedler, S., Flake, J. K., Forero, D. A., Janssen, S. M. J., Keene, J. R., Protzko, J., Aczel, B., … Chartier, C. R. (2018). The Psychological Science Accelerator: Advancing psychology through a distributed collaborative network. Advances in Methods and Practices in Psychological Science, 1(4), 501–515. https://doi.org/10.1177/2515245918797607

Mülberger, A. (2022). Early experimental psychology: How did replication work before p-hacking? Review of General Psychology, 26(2), 131–145. https://doi.org/10.1177/10892680211066468

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716

Podsakoff, P. M., MacKenzie, S. B., & Podsakoff, N. P. (2012). Sources of method bias in social science research and recommendations on how to control it. Annual Review of Psychology, 63(1), 539–569. https://doi.org/10.1146/annurev-psych-120710-100452

Pownall, M., Azevedo, F., Aldoh, A., Elsherif, M., Vasilev, M., Pennington, C. R., Robertson, O., Tromp, M. V., Liu, M., Makel, M. C., Tonge, N., Moreau, D., Horry, R., Shaw, J., Tzavella, L., McGarrigle, R., Talbot, C., Parsons, S., & FORRT. (2021). Embedding open and reproducible science into teaching: A bank of lesson plans and resources. Scholarship of Teaching and Learning in Psychology. https://doi.org/10.1037/stl0000307

Pratt, M. G., Kaplan, S., & Whittington, R. (2020). Editorial essay: The tumult over transparency: Decoupling transparency from replication in establishing trustworthy qualitative research. Administrative Science Quarterly, 65(1), 1–19.

Ritchie, S. (2020). Science fictions: How fraud, bias, negligence, and hype undermine the search for truth. Henry Holt and Company.

Rouder, J. N. (2016). The what, why, and how of born-open data. Behavior Research Methods, 48(3), 1062–1069. https://doi.org/10.3758/s13428-015-0630-z

Uffelmann, E., Huang, Q. Q., Munung, N. S., de Vries, J., Okada, Y., Martin, A. R., Martin, H. C., Lappalainen, T., & Posthuma, D. (2021). Genome-wide association studies. Nature Reviews Methods Primers, 1(1), 59.

Uhlmann, E. L., Ebersole, C., Chartier, C., Errington, T., Kidwell, M., Lai, C. K., McCarthy, R. J., Riegelman, A., Silberzahn, R., & Nosek, B. A. (2019). Scientific utopia III: Crowdsourcing science. Perspectives on Psychological Science, 14(5), 711–733. https://doi.org/10.1177/1745691619850561

Wagge, J. R., Brandt, M. J., Lazarevic, L. B., Legate, N., Christopherson, C., Wiggins, B., & Grahe, J. E. (2019). Publishing research with undergraduate students via replication work: The collaborative replications and education project. Frontiers in Psychology, 10, 247. https://doi.org/10.3389/fpsyg.2019.00247
