
Opening Up: Small Wins in Open Science: Things You Can Do Today to Improve Research in I-O Psychology

Christopher M. Castille, Nicholls State University; Frederick L. Oswald, Rice University; George Banks, University of North Carolina at Charlotte; & Larry Williams, Texas Tech University

The recent open science movement is a multifaceted idea and undertaking, with grand long-term ambitions. Although many academic, editorial, and disciplinary forces involved in the open science movement have called for changes toward improving replicability and reproducibility, those calls can often seem too abstract to be useful, too formidable to be practiced, or too peripheral to be worth doing, leaving researchers unsure how to engage in open science. Such calls are often made within a “crisis narrative” that is intended to bring serious attention to the need for transparency and openness in science. Unfortunately, the crisis narrative may very well interfere with our collective ability to carry out several practical day-to-day open science activities that are relatively easy for researchers to do and that contribute to a more credible body of knowledge. Fortunately, we can shift the crisis narrative to one that focuses on the specific opportunities and challenges we face as scientists (Fanelli, 2018). Weick (1984) aptly recognized that many organizational challenges can be reframed as “mere problems” that can be addressed by practical means, leading to “small wins” that accumulate toward the common good.

We view many aspects of open science as “mere problems” to address in this practical way, one of them being transparency in published research. To be clear, engaging in open science behaviors that improve transparency does not mean that sharing research materials and data is always appropriate. Instead, it means more broadly that researchers should communicate transparently about the key aspects of their research process (including why materials/data were or were not shared) so that their readers can then better understand and interpret the results of that research. So long as we can agree on the general principle that improving the transparency and openness of our research practices is beneficial (e.g., building a stronger and more credible community of research and practice), then we all can agree on our commitment to open science. This doesn’t have to be hard—we can work together on “small wins.”

We submit that authors can employ several tactics to get started with a small-wins strategy of opening up science. Here are a few: (a) making the exact scale items for a study available online in a repository, (b) preregistering one of a set of upcoming studies, (c) making a dataset open and accessible (e.g., through a publication outlet/on OSF/on social media), (d) making their science available sooner by posting an early draft to a preprint server, and (e) posting the training guide and content for activities such as an interview, focus group, or experimental intervention in an online appendix or repository. Picking even just one of these tactics will help you get started with open science. We would love to hear about more open science tactics useful to you! Email them to Chris at christopher.castille@nicholls.edu, and he will share them in a future issue.

We see value in open science habits not just for authors but also for journals. Habits can accrue to journals that encourage open science behaviors in their policies, and these habits can also accrue to researchers as reflected across their work, within their labs, and in working with their colleagues. Such habits signal a strong commitment by journals to making our science more robust (see Grand, Rogelberg, Allen, et al., 2018), perhaps attracting additional journals and researchers in a virtuous cycle that begets a stronger, more credible science of I-O psychology. Some research already reports promising effects of open science practices. For instance, datasets are put to greater use in the scientific community when scholars share them as part of their work, and that work tends to be more highly cited (Christensen et al., 2019; Piwowar et al., 2007; Piwowar & Vision, 2013). Likewise, early access to research via preprints allows authors to receive feedback prior to publication, which may be why these articles have been shown to gain more citations as well as greater attention through social media (Conroy, 2019). To be clear, citation chasing is not the goal; citation counts are a highly imperfect indicator of professional reputation, and even retracted articles can continue to be cited (Teixeira da Silva & Bornemann-Cimenti, 2017).[1] Rather, we believe that the open science tactics we share will draw attention to more credible signals in science, a goal that is widely shared across the sciences. Engaging in the process of open science is a “win–win” for improving science and for increasing one’s professional reputation and scholarly impact.

With this entry of “Opening Up,” we wish to offer some guidance for pursuing small wins while highlighting a few pitfalls to avoid. Our intended audience is I-O researchers, in academic as well as practice settings, who wish to start practicing open science in practical and concrete ways. Understandably, we are all relative newcomers to open science, and our tentative interest may come with some meaningful reservations, such as the idea that preregistration might constrain creativity or subject one’s research to excessive scrutiny (for a discussion, see Toth et al., 2020). We should discuss those concerns, not only because open science means an open conversation about our science but also because such discussion inspires healthy exchanges about alternative options or finding middle ground. For example, organizational researchers sometimes cannot share raw data (e.g., the content of items within a scale) due to the constraints of the sponsoring organization; yet that organization might be fine with sharing analytic code, descriptive statistics, and correlations (which allow readers to recreate the variance–covariance matrices underlying many analyses; see the sketch below). Even small open science increments in this direction can yield important gains. Roughly speaking, imagine a 20% increase in behaviors that improve transparency and openness in I-O psychology research (e.g., preregistration, materials/data sharing); what impact might that have on the broader credibility, replicability, and reproducibility of our field (and psychology more broadly)? How might science self-correct and improve upon itself?
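To make that middle ground concrete, here is a minimal sketch in Python (the numbers and variable count are hypothetical, not from any real study) showing how reported standard deviations and a correlation matrix let a reader rebuild the variance–covariance matrix that many reanalyses (e.g., regression, SEM) take as input.

```python
import numpy as np

# Hypothetical reported summary statistics for three study variables
sds = np.array([0.62, 0.85, 1.10])            # standard deviations
corr = np.array([[1.00, 0.35, 0.22],          # correlation matrix as reported in a table
                 [0.35, 1.00, 0.41],
                 [0.22, 0.41, 1.00]])

# cov(i, j) = r(i, j) * sd(i) * sd(j); equivalently, Sigma = D R D with D = diag(sds)
cov = np.diag(sds) @ corr @ np.diag(sds)
print(np.round(cov, 3))
```

If the reported summary statistics match what the authors analyzed, the rebuilt covariance matrix lets readers reproduce standardized results (e.g., regression or path coefficients) without ever seeing the raw data.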

We should also state that we do not mean to “sell” open science without evaluating its effects. Like any intervention, open science research practices and journal policies must be evaluated and refined to understand what adds value, what has a neutral or unnecessary effect, and what has negative consequences or unwanted trade-offs (e.g., data sharing vs. potential participant reidentification; Meyer, 2018). Indeed, there is now a whole field of study—metascience (https://en.wikipedia.org/wiki/Metascience)—that develops new open science research practices and investigates their effects on the scientific knowledge base. Thus, various small wins in open science will be adjudicated in time as more data on their impact become available.

Small Wins for Opening Up

In a previous entry of “Opening Up” (Castille, Oswald, et al., 2020), we shared the open science rainbow (Kramer & Bosman, 2018), a single PowerPoint slide that highlights a variety of practices any scholar can enact to open up various parts of their research pipeline (https://zenodo.org/record/1147025#.X8EymGRKj5c). Notably, that slide captures only some of the practices available; more continue to emerge on an almost weekly basis. For instance, tools such as Octopus (https://science-octopus.org) and ResearchBox (https://www.researchbox.org) have since appeared; these platforms adopt a system-wide approach to open science, allowing all aspects of the research process—from idea inception to dissemination—to become part of the scholarly record. Such tools can also allow outsiders to make suggestions for improvement at any stage of the process (see Lakens, 2020; Pain, 2018). Tools like these are fascinating and powerful for helping scholars inject more openness and transparency into their work than was previously possible. In fact, they are potentially paradigm shifting as they are adopted by larger numbers of researchers and as the tools themselves improve.

To encourage the broader adoption of specific open science practices by authors, journals, granting agencies, and other stakeholders, we think it is most productive to focus on those areas where there is widespread agreement that openness and transparency are helpful. For example, with rare exceptions, it is hard to deny that researchers should clearly explain the design and implementation of their research. Reporting methods and results transparently ensures that analyses are understood and subject to scrutiny, and thus they are viewed as more credible for their insights (Grand, Rogelberg, Allen, et al., 2018). This idea introduces our first recommended small win: Incorporate open science principles within relevant professional policies and principles. This is a small win, where an entity (e.g., a journal, a professional society, or a research lab) essentially declares a greater focus on the research process rather than solely on research outcomes (see Grand, Rogelberg, Banks, et al., 2018). For example, encouraging simple tasks that improve reproducibility and replicability demonstrates that a given entity not only stands behind the science being offered but is up to the challenges and scientific processes tied to verifying and extending its results. We like to know that a nice new car works before we buy it or, if any mechanical problems arise, that we can have the car examined and repaired. This seems no different, in principle.

Yet, of course, following policies and principles might require some deviations in practice. For example, the possibility that employees could be reidentified from study data usually needs to be avoided (see Meyer, 2018; Pratt et al., 2020). To avoid this possibility, one could adopt Banks et al.’s (2018) advice and explicitly state that either (a) deidentified data and analytic code will be made available upon request for your projects or (b) the relevant summary statistics for reproducing analyses will be made available (e.g., descriptive statistics and variance–covariance matrices; see also Bergh et al., 2017). You may even go so far as to point out that, when confidentiality is particularly difficult to maintain (e.g., large datasets with extensive demographic information), you will make synthetic datasets available to allow key claims to be probed in ways that uphold confidentiality agreements and participant privacy (for a discussion of synthetic datasets, privacy, and the law, see Bellovin et al., 2019). The spirit of this part of the commitment is ensuring that, if individual-level data need to be kept confidential, minimal analytic reproducibility can at least be assured.
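To illustrate the synthetic-data idea, here is one minimal sketch in Python (the variable names and numbers are hypothetical, and dedicated synthetic-data tools use richer models plus formal privacy safeguards): draw artificial “participants” from a multivariate normal distribution with the same means and covariance structure as the confidential sample, so key analyses can be rerun without any actual employee records being shared.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=2021)

# Hypothetical summary statistics computed from the confidential dataset
names = ["job_satisfaction", "engagement", "turnover_intent"]
means = np.array([3.8, 3.5, 2.1])
sds = np.array([0.62, 0.85, 1.10])
corr = np.array([[1.00, 0.35, -0.22],
                 [0.35, 1.00, -0.41],
                 [-0.22, -0.41, 1.00]])
cov = np.diag(sds) @ corr @ np.diag(sds)

# Draw artificial "participants" with the same means and covariance structure;
# no row corresponds to a real employee, but key analyses can be rerun.
synthetic = pd.DataFrame(rng.multivariate_normal(means, cov, size=500), columns=names)
print(synthetic.describe().round(2))
print(synthetic.corr().round(2))
```

Because no row corresponds to a real person, such a file can often be shared even when the raw data cannot; whether this simplified approach is adequate depends on the planned analyses and the sensitivity of the data.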

To help craft open science statements, several resources are now available. In the I-O psychology research context, we strongly recommend reading Grand, Rogelberg, Allen, and colleagues’ (2018) work on robust science, which provides a series of ethical goals for the various roles we all play as authors, reviewers, practitioners, and educators. Some of these ethical goals are common across these roles, some are complementary, and others sometimes conflict, so it is important to appreciate all perspectives on any given open science issue. For instance, such ethical goals, conflicts, and perspectives can become part of the training we deliver to our students as educators. Fortunately, the Framework for Open and Reproducible Research Training (FORRT; https://forrt.org) offers (among many other things) an extensive list of curated resources for teaching open science. Drawing on these resources and exposing students to them early may help students see psychology as more of a natural science (and also become more critical of published claims; see Chopik et al., 2018).

Overall, though, we hope this highlights why open science statements should be made: for example, to ensure that faculty and graduate students are trained in open science and to ensure that research reaches a higher level of credibility and robustness. Open science statements should not sit gleaming in a trophy case; they should be brought to life through practical use by the entities producing them. Just as public commitment can lead to stronger adherence to goals (Hollenbeck et al., 1989), affirming open science statements in some public manner (e.g., a professional outlet, blog post, or research statement) on a continuing basis can be helpful and culture building, such that the stated commitment to open science is heard and shared by relevant audiences.

Several leading journals are now, in fact, joining together and signaling their support for a more robust and open science. Just recently, on November 10, 2020, the APA became a signatory to the Transparency and Openness Promotion (TOP) Guidelines (Center for Open Science, 2020). APA journals, such as the Journal of Applied Psychology, will be developing and implementing a minimum set of standards for ensuring that data and research materials are disclosed. Indeed, editors of APA journals are encouraged to pursue higher standards as they see fit for their journal. So if you are an editor or serve on an editorial board, consider both the TOP Guidelines (see Nosek et al., 2015) and the Editor Ethics 2.0 Code (https://editorethics.uncc.edu/editor-ethics-2-0-code/). These resources should prove helpful for crafting commitment statements. Management journals are also making changes. Recently, Laszlo Tihanyi (editor of the Academy of Management Journal) called for scholars to shift their attention from interesting research to important research (see Tihanyi, 2020). Several structural changes have also been made at the Journal of Management to promote open science (Bergh & Oswald, 2020).

Next comes a pivotal part: following through. The challenge here is ensuring that the next bit of science we share or foster is robust. Consider Banks and colleagues’ (2018) advice, which we discussed earlier: Preregister at least one of your next three studies. Many find that, like exercise, getting started can be the hardest part. After getting started, authors often find that preregistration increases the amount of planning done before the study is conducted, thus increasing the quality of the work (see Toth et al., 2020). Again, preregistration only means stating what is planned to be carried out, much like a dissertation proposal. Deviations from a preregistration can still occur for many legitimate reasons, such as when situations or data and samples change, or when important new aspects of a study need to be considered (see Castille, 2020; Szollosi et al., 2020). The point is simply to file a preregistration as a starting point, committing to plans that increase the quality and statistical power of the research to be conducted (Toth et al., 2020), with the understanding that any changes from those plans will simply be declared.

Regardless of the specific preregistration form that you use, you will be encouraged to ensure adequate statistical power for any proposed studies. To help conduct a power analysis effectively, you could draw on relevant benchmarks in the literature for the typical size of the effect in question (see Bosco et al., 2015). Testing mediation and moderation? Know that these are typically underpowered, but benchmark sample sizes do exist (see Götz et al., 2020; Murphy & Russell, 2017; O’Boyle et al., 2019). Taking the preregistration tactic a little further, if you intend to use code to analyze your data, then write your code before the data are gathered, attending to such details as clearly labeling variables and defining how data will be scrubbed. You can mock up a dataset to better ensure that your code will run on your collected data, and you can also mock up your tables (see the sketch below). None of this adds much extra time to your work; it simply moves that work toward the beginning of the endeavor. It can all be viewed as part of your research, data management, and analysis plan—a plan that can be very valuable for improving the research and addressing issues that might derail a project later. Dissertation advisors often demand this sort of front-end work in a proposal meeting. We could ask why this is done, see whether the answers apply to research endeavors following the dissertation, and then engage in open science behaviors accordingly.
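As one sketch of that front-end work (in Python; the variables, benchmark effect size, and analysis are hypothetical placeholders rather than recommendations for any particular study), the snippet below computes a rough sample size for a benchmarked correlation, mocks up a dataset with the planned variable names, and runs the planned analysis code so it is ready the moment real data arrive.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(seed=42)

# Rough sample size for detecting a hypothesized correlation of r = .20
# (a plausible benchmark; see Bosco et al., 2015), using the Fisher z approximation:
# n ~ ((z_alpha/2 + z_power) / arctanh(r))^2 + 3
r, alpha, power = 0.20, 0.05, 0.80
z_a, z_b = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
n_needed = int(np.ceil(((z_a + z_b) / np.arctanh(r)) ** 2 + 3))
print(f"Planned N for r = {r}: {n_needed}")

# Mock dataset using the variable names and scale ranges stated in the preregistration
mock = pd.DataFrame({
    "participant_id": np.arange(1, n_needed + 1),
    "autonomy": rng.uniform(1, 5, n_needed).round(2),
    "job_satisfaction": rng.uniform(1, 5, n_needed).round(2),
})

# Pre-written analysis code: it runs on the mock data now and on the real data later
r_obs, p_val = stats.pearsonr(mock["autonomy"], mock["job_satisfaction"])
print(f"r = {r_obs:.2f}, p = {p_val:.3f} (mock data; values are meaningless)")
```

Swapping the mock file for the real dataset is then the only change needed at analysis time, which is exactly the kind of reproducible, preplanned workflow a preregistration documents.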

To facilitate study preregistration, many useful resources are available. Earlier, we mentioned ResearchBox and Octopus, but others, such as those provided by the Center for Open Science (https://cos.io/prereg/) and AsPredicted (https://aspredicted.org), are also helpful for preregistering a study. Tools for forming a data management plan can be found at https://dmptool.org/get_started. Other guidance we wish to highlight for preregistering research in our field can be found in Toth et al.’s (2020) recommendations for study preregistration (e.g., preregistering a decision tree and analysis code). The APA’s preregistration template should also be helpful; it has been discussed in an online video on preregistration standards for psychology. Some might be legitimately concerned that preregistration could allow a researcher to be identified or an idea to be scooped. Know that there are ways to preregister privately (the preregistration still serves its planning purpose and carries a verifiable time stamp), as well as ways to share work anonymously with reviewers or to embargo research for a period of time (https://osf.io/4znzp/wiki/home/). In other words, I-O psychologists can learn from how many concerns about open science have been constructively addressed by those outside our field.

If these tactics for preregistering your work seem easy to implement but you hold a key reservation—that preregistration will not guarantee that a paper will be published—then consider adopting the registered-report format for publishing some of your work (Grand, Rogelberg, Banks, et al., 2018). Registered reports essentially involve submitting an extensive preregistration (introduction, hypotheses, research design, measures, analyses, etc.) for peer review; if the proposal is accepted, the journal commits to publishing the completed study regardless of how the results turn out. Journals in our field are increasingly offering this publishing format, including The Leadership Quarterly, the Journal of Business and Psychology, and the Journal of Personnel Psychology. Because the work is evaluated before the results are known, this format deals directly with two key reasons studies are rejected: (a) weak contribution and (b) lack of rigor.

Preregistration may also facilitate early discussions about authorship, which can then be revisited as contributions evolve. Commonly at issue is the authorship order. Fortunately, the APA has developed an authorship determination scorecard (https://www.apa.org/science/leadership/students/authorship-determination-scorecard.pdf) that can be useful as a starting point for clarifying responsibilities, determining authorship order, and, if necessary, changing this order as new contributions occur. We should also point out that there is value in the CRediT (Contributor Roles Taxonomy), a method for clearly identifying the contributions each author has made to a project (https://tinyurl.com/y67aeks3). Note how these authorship discussions center on being open and transparent about authorship contributions. As part of these conversations, you may even see value in gaining consensus about where openness and transparency are or are not reasonable. To that end, we encourage scholars to look over the Transparency Checklist (http://www.shinyapps.org/apps/TransparencyChecklist/; Aczel et al., 2020). These activities show a commitment to open science.

Letting 1,000 Flowers Bloom…

With this entry of “Opening Up,” we have highlighted several tactics that can help you generate the small wins needed to make our collective scientific enterprise even more credible. The hope is that, by instituting a small-wins strategy, each of us can pave the way toward more extensive use of open science principles throughout the research pipeline.

Moving forward, we’d like a future “Opening Up” entry to focus on the bright spots in our field where open science practices are being used routinely, allowing credible insights to be transmitted broadly. If you have such a story or know of one, please feel free to share it with us. We’d also like to share other initiatives on which we’re working.

Chris Castille is working on a proposal that provides an alternative submission pathway for SIOP posters, termed preregistration posters (see Brouwers et al., 2020). These posters allow the proposed theory and methods put forward by a scholar to be critiqued by peers and, we hope, to gain the attention of journals wishing to publish registered reports or results-blind reviews (e.g., Journal of Business and Psychology, International Journal of Selection and Assessment, Journal of Occupational and Organizational Psychology, The Leadership Quarterly, Journal of Personnel Psychology). There is some preliminary evidence that preregistration posters help early-career professionals receive constructive feedback, promote open science, and support early-career research (BNA, 2019). We wish to support these kinds of efforts in our field. Get in touch with Chris (christopher.castille@nicholls.edu) if you’d like to contribute.

Additionally, Larry Williams, director of the Consortium for the Advancement of Research Methods (CARMA), would like to provide more resources for those interested in learning more about open science. Planned topics (featuring panelists and discussants) include:

  • The basics of preregistration and registered reports
  • Steps to take to make quantitative work reproducible (e.g., tips for annotating code)
  • Panels (editors, reviewers, authors) offering advice on adopting open science practices (e.g., pitfalls to avoid)
  • Lessons from the editors/authors/reviewers/funding agencies who’ve adopted open science practices
  • Funding agencies and journals that are encouraging scholars to adopt open science practices (e.g., NIH)
  • Can you still practice open science using proprietary data? Yes—here’s how.

If you have any other suggestions for lectures, workshops, tutorials, or panels, then please feel free to reach out to him directly (larry.williams@ttu.edu).

 

Note

[1] Tools like Zotero (https://www.zotero.org/blog/retracted-item-notifications/) can check your cited works and flag any articles that have been retracted so that you do not continue to cite them. There is also a Zotero plugin from scite that shows whether an article’s findings have been supported or disputed: https://medium.com/scite/introducing-the-scite-plug-in-for-zotero-61189d66120c

References

Aczel, B., Szaszi, B., Sarafoglou, A., Kekecs, Z., Kucharský, S., Benjamin, D., Chambers, C. D., Fisher, A., Gelman, A., Gernsbacher, M. A., Ioannidis, J. P., Johnson, E., Jonas, K., Kousta, S., Lilienfeld, S. O., Lindsay, D. S., Morey, C. C., Munafò, M., Newell, B. R., et al. (2020). A consensus-based transparency checklist. Nature Human Behaviour, 4, 4–6. https://doi.org/10.1038/s41562-019-0772-6

Banks, G. C., Field, J. G., Oswald, F. L., O’Boyle, E. H., Landis, R. S., Rupp, D. E., & Rogelberg, S. G. (2018). Answers to 18 questions about open science practices. Journal of Business and Psychology, 34(3), 257–270. https://doi.org/10.1007/s10869-018-9547-8

Bellovin, S. M., Dutta, P. K., & Reitinger, N. (2019). Privacy and synthetic datasets. Stanford Technology Law Review, 22, 1–52.

Bergh, D. D., & Oswald, F. L. (2020). Fostering robust, reliable, and replicable research at the Journal of Management. Journal of Management, 46(7), 1302–1306. https://doi.org/10.1177/0149206320917729

Bergh, D. D., Sharp, B. M., & Li, M. (2017). Tests for identifying “red flags” in empirical findings: Demonstration and recommendations for authors, reviewers, and editors. Academy of Management Learning & Education, 16(1), 110–124. https://doi.org/10.5465/amle.2015.0406

Bosco, F. A., Aguinis, H., Singh, K., Field, J. G., & Pierce, C. A. (2015). Correlational effect size benchmarks. Journal of Applied Psychology, 100(2), 431–449. https://doi.org/10.1037/a0038047

Brouwers, K., Cooke, A., Chambers, C. D., Henson, R., & Tibon, R. (2020). Evidence for prereg posters as a platform for preregistration. Nature Human Behaviour, 4(9), 884–886. https://doi.org/10.1038/s41562-020-0868-z

Castille, C. M. (2020). Opening up: A primer on open science for industrial-organizational psychologists. The Industrial-Organizational Psychologist, 57(3). https://www.siop.org/Research-Publications/Items-of-Interest/ArtMID/19366/ArticleID/3293

Castille, C. M., Oswald, F., Marin, S., & Bipp, T. (2020). Opening up: Credibility multipliers: Simple yet effective tactics for practicing open science. The Industrial-Organizational Psychologist, 58(1). https://www.siop.org/Research-Publications/Items-of-Interest/ArtMID/19366/ArticleID/4596

Center for Open Science. (2020, November 10). APA joins as new signatory to TOP Guidelines. https://www.cos.io/about/news/apa-joins-as-new-signatory-to-top-guidelines

Chopik, W. J., Bremner, R. H., Defever, A. M., & Keller, V. N. (2018). How (and whether) to teach undergraduates about the replication crisis in psychological science. Teaching of Psychology, 45(2), 158–163. https://doi.org/10.1177/0098628318762900

Christensen, G., Dafoe, A., Miguel, E., Moore, D. A., & Rose, A. K. (2019). A study of the impact of data sharing on article citations using journal policies as a natural experiment. PLoS ONE, 14(12), e0225883. https://doi.org/10.1371/journal.pone.0225883

Conroy, G. (2019, July 9). Preprints boost article citations and mentions. Nature Index. https://www.natureindex.com/news-blog/preprints-boost-article-citations-and-mentions

Fanelli, D. (2018). Opinion: Is science really facing a reproducibility crisis, and do we need it to? Proceedings of the National Academy of Sciences, 115(11), 2628–2631. https://doi.org/10.1073/pnas.1708272114

Götz, M., O’Boyle, E. H., Gonzalez-Mulé, E., Banks, G. C., & Bollmann, S. S. (2020). The “Goldilocks Zone”: (Too) many confidence intervals in tests of mediation just exclude zero. Psychological Bulletin. https://doi.org/10.1037/bul0000315

Grand, J. A., Rogelberg, S. G., Allen, T. D., Landis, R. S., Reynolds, D. H., Scott, J. C., Tonidandel, S., & Truxillo, D. M. (2018). A systems-based approach to fostering robust science in industrial-organizational psychology. Industrial and Organizational Psychology: Perspectives on Science and Practice, 11(1), 4–42. https://doi.org/10.1017/iop.2017.55

Grand, J. A., Rogelberg, S. G., Banks, G. C., Landis, R. S., & Tonidandel, S. (2018). From outcome to process focus: Fostering a more robust psychological science through registered reports and results-blind reviewing. Perspectives on Psychological Science, 13(4), 448–456. https://doi.org/10.1177/1745691618767883

Hollenbeck, J. R., Williams, C. R., & Klein, H. J. (1989). An empirical examination of the antecedents of commitment to difficult goals. Journal of Applied Psychology, 74, 18–23. https://doi.org/10.1037/0021-9010.74.1.18

Kramer, B., & Bosman, J. (2018, January). Rainbow of open science practices. Zenodo. http://doi.org/10.5281/zenodo.1147025

Lakens, D. (2020, October 30). [93] ResearchBox: Open research made easy [Blog post]. Data Colada. http://datacolada.org/93

Meyer, M. N. (2018). Practical tips for ethical data sharing. Advances in Methods and Practices in Psychological Science, 1(1), 131–144. https://doi.org/10.1177/2515245917747656

Murphy, K. R., & Russell, C. J. (2017). Mend it or end it: Redirecting the search for interactions in the organizational sciences. Organizational Research Methods, 20(4), 549–573. https://doi.org/10.1177/1094428115625322

Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., Buck, S., Chambers, C. D., Chin, G., Christensen, G., Contestabile, M., Dafoe, A., Eich, E., Freese, J., Glennerster, R., Goroff, D., Green, D. P., Hesse, B., Humphreys, M., … Yarkoni, T. (2015). Promoting an open research culture. Science, 348(6242), 1422–1425. https://doi.org/10.1126/science.aab2374

O’Boyle, E., Banks, G. C., Carter, K., Walter, S., & Yuan, Z. (2019). A 20-year review of outcome reporting bias in moderated multiple regression. Journal of Business and Psychology, 34, 19–37. https://doi.org/10.1007/s10869-018-9539-8

Pain, E. (2018, November 1). Meet Octopus, a new vision for scientific publishing. Science|AAAS. https://www.sciencemag.org/careers/2018/11/meet-octopus-new-vision-scientific-publishing

Piwowar, H. A., Day, R. S., & Fridsma, D. B. (2007). Sharing detailed research data is associated with increased citation rate. PLoS ONE, 2(3), e308. https://doi.org/10.1371/journal.pone.0000308

Piwowar, H. A., & Vision, T. J. (2013). Data reuse and the open data citation advantage. PeerJ, 1, e175. https://doi.org/10.7717/peerj.175

Pratt, M. G., Kaplan, S., & Whittington, R. (2020). Editorial essay: The tumult over transparency: Decoupling transparency from replication in establishing trustworthy qualitative research. Administrative Science Quarterly, 65(1), 1–19. https://doi.org/10.1177/0001839219887663

British Neuroscience Association [BNA]. (2019, August). Preregistration posters: Early findings about presenting research early. Retrieved November 6, 2020, from https://www.bna.org.uk/mediacentre/news/pre-reg-posters/

Szollosi, A., Kellen, D., Navarro, D. J., Shiffrin, R., van Rooij, I., Van Zandt, T., & Donkin, C. (2020). Is preregistration worthwhile? Trends in Cognitive Sciences, 24(2), 94–95. https://doi.org/10.1016/j.tics.2019.11.009

Teixeira da Silva, J. A., & Bornemann-Cimenti, H. (2017). Why do some retracted papers continue to be cited? Scientometrics, 110(1), 365–370. https://doi.org/10.1007/s11192-016-2178-9

Tihanyi, L. (2020). From “that’s interesting” to “that’s important.” Academy of Management Journal, 63(2), 329–331. https://doi.org/10.5465/amj.2020.4002

Toth, A. A., Banks, G. C., Mellor, D., O’Boyle, E. H., Dickson, A., Davis, D. J., DeHaven, A., Bochantin, J., & Borns, J. (2020). Study preregistration: An evaluation of a method for transparent reporting. Journal of Business and Psychology. https://doi.org/10.1007/s10869-020-09695-3

Weick, K. E. (1984). Small wins: Redefining the scale of social problems. American Psychologist, 39(1), 40–49. https://doi.org/10.1037/0003-066X.39.1.40
