
metaBUS: An Open Search Engine of I-O Research Findings

Christopher A. Baker, Frank A. Bosco, Krista L. Uggerslev, and Piers G. Steel

Social scientists are witnessing a paradigm shift in research methodology that has vast implications for the understanding and application of I-O research. This new zeitgeist has emerged concomitantly with advances in accessibility (e.g., cloud-based computing), scale (e.g., big data), and considerable introspection regarding research claims (e.g., lack of trustworthiness, Kepes & McDaniel, 2013; reproducibility, Klein et al., 2014) as well as how research should be conducted (e.g., appropriateness of inductive vs. deductive inference; Colberg, Nester, & Trattner, 1985). In this article, we describe a new open-access research tool called metaBUS (http://metaBUS.org), a search engine of currently more than 800,000 research findings that facilitates the location, summarization, and communication of a large corpus of I-O research. A short video tutorial of the metaBUS beta platform can be found here.

Researchers have begun to build cloud-based platforms designed to increase the transparency and memory of the research process, end to end. As an example, the Open Science Framework provides researchers with a platform for storing data, materials, and documents. However, such efforts accomplish much more than managing data or serving as a vehicle to reduce the prevalence of questionable research practices (e.g., preregistration to reduce HARKing; Kepes & McDaniel, 2013; Kerr, 1998). Of particular interest, they make data much more “open,” allowing anyone with access to rerun analyses using new analytic techniques, verify the accuracy of reported findings, and even test new hypotheses that were not considered at the time of the original study. In this paper, we describe a large-scale, open approach to science in the context of meta-analysis. We discuss several possible scientific goal states that, through the application of emergent methodologies, are becoming increasingly feasible to reach.

Current Approaches to Summarizing Science

Meta-analysis, especially when accompanied by thorough systematic review, represents one of the greatest avenues for advancement within the social sciences. The approach fosters the development of scientific consensus, increased certainty regarding inferences, and the ability to test new hypotheses (e.g., cross-sample moderating effects), including hypotheses not tested in any of the primary studies. These observations help explain why meta-analyses are highly cited, influential, and more likely to reach scientist and practitioner audiences (Aguinis, Gottfredson, & Wright, 2011). However, as summarized by Bosco, Steel, Oswald, Uggerslev, and Field (2015), current meta-analytic processes are associated with several inefficiencies. To be sure, these are not drawbacks of meta-analyses or systematic reviews themselves, but rather of the methods through which meta-analytic findings are updated, communicated, and consumed. We describe how adding two key ingredients to meta-analysis -- openness and technology -- will react with existing elements of research summaries to bring about a massive return on investment for scientists and practitioners.

Consider the following contextual factors. First, across the sciences, the volume of scientific information now doubles roughly every nine years (Bornmann & Mutz, 2015). Evidence from the medical literature indicates that many meta-analytic conclusions require revision only a few years after their publication date (i.e., after adding newly published findings). In fact, in some cases, conclusions require revision within the short interval between a meta-analytic manuscript’s acceptance and its appearance in print (Shojania et al., 2007). It is thus unfortunate that meta-analytic estimates are values frozen in time and follow a relatively “closed” science approach. That is, the information used as input to a meta-analysis is often not readily available to other researchers and, when it is, it comes in the form of a large table in a published manuscript. Recent developments now allow continual updating of meta-analyses (i.e., living meta-analyses; Braver, Thoemmes, & Rosenthal, 2014; Elliott et al., 2014; Tsuji, Bergmann, & Cristia, 2014). Historically, however, meta-analyses have accommodated newly published findings only about once every 5 to 10 years -- and that is if they are updated at all; many exceed this window or have yet to be updated.

Second, research environments are in a constant state of flux. Statistical techniques are born and refined, often in short order, yet there exists no efficient mechanism for accommodating such developments. Instead, consumers of science often settle for waiting until the next meta-analytic update comes along which, with luck, relies on the superior method. Third, recent studies indicate that meta-analyses are rarely reported comprehensively. Indeed, sensitivity analyses (e.g., tests for outliers and publication bias) are reported with disappointingly low frequency, leaving open the question of summary estimate trustworthiness (Kepes, Banks, McDaniel, & Whetzel, 2012; Kepes, McDaniel, Brannick, & Banks, 2013; Kepes & McDaniel, 2015).

Given that meta-analytic findings represent key inputs to theory development (Schmidt, 1992), the issues noted above should be of great concern. However, our field offers a silver lining: I-O psychologists have a remarkable track record when it comes to reporting findings. Indeed, the common correlation matrix is a highly efficient way to report findings and, when one considers that most of our field’s findings are reported this way and that tens of thousands of such matrices are spread across the literature, a great research curation opportunity becomes apparent. A minimal sketch of how one such matrix might be flattened into per-finding records appears below.
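
To illustrate, the following sketch (in R, the platform’s analysis language) flattens a published correlation matrix into one row per reported coefficient, the basic unit of curation described here. The variable names and the sample size are hypothetical, not drawn from any actual curated study.

```r
# Minimal sketch (hypothetical variable names; made-up sample size):
# flatten a published correlation matrix into one row per reported
# coefficient.
vars <- c("autonomy", "turnover_int", "satisfaction")
R <- matrix(c(1.00, -.21,  .35,
              -.21, 1.00, -.18,
               .35, -.18, 1.00),
            nrow = 3, dimnames = list(vars, vars))
idx <- which(lower.tri(R), arr.ind = TRUE)
findings <- data.frame(var1 = vars[idx[, 1]],
                       var2 = vars[idx[, 2]],
                       r    = R[idx],
                       n    = 250)  # sample size tagged onto every coefficient
findings
```

Curating every coefficient in this way, rather than only the focal relation, is what allows a matrix to be coded once and never revisited.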

metaBUS: Current and Upcoming Features

We begin this section with an overview of the metaBUS platform’s key features (see Bosco, Steel, et al., 2015), followed by an elaboration on additions in development that will facilitate an open science approach to accumulating and summarizing research findings. As an open project, the metaBUS team welcomes suggestions and collaboration on improvements, expansions, and just plain analytic curiosities.

As described by Bosco, Steel, et al. (2015), the metaBUS platform is first and foremost a search engine. In the platform’s current embodiment, users specify two search criteria (e.g., “autonomy” and “turnover intentions”), and the platform conducts a database search that returns all matching results. Search criteria may be specified as verbatim text strings and/or as taxonomic nodes or branches. A screenshot of the beta interface displaying this sample relation appears in Figure 1.

Figure 1. metaBUS beta interface. Search results for “autonomy” (specified using a letter string) with “turnover intention” (specified using a taxonomic branch code). In this example, 106 correlations from 40 samples returned mean r = -.205.

Users may then filter the results to their specification (e.g., limit publication year; limit sample size) and submit the results to rapid meta-analysis. To make this possible, metaBUS relies on: (1) a “map” (i.e., ontology) of the I-O and related fields that organizes nearly 5,000 constructs and variables into a hierarchical taxonomic structure (e.g., employee turnover is classified as: Behaviors → Employee Behaviors → Movement → Out of the organization → Turnover); (2) a coding and data-ingestion platform that facilitates the tagging of metadata to each reported variable (e.g., sample type, sample size, country of origin, M, SD, reliability value, taxonomic assignment code, and so forth); (3) a variety of software packages that facilitate the extraction of data from correlation tables, database management, and rapid, flexible database queries for near-instant visualization and meta-analysis; and (4) a large database of curated research findings amassed by a very dedicated team of graduate students. At the time of this writing, approximately 800,000 findings have been curated, and a beta version of the metaBUS interface is available here.
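
To make the rapid meta-analysis step concrete, here is a minimal sketch in R with hypothetical field names and made-up values; the metafor package’s standard r-to-z routines stand in for whatever the platform runs internally, and are not the official metaBUS code.

```r
# Minimal sketch (hypothetical field names and values): a few curated
# findings with metadata, submitted to a random-effects meta-analysis.
library(metafor)

findings <- data.frame(
  r    = c(-.21, -.18, -.24, -.15),  # reported correlations
  n    = c(250, 410, 180, 530),      # sample sizes
  year = c(2009, 2012, 2014, 2015)   # one of many metadata tags for filtering
)

# Fisher r-to-z transformation, then a random-effects model.
dat <- escalc(measure = "ZCOR", ri = r, ni = n, data = findings)
res <- rma(yi, vi, data = dat, method = "REML")
predict(res, transf = transf.ztor)  # back-transform the estimate to r
```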

Overview of Features in Development

Here, we detail several new features, anticipated to be available by fall 2016, that are essential to demonstrating and jump-starting an open science platform for meta-analysis. Note that the descriptions provided below are preliminary and will likely be updated; we welcome community input, both to gather suggestions and to prevent duplicated effort.

Enhanced query structure. Presently, the beta metaBUS platform allows users to search for findings pertaining to one specified relation at a time (e.g., “autonomy” with “turnover”). We are now building functionality to run queries that take as input a single concept (e.g., “autonomy”) and return all correlates. With this functionality, users will be able to run flexible, exploratory meta-analyses, and results could be summarized according to the taxonomic structure and visualized as networks. Alternatively, query results may be summarized and returned in a table allowing flexible sorting -- by frequency of study, by effect size magnitude, or by any other variable in the database. This new functionality will allow users to easily answer questions such as “What are the strongest correlates of turnover?” and “With what variables is autonomy most frequently studied?” A sketch of one way such a query might be implemented appears below.
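
The following sketch suggests one way such a query could work against a long-format findings table like the one flattened earlier; the schema and the function name correlates_of are hypothetical, and the unweighted mean is for illustration only (a real summary would weight by sample size).

```r
# Minimal sketch (hypothetical schema): return every correlate of one
# focal concept, sorted by how frequently the pair has been studied.
correlates_of <- function(findings, concept) {
  hits <- findings[findings$var1 == concept | findings$var2 == concept, ]
  hits$other <- ifelse(hits$var1 == concept, hits$var2, hits$var1)
  agg <- aggregate(r ~ other, data = hits,
                   FUN = function(x) c(k = length(x), mean_r = mean(x)))
  agg <- do.call(data.frame, agg)  # unpack the matrix column from aggregate()
  names(agg) <- c("correlate", "k", "mean_r")
  agg[order(-agg$k), ]             # most frequently studied correlates first
}
```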

User accounts and data persistence. Developments are underway that will allow users to save all inputs and outputs pertaining to their queries, including persisted analytic and filter preferences, and to share meta-analytic projects with chosen collaborators or with the public for further refinement and expansion. With public posting come opportunities for community commenting systems (e.g., a user might notify the maintainer of a meta-analysis of an overlooked study). Additionally, users will be able to upload their own datasets and, in so doing, facilitate their own meta-analyses or even assist in expanding the metaBUS database. To date, building the corpus of data contained within metaBUS has relied upon the inclusion of all coefficients from each curated matrix, so that future users need not revisit the matrix to extract neighboring coefficients.

Error-reporting system. Because the majority of the metaBUS database contents are not double-coded (or better), the platform’s high-inference codes (e.g., taxonomic classification) lack evidence of reliability. However, based on a wide variety of checks, this appears to be a minor concern, and we have therefore focused our resources on other capabilities that advance the field rather than on double-coding at this time. Still, to further improve database quality, we are building an “error-flagging” protocol that will allow users to indicate erroneous entries. Entries flagged by users will then be added to a priority rework queue, sketched below. Also possible is some variant of a user “thumbs-up” system wherein users vouch for the accuracy of entries. The general goal is to improve database accuracy over time and, thus, to facilitate accurate, large-scale research summaries. Indeed, the most popular internet search engines rely on user behavior to improve the search process.
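
As a rough illustration of how flags might feed the rework queue (all field names and values hypothetical, not the platform’s actual protocol), the most-flagged entries could simply be surfaced first:

```r
# Minimal sketch (hypothetical field names): user flags feed a priority
# rework queue ordered by how often each entry has been flagged.
flags <- data.frame(
  finding_id = c(10432, 10432, 88210),
  flagged_by = c("user_17", "user_52", "user_03"),
  reason     = c("wrong sign", "wrong sign", "misclassified construct")
)
queue <- aggregate(flagged_by ~ finding_id, data = flags, FUN = length)
names(queue)[2] <- "n_flags"
queue[order(-queue$n_flags), ]  # most-flagged entries are reworked first
```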

Open source code. The platform is expanding to include a user-accessible “code development sandbox” that opens access to the platform’s source code. Inclined users and developers may “fork” the code to build new add-ons (e.g., new visualizations, new analytic techniques) that access the same underlying database of findings. As an example, current forks in development involve (1) the addition of a host of publication bias analyses and (2) moderator analyses based on row-by-row user classifications or on codes native to the metaBUS database. Given that the sandbox relies on R, an increasingly prevalent and flexible open source platform (Leeper, 2014), virtually any R package or code snippet may be loaded into the environment and, eventually, incorporated into the official version of the software.
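
For instance, a publication bias add-on could be little more than a thin wrapper around existing metafor routines. The sketch below assumes a fitted random-effects model res like the one shown earlier; it is an illustration of the idea, not the official metaBUS fork.

```r
# Minimal sketch of a publication bias add-on, assuming a fitted
# random-effects model `res` from metafor (see the earlier sketch).
library(metafor)
funnel(res)    # funnel plot of the observed effects
regtest(res)   # Egger-type regression test for funnel asymmetry
trimfill(res)  # trim-and-fill adjusted summary estimate
```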

Other developments. Our team is currently investigating formal linkages to raw data sources, including during data collection itself, that would open up far-reaching possibilities. Funding is in place to extend the metaBUS approach to mapping, curation, and synthesis into related fields, enabling interdisciplinary advancements. Also under construction are interfaces and linkages to practitioner terminology conventions (Bosco, Steel, et al., 2015), user-modifiable custom taxonomy arrangements, and other features too numerous to list here.

Foreseeable Scientific Goals

Over the past several years, while focusing on deliverables like amassing the database and building software, our team has been brainstorming a myriad of possibilities. In this section, we summarize a few products of those brainstorming sessions. In many cases, the scientific goals are lofty yet entirely possible in terms of technology (e.g., software programming). For several goals, however, large-scale collaboration and community adoption are required.

Goal #1: Establishment of Large-Scale Calibrations

I-O psychologists are not strangers to various calibrating benchmarks used to interpret research (e.g., reliability, Nunnally, 1978; effect size magnitude, Cohen, 1988; model fit, Bentler & Bonett, 1980; Bentler, 1990). More recent benchmarks tend to be backed by large-scale data rather than subject matter experts’ judgments (e.g., effect size magnitudes, Bosco, Aguinis, Singh, Field, & Pierce, 2015; reported reliability values, Köhler, Cortina, Kurtessis, & Gölz, 2015). As the metaBUS database continues to grow, the field has opportunities to answer many additional “calibrating” questions. As an example, I-O psychologists currently know surprisingly little about the relative importance of various factors thought to influence obtained findings (i.e., effect sizes). Indeed, I-O psychologists have long known that measure unreliability attenuates observed effect sizes. But, in relative importance terms, how does the impact of unreliability compare to that of response rate, sample type, and other methodological factors? Currently, I-O psychologists are in the dark with respect to answering these questions.
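
One concrete route to such calibrations is meta-regression across the curated findings. The sketch below is illustrative only: the moderator codes and values are hypothetical, and a real analysis would draw on the full database rather than six made-up rows.

```r
# Minimal sketch (hypothetical moderators and values): a meta-regression
# asking how much methodological factors account for variation in effects.
library(metafor)
dat <- data.frame(
  r       = c(-.21, -.18, -.24, -.15, -.28, -.10),
  n       = c(250, 410, 180, 530, 140, 620),
  rel_x   = c(.78, .85, .70, .91, .72, .88),  # predictor reliability
  student = c(1, 0, 1, 0, 1, 0)               # 1 = student sample
)
dat <- escalc(measure = "ZCOR", ri = r, ni = n, data = dat)
mod <- rma(yi, vi, mods = ~ rel_x + student, data = dat)
summary(mod)  # coefficients and R^2: variance accounted for by moderators
```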

Goal #2: Establishment of “Living” Systematic Reviews

We are not the first to lament that meta-analytic findings are relatively rigid and lack interactivity. Indeed, platforms for “living” systematic reviews that are continually updated as new findings emerge have appeared in psychology (Braver, Thoemmes, & Rosenthal, 2014; Tsuji, Bergmann, & Cristia, 2014), and other large-scale efforts exist in medicine (Higgins & Green, 2008; Ip et al., 2012). However, reflecting on the curation efforts of other psychology teams, our experience to date has taught us that rapid progress in curation relies on at least semi-dedicated labor and, along with it, monetary compensation for that labor.

Whatever the route to database expansion -- whether by contracted expert coders, crowdsourcing, or machine learning, or some combination (all currently under consideration by the metaBUS team) -- and assuming that the curated data are coded accurately, living systematic reviews would be a major accomplishment. Not only would living reviews provide the most up-to-date estimates, they would also allow for substantially enhanced interactivity. Indeed, if the observation that many analysts given one dataset come to vastly different conclusions (Silberzahn et al., 2015) generalizes to meta-analyses, then the interactivity and openness described herein should be considered not a technological luxury but a bare necessity. Finally, as additional benefits of interactivity and openness, researchers will be better equipped to conduct exploratory research and to more seriously consider the efficacy of inductive approaches. A sketch of what a continually updated estimate might look like appears below.
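
One simple way to realize a “living” estimate is cumulative meta-analysis: recomputing the summary each time a new finding arrives. The sketch below uses metafor’s cumul() on made-up values ordered by publication year; it illustrates the concept rather than any particular living-review platform.

```r
# Minimal sketch (made-up values): a "living" summary estimate,
# recomputed after each successive study in publication order.
library(metafor)
dat <- data.frame(
  r    = c(-.25, -.12, -.22, -.17, -.20),
  n    = c(120, 300, 210, 450, 380),
  year = c(2008, 2010, 2012, 2014, 2016)
)
dat <- escalc(measure = "ZCOR", ri = r, ni = n, data = dat)
res <- rma(yi, vi, data = dat)
cumul(res, order = order(dat$year))  # estimate after each added study
```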

Goal #3: Comprehensiveness and Eventual Scientific Consensus

Although any given search of the metaBUS database is likely to be incomplete until all sources of findings are curated (a limitation technically shared with all previously published meta-analyses, given subsequent new research), we are optimistic that “islands” of scientific comprehensiveness and consensus are constructible. Consider that early meta-analyses were primarily of the bivariate form (e.g., cognitive ability with employee performance). Within only a few decades, we began to see increasingly sophisticated forms, such as “bow-tie” meta-analyses (i.e., the antecedents and consequences of some focal concept) as well as meta-analytic structural equation models. We propose that one route to comprehensive curation, and to creating islands of scientific consensus, relies on what might be termed “anchor-style” meta-analyses. In this proposed form of review, virtually all findings pertaining to one focal variable are curated (e.g., Ng & Feldman, 2008, 2010) rather than a handful of relations that speak to one or more theoretical perspectives. For example, one may choose to curate all findings pertaining to employee turnover -- with correlates including personality factors, demographic characteristics, attitudes, and everything else. Over time, with community adoption, data ingestion procedures, and living systematic review technologies, users will be able to summarize all evidence on a topic at a moment’s notice.

Conclusion

In the present paper, we have described how metaBUS adopts an open science approach to foster the development of a large-scale, interactive scientific platform for gleaning scientific insights. We have also described functionality currently in development that will benefit virtually all I-O psychologists, whether of research or practice persuasion. We eagerly invite members of the I-O community to conduct large-scale, calibrating studies that shed light on I-O phenomena and to enhance the platform’s functionality and accessibility. Finally, and perhaps most interesting, we await initial answers to long-standing questions about social science itself (i.e., “science-of-science” work).

References

Aguinis, H., Gottfredson, R. K., & Wright, T. A. (2011). Best‐practice recommendations for estimating interaction effects using meta‐analysis. Journal of Organizational Behavior, 32(8), 1033-1043.

Bentler, P. M. (1990). Comparative fit indexes in structural models. Psychological Bulletin, 107(2), 238-246.

Bentler, P. M., & Bonett, D. G. (1980). Significance tests and goodness of fit in the analysis of covariance structures. Psychological Bulletin, 88(3), 588-606.

Bornmann, L., & Mutz, R. (2015). Growth rates of modern science: A bibliometric analysis based on the number of publications and cited references. Journal of the Association for Information Science and Technology, 66(11), 2215-2222.

Bosco, F. A., Aguinis, H., Singh, K., Field, J. G., & Pierce, C. A. (2015). Correlational effect size benchmarks. Journal of Applied Psychology, 100(2), 431-449. http://dx.doi.org/10.1037/a0038047

Bosco, F. A., Steel, P., Oswald, F. L., Uggerslev, K., & Field, J. G. (2015). Cloud-based meta-analysis to bridge science and practice: Welcome to metaBUS. Personnel Assessment and Decisions, 1(1), 2. http://scholarworks.bgsu.edu/pad/vol1/iss1/2

Braver, S. L., Thoemmes, F. J., & Rosenthal, R. (2014). Continuously cumulating meta-analysis and replicability. Perspectives on Psychological Science, 9(3), 333-342.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.

Colberg, M., Nester, M. A., & Trattner, M. H. (1985). Convergence of the inductive and deductive models in the measurement of reasoning abilities. Journal of Applied Psychology, 70(4), 681-694. doi:10.1037/0021-9010.70.4.681

Elliott, J. H., Turner, T., Clavisi, O., Thomas, J., Higgins, J. P., Mavergames, C., & Gruen, R. L. (2014). Living systematic reviews: An emerging opportunity to narrow the evidence-practice gap. PLoS Medicine, 11(2). doi:10.1371/journal.pmed.1001603

Higgins, J. P., & Green, S. (Eds.). (2008). Cochrane handbook for systematic reviews of interventions (Vol. 5). Chichester: Wiley-Blackwell.

Ip, S., Hadar, N., Keefe, S., Parkin, C., Iovin, R., Balk, E. M., & Lau, J. (2012). A web-based archive of systematic review data. Systematic Reviews, 1(1), 15. doi:10.1186/2046-4053-1-15

Kepes, S., Banks, G. C., McDaniel, M., & Whetzel, D. L. (2012). Publication bias in the organizational sciences. Organizational Research Methods, 15(4), 624-662.

Kepes, S., & McDaniel, M. A. (2013). How trustworthy is the scientific literature in industrial and organizational psychology? Industrial and Organizational Psychology, 6(3), 252-268.

Kepes, S., & McDaniel, M. A. (2015). The validity of conscientiousness is overestimated in the prediction of job performance. PLoS ONE, 10(10). doi:10.1371/journal.pone.0141468

Kepes, S., McDaniel, M. A., Brannick, M. T., & Banks, G. C. (2013). Meta-analytic reviews in the organizational sciences: Two meta-analytic schools on the way to MARS (the meta-analytic reporting standards). Journal of Business and Psychology, 28(2), 123-143.

Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2(3), 196-217. doi:10.1207/s15327957pspr0203_4

Klein, R. A., et al. (2014). Investigating variation in replicability: A “many labs” replication project. Social Psychology, 45(3), 142-152.

Köhler, T., Cortina, J. M., Kurtessis, J. N., & Gölz, M. (2015). Are we correcting correctly? Interdependence of reliabilities in meta-analysis. Organizational Research Methods, 18(3), 355-428.

Leeper, T. J. (2014). Archiving reproducible research with R and Dataverse. The R Journal, 6(1), 151-158.

Ng, T. W., & Feldman, D. C. (2008). The relationship of age to ten dimensions of job performance. Journal of Applied Psychology, 93(2), 392-423. doi:10.1037/0021-9010.93.2.392.

Ng, T. W., & Feldman, D. C. (2010). The relationships of age with job attitudes: A meta‐analysis. Personnel Psychology, 63(3), 677-718.

Nunnally, J. (1978). Psychometric theory (2nd ed.). New York: McGraw-Hill.

Schmidt, F. L. (1992). What do data really mean? Research findings, meta-analysis, and cumulative knowledge in psychology. American Psychologist, 47(10), 1173-1181.

Shojania, K. G., Sampson, M., Ansari, M. T., Ji, J., Doucette, S., & Moher, D. (2007). How quickly do systematic reviews go out of date? A survival analysis. Annals of Internal Medicine, 147(4), 224-233.

Silberzahn, R., Uhlmann, E. L., Martin, D. P., Anselmi, P., Aust, F., Awtrey, E. C., … Nosek, B. A. (2015, August 20). Many analysts, one dataset: Making transparent how variations in analytical choices affect results. Retrieved from osf.io/gvm2z

Tsuji, S., Bergmann, C., & Cristia, A. (2014). Community-augmented meta-analyses toward cumulative data assessment. Perspectives on Psychological Science, 9(6), 661-665.

 
