
A Quantitative Examination of Trends in I-O Psychology 2001–2005

Richard N. Landers
University of Minnesota, Twin Cities

In I-O psychology, as would be expected in any self-aware area of study, we sometimes wonder what direction our field is taking and who is leading it there. Which work appears most often in the I-O literature? What topics receive the most attention? Which papers have made the most impact on our field? Originally, answering these questions was a difficult and time-consuming task, given the quickly changing nature of our field and the ever-growing mass of data to sift through. Using traditional data-gathering techniques, counting the number of times a paper or author was cited in prior empirical research required more in-depth data collection and processing than even the largest-k meta-analysis published today. As newer tools became available, such as the Social Science Citation Index (SSCI, a part of Web of Science), this became substantially easier. The SSCI allowed researchers to quickly determine how many times they or any one of their works had been cited in the social sciences. It did not, however, allow much precision; because SSCI summarized works cited by the social science literature as a whole, it was difficult to determine, for example, the works and authors most cited by I-O psychology in particular. Determining the key citations for a particular field of study, as reflected by that field's top journals, could provide more accurate information on the direction that field is taking. This paper describes and applies a new, more accessible quantitative technique for analyzing past scholarly literature that makes this kind of targeted investigation much faster and simpler.

For determining influence and impact in I-O, using SSCI alone is flawed for two specific reasons. First, because the tool surveys the social science literature as a whole, it is impossible to distinguish whether a citation appears in I-O psychology or in a different social scientific field. For example, Barrick and Mount's (1991) widely cited study on the relationship between the Big Five personality constructs and job performance is commonly cited outside of I-O as early evidence of the renewed interest in and "practical application" of the Big Five personality constructs. Although we can anecdotally discuss the great impact Barrick, Mount, and their work have had within I-O, quantitative evidence is surprisingly lacking. And although SSCI offers some degree of search refinement by area of study, precise control is difficult to exert because the basis for its area-of-study classifications is never made explicit.

The second and larger issue is the validity of equating influence and impact with the number of publications an author has produced. Quantity does not necessarily indicate quality: authors who choose to publish a great number of articles in less selective journals would appear more influential by this method, when the reality might be just the opposite. In this context, this amounts to a sort of criterion contamination, which should clearly be avoided if possible.

Because of both of these issues, replacing a paper's "number of citations appearing within the social science literature" with its "number of citations within the I-O literature" would yield a substantially more construct-valid measure of influence and impact within I-O. Instead of examining the number of times anyone has cited a work by an I-O author, determining the number of times specific works have been cited by authors within the I-O literature would produce a much more informative list. Past efforts in this vein have been extremely limited. Although I imagine most I-O psychologists could produce an impromptu list of the most "influential" papers in our field, there has been no substantial quantitative examination of this question. This is an unfortunate gap in the self-awareness that I-O needs in order to continue to grow. Although individual authors might make an indirect impact, the papers themselves contain the theories that actually influence future research, and understanding which papers are most influential is central to understanding how our field is developing.

Method

To determine rankings of the most cited articles in I-O, a PsycINFO search was conducted for every article published between 2001 and 2005, inclusive, in the 10 most prestigious I-O journals as identified by Zickar and Highhouse (2001): Academy of Management Journal, Academy of Management Review, Administrative Science Quarterly, Journal of Applied Psychology, Journal of Management, Journal of Organizational Behavior, Journal of Vocational Behavior, Organizational Behavior and Human Decision Processes, Organizational Research Methods, and Personnel Psychology. This journal list was chosen as a balance: collecting every journal with any I-O content would include too many citations irrelevant to I-O, whereas collecting only the top two or three journals would exclude too many. The years 2001 to 2005 were chosen for three reasons. First, as of this writing, PsycINFO has not yet fully indexed all journals published in 2006 or 2007, leaving gaps in the data from those years. Second, full citation information is not recorded in PsycINFO for most of these journals before 2000, which would create large systematic gaps if reference data were harvested from before that time. Third, a 5-year window creates an easily referenced time period; new 5-year segments can be analyzed as they accumulate, allowing future longitudinal examination of research trends. It is also important to note that the question being tracked is not "which papers published between 2001 and 2005 are most cited?" but rather "what do papers published between 2001 and 2005 cite?"; the unit of analysis is the reference lists of these articles.

All citations from all articles in these journals from 2001 to 2005 indexed by PsycINFO were extracted and entered into a new, freely available computer program (The Research Explicator for oNline Databases [TREND]) designed to parse this kind of data into a database (Landers, 2007). From 2,636 articles, this produced a final dataset containing 128,425 citations, 72,675 of which were unique. To combat authors' misspelled and malformed citations, the "moderate assumptions" option in TREND was used, which matches citations by ignoring case, eliminating words fewer than three characters long, eliminating subtitles beginning with a colon, eliminating embedded Web addresses, and eliminating all numbers. The remaining letters are then compared against previously extracted and reduced citations to determine which should be counted together, even when authors formatted the citations incorrectly (see Landers, 2008, for a discussion of these issues). This does not catch every instance, so a visual check of the dataset was required to match any remaining malformed citations; this was done by sorting the list alphabetically and comparing nearby entries. The process is not exact, however, and final numbers should be interpreted as only an approximate rank ordering.
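To make the matching rules concrete, below is a minimal sketch of how such a "moderate assumptions" normalization might be implemented. This is not TREND's actual code; the function names, the interpretation of the subtitle rule (dropping text between a colon and the next period), and the sample strings are my own assumptions.

```python
import re
from collections import defaultdict

def normalize_citation(citation: str) -> str:
    """Reduce a raw citation to a comparison key, loosely following the
    'moderate assumptions' rules described above: ignore case, drop
    embedded web addresses, drop subtitles introduced by a colon, drop
    all numbers, and drop words shorter than three characters."""
    text = citation.lower()                              # ignore case
    text = re.sub(r"(?:https?://|www\.)\S+", " ", text)  # embedded URLs
    text = re.sub(r":[^.]*\.", ".", text)                # subtitle after a colon (assumed scope)
    text = re.sub(r"\d+", " ", text)                     # all numbers
    words = re.findall(r"[a-z]+", text)
    return " ".join(w for w in words if len(w) >= 3)     # drop short words

def group_citations(raw_citations):
    """Group raw citation strings sharing a normalized key, so that
    differently formatted copies of the same work are counted together."""
    groups = defaultdict(list)
    for raw in raw_citations:
        groups[normalize_citation(raw)].append(raw)
    return groups

if __name__ == "__main__":
    # Two differently formatted copies of the same reference.
    refs = [
        "Barrick, M. R., & Mount, M. K. (1991). The Big Five personality "
        "dimensions and job performance: A meta-analysis. "
        "Personnel Psychology, 44, 1-26.",
        "barrick, m.r. & mount, m.k. (1991) the big five personality "
        "dimensions and job performance. personnel psychology, 44, 1-26",
    ]
    for key, variants in group_citations(refs).items():
        print(len(variants), "->", key)  # both map to a single key
```

Note that under these particular rules word order still matters, so a reordering such as "mediator-moderator" for "moderator-mediator" would not be merged automatically; this is one reason the visual check described above remains necessary.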

From initial extractions of the data, it became quite apparent that lumping all citations into a single list would mask the relative importance of many articles within their respective subfields. For example, because topics in organizational psychology are more popular (and more commonly published) than topics in industrial psychology, a top 20 list drawn from all citations combined would contain only three industrial psychology entries. Thus, for added clarity, five categories into which citations might fall were defined: industrial psychology, organizational psychology, equally contributing to both industrial and organizational psychology, methodology and statistics, and other topics/unknown. Three volunteers from the SIOP graduate student discussion list were then recruited to categorize the top 250 entries in the overall list, with the goal of extracting a top 20 list for each category. Because these were categorical data with more than two raters, interrater agreement was measured with Fleiss' kappa; the obtained value of .49 indicates a moderate level of agreement. All three raters agreed for 127 of the 250 citations (50.8%), and at least two raters agreed for 225 (90.0%). Final categorical assignments were made by majority opinion; in the 25 cases where all three raters disagreed, the final assignment was made by the author.
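For reference, Fleiss' kappa for this design (a fixed number of raters, each assigning one of several categories to every item) can be computed as sketched below. The formula is standard; the rater data in the example are hypothetical, and only the five category labels come from the study.

```python
from collections import Counter

CATEGORIES = ["industrial", "organizational", "equally I-O",
              "methods/statistics", "other/unknown"]

def fleiss_kappa(ratings, categories=CATEGORIES):
    """Fleiss' kappa for N items, each rated by the same n raters.
    `ratings` is a list of per-item lists of category labels."""
    n_items = len(ratings)
    n_raters = len(ratings[0])
    counts = [Counter(item) for item in ratings]  # n_ij per item
    # Mean per-item agreement: P_i = (sum_j n_ij^2 - n) / (n (n - 1))
    p_bar = sum(
        (sum(c * c for c in cnt.values()) - n_raters)
        / (n_raters * (n_raters - 1))
        for cnt in counts
    ) / n_items
    # Chance agreement from the marginal category proportions
    total = n_items * n_raters
    p_j = [sum(cnt[cat] for cnt in counts) / total for cat in categories]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

if __name__ == "__main__":
    # Three hypothetical items, each categorized by three raters.
    example = [
        ["industrial", "industrial", "industrial"],
        ["organizational", "organizational", "methods/statistics"],
        ["equally I-O", "other/unknown", "organizational"],
    ]
    print(round(fleiss_kappa(example), 2))
```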

Results

The top 20 entries per major category were extracted from these lists and placed in Tables 1 (industrial), 2 (organizational), 3 (equally industrial-organizational), and 4 (methodological and statistical). Among the top 250 citations, 38 (15.2%) were industrial, 135 (54.0%) were organizational, 22 (8.8%) were equally I-O, 39 (15.6%) were methodological and statistical, and 16 (6.4%) were other/unknown. For a more complete list of the top 500 citations contained in the I-O literature between 2001 and 2005, see http://rlanders.filedrawer.org/iotrends.html. Among the top 20 lists (96 citations), all three raters agreed for 44 of the 96 citations (45.8%), and at least two raters agreed on 86 (89.6%).

Discussion

A peculiar finding emerged from this process, and although peripheral to my main purpose here, it is interesting enough to mention: The most miscited article in the I-O literature is Baron and Kenny's (1986) work on the moderator-mediator variable distinction, which is quite often cited as the "mediator-moderator" distinction or any of a hundred slight deviations. Of 404 citations extracted, only 261 were formed correctly. This variation in citation formatting likely exists in the psychological literature as a whole (for a full discussion of the development of TREND in relation to the difficulties posed by miscitations of Baron and Kenny, see Landers, 2008). Minor errors of this sort likely occur for every entry in the list; they add error to the rankings to an unknown degree (verifying this would require a by-hand check of all 128,425 citations against one another) and call the accuracy of the ranks into question, especially where entries differ by only one or two citations. Because of this error, absolute position in the rankings should be interpreted with some caution. While it may be safe to say that Baron and Kenny's (1986) work is the most cited article in the I-O literature (see Table 4), the relative importance of the top two equally I-O articles (see Table 3) is a more difficult call to make.

Several interesting trends arise in the tables. Methodological articles would dominate an overall top 20 list, as would be expected; research methods and statistics are common to both I and O. Individual differences pervade the I, O, and I-O lists, reflecting the high degree of attention our field as a whole currently pays to such topics. This is perhaps best reflected in the fact that Barrick and Mount's (1991) work is the most cited article that itself lies within the I-O literature. Interestingly, I and I-O citations (Tables 1 and 3) are predominantly journal articles, while O and methods citations (Tables 2 and 4) are predominantly books. Hypothesizing as to the cause of this difference is outside the scope of this paper, but several humorous anecdotes about "I people" and "O people" can be imagined.

The primary contributions of this paper, of course, are the lists themselves. Not only do they provide a convenient reading list for newcomers to modern I-O research, but they also demonstrate the utility of the TREND tool. A similar procedure could be applied to any research area, extracting the most highly cited (and theoretically influential) papers quickly and easily and giving the I-O scientist an extremely useful starting point for future investigation. Say, for example, the rapidly growing field of e-learning drew a researcher's interest. That researcher could run a PsycINFO search for "e-learning" and run TREND on the results, quickly extracting the most highly cited articles and books within the e-learning literature, along with several other summaries, including author prevalence, years of publication, source, and keywords used. Which journal most commonly carries this topic? What is the trend in publication frequency over time? Is the area on the rise, or are publication rates slowing down? These questions and many more can be answered quickly and easily. The tool thus also holds great potential for the practitioner, as I-O psychologists in the field often need to extend themselves into areas of I-O that they have not studied for some time; the software could be used to search any particular topic, quickly extracting the articles most relevant to the practitioner's immediate problem. TREND further supports the scientist-practitioner by breaking down barriers to interdisciplinary work, allowing easy early investigation into topics in sister fields such as human resource management or social psychology. It is my hope that readers not only find the extracted lists useful but also take advantage of this new software themselves, discovering applications beyond what was done here. And of course, deciding on advanced course reading lists has never been easier.
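As an illustration of the kind of summary described above, the sketch below tallies publication counts per year and per journal from a parsed set of search results. TREND itself is an interactive tool, so this is only a stand-in under my own assumptions; the records shown are hypothetical.

```python
from collections import Counter

# Hypothetical (year, journal) pairs parsed from an exported search;
# a real analysis would read these from TREND's output or a database export.
records = [
    (2003, "Journal of Applied Psychology"),
    (2004, "Personnel Psychology"),
    (2004, "Journal of Applied Psychology"),
    (2005, "Journal of Applied Psychology"),
]

per_year = Counter(year for year, _ in records)
per_journal = Counter(journal for _, journal in records)

# Which journal most commonly carries the topic, and is it on the rise?
print(per_journal.most_common(1))
print(sorted(per_year.items()))  # publication frequency over time
```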

References

     Baron, R. M., & Kenny, D. A. (1986). The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51, 1173–1182.
     Barrick, M. R., & Mount, M. K. (1991). The Big Five personality dimensions and job performance: A meta-analysis. Personnel Psychology, 44, 1–26.
     Landers, R. N. (2007). The Research Explicator for oNline Databases (TREND) v1.21 [Computer software]. Retrieved September 1, 2007, from http://rlanders.filedrawer.org/explicator.html
     Landers, R. N. (2008). TREND: A tool for rapid online research literature analysis and quantification. Behavior Research Methods, 40, 665–672.
     Zickar, M. J., & Highhouse, S. (2001). Measuring prestige of journals in industrial-organizational psychology. The Industrial-Organizational Psychologist, 38(4), 29–36.