Research Productivity of I-O Psychology Doctoral Programs in North America
Joy Oliver, Carrie A. Blair, C. Allen Gorman, and David J. Woehr
University of Tennessee, Knoxville
Editor's Note: Research results and opinions expressed in this paper are those of the writers and do not reflect an official position of the TIP editor, the Society for Industrial and Organizational Psychology, the American Psychological Association, or the American Psychological Society.
Over the years, various sources have examined the level of research productivity associated with industrial-organizational (I-O) psychology doctoral programs (Gibby, Reeve, Grauer, Mohr, & Zickar, 2002; Howard, Maxwell, Berra, & Sternitzke, 1985; Levine, 1990; Payne, Succa, Maxey, & Bolton, 2001; Surrette, 1989; Winter, Healy, & Svyantek, 1995). Although certainly not the only factor contributing to overall program quality, the research productivity of individual faculty and of specific programs is, and likely will continue to be, a critical component of any program evaluation. Given the emphasis placed on scholarly research within the field of I-O psychology, as well as in academia generally, it is important to have as complete a picture as possible, not only of the levels of productivity associated with specific programs but also of the range and level of productivity across programs.
Both the criteria used and the level of inclusiveness have varied considerably across previous examinations of research productivity, and research productivity itself has been operationalized in a range of ways. Payne, Succa, Maxey, and Bolton (2001) examined 65 academic programs based on student representation at conferences. Winter et al. (1995) evaluated the research productivity of 42 I-O psychology doctoral programs based on faculty contributions to the top five I-O-oriented journals from 1990 to 1994. Gibby et al. (2002) updated and extended Winter et al.'s approach by evaluating research productivity with respect to five categories reflecting a broader set of publication outlets: (a) publications in the top five I-O journals from 1996 to 2000, (b) publications in the top 10 I-O journals from 1996 to 2000, (c) publications in the top 10 I-O journals over the entire career of program faculty members, (d) total research output from 1996 to 2000, and (e) total research output over the entire career of faculty members. Although the criterion measures used by Gibby et al. are substantially more comprehensive than those included by Winter et al., relatively few I-O programs were examined with respect to these criteria. Specifically, data for only 20 programs were provided for all measures except publications in the top five I-O journals (for which 41 programs were examined).
The purpose of this paper is to provide a relatively comprehensive examination of I-O doctoral program research productivity. Gibby et al.'s investigation based on journal publications ended with the year 2000. We present similar data ending with the year 2003. In line with Gibby et al., we also provide data based on the total career output of the individual faculty members associated with each program. More importantly, Gibby et al. provided productivity information for only a subset of I-O programs. Our goal in the present study is to provide similar data for all current I-O doctoral programs in North America as listed on the SIOP Web site. Consequently, whereas previous investigations of research productivity have focused on the top programs, our review allows an examination not only of productivity levels but also of the range and distribution of productivity across all programs.
To determine the set of programs to be included in our investigation, we first visited the Society for Industrial-Organizational Psychology (SIOP) Web site to obtain a current list of all doctoral I-O psychology programs. The listing of doctorate-level programs included 94 programs. Although we did not necessarily confine our search to programs in psychology departments, we did restrict our investigation to programs offering doctoral degrees in industrial and/or organizational psychology. Therefore, although we did not include programs offering degrees in human resource management, organizational behavior, industrial relations, or human factors, we did include industrial and/or organizational psychology programs based in management departments and business schools (as long as the degree offered was in psychology), as well as programs offering doctor of psychology (PsyD) degrees. Based on these criteria, we identified 60 programs for inclusion in the analyses.
Once we compiled the list of relevant programs, we obtained a list of current core faculty members from departmental chairs or contacts. A few programs did not respond to several attempts at contact; for these programs, we used the core faculty indicated on their Web sites. Only those listed as core or primary faculty as of April 1, 2004 were included for each program. Furthermore, only faculty members directly associated with the industrial and/or organizational psychology program were included (we did not include adjunct faculty based in other departments). In addition, although Gibby and colleagues (2002) included student output as well as faculty output in calculating program productivity, they did so only for the current research categories (i.e., only those categories focusing on the past 5 years). We did not include student output in any of our calculations, focusing instead solely on faculty research productivity. Readers interested in previous studies of student productivity should see Payne et al. (2001) or Surrette (1989).
Next, faculty names were entered into the PsycINFO database and searched for publications. In instances in which multiple authors were associated with the same name (e.g., a search of PsycINFO for Kevin R. Murphy identifies several different individuals) and in instances in which the same author appeared to publish under alternate names (e.g., a maiden and a married name), we attempted to verify the information in PsycINFO by contacting the individual. Productivity calculations were limited to refereed articles, book chapters, and edited books listed in PsycINFO as of January 1, 2004. Dissertations, book reviews, obituaries, technical reports, letters to editors, conference submissions, and errata were excluded from the calculations.
We utilized criterion categories similar to those used by Gibby et al. (2002). Specifically, we examined programs with respect to publications in the top 10 I-O journals for both the past 5 years (i.e., 1999–2003) and the total career. We also calculated total research productivity for both the past 5 years and the total career. In line with Gibby et al., we used Zickar and Highhouse's (2001) journal rankings as the basis for identifying the top 10 I-O journals. Journals on this list included Journal of Applied Psychology, Personnel Psychology, Academy of Management Journal, Academy of Management Review, Organizational Behavior and Human Decision Processes, Administrative Science Quarterly, Journal of Management, Journal of Organizational Behavior, Organizational Research Methods, and Journal of Vocational Behavior.
Similar to previous investigations of program research productivity (Gibby et al., 2002; Howard et al., 1985; Winter et al., 1995), we assigned each individual points for each article using the formula presented by Howard, Cole, and Maxwell (1987). This formula assigns points as follows:
credit = 1.5^(n - i) / [Σ_{i=1}^{n} 1.5^(i - 1)]
where n is the total number of authors on the published research and i is the author of interest's position among the total set of authors. Thus, based on the formula, a sole author receives 1 point, the first of two authors receives .6 points, the second of two authors receives .4 points, and so forth. To obtain a score for a given program, points were summed across all faculty affiliated with the program. This formula was used as the basis for scores in each of the four productivity categories. We also calculated an overall research productivity score for each program representing a summary aggregation of the four productivity categories. Given the different ranges across productivity categories, we first converted each score (i.e., point total) to a z-score within category (e.g., the mean points across programs for top 10 journal publications from 1999–2003 was 3.03 [SD = 3.25], and these values were used to calculate z-scores for each program). Next, we calculated the average z-score across the four categories for each program. We then calculated z-scores for this summary category and finally converted these z-scores to T scores (i.e., M = 50, SD = 10) in order to eliminate negative values.
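The scoring procedure above can be sketched briefly in Python. The function names are ours, and the T-score conversion is shown for a single list of point totals rather than the full four-category averaging described in the text:

```python
import statistics

def author_credit(n, i):
    """Howard, Cole, and Maxwell's (1987) credit for the i-th of n authors:
    1.5^(n - i) divided by the sum of 1.5^(k - 1) over k = 1..n."""
    return 1.5 ** (n - i) / sum(1.5 ** (k - 1) for k in range(1, n + 1))

# Matches the worked values in the text: 1 point for a sole author,
# .6 and .4 points for the first and second of two authors.
print(author_credit(1, 1))  # 1.0
print(author_credit(2, 1))  # 0.6
print(author_credit(2, 2))  # 0.4

def t_scores(points):
    """Convert program point totals to T scores (M = 50, SD = 10)."""
    m, sd = statistics.mean(points), statistics.stdev(points)
    return [50 + 10 * (p - m) / sd for p in points]
```

A program's score in a category is then the sum of `author_credit` values across all articles by all of its core faculty.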
Table 1 presents the publication-based point totals by program for each of the four productivity categories, the overall productivity index, and the number of faculty associated with each program. Table 2 presents descriptive statistics and correlations between each of the five productivity scores as well as number of faculty. There are a total of 310 faculty across the 60 programs listed in Table 1. The mean number of faculty across programs was 5.17 (SD = 2.20) and the modal number of faculty was four.
Table 1
Research Productivity Indices by Program and Category

|University | # Faculty | Top Ten 1999–2003 | Top Ten Career | Total Output 1999–2003 | Total Output Career | Overall Productivity Index
|Alliant International University-Los Angeles
|Alliant International University-San Francisco
|Alliant International University-San Diego
|Baruch College, City University of New York
|Bowling Green State University
|Carlos Albizu University-San Juan
|Central Michigan University
|Claremont Graduate University
|Colorado State University
|Florida Institute of Technology
|Florida International University
|George Mason University
|George Washington University
|Georgia Institute of Technology
|Illinois Institute of Technology
|Kansas State University
|Louisiana State University
|Michigan State University
|New York University
|North Carolina State University
|Old Dominion University
|Pennsylvania State University
|Portland State University
|Rutgers-The State University of New Jersey
|Saint Louis University
|Teachers College, Columbia University
|Texas A&M University
|University at Albany, SUNY
|University of Akron
|University of Calgary
|University of California, Berkeley
|University of Central Florida
|University of Connecticut
|University of Georgia
|University of Houston
|University of Illinois at Urbana-Champaign
|University of Maryland
|University of Memphis
|University of Michigan
|University of Minnesota
|University of Missouri-St Louis
|University of Nebraska-Omaha
|University of North Texas
|University of Oklahoma
|University of South Florida
|University of Tennessee, Knoxville
|University of Tulsa
|University of Waterloo
|University of Western Ontario
|Virginia Tech University
|Wayne State University
|Western Michigan University
|Wright State University
Note. Top Ten 1999–2003 is program output in the top 10 I-O psychology journals (as identified by Zickar & Highhouse, 2001) published during 1999–2003; Top Ten Career is program output in the top 10 journals during the entire publishing career of each faculty member; Total Output 1999–2003 is total program output during 1999–2003; Total Output Career is total program output during the entire publishing career of each faculty member; Overall Productivity Index is the T-scored aggregation of the four productivity indices.
Not surprisingly, there is a high level of consistency across the five productivity scores (the mean correlation among the indices was .89). Examination of the aggregate overall productivity index indicates that, although the mean overall research productivity score across programs was (by definition) 50 (SD = 10), the distribution is positively skewed (skewness = .178), with the majority of scores falling below the mean. Specifically, only six programs (10%) fall one or more standard deviations above the mean, 17 programs (28%) fall between the mean and one standard deviation above the mean, and 37 programs (62%) fall below the mean.
It is important to stress that the overall productivity score is an arbitrary metric that is difficult to interpret in any absolute sense. The other four category scores correspond to the number of publications (adjusted for number of authors) associated with each program. Examination of the publication points for the top 10 I-O journals for the 5-year period from 1999 to 2003 reflects a total of 181.98 points across the 60 programs, or an average of 3.03 (SD = 3.25) per program. To put this in context, we calculated the total number of possible points. Specifically, we obtained a count of the total number of articles published (excluding book reviews, editorials, errata, etc.) in the top 10 I-O journals from 1999–2003. This count indicated that 2,004 articles were published in the top 10 I-O journals over that period. The 310 faculty members associated with the 60 I-O psychology doctoral programs presented in Table 1 thus accounted for slightly less than 10% of the total number of articles published.
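A useful property of the Howard et al. (1987) formula is that the credits across all authors of a single article sum to 1, so 2,004 articles correspond to 2,004 available points. A short check of that property, and of the share reported above (the 181.98 and 2,004 figures are the ones in the text):

```python
def author_credit(n, i):
    # Howard, Cole, & Maxwell (1987) formula used throughout the paper.
    return 1.5 ** (n - i) / sum(1.5 ** (k - 1) for k in range(1, n + 1))

# Credits across the authors of any one article sum to 1,
# so program point totals can be read as article-equivalents.
for n in range(1, 8):
    assert abs(sum(author_credit(n, i) for i in range(1, n + 1)) - 1.0) < 1e-9

share = 181.98 / 2004  # points earned by the 60 programs / articles published
print(f"{share:.1%}")  # → 9.1%
```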
Finally, as evident from Table 2, program-level research productivity is, to some extent, a function of the number of faculty associated with each program. Not surprisingly, the more faculty associated with a program, the higher the level of overall research productivity. However, this relationship is not as strong as might be expected; the mean correlation between the number of faculty and each of the productivity indices was .48.
Table 2
Descriptive Statistics by Category

|Variable | Mean | SD | 1 | 2 | 3 | 4 | 5 | 6
|1. # of Faculty
|2. Top Ten 1999–2003
|3. Top Ten Career
|4. Total Output 1999–2003
|5. Total Output Career
|6. Overall Productivity
In the interest of increasing our knowledge about research productivity at industrial and/or organizational psychology programs in North America, we compiled research productivity statistics of I-O doctoral programs listed on the SIOP Web site. Although this study is similar to previous studies evaluating research productivity of I-O programs, our focus was slightly different. That is, rather than focusing only on the top programs, we sought to examine data with respect to the full spectrum of I-O doctoral programs. This allowed us to examine the range and distribution of productivity across all programs.
With respect to the overall index of research productivity, we recognize the urge to simply consider the relative ranking of individual programs. Are we in the top 10? Top 15? However, we believe that it is important to consider the whole picture with respect to research productivity across programs. Our results indicate considerable similarity across programs with respect to faculty size and overall research productivity. The differences between programs are often quite small, and valuable information is lost when only considering the rank position of a program.
It is certainly important to consider the faculty composition of any given program. Again, there is a relationship between the number of faculty in a program and overall research productivity; thus, program output appropriately reflects the full faculty. However, one should not lose sight of the potential for high-quality programs with a small number of I-O faculty. There is also considerable mobility among faculty, and the overall faculty composition of any given program may change substantially from one year to the next. Given the relatively small number of faculty typical of most I-O programs, as well as the overall levels of productivity reflected in our data, it is obvious that the addition or loss of one highly productive faculty member may have a tremendous impact on a program's standing. A related caveat of this approach to assessing program research productivity is that it is based on the aggregated productivity of the current faculty. As such, it is not a reflection of all the research conducted within a given program. If, for example, an active researcher joined a particular program at the start of the 2002–2003 academic year, their research record (for the preceding 5-year period as well as their overall career) would be associated with their current program affiliation even if the research had actually been conducted at a different location.
It is also quite interesting to examine the extent to which faculty in I-O doctoral programs contributed to the relevant literature over the 5-year period from 1999 to 2003. As noted, the contribution of faculty in I-O doctoral programs accounted for just under 10% of the total articles published in the top 10 I-O journals. At first glance, this number may seem surprisingly low. There are certainly student authors or coauthors who were not included in our data, as well as contributions from I-O program faculty outside of North America. Even so, it is important to note that the journals considered to be the top I-O journals are not exclusively I-O publication outlets. Although the Academy of Management Journal and Review, Administrative Science Quarterly, and the Journal of Management are included in the top I-O journal list, they are broad management journals reflecting very diverse content areas. Given the small number of I-O doctoral programs and faculty relative to traditional management programs, it is not surprising that I-O psychology program faculty represent a small minority of contributors. Similar arguments could be made with respect to journals like the Journal of Vocational Behavior, Organizational Research Methods, and the Journal of Organizational Behavior. Contribution rates to journals like the Journal of Applied Psychology, Personnel Psychology, and Organizational Behavior and Human Decision Processes will certainly be higher, but again, contributions to these journals are not exclusive to I-O psychology. In short, our data support what all I-O researchers know: the top I-O journals represent a highly competitive academic arena that extends well beyond the bounds of traditional I-O doctoral programs.
In addition to the levels of overall research productivity associated with each program, there is also considerable information to be gained by considering both the scores in each category individually and the relationships among the different categories. An examination of the ratio of points in the top 10 I-O journals 1999–2003 category to points in the total output 1999–2003 category indicates a fair degree of consistency across programs, with an average ratio of 25%. There are, however, exceptions. For example, although the University of Michigan falls above the mean with respect to overall productivity, its ratio of top 10 I-O journals 1999–2003 to total output 1999–2003 is just 8.7%. This suggests that quite a bit of faculty research is directed at outlets other than the traditional top 10 I-O journals and may represent areas of interest other than those considered mainstream I-O research. Similarly, a relatively large ratio of either top 10 1999–2003 to top 10 career or total output 1999–2003 to total output career may be indicative of a more junior faculty. To illustrate, Wayne State University has a top 10 1999–2003 to top 10 career ratio of 63% (compared with an average of 30%).
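The two ratio diagnostics described above are simple to compute from the Table 1 columns. The program names and point totals below are hypothetical, chosen only to produce ratios of the magnitudes discussed in the text:

```python
# Hypothetical point totals; only the two ratio diagnostics follow the text.
programs = {
    "Program A": {"top10_9903": 2.0, "total_9903": 23.0, "top10_career": 8.0},
    "Program B": {"top10_9903": 5.0, "total_9903": 20.0, "top10_career": 8.0},
}

for name, p in programs.items():
    # Share of recent output placed in the top 10 I-O journals (average ~25%);
    # a low value suggests work aimed at outlets beyond the top 10 list.
    focus = p["top10_9903"] / p["total_9903"]
    # Share of career top 10 output produced in the last 5 years (average ~30%);
    # a high value may indicate a relatively junior faculty.
    recency = p["top10_9903"] / p["top10_career"]
    print(f"{name}: focus = {focus:.1%}, recency = {recency:.1%}")
```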
In summary, although previous investigations have tracked faculty research productivity as an indicator of the quality of the top I-O programs, our investigation attempted to gauge the full range of research productivity across current doctoral programs. Our goal was to provide a relatively comprehensive (i.e., inclusive) picture of the level of publication-based productivity occurring at I-O psychology doctoral programs in North America as well as an indication of the standing of individual programs.
Please send correspondence to David J. Woehr, PhD, University of Tennessee, Knoxville, I-O Psychology Program, Department of Management, Knoxville, Tennessee, e-mail: firstname.lastname@example.org.
Gibby, R. E., Reeve, C. L., Grauer, E., Mohr, D., & Zickar, M. J. (2002). The top I-O psychology doctoral programs of North America. The Industrial-Organizational Psychologist, 39(4), 17–25.
Howard, G. S., Cole, D. A., & Maxwell, S. E. (1987). Research productivity in psychology based on publication in the journals of the American Psychological Association. American Psychologist, 42, 975–986.
Howard, G. S., Maxwell, S. E., Berra, S. M., & Sternitzke (1985). Institutional research productivity in industrial/organizational psychology. Journal of Applied Psychology, 70, 233–236.
Levine, E. L. (1990). Institutional and individual research productivity in I-O psychology during the 1980s. The Industrial-Organizational Psychologist, 27(3), 27–29.
Payne, S. C., Succa, C. A., Maxey, T. D., & Bolton, K. R. (2001). Institutional representation in the SIOP conference program: 1986–2000. The Industrial-Organizational Psychologist, 39(1), 53–60.
Surrette, M. A. (1989). Ranking I-O graduate programs on the basis of student research presentations. The Industrial-Organizational Psychologist, 26(3), 41–44.
Winter, J. L., Healy, M. C., & Svyantek, D. J. (1995). North America's top I-O psychology doctoral programs: U.S. News and World Report revisited. The Industrial-Organizational Psychologist, 33(1), 54–58.
Zickar, M. J., & Highhouse, S. (2001). Measuring prestige of journals in industrial-organizational psychology. The Industrial-Organizational Psychologist, 38(4), 29–36.
July 2005