
The 2011 SIOP I-O Psychology Graduate Program Benchmarking Survey: Overview and Selected Norms

Robert P. Tett, Cameron Brown, Benjamin Walser,
Daniel V. Simonet, and Jonathan B. Davis
University of Tulsa
Scott Tonidandel
Davidson College
Michelle Hebl
Rice University

The first doctorate in industrial-organizational (I-O) psychology was awarded by Brown University to Lillian Gilbreth in 1915. Since that time, graduate education in I-O psychology has blossomed to over 200 master’s and doctoral programs in the U.S. alone. At the core of all those programs is a joint focus on the science and practice of psychology in the workplace. Beyond that most basic and generic orientation, I-O programs vary substantially on many fronts, including admissions standards, course offerings, thesis and dissertation procedures, and resources. For many years, I-O program directors and associated faculty have wondered how their own programs compare to others on such features. They want to know whether the way they select and train I-O psychologists is mainstream or somehow unique. Applicants to I-O programs also have a stake in knowing what is typical or unusual about their options for graduate training. Selected program features have been compiled by SIOP since 1986 and offered on the SIOP website since 2000. Many other features, however, are not covered.

Rentsch, Lowenberg, Barnes-Farrell, and Menard (1997) reported results of a program survey conducted by SIOP’s Education and Training Committee in 1995. Organized around inputs (e.g., GRE requirements), throughputs (e.g., number of full-time faculty), and outcomes (e.g., job placements), results identified a number of differences between I-O and OB/HR graduate programs and between master’s and doctoral programs in entrance requirements, faculty composition, degree requirements, required courses, and other characteristics.

The Education and Training Committee, seeking an update to the 1995 effort, undertook a second survey in 2011. This article is the first in a series describing the project’s findings. We begin by considering how a benchmarking survey can be beneficial. We then introduce the current survey’s nine main content areas and describe our methods. Following a summary of response rates and main sample characteristics, norms and subgroup comparisons are offered for general program features relating to students and faculty. Norms for the remaining areas will be presented in subsequent articles.

Why Conduct a Benchmarking Survey?

There are several reasons for conducting a benchmarking survey on I-O graduate programs. Practically speaking, individual programs might hope to improve by drawing on what others are doing. In a best-case scenario, a program identifying a comparative shortfall (e.g., in graduate student funding) might seek to leverage survey results to secure better resources from university administration. Applying this strategy across programs has the potential to develop an “arms race” of sorts, each program looking to catch up to or surpass competitor programs with the aim of winning the best applicants. Such competition seems unsavory in some ways (all I-O programs, after all, are on the “same team” when it comes to promoting the discipline). However, if it drives improvements in I-O graduate education as a whole by increasing outside resources, then such competition can be healthy for the discipline.

Second, there is merit in knowing whether one’s own program is uniquely endowed in some way, as a key point to cover in student recruitment and retention. This is most pertinent to programs with a unique specialization (e.g., work stress, human factors): averaged across all programs, norms on variables relevant to the specialization (e.g., course offerings on selected topics) should fall below the specialized program’s own standing. Beyond confirming program identity, norms permit more precise estimation of relative standing (e.g., in terms of z-scores).

A third reason for conducting a benchmarking survey is that it yields a snapshot of current education practices and standards for use as a baseline in judging changes over time. Current survey results afford the opportunity in the future to identify trends in how I-O students are trained. Knowledge of such trends, in turn, might better inform strategy toward meeting worthwhile educational priorities.

Finally, recurring topics of discussion within SIOP include licensure and program accreditation. These issues are heated because they carry potentially profound implications regarding how I-O psychology is trained and, indeed, the meaning of an I-O degree. This is not the place to consider the pros and cons of all the positions on these matters (go to www.siop.org/Licensure for SIOP’s official stance on licensure; other perspectives on licensure are also available on the SIOP site; see Lee, Siegfried, Hays-Thomas, & Koppes, 2003, for discussion of program accreditation). Survey results can inform discussion so that arguments either way benefit from fact over conjecture.

Main Content Areas

Collective experience led us to identify nine general topic areas on which most programs would allow comparison and about which we felt most readers would be interested in knowing. These are listed in Table 1 with representative subtopics. We added open-ended items at the end of each section to allow programs to be described in ways not covered by the standardized items. As a testament to the thoroughness of coverage, no responses to the open-ended questions offered any pattern suggesting missed content applicable across programs.

Table 1
Main Survey Content Areas and Selected Subtopics

1. General program description
     Geographical location
     % graduates seeking applied vs. academic jobs
     Number of I-O faculty (e.g., core, non-core)
2. Admissions
     # applicants per year
     % applicants accepted
     GRE and GPA requirements
     Review process (e.g., importance of consensus)
3. Curriculum
     Course offerings
     Courses required vs. elective
     Emphasis on the 25 SIOP I-O competencies
4. Comprehensive/qualifying exams
     Components
     Grading methods
     Item content and types
5. Theses and dissertations
     Length and requirements
     Acceptable topics
     Committee membership
6. Internships/fieldwork
     Duration (e.g., total working hours)
     Compensation
     Evaluations
7. Assistantships
     Workload (e.g., hours/week)
     Stipend amounts
     Duration of assistantship
8. Student resources
     Access to computers & printing
     Travel and research funding
     Summer funding
9. Student performance expectations
     Minimum GPA
     Research expectations
     # consulting projects

Survey Development

Targeting the nine content areas, the research team developed a list of questions intended to cover each area in reasonable breadth, balancing scope and length. Items and response options were reviewed and edited for relevance, clarity, length, order, and comprehensiveness, and were worded so as not to require perfectly accurate details; for example, to assess how many students are accepted into a given program each year, the survey asked “roughly what percentage” of applicants are accepted. Greater precision was not pursued because (a) the survey was already long, and asking for exact numbers that would require detailed review of past records was expected to adversely affect response rates, and (b) responses were to be averaged across programs, such that higher levels of precision would be washed out in the aggregate.

At this stage, the entire survey was reviewed by Dave Nershi, Milt Hakel, and Tammy Allen, who offered further suggestions for clarification and coverage. It was clear early on that two surveys were needed to accommodate graduate programs offering both master’s and doctoral degrees, as each degree program could be distinct within a given department (e.g., regarding entrance requirements) and the language used per degree sometimes varies (e.g., “thesis” vs. “dissertation”). The two parallel surveys cover the same sections (e.g., geographical location, course offerings) and are equal in length.

All items were uploaded to an online platform, ZipSurvey, which allowed branching around sections not relevant to a given program (e.g., the thesis section in some MA programs) and offered a save-and-finish-later option. The online surveys were beta-tested by the research team and by several conveniently accessible program directors. After a few wrinkles were ironed out, each survey (master’s, doctoral) contained a total of 160 items (some requiring multiple responses and/or subitems) and was expected to take 30 to 45 minutes to complete. Programs offering both master’s and doctoral degrees could expect to spend about an hour completing both, given instructions to skip over redundant sections.1

____________
1To avoid requiring that both surveys be completed in their entirety by programs offering both degrees, we asked those respondents to complete either one (master's or doctoral version) first, and then to complete only those sections in the second survey addressing unique content. For example, if the course offerings for the doctoral program were identical to those for the master's program, then completing the course offerings section in whichever survey was completed first (e.g., master's) would permit leaving that section blank in the second (e.g., doctoral) survey. This procedure lightened the burden of survey completion but required us to track the gaps carefully in preparing the data sets. In cases where the given section was clearly relevant to both programs (e.g., curriculum), the data from the completed survey section were copied into the blank section of the other survey.

Administration Procedure

The targeted population was all graduate programs listed with SIOP in the summer of 2011. E-mail addresses were required for survey administration. About 10% of the e-mail addresses listed with SIOP were outdated. In some cases, contacted individuals forwarded our invitation to the appropriate person. In other cases, we had to track down the needed contact information. Every effort was made to ensure that all listed programs received the invitation to complete the survey. Of the 239 programs listed with SIOP, we successfully contacted all but two. Both exceptions were outside the U.S., and we suspect those programs may be defunct.

Following the initial invitation, we tracked the names of programs with completed and partially completed surveys. Reminders were sent by e-mail or by phone every few weeks to programs that either had partially completed the survey or had not yet started.

Response Rates and Sample Description

Table 2 presents response rates broken out by program degree type (i.e., master’s vs. doctoral). Twenty-six responding departments offer both degrees. The overall response rate of close to 60% is less than ideal (we had hoped to achieve upwards of 90%), but we judge it large enough to warrant meaningful normative comparisons for current aims. The rate is slightly higher for doctoral programs, but the difference is nonsignificant (χ² = .29, p = .59, two-tailed).

Table 2
Response Rates

Degree program*    N invited    N responded    Response rate
  Master's             136           78             57.4%
  Doctoral             110           69             62.7%
  Total                246          147             59.8%
*26 programs offer both master's and doctoral degrees.
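
For readers who wish to verify comparisons of this kind against the tabled counts, the sketch below (in Python, not part of the original survey analyses) runs a standard 2 x 2 chi-square test on the Table 2 frequencies. The exact statistic depends on choices such as the continuity correction and the treatment of the 26 dual-degree departments, so it need not match the reported value precisely.

```python
# Minimal sketch (not the authors' code): a 2 x 2 test of the master's vs.
# doctoral response rates from Table 2.
from scipy.stats import chi2_contingency

# Rows: master's, doctoral; columns: responded, did not respond.
observed = [[78, 136 - 78],
            [69, 110 - 69]]

chi2, p, dof, _ = chi2_contingency(observed)  # Yates correction by default
print(f"chi-square({dof}) = {chi2:.2f}, p = {p:.2f}")
# This yields roughly chi-square = 0.52, p = .47 with the default continuity
# correction (about 0.73, p = .39 without it). The article reports
# chi-square = .29, p = .59, presumably reflecting a different treatment
# (e.g., of the 26 dual-degree departments), but the conclusion is the same:
# the master's/doctoral difference in response rates is nonsignificant.
```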

Table 3 offers a breakdown by U.S. versus non-U.S. programs. The split is clearly unbalanced. Three more specific observations bear noting. First, the numbers of non-U.S. programs offering usable data are fairly small (5 master’s, 7 doctoral), limiting the credibility of associated norms. Second, the proportion of all non-U.S. I-O programs represented in the dataset is unclear, as it is difficult to ascertain the comprehensiveness of the non-U.S. programs listed with SIOP.2 Third, review of the non-U.S. program norms suggests considerable diversity among those programs and unique features relative to the larger core set of U.S. programs. Given these issues, data from the non-U.S. programs were dropped from the normative summaries.

_____________
2It is almost certainly a substantial underrepresentation. Separate efforts are currently underway to survey I-O programs outside the U.S. We are more confident about the rates for American programs, as we expect that all, or very nearly all, active I-O graduate programs in the U.S. are listed with SIOP and therefore were invited to complete the survey.

Table 3
Country

Degree program    USA    Non-USA    Total
  Master's          73        5        78
  Doctoral          62        7        69
  Total            135       12       147

Table 4 presents a cross-tabulation by degree type and department type based on U.S. programs. Relatively few programs in the dataset (12.9%) are housed in business/management departments, and the Ns for those subsamples are smaller than desired. We offer separate norms notwithstanding the limitations, in light of general interest in comparing I-O programs across the two main department types.

A final breakdown of the usable U.S. sample is offered in Table 5 by traditional versus nontraditional program type (i.e., brick-and-mortar, online, mixed). Very few purely online programs are represented (N = 4), although a fair number of programs (N = 15) offer a mixture of traditional and nontraditional access to graduate training.

Table 4
Department Type

Degree program     Psychology      Business/Mgmt.     Other
  Master's          60 (76.9%)       7 (9.0%)          6 (7.7%)
  Doctoral          44 (63.8%)      12 (17.4%)         6 (8.7%)
  Total            104 (70.7%)      19 (12.9%)        12 (8.2%)

Note: Non-USA programs dropped.

Programs participating in the benchmarking survey were assured that their individual responses would not be revealed. Thus, results are offered in aggregate form only. Five initial sets of norms were prepared, one based on all U.S. programs (including online and “other” department types), and the other four based on (U.S.-only) program classes generated by crossing master’s versus doctoral with psychology versus business/management department. This latter 2x2 array excludes online-only programs and programs in departments other than psychology and business/management in order to permit clearer comparisons for the large majority of cases, which fall into one of the four 2x2 cells.

Table 5
Program Type

Degree program     Brick & mortar     Online        Mixed
  Master's          59 (75.6%)         3 (3.8%)     12 (15.4%)
  Doctoral          58 (84.1%)         1 (1.4%)      3 (4.3%)
  Total            117 (79.6%)         4 (2.7%)     15 (10.2%)

Note: Non-USA programs dropped.

Additional norms were prepared for the “top” programs in the field. No ranking system is immune to criticism. In an attempt to provide a balanced view of top programs, four “top-10” lists were considered for norm derivation. The first comes from US News & World Report’s 2009 ranking of the top eight I-O graduate programs, based on judged institutional reputation. The second is Gibby, Reeve, Grauer, Mohr, and Zickar’s (2002) ranking of doctoral programs based on objective productivity indices (e.g., number of publications in top I-O journals); we used their overall index to identify the top-10 programs. The last two top-10 lists, derived separately for master’s and doctoral programs, come from Kraiger and Abalos’ (2004) study of student ratings on 20 dimensions relating to quality of life and perceived quality of training. Results from both Gibby et al. (2002) and Kraiger and Abalos (2004) are somewhat dated, but we judge it unlikely that the top programs would have changed so much in the interim as to substantially compromise generalizability to the present day. In fact, the US News’ (2009) top eight are subsumed completely within Gibby et al.’s (2002) top 10. Accordingly, we report norms for Gibby et al.’s top-10 programs and the two top-10s from Kraiger and Abalos (2004).3

___________
3In each case, at least one program listed as a "top-10" did not complete the survey. Actual Ns are specified per variable in the tables to follow.

Many continuous variables have significantly skewed distributions, calling for reporting of not just the mean and standard deviation but also the median, minimum, and maximum values. Statistical outliers were retained in favor of more comprehensive representation.4 Variables with notable (and significant) skewness warrant emphasis on the median as the preferred measure of central tendency. Means and standard deviations, reported for all continuous variables, permit calculation of z-scores based on a given program’s particular data. We caution, however, that transformation to percentiles using the normal distribution (e.g., z = 1.28 corresponds to the 90th percentile) is appropriate only for the approximately normally distributed variables. Finally, missing data were left blank, as imputation would likely have little impact on central tendency and could lead to underestimation of variability.5

____________

4Following remedy of obvious errors through follow-ups with specific programs, all outliers were judged legitimate contributors to the dataset.

5Imputation is more relevant for relational analyses (e.g., correlations among continuous variables), to be considered in a future article.
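
To make the z-score guidance above concrete, the sketch below (Python, with placeholder values rather than actual survey norms) shows how a program might locate itself against a reported mean and standard deviation, and why the percentile conversion should be reserved for approximately normal variables.

```python
# Minimal sketch with placeholder numbers (not actual survey norms): locating
# one program against a reported mean and SD via a z-score, and converting to
# a percentile -- valid only for roughly normally distributed variables.
from scipy.stats import norm

def relative_standing(program_value, norm_mean, norm_sd):
    """Return the z-score and the normal-theory percentile for one program."""
    z = (program_value - norm_mean) / norm_sd
    return z, 100 * norm.cdf(z)

# Hypothetical program with 6 core I-O faculty against hypothetical norms.
z, pct = relative_standing(program_value=6, norm_mean=4.0, norm_sd=2.5)
print(f"z = {z:.2f}, approx. percentile = {pct:.0f}")  # z = 0.80, ~79th

# For the skewed variables flagged in the tables, compare the program's value
# to the reported median (and min/max) rather than using this conversion.
```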

Norms for General Program Features

The survey’s detail and the need to break the total sample out into subgroups (i.e., the 2x2 array and three “top-10” sets) call for presentation of norms in installments. Here we present norms only for general program features. Later articles will offer results for the remaining eight content areas listed in Table 1.

Table 6 presents overall norms for program features relating to student and faculty composition. Departments housing I-O programs vary considerably in overall size (range = 1 to 55 faculty). Results for average number of graduates per year and number of “core” I-O faculty suggest similar variability in program size. The mean percentage of graduates seeking applied (vs. academic) positions yields a 3:1 ratio (75:25), although the median of 90% suggests an even more predominantly applied focus. I-O programs, overall, average nearly the same number of core I-O faculty as non-I-O contributors, albeit with greater variability and a lower median in the latter (2 vs. 4). Few programs rely on core faculty outside the host department (mean = .3, max = 3), and most programs do not rely on adjunct instructors (median = 0).

Norms for main program features are broken out by program degree type (master’s vs. doctoral) in Tables 7 and 8 for psychology and business/management departments, respectively. Table 9 presents corresponding ANOVA results for the 2 x 2 breakout. Programs vary notably by both degree and department type on yearly graduates and percentages of students seeking applied positions. Master’s programs, as expected, graduate more students per year than do doctoral programs (weighted means = 12.2 vs. 3.4). Business/management departments average more graduates than psychology departments do (weighted means = 10.9 vs. 7.6), and the master’s/doctoral difference is greater in business/management programs as well (i.e., more master’s graduates, fewer doctoral graduates).
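
As used here, a “weighted mean” simply pools the cell means in proportion to the cell Ns reported in the tables; the brief sketch below illustrates the calculation with made-up values.

```python
# Minimal sketch with made-up numbers (not actual survey cell values): a
# weighted mean pools subgroup means in proportion to subgroup Ns.
def weighted_mean(cell_means, cell_ns):
    return sum(m * n for m, n in zip(cell_means, cell_ns)) / sum(cell_ns)

# Hypothetical psychology and business/management cell means and Ns for one
# variable (e.g., graduates per year among master's programs).
print(round(weighted_mean([10.0, 20.0], [50, 10]), 1))  # -> 11.7
```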

A more dramatic pattern is evident in the percentage of graduates seeking applied versus academic positions across the two department types: Master’s graduates from both psychology and business/management programs seek applied jobs at high rates (91% and 99%, respectively), but the rates are strikingly different at the doctoral level: 67% for psychology-based programs versus 2% (i.e., 98% seeking academic) in business/management departments. A possible reason for the differential rates is that a doctoral degree in psychology means more in applied settings than does a doctoral degree in management. Business students seeking applied work may advance more readily with a master’s degree and on-the-job experience than with equal time spent earning a doctorate. Comparisons between department types on other variables in the dataset (e.g., regarding curriculum) may offer more definitive explanations in future articles in the series.

Business/management-based programs average more core I-O faculty (weighted mean = 6.1) than do psychology-based programs (4.0). The raw numbers of I-O faculty are larger in psychology departments than in business/management owing to the greater number of psychology-based programs, but the difference in average size suggests that business schools offer a larger “critical mass” of core faculty in I-O-related graduate programs. Rentsch et al. (1997) reported a similar difference favoring OB/HR programs, suggesting some stability in this finding over the past 16 years. The mean for OB/HR doctoral programs in 1995, however, was 8.1, which compares to 6.2 in the current survey (the means for psychology doctoral programs are similar across the two surveys: 4.8 and 4.7, respectively). This suggests a relative decline in the size of OB/HR doctoral programs. The numbers of non-I-O contributors from the host department vary considerably across programs within the 2 x 2 cells; mean differences between cells are nonsignificant (see Table 9). Doctoral programs average greater reliance on core I-O contributors from outside the department (weighted mean = 1.0) than do master’s programs (.3).

Tables 10 to 12 contain norms for general program characteristics based on the three sets of “top-10” I-O programs. Results, overall, are similar to those based on comparable subgroups. The only significant difference (noted to the right of Table 10) shows that the Gibby et al. (2002) “top-10” programs (all of which are doctoral programs in psychology departments) average a lower rate of students seeking applied versus academic jobs (54% vs. 71%). This is understandable, as the Gibby et al. “top-10” is based on research productivity: Academically productive programs tend to attract students seeking academically productive careers. Other unique properties of the three “top-10” subgroups may be more likely to emerge in other areas covered in future articles in this series.

A Look Ahead

This concludes our introduction to the 2011 I-O program benchmarking survey and norms pertaining to general program features. The next two articles will cover norms for variables relating to student admissions and program curriculum, respectively. Later articles will target the remaining six areas. An additional article is planned to present results of relational analyses identifying meaningful patterns of variables and possibly different types of programs (e.g., research- vs. practice-oriented). The survey’s detailed dataset promises meaningful insights into the state of training in I-O graduate programs, and we look forward to offering the remaining installments as a foundation for productive discussions on this important topic.

References

Gibby, R. E., Reeve, C. L., Grauer, E., Mohr, D., & Zickar, M. J. (2002). The top I-O psychology doctoral programs of North America. The Industrial-Organizational Psychologist, 39(4), 17–25.
Kraiger, K., & Abalos, A. (2004). Rankings of graduate programs in I-O psychology based on student ratings of quality. The Industrial-Organizational Psychologist, 42(1), 28–43.
Lee, J. A., Siegfried, W., Hays-Thomas, R., & Koppes, L. L. (2003). Master’s programs in I-O: Should they be accredited? The Industrial-Organizational Psychologist, 41(1), 72–76.
Rentsch, J. R., Lowenberg, G., Barnes-Farrell, J., & Menard, D. (1997). Report on the survey of graduate programs in industrial/organizational psychology and organizational behavior/human resources. The Industrial-Organizational Psychologist, 35(1), 49–65.
US News and World Report. (2009). Best industrial and organizational psychology programs/top psychology schools/US News best graduate schools. Retrieved from http://grad-schools.usnews.rankingsandreviews.com/best-graduate-schools/top-humanities-schools/industrial-organizational-psychology-rankings