
Rankings of Graduate Programs in I-O Psychology Based on Student Ratings of Quality

Kurt Kraiger1 and Anthony Abalos
University of Tulsa

1 Send correspondence or requests for individual program feedback to Dr. Kurt Kraiger, McFarlin Professor of Psychology, Department of Psychology, University of Tulsa, 600 S. College Ave., Tulsa, OK 74132 or kurt-kraiger@utulsa.edu.

Graduate training programs in industrial-organizational (I-O) psychology are periodically ranked on objective and subjective criteria related to the quality and output of their graduate faculty. For example, the U.S. News and World Report (1995, 2001) rankings are based on psychology department chairs' judgments of program reputation. Objective criteria used to rank I-O programs include the number of I-O faculty in a program serving on editorial boards (Jones & Klimoski, 1991), the number of faculty publications (Levine, 1990; Winter, Healey, & Svyantek, 1995), and total student conference presentations (Payne, Succa, Maxey, & Bolton, 2001; Surette, 1989, 2002).

As often noted, the basis for any ranking system is subject to criticism (cf. Cox & Catt, 1977; Gibby, Reeve, Grauer, Mohr, & Zickar, 2002; Winter et al., 1995). For example, rankings based on program reputation may be unrelated to current faculty productivity because of halo (the general reputation of the university), turnover, or raters who do not fully understand the discipline or the activities of individual institutions. As our department chair likes to proclaim, "When you look at the nighttime sky, some of the brightest stars burned out a long time ago."

Objective systems based on faculty research productivity have been criticized as well. Gibby et al. (2002) noted problems with the criterion of membership on editorial boards: It is an indirect and contaminated measure of research productivity; it fails to measure direct contributions of the faculty to the development of graduate students; and it may penalize programs with young, productive faculty who have not yet attained the professional stature that triggers invitations to the editorial boards of prestigious journals. 

The most popular objective method for ranking I-O programs has been counting faculty publications (Gibby et al., 2002; Levine, 1990; Winter et al., 1995). There are several advantages to such ranking systems. The criterion is reasonably objective and virtually all academic departments use publication counts in the assessment of faculty productivity. Further, research that leads to a publication frequently involves students. Finally, to the extent that the publication-based ranking system considers only upper-echelon journals, there are built-in controls for the quality of the research. The most recent such ranking system was by Gibby et al., who reviewed both narrow and broad lists of journals and also considered both recent and career accomplishments when compiling those lists.

While the relative merits and drawbacks of using various criteria for ranking programs will continue to be debated, it is likely that similar systems will continue to be used to rank graduate training programs in I-O psychology. In addition to the shortcomings of these systems already addressed, we would like to add two others. First, all rankings to date place limited emphasis on the quality of the training experience from the perspective of the student. For example, Gibby et al. (2002) noted that rankings based on editorial board membership may be deficient in that "involvement on editorial boards may take away time otherwise spent engaged with graduate students" (p. 17). Instead, they advocated the use of number of publications as the criterion for ranking programs. However, such rankings fail to measure faculty time spent with, or impact on, graduate students. It is possible that when graduate faculty devote too much time to publishing or editorial duties, their classroom preparation suffers. We do not mean to suggest that teaching and research are mutually exclusive; many faculty excel (or fail) at both. Further, active engagement in research may result in grants, joint authorships, and research experience that are all valuable to students. Our point, however, is that no ranking system to date explicitly quantifies these opportunities for students. Further, for students pursuing a nonacademic career path, variables such as availability of internships, networking opportunities, and skill training may be more important determinants of satisfaction with graduate school than faculty productivity. A ranking system that assesses the experience of being a student may provide alternative, important information for prospective students or for undergraduate faculty who recommend potential graduate programs to their students.

The second shortcoming we would like to address is that most ranking systems focus solely on doctoral programs (notable exceptions are systems based on student presentations at conferences). Currently, there are about 98 North American MA/MS programs and 66 PhD programs listed on the SIOP Web site. Since MA programs typically take in many more students than doctoral programs, it is reasonable to assume that there are currently two to three times as many MA students as PhD students in North America. Accordingly, it seems important to provide rankings of MA programs to guide prospective students or undergraduate faculty who advise students pursuing that degree option. However, traditional criteria used to rank doctoral programs may be inadequate for ranking MA programs. Departments that offer terminal MA programs typically assign higher teaching loads than departments with doctoral programs and often place more emphasis on instructional activities than on research and publishing. Further, students entering terminal MA programs may have different expectations for their graduate training, with a greater emphasis on faculty accessibility, applied classroom training, and internship or practicum opportunities. Thus, while a ranking system for terminal MA programs would provide important information for prospective students or undergraduate advisors, it should be based on variables relevant to the professional objectives of faculty in those programs.


Accordingly, our research was conducted with three explicitly defined objectives. The first was to develop a ranking system of graduate programs in I-O psychology based on a broader set of criteria than has traditionally been considered. Specifically, our goal was to develop a ranking system based on current graduate students' evaluations of variables important to the quality of life and quality of training from a student's perspective. Second, we wanted to develop a ranking system that could be applied to both terminal master's and PhD programs. Finally, we wanted to determine whether the set of criteria students used to evaluate quality of life and quality of training differed between students in terminal master's versus PhD programs.

This research study consisted of three phases. First, in the criterion development stage, current graduate I-O students (both MA and PhD candidates) helped develop the criteria used to evaluate the quality of a program. Second, we had both program directors and graduate students review this list of criteria and evaluate the importance of each variable for judging the quality of graduate programs (either MA/MS or PhD) in general. Third, we elicited ratings from current graduate students for their respective programs. These ratings were weighted by the importance judgments obtained in the second phase to compute an overall index. In phases two and three, all I-O psychology programs listed on the SIOP Web site were contacted and invited to participate in the study.

Scaling Issues

It is important to note that in phase three, when we sent out requests to program directors to provide a link to their graduate students to rate their programs, there were a number of program directors who refused to do so. There were others who begrudgingly agreed to do so (so as not to be left out), but expressed reservations (as did the first group) about the validity of the ratings. The concerns expressed by these program directors are important and should be reviewed before the rankings are presented. 

One specific concern came from program directors in expensive metropolitan areas. Because we included cost of living as a variable in the calculation of our final index, some directors felt that their programs would be disadvantaged in our rankings; further, there was a perception that inclusion of this criterion was unfair, since it is beyond their control. For example, a graduate program in an expensive area might do everything possible to create a positive climate for its students but still not rank highly if it is penalized for its location. We included variables such as location and cost of living in the survey because respondents in phase one of the research indicated that these variables contributed to the quality of life as a graduate student. However, we recognize that inclusion of these variables may create inequities in the calculation of an overall index for variables beyond the control of individual programs. More importantly, when we calculated variable weights in phase two, cost of living actually received a slightly negative weight, so that the more favorably students rated local cost of living, the worse the overall weighted index for their school. Accordingly, we excluded cost of living from the calculation of the overall weighted index; it has been included only in a ranking based solely on cost factors (see Table 6).

The other issues raised by program directors dealt with concerns about the fairness of the process, the validity of the ratings, or the suitability of student ratings as a criterion for judging graduate programs. Three directors refused to solicit student ratings because they believed that such opinions were unimportant for evaluating program quality. Because of their concerns about the fairness of the process or the validity of the data, six directors were unwilling to participate, while others participated while expressing reservation. Issues raised included concerns that (a) other program directors might only send the survey to graduate students likely to express positive opinions (or avoid sending to students likely to provide negative opinions); (b) other program directors might send the survey to all students but with instructions to provide only positive ratings; (c) the surveys would be sent to all students but students with negative attitudes would be the most motivated to respond; (d) students in some schools would choose to either rate their program more negatively than warranted to express overall dissatisfaction or more positively than warranted to express overall satisfaction and enhance the reputation of their program.

Our response to these concerns is that they represent valid issues that apply to any type of survey administration. We acknowledge that we are providing subjective criteria for ranking programs. Sampling and response biases may affect the validity of any data based on personal attitudes, opinions, or judgments. Our intent is that the rankings generated by these ratings be used in conjunction with other, more objective rankings to help prospective students make important decisions about where to attend graduate school. We caution all readers to remember the source of our data and to recognize its limitations.

Method

Survey Development
As noted above, the survey was developed in three phases. In phase one, University of Tulsa I-O graduate students were asked to list the criteria people use to choose a graduate program or to recommend a program to another person. Taking this initial list of possible criteria, we logically combined related criteria, then wrote definitions for each. The final list of 20 variables appears in Figure 1. 

 

1. Faculty support and accessibility: Overall extent of faculty support, accessibility, and involvement in student affairs. Includes faculty advising and interaction outside of the classroom.

2. Quality of instruction: Overall quality of classes; the extent to which classes prepare students for careers in academic or applied settings.

3. Balance between applied and academic emphases: A program with both applied and academic foci; faculty with applied experience to augment academic knowledge.

4. Research interests of the faculty and the program: Faculty with varied research interests that students find relevant.

5. Overall quality of research that takes place in the program: The number of faculty publications at this program compared with that at other programs.

6. Research opportunities for students: Includes willingness of professors to include students in their research and actual student involvement.

7. Opportunities for work in the local community: Includes quality and quantity of available internships and jobs; the program's relationships with local organizations.

8. Cost of living: Cost of living in the city in which the program is located.

9. Placement services and employability of students after graduation: Includes faculty aid with searching for internships and jobs and/or a formal job placement service; network between current students and alumni.

10. Average graduation rate/length of time required to complete degree (master's students only)2

11. Average graduation rate/length of time required to complete degree (doctoral students only)

12. Connection with the I-O community: Active faculty and student involvement in professional organizations and conferences.

13. Overall quality of students: Includes selectivity of the program in admitting students and number of student publications. The quality of students at this program compared with that at other programs.

14. Availability of funding: Available funding through assistantships; monetary support for attending conferences. The availability of funding at this program compared with that at other programs.

15. Location of the university: Qualities of the city in which the program is located, such as weather, cost of living, availability of housing, entertainment opportunities, and so forth.

16. Variety or breadth of course offerings

17. Class size: Class sizes that are conducive to learning.

18. Culture of the program: Collaborative versus competitive atmosphere; relationships between students and professors; pervasiveness of politics.

19. Faculty turnover: The rate of faculty turnover at this program compared with that at other programs.

20. Availability of educational resources: Includes quality of departmental and university resources such as libraries, computers, software, journals, and so forth.

2 Responses to items 10 and 11 were collected for research purposes but not used to rank programs.

Figure 1.  Variables used to elicit ratings on graduate training programs


The second phase of the study was conducted in the fall of 2002. Two surveys comprising the 20 variables were constructed using ZipSurvey by corporatesurvey.com; links to both surveys were sent to all MA, MS, and PhD program directors listed on the SIOP Web page as affiliated with North American psychology departments. One survey was for program directors and asked them to rate the importance of each variable for recommending a graduate program to potential graduate students. Directors were also instructed to forward the link for a similar survey to their graduate students with instructions to complete it. The second survey contained identical items and scales but asked students to rate the importance of each variable for choosing graduate programs from the perspective of a current graduate student. Ratings were made on a five-point Likert-type scale (1 = Not at all important; 5 = Extremely important). Students also indicated their year in school and whether they were in a terminal MA/MS program or a doctoral (PhD) program. Completed surveys were stored on the corporatesurvey.com server and downloaded to a spreadsheet file without any information that could identify respondents. All surveys were completed anonymously.

Importance ratings were obtained from 68 program directors (43 from an MA or MS program and 36 from a PhD program)3, as well as from 313 graduate students (142 self-identified as being from MA/MS programs, and 164 from PhD programs). It is not possible to specify an exact response rate, since it is not known exactly how many program directors received an e-mail invitation to participate, nor how many graduate students received forwarded e-mails. However, e-mails were originally sent to 160 program directors, suggesting that approximately 43% of all directors contacted completed the survey.

3 Numbers by degree total more than 68 as many respondents directed both an MA/MS and a PhD program.

Once the importance ratings were averaged by program, weights were computed for each item to use in the determination of the final rankings. The weight for each item was calculated using the following formula:

Weight for item n = (Mean importance rating for item n − Mean importance rating across all items) / (Standard deviation of mean importance ratings across all items)
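In code, this weighting step amounts to standardizing each item's mean importance rating against the distribution of all item means. A minimal sketch follows; the function name and the ratings are ours, and we assume the sample standard deviation, since the paper does not say which form was used:

```python
import statistics

def item_weights(mean_importance_ratings):
    """Standardize each item's mean importance rating against the mean
    and standard deviation of all item means, per the formula above.
    (Sample standard deviation assumed.)"""
    grand_mean = statistics.mean(mean_importance_ratings)
    sd = statistics.stdev(mean_importance_ratings)
    return [(m - grand_mean) / sd for m in mean_importance_ratings]

# Hypothetical mean importance ratings (1-5 scale) for three items
weights = item_weights([4.6, 3.8, 2.4])
```

Items rated as more important than average receive positive weights, while items rated as less important than average receive negative weights, which is how cost of living came to carry a slightly negative weight.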

We had originally intended to use separate importance weights for faculty and student ratings. However, across the variables, there was a correlation of .89 between mean importance ratings by faculty and students. For ease of presentation, we used only one weighted index, based on the student weights; these were used instead of the faculty weights because we were primarily interested in students' perceptions of factors influencing quality of life as a graduate student. It is worth noting that faculty and students placed similar importance on each variable when evaluating graduate programs.

While not a primary goal of the study, we were also interested in whether the factors important to graduate students in MA/MS programs differed from those important to doctoral students. For example, MA students might place greater value on faculty instructional support, while PhD students place more emphasis on research opportunities. We calculated average ratings on each item for both MA/MS students and PhD students, and then correlated the two vectors. The importance ratings for the two groups were very similar (r = .80), indicating that the variables affecting quality of graduate education were similar for both groups. We also compared the mean differences between groups on all variables and found no significant differences.
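The comparison of the two importance vectors amounts to a Pearson correlation between per-item group means. A self-contained sketch is shown below; the ratings are hypothetical (the paper reports r = .80 for the actual data):

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation between two equal-length vectors of mean
    importance ratings (one entry per survey item)."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical mean importance ratings on the same four items
ma_means = [4.5, 4.1, 3.2, 3.9]
phd_means = [4.4, 3.8, 3.5, 4.0]
r = pearson_r(ma_means, phd_means)
```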

The third phase of the study began in the fall of 2003. The 20 items in Figure 1 were used to construct the online survey. Potential respondents were told that the purpose of the study was to collect perceptions of the quality of graduate programs from the perspective of their customers, the graduate students. Respondents rated each of the 20 items using a five-point Likert-type scale with anchors tailored to the item. For example, "Culture of the program" was rated on a scale ranging from 1 = very unfavorable culture to 5 = very favorable culture. Students also indicated their year in program, gender, and race.

An updated list of program directors was obtained from the SIOP Web site and program advisors at all North American I-O graduate programs (listed on the site) were contacted and sent a link to the online survey. Reminders were sent to all program directors in November and were sent to specific programs that had not responded in December. Data were collected through the end of December 2003. 

Ratings of programs in phase three were completed by students only. Responses were anonymous and submitted to the corporatesurvey.com server, then written as a spreadsheet file and sent to the researchers for analysis. Unlike the data from phase two, data in phase three included the degree program in which the respondents were enrolled.

Results

A total of 923 ratings were obtained from graduate students, both masters and doctoral. As noted above, it is impossible to determine a response rate since all that is known is how many program directors received requests to participate, not how many students received forwarded links. In this sample, 285 respondents were male, 592 were female (46 did not specify sex). Race and ethnicity were broken down as follows: 704 Caucasian, 36 Latino/Latina, 35 African American, 44 Asian American/Pacific Islander, 3 Native American, and 54 self-described as other. Table 1 shows mean ratings, standard deviations, and intercorrelations for each of the 18 items used to calculate the weighted index. 

Table 1 

Correlations and Descriptive Statistics for Rating Items
__________________________________________________


To create an overall rank, we first calculated the average rating on each item for each graduate program. If a school had both a PhD and a terminal MA program, we calculated separate averages for each. A program had to have at least five respondents to be included in the ranking. Readers examining the rankings in the following tables will find that some programs are not listed. Their absence reflects one of three conditions: (a) the program director chose not to respond; (b) there were fewer than five respondents from the program; or (c) there were five or more respondents, but the program received a lower ranking than those schools shown in the table.

An overall weighted index for each program was computed by multiplying the average item rating (obtained in phase three) by the average item weight (obtained in phase two) and then summing over all products (recall that cost of living was not included in this index). The calculated values for the weighted index ranged from 4.93 to 7.83 for PhD programs and from 4.78 to 7.62 for MA/MS programs. The top 20 programs by rank are shown in Table 2 for PhD programs and in Table 3 for MA/MS programs.
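The index computation, including the five-respondent inclusion rule, can be sketched as follows; the function name, items, ratings, and weights are illustrative, not the study's actual data:

```python
def program_index(ratings_by_item, weights, min_respondents=5):
    """Overall weighted index for one program: mean rating on each item
    multiplied by that item's phase-two weight, summed over items.
    Programs with fewer than five respondents are excluded (None)."""
    if len(ratings_by_item[0]) < min_respondents:
        return None
    item_means = [sum(r) / len(r) for r in ratings_by_item]
    return sum(m * w for m, w in zip(item_means, weights))

# Hypothetical data: two items, five respondents, illustrative weights
ratings = [[4, 5, 4, 5, 4], [3, 3, 4, 3, 3]]
index = program_index(ratings, [0.9, -0.2])
```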

Table 2
__________________________________________________
Top 20 PhD Programs by Overall Weighted Index of Student Ratings 
____________________________________________________

Rank  Program                                 N   Weighted  Z-score  Converted
                                                  index              index
____________________________________________________

 1. George Washington University          7   7.83   1.52   96.2
 2. University of Guelph                  6   7.83   1.51   96.2
 3. Florida Institute of Technology       8   7.74   1.42   94.9
 4. Colorado State University            18   7.73   1.41   94.7
 5. Georgia Institute of Technology       7   7.67   1.35   93.9
 6. Illinois Institute of Technology     25   7.55   1.23   92.2
 7. Teachers College of Columbia U.      22   7.55   1.23   92.2
 8. University of North Texas             7   7.39   1.07   89.9
 9. University of Maryland               17   7.39   1.06   89.9
10. George Mason University              17   7.15    .81   86.4
11. Rice University                       5   7.13    .80   86.2
12. University of Houston                 9   7.09    .75   85.5
13. Baruch College-CUNY                  25   6.86    .51   82.2
14. University of Illinois-Chicago        7   6.76    .42   80.8
15. University of Memphis                12   6.71    .36   80.1
16. University of Tulsa                  12   6.69    .34   79.8
17. Bowling Green State University        6   6.56    .21   78.0
18. Carlos Albizu University-San Juan    13   6.40    .05   75.7
19. University of Nebraska-Omaha              6.39    .04   75.5
20. University of Georgia                14   6.38    .02   75.4

_____________________________________________________


Table 3

Top 20 MA/MS Programs by Overall Weighted Index of Student Ratings
____________________________________________________
Rank  Program                                 N   Weighted  Z-score  Converted
                                                  index              index
____________________________________________________

 1. Minnesota State University           27   7.62   2.16   99.20
 2. University of Tulsa                  10   7.62   2.16   99.12
 3. Carlos Albizu University              8   7.20   1.52   90.47
 4. George Mason University               8   6.79    .88   81.94
 5. Elmhurst College                      9   6.68    .72   79.77
 6. University of Nebraska-Omaha          7   6.57    .56   77.54
 7. Xavier University                     8   6.55    .53   77.45
 8. East Carolina University              7   6.51    .46   76.19
 9. Teachers College of Columbia U.      67   6.45    .43   75.78
10. Florida Institute of Technology      20   6.40    .37   74.96
11. U. of Tennessee at Chattanooga       14   6.33    .30   74.02
12. Middle Tennessee State University    12   6.30    .19   72.53
13. Radford University                   15   6.27    .14   71.83
14. Chicago School of Prof. Psychology   22   6.24    .10   71.30
15. San Francisco State University       15   6.15    .05   70.65
16. Indiana University-Purdue U.         10   6.14   -.08   68.87
17. St. Cloud State University                6.14   -.10   68.65
18. Georgia State University                  6.07   -.11   68.57
19. Valdosta State University             9   6.06   -.21   67.19
20. University of Central Florida             5.95   -.22   67.00

____________________________________________________

Because the weighted index is not an intuitively meaningful metric, Tables 2 and 3 provide two other indices. First, we calculated a z-score for each program's weighted index (relative to all other programs of the same type). Thus, a PhD program with a z-score of 0 received an average overall index score, while programs with positive z-scores received above-average overall scores. Finally, we converted the z-scores to a familiar 100-point scale, centered on an average score of 75 for PhD programs and 70 for MA/MS programs. To do so, we multiplied the z-score by 14 (for PhD programs) or 13.5 (for MA/MS programs) and added the product to 75 or 70, respectively.4 The converted scores are also shown in Tables 2 and 3 and are offered as an intuitive reference for readers who may be less familiar with the properties of z-scores.

4 The values 14 and 13.5 were chosen arbitrarily, because they "work": they produced a desired distribution with many scores clustered in the 70s and low 80s, fewer scores in the 90s, and no scores over 100. We had hoped to use the same conversion for both program types, but the greater variance in the MA/MS programs necessitated a smaller multiplier.
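The conversion from weighted index to the 100-point metric can be sketched as follows; the indices shown are hypothetical, and the function name is ours:

```python
import statistics

def converted_scores(weighted_indices, center, multiplier):
    """Convert raw weighted indices to the 100-point metric: z-score
    each index against all programs of the same type, multiply by the
    chosen constant (14 for PhD, 13.5 for MA/MS), and add the center
    (75 for PhD, 70 for MA/MS)."""
    mean = statistics.mean(weighted_indices)
    sd = statistics.stdev(weighted_indices)
    return [center + multiplier * (x - mean) / sd for x in weighted_indices]

# Hypothetical weighted indices for three PhD programs
phd_converted = converted_scores([5.0, 6.0, 7.0], center=75, multiplier=14)
```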

We also wanted to provide distinct rankings for specific factors influencing overall perceptions of quality of life. To determine these factors, we tried factor analyzing the program ratings by students. Both principal components and common factor analysis methods were used, but we were unable to identify a clean underlying factor structure. We then factor analyzed the mean item ratings for each of the 69 programs for which we had five or more respondents. Even though the cases-to-variables ratio is low, each individual score (an item mean) is more reliable than the simple ratings in the full data set. Using principal axis factoring and an oblique rotation, we found a clean three-factor solution accounting for 61% of the common variance. Factor one was labeled Program Resources and indicated by the following items: quality of students, research quality by faculty, availability of funding, research opportunities for students, availability of educational resources, research interests of the faculty, and placement services and employability of students. Factor two was labeled Program Culture and indicated by the following items: balance between applied and academic emphases, culture of the program, faculty support and accessibility, variety and breadth of course offerings, and quality of instruction. Factor three was labeled Program Costs and indicated by the following items: availability of funding, cost of living, location of the university, and class size.

Using unit weights for the variables with the highest factor loadings, a score was calculated on each factor for each program. Rankings of the top 20 MA/MS and PhD programs on each factor are shown in Tables 4 through 6. Note that doctoral programs appearing in the top 20 on Program Resources (Table 4) are primarily those that traditionally score highly in rankings based on program reputation or faculty productivity, lending validity to our rankings. One discrepancy in the various rankings should be noted. There were several variables that were used to calculate an overall ranking but not used to calculate any of the three specific factors. On several of these variables (e.g., opportunities for work in the local community), there were programs that received very low scores and consequently scored low on the overall weighted index. One example of such a program is the University of Illinois at Urbana-Champaign, which finished in the top 10 on all three factors, but not in the overall top 20.
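The unit-weighting step can be sketched as follows. The item indices and ratings below are illustrative only; the actual item-to-factor assignments are those listed in the preceding paragraph:

```python
def factor_score(item_means, factor_item_indices):
    """Unit-weighted factor score: sum of a program's mean item ratings
    over the items that loaded highest on the factor (unit weights
    substituted for the estimated factor loadings)."""
    return sum(item_means[i] for i in factor_item_indices)

# Hypothetical 18-item vector of mean ratings for one program; the
# index set below is illustrative, not the paper's actual assignment.
means = [4.2, 3.9, 4.5, 3.1, 4.0, 4.4, 2.8, 3.3, 3.7,
         4.1, 3.5, 4.3, 3.0, 3.8, 4.6, 3.2, 3.6, 4.0]
culture_score = factor_score(means, [0, 1, 2, 15, 17])
```

Unit weighting is a common simplification when loadings are estimated on a small sample, since equal weights are less prone to capitalizing on chance than sample-estimated loadings.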

Table 4

Rankings of PhD and MA/MS Programs on Program Resources
____________________________________________________

   Rank       PhD Program                Rank      MA/MS Program 
____________________________________________________

 1. U. of Illinois at Urbana-Champaign    1. East Carolina University
 2. Bowling Green State Univ              2. George Mason University
 3. University of Oklahoma                3. Appalachian State University
 4. Rice University                       4. Xavier University
 5. University of Maryland                5. Minnesota State University
 6. University of Minnesota               6. Indiana University-Purdue U.
 7. Pennsylvania State Univ               7. Middle Tennessee State Univ
 8. University of South Florida           8. University of Tulsa
 9. University of Akron                   9. Radford University
10. George Mason University              10. Southwest Missouri State Univ
11. George Washington Univ               11. Valdosta State University
12. U. of Illinois at Chicago            12. San Diego State University
13. Colorado State University            13. Teachers College of Columbia U.
14. University of Memphis                14. St. Cloud State University
15. University of Georgia                15. Western Kentucky University
16. Portland State University            16. San Francisco State University
17. Georgia Inst of Technology           17. Emporia State University
18. Wayne State University               18. University of Wisconsin-Stout
19. University of Calgary                19. Elmhurst College
20. Clemson University                   20. University of Nebraska-Omaha

____________________________________________________



Table 5

Rankings of PhD and MA/MS Programs on Program Culture
____________________________________________________

    Rank       PhD Program                 Rank      MA/MS Program 
____________________________________________________

 1. Florida Institute of Technology       1. East Carolina University
 2. Rice University                       2. Appalachian State University
 3. University of Maryland                3. Valdosta State University
 4. Bowling Green State University        4. Xavier University
 5. University of Oklahoma                5. Middle Tennessee State U.
 6. Teachers College of Columbia U.       6. Emporia State University
 7. George Washington University          7. Minnesota State University
 8. Clemson University                    8. University of Tulsa
 9. University of Illinois at Chicago     9. Indiana University-Purdue U.
10. U. of Illinois at Urbana-Champaign   10. Carlos Albizu University
11. Illinois Institute of Technology     11. George Mason University
12. University of Nebraska-Omaha         12. Radford University
13. University of Memphis                13. Georgia State University
14. George Mason University              14. University of Northern Iowa
15. Portland State University            15. St. Cloud State University
16. University of South Florida          16. Teachers College of Columbia U.
17. Wayne State University               17. San Diego State University
18. University of Tulsa                  18. University of Wisconsin-Stout
19. University of Minnesota              19. Elmhurst College
20. University of Guelph                 20. Western Kentucky University

____________________________________________________


Table 6

Rankings of PhD and MA/MS Programs on Program Costs
____________________________________________________

    Rank       PhD Program                 Rank      MA/MS Program 
____________________________________________________

1.  University of Oklahoma  1.  Indiana University–Purdue U.
2.  Bowling Green State University  2.  Southwestern Missouri State U.
3.  Rice University  3.  Radford University
4.  U. of Illinois at Urbana-Champaign  4.  Valdosta State University
5.  Clemson University  5.  Middle Tennessee State U.
6.  U. of Tennessee, Knoxville  6.  Western Kentucky State U.
7.  University of South Florida  7.  Appalachian State University
8.  Virginia Tech University  8.  Emporia State University
9.  University of Calgary  9.  St. Cloud State University
10.  University of Georgia  10.  Xavier University
11.  University of Akron  11.  Minnesota State University
12.  U. of Missouri–St. Louis  12.  San Diego State University
13.  University of Maryland  13.  University of Tulsa
14.  Pennsylvania State University  14.  East Carolina University
15.  University of Houston  15.  University of Central Florida
16.  Colorado State University  16.  University of Wisconsin–Stout
17.  University of Minnesota  17.  U. of Nebraska at Omaha
18.  U. of Nebraska at Omaha  18.  University of Northern Iowa
19.  University of Illinois at Chicago  19.  Georgia State University
20.  University of Memphis  20.  U. of Tennessee at Chattanooga

____________________________________________________

Discussion

The objectives for our research were as follows: (a) to develop a ranking system for graduate programs in I-O psychology based on a broader set of criteria than has traditionally been considered; (b) to apply the same system to rank terminal master's and PhD programs; and (c) to determine whether the criteria used to evaluate quality of life and quality of training differed between students in terminal master's versus PhD programs.

We wish to thank the program directors, and in particular the graduate students, who participated in every phase of the study. While we would have liked to receive ratings from more graduate programs, we believe that the data reported in this study represent a valid, alternative way of evaluating the quality of graduate programs and provide a useful impetus for further discussion of the factors influencing perceptions of the quality of graduate training.

Regarding the third objective, we were surprised to find that the criteria used to evaluate program quality did not differ between MA/MS and PhD students. This was determined by comparing mean importance ratings from the two groups in the second phase of the study. Both groups placed similar emphasis on research opportunities, instructional quality, availability of funding, and so forth. As terminal MA/MS programs look to add faculty and develop their programs, they should strive to improve in the same areas that doctoral programs do.
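This comparison amounts to testing, criterion by criterion, whether the two groups' mean importance ratings differ. A minimal sketch of one such comparison, using entirely hypothetical 1–5 importance ratings for a single criterion (the actual survey data are not reproduced here) and Welch's t statistic for the mean difference:

```python
from statistics import mean, variance

# Hypothetical 1-5 importance ratings for one criterion
# (e.g., "availability of funding"); invented for illustration only.
phd_ratings = [5, 4, 5, 4, 4, 5, 3, 4]
ma_ratings = [4, 5, 4, 4, 5, 3, 4, 5, 4]

def welch_t(a, b):
    """Welch's t statistic for two independent samples
    (difference of means over the pooled standard error)."""
    se_sq = variance(a) / len(a) + variance(b) / len(b)
    return (mean(a) - mean(b)) / se_sq ** 0.5

t = welch_t(phd_ratings, ma_ratings)
print(f"PhD mean = {mean(phd_ratings):.2f}, "
      f"MA/MS mean = {mean(ma_ratings):.2f}, t = {t:.2f}")
# A t statistic near zero indicates the groups weight
# this criterion about equally.
```

With these illustrative numbers the group means are nearly identical, which mirrors the pattern of similar importance ratings reported above.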

With regard to the first two objectives, the rankings reported in Tables 2 through 6 provide alternative ways of ranking graduate programs in both degree options. We elected to publish only the schools at the top of the rankings rather than all rated programs. Our goal was to draw attention to programs that are doing well in the eyes of their students, not to single out those with issues to be addressed. Also recall that there are several reasons why a program may not appear in a top 20: fewer than five respondents, a choice not to participate in the study, or a ranking below 20. We believe that the factor rankings provide as much value as, or more than, the overall rankings for comparing programs. Different prospective graduate students will value different attributes in a graduate program, and the factor tables provide considerable information about the strengths of certain programs on those attributes.

There are several possible uses for the data we collected and presented. Schools ranking high on either the overall index or on specific factors may choose to use the information to publicize strengths of their programs. As noted above, many of the participating program directors requested specific feedback for their programs. These data may be used in several ways, such as targeting areas for improvement or as leverage when seeking more resources from school administrators. For example, the doctoral program at the University of Tulsa scored high on a number of variables but below average on three: opportunities for work in the local community, availability of funding, and faculty turnover. The low score on faculty turnover reflects the fact that in the past 2 years we have lost two junior faculty to higher-paying jobs in business schools. The impact of this variable and of the funding variable on our overall ranking can be used to build a case to the administration for better pay for junior faculty and for more internally funded R.A. or T.A. positions. In addition, applying principles of survey feedback, the University of Tulsa psychology faculty plan to present the full results back to the students, elicit critical incidents regarding problems with work opportunities, turnover, and so forth, and create an action plan to develop a more positive environment for graduate students.

We anticipate that there may be some controversy regarding the rankings we present, as well as regarding our methods for choosing and weighting variables, or even the idea of evaluating program quality through student ratings. We welcome feedback and commentary, as we believe any discussion of how to rank the quality of graduate training will, in the end, lead to better experiences for our students.

References

     America's Best Graduate Schools. (1995, March 20). U.S. News and World Report.
     America's Best Graduate Schools. (2001, April 9). U.S. News and World Report.
     Cox, W. M., & Catt, V. (1977). Productivity ratings of graduate programs in psychology based upon publication in the journals of the American Psychological Association. American Psychologist, 32, 793–813.
     Gibby, R. E., Reeve, C. L., Grauer, E., Mohr, D., & Zickar, M. J. (2002). The top I-O psychology doctoral programs of North America. The Industrial-Organizational Psychologist, 39(4), 17–25.
     Jones, R. G., & Klimoski, R. J. (1991). Excellence of academic institutions as reflected by backgrounds of editorial board members. The Industrial-Organizational Psychologist, 28(3), 57–63.
     Levine, E. L. (1990). Institutional and individual research productivity in I-O psychology during the 1980s. The Industrial-Organizational Psychologist, 27(3), 27–29.
     Payne, S. C., Succa, C. A., Maxey, T. D., & Bolton, K. R. (2001). Institutional representation in the SIOP conference program: 1986–2000. The Industrial-Organizational Psychologist, 39(1), 53–60.
     Surette, M. A. (1989). Ranking I-O graduate programs on the basis of student research presentations. The Industrial-Organizational Psychologist, 26(3), 41–44.
     Surette, M. A. (2002). Ranking I-O graduate programs on the basis of student research presentations at IOOB: An update. The Industrial-Organizational Psychologist, 40(1), 113–116.
     Winter, J. L., Healy, M. C., & Svyantek, D. J. (1995). North America's top I-O psychology doctoral programs: U.S. News and World Report revisited. The Industrial-Organizational Psychologist, 33(1), 54–58.

July 2004 Table of Contents | TIP Home | SIOP Home