Industrial-Organizational (I-O) Psychology
Graduate School Rankings: A Guide
for I-O Graduate School Applicants
Carrie A. Bulger
Bowling Green State University
Note: Order of authorship was determined alphabetically.
When evaluating graduate programs
in industrial-organizational (I-O) psychology, many sources and types of
information are important. One source of information that you will
encounter is ranking systems that quantify the quality of a particular school
according to a set of criteria. There are an increasing number of these
ranking systems, and they often produce different rank orders because they use
different criteria to rank the programs. Unless you understand the
criteria, the results are more likely to be confusing than helpful. We
wrote this report to help you evaluate individual ranking systems so that you
can make a more informed decision about which program best fits your interests.
In general, when evaluating a
particular ranking system, pay attention to the criteria that are being measured
with that system. Some systems rank programs on reputation as determined
by department chairs or esteemed faculty. Other systems tabulate research
publications and presentations by faculty and/or graduate students and use those
tabulations as an indication of the quality of a graduate program.
Finally, other systems survey current graduate students within programs to
assess their satisfaction with their program.
Each of these ranking systems has
its own strengths and limitations. There are often important factors that
are left out of the equation of a particular ranking system. For
example, the fit between faculty interests and your own interests is left out of
all equations. In addition, each ranking system measures some aspects of
graduate school quality and ignores other aspects. Systems based on number
of faculty publications assume that quality of doctoral education can be linked
to faculty research productivity. That assumption may be reasonable if a
graduate student desires to publish lots of research articles while in graduate
school. That assumption may be unreasonable if a graduate student is less
interested in research productivity. In the following report, we discuss
general criteria to evaluate each ranking system and we provide specific
evaluations of some of the ranking systems that have been conducted.
Making a decision about which
graduate program to choose can be difficult. Remember that the decision of
which school to choose is a personal one. You need to understand what is
important to you and make your decision accordingly. We hope that this
report helps you better evaluate information that you will encounter.
This guide was
written for people who are considering entering a doctoral program in
Industrial-Organizational (I-O) psychology. Just as there are many
different ranking systems used to evaluate professional and collegiate athletics
teams, there have been several different attempts to evaluate and rank I-O
psychology doctoral programs. Making sense of different ranking systems
can be difficult. Systems vary widely on the factors and methodologies
used to rank programs. This guide highlights issues and concerns of
ranking systems, and although we criticize the ranking systems more than we
praise them, ranking systems do provide helpful information. Our goal is
to identify issues to consider and questions to ask that will make you a better
consumer of the information the different ranking systems provide.
All Ranking Systems
The following four
questions should be considered while examining all ranking systems.
What matters to you?
There are lots of factors that
can be used for ranking doctoral programs. Some ranking systems weight a
factor heavily whereas other systems ignore the factor entirely. Make sure
that the factors considered in a particular ranking system are criteria that are
important to you.
Who is doing the ranking?
Different sources may have
different preferences and biases. In addition, certain sources may be in a
better position to evaluate doctoral programs in I-O psychology than other
sources. Always try to determine who is providing the judgments or ratings.
Which programs are considered
in the rankings?
Different ranking systems may
omit certain programs. Just because a program fails to make a list does
not necessarily mean that the program is of poor quality. Find out what
the criteria were for considering programs.
What is the methodology of a
particular ranking system?
Consider that each ranking is
just like any other psychological study. A particular study may have its
own strengths and limitations. Use methodological criteria that you would
use to evaluate other psychological research. For example, is the sample
size appropriate for the conclusions made? Is the operational definition
of the ranking system adequate?
Three Types of Ranking Systems
We have classified ranking
systems into three categories. Some systems rate a
program's prestige using external perceptions (e.g., US News & World
Report's Best Graduate Schools, 2005), whereas others rate the productivity
of faculty (e.g., Gibby et al., 2002) or student satisfaction with
the program (Kraiger & Abalos, 2004). We discuss the strengths and
limitations of each of the methods in the following sections.
US News & World Report
publishes a ranking of I-O programs in their annual publication, Best
Graduate Schools. They send surveys to many individuals who are either
department chairs or are the heads of I-O programs. These people are given a
sheet with 10 blank lines and are asked to list the top ten schools in I-O
psychology. The responses are compiled, and the schools are ordered based on
how frequently they are mentioned.
Strengths of this rating system
This system rates schools using
individuals' overall impressions. Therefore, a possible strength of this
system could be its comprehensiveness. That is, when individuals make their
ratings, they can use any information at their disposal, which could increase
the chances that all relevant sources of information are considered.
Another strength of this system
is its visibility. Employers or recruiters unfamiliar with the world of I-O may
be more familiar with this rating system than with the others. Therefore,
employers may use these ratings to gauge the prestige of the program, and
consequently your quality as a job applicant (note that this strength has
nothing to do with the actual quality of your education, but it could still be
an advantage in the job market).
Limitations of this rating system
The first limitation of this
system is the number of programs rated. US News & World Report's Best
Graduate Schools (2005) lists only the top 10 schools, giving no information
that one could use to evaluate many other possible programs. Also, as mentioned
above, this rating system measures the perceived prestige of each
graduate program. Although prestige may be related to the quality of education
you would receive at each school, this is not necessarily a rating of how well
you would be taught at each institution.
The next limitation has to do
with the quality of the raters. For ranking systems like this one, given the
number of programs that exist, it is becoming difficult for a rater to be
accurately informed about all of them (Graham & Diamond, 1999). In
addition, many department chairs may have little knowledge about I-O doctoral
programs. The link between their ratings and the quality of doctoral
education may be based on hearsay, outdated information, or overall reputation
of the entire department.
Another problem with the US
News & World Report rankings is that there is an assumption that program
quality is a unidimensional variable. Clearly there are many dimensions
that could be used to distinguish programs. Some programs may be high on
certain dimensions but low on other dimensions. In addition, applicants
may value particular dimensions differently. By measuring overall program quality, the
US News & World Report rankings miss important information that
could be used to help applicants make better decisions.
A final limitation of this system
has to do with the ways in which ratings can be biased. When individuals rate
each program, their assessments may be contaminated by issues unrelated to the
quality of the program. For instance, previous research on the US News &
World Report and similar ranking systems in other fields has shown that the
rating of a particular program may be biased by someone's perceptions of the
university as a whole (e.g., Jacobs, 1999; Paxton & Bollen, 2003). Even with
everything else held constant, programs may be rated more positively if the
university has recently had a top-ranked football or basketball team.
Additionally, programs may receive lower than expected ratings if they are
located in urban areas, are part of public universities, or are located in the
South. It has also been found that the size of a program can have an undue
influence on ratings (perhaps because larger programs produce more alumni, so
that the pool of raters is biased toward those universities; Paxton & Bollen, 2003).
The strength of this system
(raters are able to use a comprehensive set of information when making their
ratings) can also be its limitation (that is, raters can be biased by extraneous
or inaccurate information). Furthermore, this system only lists the top
programs, giving no information with which to evaluate many other programs.
Kraiger and Abalos (2004)
collected information used to rank master's and doctoral programs using
graduate students as raters (it would be possible to collect data on internal reputations
using faculty, though no study has done so). In their analysis, they
collected data from current doctoral and master's students. Kraiger and
Abalos assessed 20 variables ranging from Faculty Support and
Accessibility and Research Opportunities for Students to Cost of Living and
Availability of Funding. Programs were ranked based on a combination of
these 20 variables derived from importance ratings solicited in a previous wave
of data collection.
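As a rough illustration of how such a composite ranking works (this is a generic sketch, not Kraiger and Abalos's actual procedure; the programs, variables, weights, and ratings below are all invented), each program's mean ratings can be combined into an importance-weighted average and the programs sorted on that score:

```python
# Hypothetical sketch of an importance-weighted composite ranking.
# All programs, variables, weights, and scores below are invented.

variables = ["faculty_support", "research_opportunities", "cost_of_living", "funding"]

# Assumed importance weights (e.g., solicited from an earlier survey wave); sum to 1.
importance = {"faculty_support": 0.4, "research_opportunities": 0.3,
              "cost_of_living": 0.1, "funding": 0.2}

# Mean student ratings per program on a 1-5 scale (fabricated numbers).
ratings = {
    "Program A": {"faculty_support": 4.2, "research_opportunities": 3.8,
                  "cost_of_living": 2.9, "funding": 4.0},
    "Program B": {"faculty_support": 3.6, "research_opportunities": 4.5,
                  "cost_of_living": 4.1, "funding": 3.2},
    "Program C": {"faculty_support": 4.0, "research_opportunities": 4.0,
                  "cost_of_living": 3.5, "funding": 3.9},
}

def composite(program_ratings):
    """Importance-weighted average of a program's variable ratings."""
    return sum(importance[v] * program_ratings[v] for v in variables)

# Rank programs from highest to lowest composite score.
ranked = sorted(ratings, key=lambda p: composite(ratings[p]), reverse=True)
for rank, program in enumerate(ranked, start=1):
    print(rank, program, round(composite(ratings[program]), 2))
```

Note that the final order depends heavily on the assumed weights; a different set of importance ratings could reorder these same programs.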
Strengths of this rating system
This system uses graduate
students to determine the ranking of programs. For many variables,
graduate students may be the best source of data. For example, graduate
students will undoubtedly be the most appropriate source to judge whether the
level of support provided by the university is enough to live on in a particular
city. In addition, assessing faculty support and culture of the program
would probably be best done by graduate student informants.
An important aspect of graduate
life is the extent to which faculty and fellow graduate students create a
supportive environment, or climate, that helps promote productivity and
emotional well-being (see Slaughter & Zickar, in press, for empirical
evidence supporting this assertion). Consistent with this, I-O
psychologists incorporate climate variables in their studies of organizational
effectiveness. It is reasonable to consider climate variables when making
your decision. In fact, it is common for students to visit programs that
they are seriously considering before committing to that school. Climate
dimensions are perhaps best assessed by the members of the department being
rated.
Limitations of this rating system
In Kraiger and Abalos's (2004)
study, several programs refused to participate. In those cases, program
directors either thought that the validity of the ratings was suspect or did not
bother to pass on the information to graduate students. Lack of full
participation hurts the overall quality of the ratings. That is, if a program is
not listed, it may be because of a low rating, but it could also mean that the
program would have had a high rating if it had participated.
In addition, although graduate
students may be appropriate sources for judging program quality on many
dimensions, there are other dimensions on which they may not be very good
judges. This criticism is similar to many of the criticisms about the
validity of student ratings of course instructors. Graduate students may
be influenced by the likeability of faculty.
Finally, all subjective ratings
suffer from the possibility that respondents may inflate ratings to promote
their graduate program. Respondents would be motivated to promote their
own school in order to increase the value and prestige of their degree.
Given the visibility of these ratings, this is quite possible.
We think that many of the
dimensions that are best assessed by internal reputations are important ones
that all potential graduate students should consider. Climate variables,
cost of living, and faculty support are all important variables that are best
assessed by current doctoral students. However, the limitations of internal
reputations (especially the possibility of self-promotion) are serious.
We recommend that applicants treat the results of Kraiger and Abalos (2004) and
any other studies that use this method with caution. In general,
applicants should visit several programs that they are considering. There
is no substitute for observing the interactions between faculty and students
(and students with students) in person.
One way that I-O programs have
been evaluated and ranked has been to look at the research productivity of the
schools. Research productivity has been examined by looking at the frequency of
faculty publications in top I-O psychology journals (e.g., Gibby, Reeve, Grauer,
Mohr, & Zickar, 2002), at representation in the SIOP conference program
(Payne, Succa, Maxey, & Bolton, 2001), and at student presentations at the
Annual Graduate Student Conference in Industrial-Organizational Psychology and
Organizational Behavior (IOOB; Surrette, 2002).
Gibby et al. (2002) is the most
recently published examination of faculty publications in I-O journals. The
authors report five sets of rankings based on faculty publications. The first
index ranked institutions based on faculty publications during the years 1996 to
2000 and the second index ranked institutions based on faculty publications
during the entire career of the faculty member. It is important to note that
these rankings accounted for the number of authors on the publication and the
location of the faculty member in the author order (authors listed first
typically contributed more to the research). The third and fourth rankings were
based on the total number of publications, regardless of journal, for the five
year period 1996 to 2000 and for the career of the faculty member. The fifth
ranking was an average rank for the institution based on a summation of the four
previously described rankings.
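Adjusting publication counts for the number of authors and author order can be sketched as follows. This uses a common harmonic weighting scheme for illustration (not necessarily the exact formula Gibby et al. used), and the institutions and publication list are invented:

```python
# Sketch of position-weighted publication credit. The weighting scheme
# (harmonic credit by author position, normalized so credits on each paper
# sum to 1) is one common choice, not necessarily Gibby et al.'s (2002).

def author_credit(position, n_authors):
    """Credit for the author at a given 1-based position: earlier authors
    receive more credit, and credits across all authors sum to 1."""
    weights = [1.0 / p for p in range(1, n_authors + 1)]
    return (1.0 / position) / sum(weights)

# Invented example: publications as (institution, author position, total authors).
publications = [
    ("Univ X", 1, 2),  # faculty member is first of two authors
    ("Univ X", 3, 3),  # faculty member is third of three authors
    ("Univ Y", 1, 1),  # sole author
]

# Sum each institution's position-weighted credit across publications.
scores = {}
for institution, position, n_authors in publications:
    scores[institution] = scores.get(institution, 0.0) + author_credit(position, n_authors)

print(scores)
```

Under this scheme, a sole-authored paper earns full credit, while a third-author position on a three-author paper earns much less, so raw publication counts and weighted scores can rank institutions differently.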
Surrette (2002) provides an
update to his 1989 examination of the presence of I-O programs at the IOOB
conference. This examination shows the number of student presentations from
various institutions at the conference for each year from 1992 to 2002. He
further identifies the rank for each school (where applicable) from the Gibby et
al. (2002) ranking system. Finally, he reports a "small, but statistically
significant" (p. 113) correlation of .19 between the number of student
presentations at IOOB and the Gibby et al. productivity score, indicating that
programs ranked high using one system are also somewhat likely to be ranked high
in the other.
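Agreement between two ranking systems, like the .19 correlation Surrette reports, can be gauged with a rank correlation. The following sketch uses Spearman's tie-free formula on invented ranks (this is an illustration of the statistic, not Surrette's data or analysis):

```python
# Spearman rank correlation between two hypothetical ranking systems,
# using the tie-free formula: rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)).
# The ranks below are invented for illustration.

def spearman_rho(ranks_a, ranks_b):
    """Rank correlation between two rankings of the same programs (no ties)."""
    n = len(ranks_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Invented: how five programs rank under two different systems.
system_1 = [1, 2, 3, 4, 5]
system_2 = [3, 1, 5, 2, 4]
print(spearman_rho(system_1, system_2))  # modest positive agreement
```

A value near 1 would mean the two systems order programs almost identically; a small positive value like .19 means high rank in one system only weakly predicts high rank in the other.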
Payne and her colleagues (2001)
examined research productivity of I-O programs indirectly in their look at the
frequency of presentation at the SIOP conference during the years 1986 to 2000.
This examination focused only on affiliation and not on whether the individual
was a faculty member or graduate student. Further, the authors did not weight
the rankings by the role the individual played in the session. The authors also
did not differentiate affiliation by department, thus, these rankings may
include authors from departments outside of Psychology, such as the Management department.
Strengths of this rating system
Getting some idea of the
productivity of the institutions you are considering can be very important. One
of the key components to success for all I-O psychologists is a clear
understanding of the science of I-O psychology. So, whether your goal is to
become an academic or a practitioner, you should make sure that your graduate
school experience will provide you with opportunities to participate in the
research process. Knowing whether faculty are publishing in I-O journals and
whether the people at the institution present at SIOP and/or IOOB is one way to
assess this.
It is also true that having
research presentations and publications on your curriculum vita by the end of
your graduate training is a very important factor in securing a good job. This
is probably more true for those seeking academic employment than employment in
the field, but either way, presentations and publications cannot hurt your chances.
Another reason to pay attention
to productivity-based rankings stems from the emphasis on dissemination of
research in academia. Any research methods course will teach you that
dissemination is the end goal of any research project, because it is how
knowledge is shared with the field. However, when an individual publishes or
presents research the name of the institution accompanies the name of the
individual in the journal or conference program. This serves to enhance the
reputation of the institution, which can also increase the prestige of the degree
you will earn from the institution.
Limitations of this rating system
With that said, you must also
consider some limitations of the evaluations of productivity. First, though
Payne et al. (2001) and Surrette (2002) include graduate student representation
at conferences, no evaluation of graduate student publications has been
conducted. One thing to note might be whether faculty include graduate
students as co-authors on their own publications. This can be an
indication of the extent that faculty involve their graduate students in
research. As of now, none of the indexes have considered this.
It is also true that ratings
change over time. Though some institutions have consistently ranked near the
top, the rankings reported by Gibby et al. (2002) look somewhat different than
those reported by Howard and his colleagues in 1985, which were different still
from those reported by Cox and Catt in 1977. Such changes could occur for many
reasons including changes in faculty, changes in the focus of the psychology
department, and the like.
Even more important to keep in
mind when looking at productivity rankings is what they do not tell you.
Productivity of faculty and/or graduate students does not tell you about the
coursework you will be required to complete. It does not tell you whether you
will have the opportunity to gain practical experience. And, most important,
simply looking at the productivity rankings of the institutions does not tell
you whether a given program is the right place for you. It is much more
important to find a school at which you will be able to pursue your interests
than it is to attend a school that is highly ranked. Thus, you must look beyond
the number of publications or presentations to the topic areas and foci of the
various faculty at the institution.
Because research is such an
important part of a doctoral program, looking at productivity rankings can be
informative and useful when applying to graduate school. However, applicants
should remember that number of publications and presentations does not tell the
whole story. Anyone applying to doctoral programs should be sure to find
programs where there are faculty members who do research of interest to the applicant.
There are some
limitations that apply not just to one single rating system, but to all of them.
First, they can quickly become outdated. Faculty move from university to
university just like employees in any other job, so the ratings you see may
reflect a different group of faculty than are currently at a particular
university. Good faculty may leave an institution, or a university may have
recently hired several top-notch professors. As you look through the ratings,
you should keep in mind how old the ratings are. Furthermore, as you begin to
investigate schools, you should find out whether they have lost or gained any
faculty since the ratings were made.
Another issue common to all
rating systems is that of making meaningful distinctions between schools. That
is, as one moves up or down the rankings, the differences between each school
and the next may be very small or very large, and may change depending on where you
are in the rankings. Some of the rating systems can give you some indication of how
close the schools are to each other, but others do not. As you use these
rankings, you should try not to put too much weight on small differences in
rankings. For instance, don't choose the #8 school over the #9 school just
because it has a better ranking; use other criteria to make this decision.
Finally, most of these systems
rank only Ph.D. programs. Therefore, they may omit schools that offer only
master's degrees (Kraiger & Abalos, 2004, and Surrette, 2002, are the
exceptions). Also, schools that offer both terminal master's degrees as well as
Ph.D.s may differ in the quality of each type of degree (for instance, by
offering different levels of support). Use these rankings with caution when
making inferences about the quality of terminal master's programs. Although
this report focuses on making decisions about doctoral programs, many of the
same criteria and ideas apply to the process of choosing between master's programs.
The amount of information
available to help you make your decision can be overwhelming. Please
remember to evaluate critically all information presented to you during this
process. There are many aspects of any doctoral program to consider when
deciding where to apply and, ultimately, where to go for your degree. This
guide has focused on three areas you might encounter in popular media or through
the SIOP website. Throughout this document, we have alluded to other areas
to consider in addition to those we discussed. We list below, not
necessarily in order of importance, several areas for you to consider when
choosing a doctoral program:
1. Student satisfaction/climate: Discussed above
in the second section.
2. Prestige/external reputation: Discussed above
in the first section.
3. Productivity: Discussed above in the third section.
4. Research fit: This involves determining whether
there is a faculty member at the institution who is doing research on the
topic area you would like to study. Most programs seek to admit
students who will work on research that will further the lines of research
already being conducted. Faculty members will be looking for new
advisees who will not only help them conduct research they already have
going but who will bring new ideas to their program.
5. Coursework: This involves looking at what you
are required to take and the kinds of courses that will be available to you.
For example, you might be very interested in taking a lot of
quantitative/statistical courses. In that case, learn whether the
institution offers many different kinds of such courses. Most programs
will require one or two Methods and Statistics courses, but some will offer many more.
6. Applied Experiences: Some doctoral programs
require an internship, others encourage an internship, and still others
discourage an internship experience. Additionally, schools differ in
the extent to which they can help you obtain internships (for instance, some
schools might have good relationships with nearby industries). If gaining
applied experience is important to you, pay close attention to the ways the
school handles internships.
7. Where people get jobs when they graduate: It
can be very informative to identify the kinds of organizations that hire
graduates of a program. If, for example, you want an academic career
but the graduates of a particular program tend to pursue careers in industry
(or vice versa), the program may not be a good fit for you. Remember,
the alumni network is an important source of information about internships,
research opportunities, and job opportunities.
8. Financial support available: It is pretty
common for doctoral programs to offer funding for students in many forms.
For instance, many programs offer tuition waivers, teaching and/or research
assistantships, fellowships, and even health insurance. They do this
because they expect you to be a full-time student and that you will not be
working outside of school. Getting funding has many advantages, but
the primary advantage is that it allows you to focus on your coursework and
your research as opposed to supporting yourself financially.
9. Student opportunity to present/publish
research: In addition to knowing the level of research productivity at
the institution, you should determine to what extent students are included
on research with faculty members and to what extent students present and
publish their own research. As we indicated above, publishing and
presenting research is a key component to finding a good job when you graduate.
10. Fit with particular professors: This is
different from research fit, which we discussed in #4. Fit here is
about whether you think you could get along with the faculty at the school.
The best way to determine this is through conversations with the faculty in
person, via telephone and even email. Talking to current graduate
students is another important way to learn about the interpersonal styles of
the faculty members.
11. Quality of Life: You'll only be in
graduate school for a few years, so the quality of life at a particular
school may not be as important as some of the other considerations. However,
if you have a strong preference for certain types of environments, you should
of course take this into account (for instance, if you are married, you
might want to see if there are many nearby job opportunities for your spouse).
12. Probable success at gaining admission into the
program: It is good to set your aspirations high, but you should also be
realistic. Many universities have provided data (available on the SIOP
website) about their GRE and GPA cutoffs and average scores, as well as how
many people apply (and are accepted) to their program each year.
References
Cox, W.M., & Catt, V. (1977). Productivity ratings of graduate programs in psychology based upon publication in the journals of the American Psychological Association. American Psychologist, 32.
Gibby, R.E., Reeve, C.L., Grauer, E., Mohr, D., & Zickar, M.J. (2002). The top I-O psychology doctoral programs of North America. The Industrial-Organizational Psychologist, 39, 17-25.
Graham, H.D., & Diamond, N. (June 18, 1999). Academic departments and the ratings game. Chronicle of Higher Education, 45.
Howard, G.S., Maxwell, S.E., Berra, S.M., & Sternitzke, M.E. (1985). Institutional research productivity in Industrial/Organizational psychology. Journal of Applied Psychology, 70.
Jacobs, D. (1999). Ascription or productivity? The determinants of departmental success in the NRC quality ratings. Social Science Research, 28, 228-239.
Kraiger, K., & Abalos, A. (2004). Rankings of graduate programs in I-O psychology based on student ratings of quality. The Industrial-Organizational Psychologist, 42, 28-43.
Paxton, P., & Bollen, K.A. (2003). Perceived quality and methodology in graduate department ratings: Sociology, political science, and economics. Sociology of Education, 76, 71-88.
Payne, S.C., Succa, C.A., Maxey, T.D., & Bolton, K.R. (2001). Institutional representation in the SIOP conference program: 1986-2000. The Industrial-Organizational Psychologist, 39.
Slaughter, J.E., & Zickar, M.J. (in press). A new look at the role of insiders in the newcomer socialization process. Group and Organization Management.
Surrette, M.A. (2002). Ranking I-O graduate programs on the basis of student representations at IOOB: An update. The Industrial-Organizational Psychologist, 40.
Surrette, M.A. (1989). Ranking I-O graduate programs on the basis of student research. The Industrial-Organizational Psychologist, 26.
US News & World Report's best graduate schools (2005). Washington, DC: US News & World Report.