
The 2011 SIOP I-O Psychology Graduate Program
Benchmarking Survey

Part 2: Admissions Standards and Processes

Robert P. Tett, Benjamin Walser, Cameron Brown, and Daniel V. Simonet
University of Tulsa

Scott Tonidandel
Davidson College

This is the second in a series of TIP articles describing results of the 2011 survey of I-O graduate programs. In the October issue of TIP, we introduced the survey’s aims and methods, and offered norms for basic program descriptors (e.g., number of yearly graduates, number of core-I-O faculty). Here, we turn to program admission requirements and procedures.

Applying sound selection principles, graduate programs rely on multiple data sources to identify the most promising applicants. An established literature addresses graduate-level training and the empirical validity of common admissions criteria (e.g., undergraduate GPA). Our aims here are primarily descriptive, but we offer some commentary in light of relevant prior research. In the admissions section of the survey, we asked each program (a) how many applications are received per year and the proportions of students accepted and then enrolled, (b) what materials are required of applicants, (c) how much weight is given to various application content dimensions, (d) what cutoff scores are specified for GPA and standardized tests (GRE and/or GMAT), and (e) by what processes application materials are reviewed.

As in the first article, current norms target U.S. programs only (owing to likely underrepresentation of foreign programs) and are offered for all (US) programs combined, as well as separately for master’s and doctoral programs in psychology and business/management departments (i.e., 2 x 2 breakouts). Norms are also provided for three “top 10” program sets: Gibby, Reeve, Grauer, Mohr, and Zickar’s (2002) objectively productive doctoral programs (e.g., number of publications in top I-O journals), and Kraiger and Abalos’ (2004) top master’s and doctoral programs (two separate lists) based on student ratings of qualities of life and training. Distributions are skewed in many cases, calling for median and range data, in addition to means and standard deviations. Nominal data are reported as frequencies and percentages. We offer significance tests for the 2 x 2 comparisons (main effects and interactions): Fs from ANOVAs for continuous DVs and χ2s and partial χ2s from logit (multiway frequency) analysis for nominal DVs. Due to space constraints, tables reporting significance test results are not included here in the printed article, but are available online at http://www.utulsa.edu/TIP-admissions-tables. Finally, norms are provided for a given variable only when N is at least 3.
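For readers wishing to apply the same analytic strategy to their own program data, the sketch below illustrates the two test families described above: a two-way ANOVA (main effects and interaction) for a continuous DV, and a chi-square test of association for a nominal DV. All variable names and values are hypothetical; the survey dataset itself is not reproduced here, and the article's multiway frequency (logit) analysis generalizes the simple chi-square shown.

```python
# A minimal sketch of the 2 x 2 analysis strategy, on invented data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy.stats import chi2_contingency

# Hypothetical program-level records: degree x department, one
# continuous DV (applicants) and one nominal DV (GRE requirement).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "degree": rng.choice(["masters", "doctoral"], 80),
    "dept": rng.choice(["psych", "business"], 80),
    "applicants": rng.poisson(60, 80),
    "requires_gre": rng.choice(["yes", "no"], 80),
})

# Two-way ANOVA: Fs for degree, department, and their interaction.
model = ols("applicants ~ C(degree) * C(dept)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Chi-square for a nominal DV; the article's logit analysis extends
# this idea to multiway frequency tables with partial chi-squares.
table = pd.crosstab(df["degree"], df["requires_gre"])
print(chi2_contingency(table))
```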

Caveats

Due to an oversight, we failed to ask programs about GMAT score requirements and weighting, which are especially pertinent to business/management programs. In an effort to fill this gap, we prepared a brief follow-up survey on the GMAT and sent it to all business/management programs participating in the original survey. Response rates were 46.2% and 47.4% for master’s and doctoral programs, respectively. The Ns in these cases are suboptimal; corresponding norms should be interpreted cautiously. A second problem is that we inadvertently asked programs about both GRE-Analytic and GRE-Writing subtests, failing to consider that these are not separate tests (the Analytical Writing measure replaced the older Analytic section). Results for the Analytic subtest, accordingly, are unusable.

Numbers of Applicants, Acceptances, and Enrollees

Table 1 shows the mean numbers of applicants, acceptees, and actual enrollees for all (U.S.) programs combined. I-O graduate programs receive around 61 applications on average per year and accept around 16 applicants. A few programs attract and accept disproportionately large numbers (max = 300 and 125, respectively), rendering median values of 50 and 10 more representative of central tendency. These results suggest an overall acceptance rate of between 20.0% and 26.6%. The “percent accepted” results show considerable variability in selectivity across programs (range = 2% to 100%).1 The total number of enrollees per year, across all programs responding to the survey, is around 1,230. Accounting for the overall 59.8% response rate (see the October TIP article) and assuming no systematic sampling effects with respect to enrollee numbers pushes this estimate to about 2,050 for all I-O programs in the U.S.


1 The 16.2 mean N of students accepted, which is 26.6% of the 60.8 mean N of applicants, appears discrepant from the mean "% accepted" value of 32.7. This apparent discrepancy is a numerical artifact resulting from averaging "% accepted" across programs versus applying "% accepted" within programs and then averaging the resulting N accepted. A similar effect appears with the median values, and with "% choosing to attend." We urge reliance on the mean and median numbers of acceptees and enrollees over corresponding percent values reported in this and later tables. The reported percent indices are uniquely informative for their min and max values.
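To make footnote 1's arithmetic concrete, the toy example below (with invented numbers) shows how the mean of per-program acceptance rates can diverge sharply from the pooled rate obtained by dividing total acceptances by total applicants.

```python
# Toy illustration of the mean-of-rates vs. pooled-rate artifact.
programs = [
    {"applicants": 300, "accepted": 30},  # large, selective program
    {"applicants": 20,  "accepted": 12},  # small, less selective program
]

# Averaging each program's "% accepted" weights programs equally...
mean_pct = sum(p["accepted"] / p["applicants"] for p in programs) / len(programs)
# ...while the pooled rate weights programs by applicant volume.
pooled_pct = (sum(p["accepted"] for p in programs)
              / sum(p["applicants"] for p in programs))

print(f"mean of per-program rates: {mean_pct:.1%}")   # 35.0%
print(f"pooled acceptance rate:    {pooled_pct:.1%}") # 13.1%
```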


The admissions, acceptance, and enrollee numbers for the overall sample are broken out in Tables 2 and 3 for master’s and doctoral programs in psychology and business/management departments. In light of F values reported in Table A1 (online), the four types of program receive similar numbers of applications but differ in the numbers of students they accept and in the numbers who choose to attend. Specifically, master’s programs (combining department types using weighted means) accept an average of around 20 students per year, compared to about 8 students for doctoral programs. The corresponding selection rates are 37% versus 12%. This is understandable, as standards tend to be higher in doctoral programs (e.g., see GRE cutoffs, below). Differences are also evident in enrollments: 12.3 versus 4.5 students on average into master’s versus doctoral programs, respectively, although the rates are about equal: 60% versus 58%. The significant interaction (see Table A1 online) suggests the drop in mean enrollee numbers from master’s to doctoral programs in psychology departments (from 11.6 to 5.2; 58.4% to 58.1% of those accepted) is less than the corresponding drop in business/management departments (from 18.5 to 2.1; 73.5% to 60.2%). The basis for this effect is unclear. One possibility is that business master’s applicants (whose 73.5% enrollment rate is especially high) apply to fewer programs than their psychology counterparts do, leaving them fewer competing offers once accepted.

Required Application Materials

Table 4 shows frequencies and percentages of programs requiring assorted application materials in the entire sample and by degree and department type. Corresponding significance tests are reported in Table A2 (online). Undergraduate transcripts are universally required, as are language proficiency test scores from foreign applicants. Reference letters, graduate transcripts (if available), and personal statements are also commonly required (range: 89% to 95%), and a large majority (79%) of programs require GRE-V and GRE-Q scores. Requirements for some of these and other materials, however, vary across degree and department types.

Doctoral programs more often require available graduate transcripts (98% vs. 84%), reference letters (100% vs. 91%), and GRE psychology subject test scores (9% vs. 0%). Business/management programs are more likely to require language proficiency test scores from general applicants than are psychology programs (77% vs. 40%), perhaps owing to greater numbers of foreign applicants to business programs. A similar difference is evident between doctoral and master’s programs (57% vs. 36%; one-tailed test2), likely due to the need for greater selectivity. Graduate assistantship applications are more often required by doctoral programs (26% vs. 14%; one-tailed). This trend is more pronounced in business/management departments (55% vs. 17%; one-tailed), suggesting possibly greater available resources for graduate funding in those departments. Not surprisingly, psychology departments are more likely than business/management departments to require GRE scores (87% vs. 53%). Follow-up survey results suggest business/management programs commonly rely on the GMAT (100% of follow-up survey respondents reported this requirement). The GRE proportions for master’s and doctoral programs within psychology departments (83% and 93%, respectively) are nearly identical to those reported by Norcross, Hanych, and Terranova (1996; 81% and 93%), suggesting stability in these rates over time.


2Given the exploratory, normative nature of the survey, directional effects were not predicted. Results of one-tailed tests are reported in cases where observed effects permit relatively straightforward post hoc rationales. Advocates of stricter adherence to significance testing standards may choose to ignore these findings.


Relative Weighting of Application Content

Beyond asking what materials are required of applicants, we also asked how specific application content is weighted in the selection process, using a 1 = small weight to 3 = heavy weight scale. Results for only those programs requiring corresponding application materials are summarized in Table A3 (online). To better capture the sample’s overall weighting of application elements, we recalculated the weighting norms after entering 0 weight for programs not requiring associated materials. These results are reported in Table 5, and for the 2 x 2 breakout in Tables 6 and 7. Corresponding ANOVA results are reported in Table A4 (online). Programs requiring a given application content item may be interested in how other such programs weight that item. Here, we focus on the broader norms, incorporating 0 weights for nonrequiring programs (Tables 5 to 7).
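The recalculation described above amounts to treating a missing weight (item not required) as a weight of zero before averaging. A minimal sketch, using hypothetical weights on the 1–3 scale and invented column names:

```python
# Contrast of the two norming approaches for application-content weights.
import pandas as pd

weights = pd.DataFrame({
    "ugpa_weight": [3, 2, None, 3],    # None = item not required by program
    "gre_q_weight": [2, None, None, 3],
})

# Norms among requiring programs only (the Table A3 approach)...
print(weights.mean())
# ...versus overall norms with 0 substituted for non-requiring
# programs (the approach taken in Tables 5 to 7).
print(weights.fillna(0).mean())
```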

Means in Table 5 show the strongest weight for undergraduate GPA (2.6) followed by weights for writing ability (2.3), personal statements (2.3), available graduate GPA (2.3), GRE scores (Q = 2.3, V = 2.2), overall maturity (2.2), performance in methods courses (2.1), understanding of I-O psychology (2.1), language proficiency (2.1), and reference letters (2.1). These means reflect unbalanced Ns between degree and department types (giving greater weight to program types with higher Ns). Results in Tables 6 and 7 (see also Table A4) reveal interpretable differences among program types.

Undergraduate GPA is weighted more heavily in psychology I-O programs than in their business/management counterparts (weighted means = 2.7 vs. 2.2, respectively), as are performance in I-O courses (2.0 vs. 1.4), research experience (2.1 vs. 1.4), and GRE-Q scores (2.4 vs. 1.8). The latter undoubtedly reflects greater reliance on the GMAT in business programs. Understandably, performance in undergraduate business courses is weighted more heavily by business programs (1.6 vs. 1.2, one-tailed test). Research experience is also weighted more heavily in doctoral than in master’s programs (weighted means = 2.5 vs. 1.6, respectively), as is language proficiency (2.2 vs. 2.0). Other items show more nuanced effects, as follows.

Performance in undergraduate methods courses is weighted markedly lower in business master’s programs (mean = .7) than in the remaining three cells (2.0 to 2.1). Similar patterns are evident for research interests (.3 vs. 1.6 to 2.6), understanding of I-O psychology (.7 vs. 1.8 to 2.3), and performance in undergraduate psychology courses (.5 vs. 1.5 to 2.0). Proof of financial support is weighted highest in business master’s programs (1.3) and lowest in business doctoral programs (.2), possibly reflecting a combination of higher costs for business program tuition and better funding for business doctoral students. Personal statements are weighted more heavily in business doctoral programs than in business master’s programs (2.8 vs. 1.8) but about equally between degree types in psychology-based programs (2.3 vs. 2.2). The reason for this interaction is not clear. Additional differences are evident within business/management programs. Specifically, doctoral programs put greater weight than do master’s programs on both the verbal and quantitative subtests of the GMAT. A similar pattern is evident for the GRE within psychology departments (t = –2.33 for the verbal subtest and –1.92 for the quantitative; ps < .05, one-tailed).

All told, three major themes are evident regarding what different I-O program types are looking for in a good applicant. First, doctoral programs tend to emphasize research content (research experience, research interests, performance in methods courses; GREs for psychology programs and GMATs for business/management programs), which is understandable given the centrality of the dissertation in doctoral training and the greater investment of resources in accepting doctoral students in a competitive application process. Emphasis on language proficiency also fits this pattern, given the increased importance of written and oral communication at the doctoral level. The greater weight placed on understanding of I-O psychology by doctoral programs shows recognition of I-O psychology as a scientific discipline and the value of applicants’ knowing what they are getting into when seeking the doctorate. 
Second, psychology-based programs appear to emphasize application content bearing on academics and research (e.g., undergraduate GPA, performance in methods courses, research experience), especially content focusing on I-O psychology (performance in I-O courses, understanding of I-O). Business/management programs, of course, emphasize performance in business courses. Their relatively lower weights on academic and research variables perhaps reflect a more practice-based orientation to the discipline.

Third, as an extension of the second point, the practice–research difference between master’s and doctoral programs appears to be stronger in business/management departments than in psychology departments. Most of the significant interactions show notably lower mean weights for research-oriented content in applications to business master’s programs. In short, scientific competence at the master’s level is weighted more heavily in psychology than in business, and this departmental distinction is less apparent at the doctoral level.

That undergraduate GPA is, overall, the most commonly required and highly weighted application item is supported by meta-analytic evidence showing moderate predictive validity for this item. Kuncel, Hezlett, and Ones (2001) report corrected mean correlations of .32, .27, and .14 in predicting graduate GPA, faculty ratings, and degree attainment, respectively, for social science graduate programs (uncorrected values = .29, .19, and .14; k range = 14 to 32).3 Stronger validity estimates are reported for GRE-V and GRE-Q: mean ρ = .39 and .34, respectively, for predicting graduate GPA, .37 and .38 for predicting faculty ratings, and .22 and .31 for predicting degree attainment (corresponding uncorrected values = .27 and .23, .20 and .20, .17 and .22; k range = 14 to 55). Combining these three measures in predicting a composite of graduate GPA and faculty ratings using correlation of linear sums yielded a (mean) operational validity estimate of .53.4 Such validity strongly supports I-O graduate programs’ reliance on undergraduate GPA and standardized test scores in student selection. Two points bearing on the use of these measures warrant discussion.


3Credibility intervals are moderately wide in most cases, suggesting situational specificity in validity strength (e.g., 10% of population correlations for UGPA in predicting graduate GPA are < .23 and 10% are > .41).
4Adding GRE-Analytical test scores lowered the combined operational validity to .50.
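Kuncel et al.'s composite estimate rests on the standard formula for the validity of a unit-weighted sum of standardized predictors: the sum of predictor-criterion correlations divided by the standard deviation of the composite. The sketch below applies that formula. The three validities are the corrected graduate-GPA values quoted in the text, but the predictor intercorrelations are hypothetical placeholders (we have not reproduced Kuncel et al.'s inputs), so the printed result will not match the reported .53.

```python
# Hedged sketch of the "correlation of linear sums" composite validity.
import numpy as np

r_xy = np.array([0.32, 0.39, 0.34])  # UGPA, GRE-V, GRE-Q validities (from text)
R_xx = np.array([                    # predictor intercorrelations (invented)
    [1.00, 0.25, 0.30],
    [0.25, 1.00, 0.55],
    [0.30, 0.55, 1.00],
])

# r(composite, y) = sum_i r_xi,y / sqrt(sum_ij R_xx[i, j])
composite_r = r_xy.sum() / np.sqrt(R_xx.sum())
print(f"composite validity: {composite_r:.2f}")
```

Note that the denominator grows with predictor redundancy: the more the predictors overlap, the less each adds to the composite's validity, which is the logic behind the incremental-validity questions raised later in this article.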


First, GRE-Subject test scores (i.e., for psychology) are required by very few programs (5 of 131 = 4%), yet the subject test tends to outperform both GRE-V and GRE-Q in predicting graduate student performance. Kuncel et al. (2001) report mean corrected values of .40, .38, and .30 in relations with the three criteria noted above (uncorrected values = .30, .23, and .24, respectively) and show an increase in the combined estimate from .53 to .56 when GRE-Subject test scores are added to undergraduate GPA, GRE-V, and GRE-Q. The primary rationale for the unique predictive advantage of the GRE-Subject test is that it reflects not only native ability (i.e., g) but also interest in psychology and motivation to learn psychological content (e.g., Ewen, 1969). I-O graduate programs are urged to include the GRE psychology subject test in their application requirements and to weight it at least as strongly as the two main GRE subtests when making selection decisions. Individual programs may be reluctant to require the subject test because so few other programs require it: adding it would burden most applicants and could thereby shrink the applicant pool. In addition, only 4-6% of the psychology subject test pertains directly to I-O psychology (see http://www.ets.org/gre/subject/about/content/psychology). Whether the predictive advantages of adding the subject test outweigh the drop in applicant numbers (and the resulting increase in the selection ratio) is a matter for careful consideration as I-O programs vie for top applicants.

Second, of the four program types considered in the survey (i.e., the 2 x 2 breakout), those weighting the noted predictors highest of all application elements, on average, are psychology doctoral programs (see Tables 4 and 6). This is understandable as the demand for predictive accuracy is higher in doctoral than in master’s programs, owing to increased risks and investments, and GREs are more relevant to psychology programs than to business/management programs. Notably, the GMAT is required in all nine business programs responding to our follow-up survey, and the verbal and quantitative subtests are weighted especially highly in business doctoral programs. Undergraduate GPA, however, ranks seventh in the latter programs with respect to mean weights. Whether master’s programs might improve their selection decisions by relying more on standardized tests, and business doctoral programs by relying more on undergraduate GPA, are questions extending beyond current aims.

A further point concerns reliance on predictors besides standardized test scores and GPA. The application content item with the second-highest weight (behind undergraduate GPA) based on all programs is writing quality. Personal statements, from which inferences of writing quality are most directly derived, are required by 89% of programs. Business master’s programs (N = 3) weight writing quality at the first rank, and it ranks sixth in business doctoral programs (N = 11), ahead of undergraduate GPA. Psychology master’s programs weight personal statements and writing quality third (tied), and corresponding weights from psychology doctoral programs rank 10th and 7th, respectively. The relatively strong emphasis placed on writing quality reflects an obvious awareness of the importance of writing in graduate work. What is less clear is how accurately applicants’ personal statements reflect writing ability. They are far from pure writing samples, as they permit almost unlimited editing by others and by software tools.5 A statement could be written by someone other than the applicant, and the receiving program might be none the wiser. We are unaware of validation research on personal statements and derived dimensions (writing ability, maturity, understanding of I-O psychology). Given programs’ reliance on these items for student selection, validation seems a timely and worthwhile pursuit.


5One might ignore the latter as a source of bias to the degree students are permitted to use such tools in their graduate work. Writing well on one's own, however, seems preferable to reliance on external assistance.


A similar point can be raised about reference letters, which are required by 95% of all programs (Table 4) and whose mean weights fall near the middle of the pack (e.g., rank = 11 of 18; Table 5). Published research on the validity of reference letters is thin. We did find a link to an unpublished report by Aamodt (2012) on a meta-analysis (k = 51) yielding uncorrected mean validity estimates of .17 and .25 in predicting work and training performance in students and employees. The author cautions that interrater reliabilities for reference letters tend to be modest, averaging .22. Letters, he infers, say as much about the writer as they do about the applicant. As with personal statements, research is needed to assess the validity of reference letters in predicting graduate student performance. More broadly, programs would benefit from the collective examination of all common application materials and content dimensions, particularly with respect to incremental validity. Some items may actually reduce validity, and relying on a few good predictors could substantially streamline the application and selection processes.

Cutoffs

Cutoffs for undergraduate GPA and standardized test scores are summarized in Table 8 for the combined sample, and in Tables 9 and 10 for the 2 x 2 breakout. In light of decision process norms presented below, it is doubtful that most programs employ those values rigidly in making selection decisions. Rather, in most cases they are probably best regarded as guidelines. Nonetheless, comparisons across program types are meaningful. The lack of responses from master’s programs in business/management departments on standardized test score cutoffs precluded our running ANOVAs for these results. Instead, we used t-tests to compare master’s and doctoral program means within psychology departments (see right column of Table 9), and to compare doctoral program means between psychology and business/management departments (see right column of Table 10).6


6Use of multiple t-tests raises the likelihood of Type I error in the comparisons as a set. As we are not testing theory or drawing strong prescriptive inferences in this primarily descriptive effort, we refrained from adjusting the per-comparison error rate. Proportions of statistical tests yielding significant results bear comparison to the nominal 5% error rate under the stringent assumption that all population effects are null.
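Footnote 6's logic can be made concrete: with m unadjusted tests at α = .05 and all population effects null, the familywise error rate is 1 − (1 − α)^m, and about 5% of tests should reach significance by chance alone. A tiny illustration (the count m = 20 is arbitrary, not the number of tests run in this article):

```python
# Familywise Type I error and expected false positives under the
# global null, for m unadjusted tests at the nominal alpha.
m, alpha = 20, 0.05
familywise = 1 - (1 - alpha) ** m
print(f"P(at least one false positive across {m} tests): {familywise:.2f}")
print(f"expected false positives under the global null: {m * alpha:.1f}")
```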


The overall mean undergraduate GPA cutoff of 3.14 reflects a nearly universal minimum of 3.00, with some programs setting higher cutoffs (max = 4.00). ANOVA for undergraduate GPA cutoffs (afforded by ample N in all four cells) yielded F = 11.67 (p < .01) for degree type, 3.72 (p < .06) for department type, and .96 (p > .10) for the interaction. Doctoral programs set higher GPA standards for admission than do master’s programs (weighted means = 3.26 and 3.06, respectively). The departmental comparison approaches two-tailed significance, with GPA cutoffs averaging slightly higher in psychology departments (3.16 vs. 3.02). This difference may reflect added emphasis on academic and research competence by psychology programs, as noted above in the review of application content weighting. The relatively small Ns for business/management departments preclude firm interpretations here.

GRE cutoffs are expressed on the old 200–800 scale, which was replaced in August 2011 with a new 130–170 scale. The overall means of 525 and 550 for GRE-V and GRE-Q translate to 154.5 and 146 on the new scale, corresponding to around the 66th and 36th percentile ranks, respectively. Interestingly, these values differ from the means from programs relying on percentile cutoffs per se (60th percentile rank in each case, see Table 8). Distributional differences in scaled scores make the 550 GRE-Q mean actually lower in relative terms than the 525 GRE-V mean.7 These differences, particularly in the case of the GRE-Q, raise the possibility that programs using scale-score cutoffs may be biased toward selecting for lower quantitative abilities (36th vs. 60th percentile) and, to a lesser extent, higher verbal abilities (66th vs. 60th). Further analyses with the broader dataset may permit tentative exploration of this issue (e.g., in terms of relative offerings of quantitative courses). Although five doctoral programs (four in psychology, one in business/management) reported requiring the GRE subject test, none provided cutoff data for this test.


7The 66th percentile rank on the GRE-Q corresponds to a scaled score of 685, substantially higher than the mean cutoff of 550 (and the 36th percentile rank on GRE-V yields a scaled score of 410, much lower than the noted mean of 525). This normative difference is partially rectified in the new scaling, but reliance on percentile ranks obviates the need for comparative adjustments.
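The contrast between scaled-score and percentile-rank cutoffs can be expressed as a simple lookup. The sketch below uses only the approximate old-scale correspondences cited in the text and in footnote 7; it is illustrative, not an official ETS concordance.

```python
# Illustrative old-scale-to-percentile lookup, from values in the text.
old_scale_to_percentile = {
    ("V", 525): 66, ("Q", 550): 36,  # overall mean cutoffs (text)
    ("Q", 685): 66, ("V", 410): 36,  # footnote 7's equivalents
}

def percentile(subtest: str, score: int) -> int:
    """Approximate percentile rank for an old-scale GRE score."""
    return old_scale_to_percentile[(subtest, score)]

# Similar scaled scores imply very different relative standing on the
# two subtests, which is why percentile-rank cutoffs compare more easily.
print(percentile("V", 525), percentile("Q", 550))  # 66 vs. 36
```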


Turning to Tables 9 and 10, GRE cutoffs in psychology-based programs (those in business/management lack sufficient N) are higher in doctoral programs than in master’s programs. This holds for both scaled scores and percentile ranks. The same issue noted above regarding differences between the GRE subtest scale score distributions applies to the within-psychology means. Specifically, 499 on the GRE-V and 523 on the GRE-Q, the mean cutoffs for master’s programs, yield percentile ranks of 62 and 28, respectively. For doctoral programs, 554 on the GRE-V and 579 on the GRE-Q yield percentile ranks of 74 and 40. Business/management doctoral programs appear to use higher GRE percentile rank cutoffs (i.e., 71 vs. 64 for GRE-V and 74 vs. 64 for GRE-Q), but the differences are nonsignificant, as indicated in the right column of Table 10. Larger Ns would permit more powerful estimation of population differences.

Correspondingly detailed analysis of GMAT cutoffs is precluded by small Ns. Tentatively, we note that the mean scaled score cutoff of 583 (Table 10) for the GMAT total score in business doctoral programs (N = 3) corresponds to a percentile rank of 61, which is lower than the mean percentile rank cutoff reported by other business doctoral programs (N = 3). We cannot draw firm inferences here, but it may be that programs relying on scaled score cutoffs are less selective than those relying on percentile cutoffs, generally consistent with what we noted above regarding the GRE.

All told, doctoral programs tend to employ higher cutoffs on undergraduate GPA and standardized test scores, no doubt reflecting higher doctoral performance expectations and associated risks in selecting doctoral students relative to master’s students. Programs are urged to use percentile rank cutoffs to more readily balance selection for verbal and quantitative abilities, or otherwise to clarify differential selection for specific abilities should this be an explicit program directive. In addition to easing comparisons between subtest scores, percentiles are more directly interpretable, specifying the percentage of cases in the normative population expected to fall below the targeted cutoff.

Application Review Processes

The last subsection of the admissions portion of the survey addressed how applicant materials are processed in making admittance decisions. Specifically, we asked how programs combine the various sources of applicant data (compensatory, multiple cutoff only, multiple cutoff plus ranking, heuristic, and holistic),8 whether poor applications are screened out in the early stages of review (yes, no), who reviews application materials (e.g., program director, other program faculty), how reviewers collaborate in the review process (crossed, nested, targeted),9 and how much consensus is sought in deciding whom to admit (low, majority, high). Results are summarized in Table A5 (online) for all programs combined and for the 2 x 2 breakout. Corresponding frequency analysis results are provided in Table A6 (online).


8 Compensatory = sources averaged (with or without weighting) to yield an overall score; multiple cutoff only = cutoffs strictly applied per source, with all surviving applicants selected; multiple cutoff + ranking = cutoffs strictly applied per source, with surviving applicants ranked; heuristic = cutoffs serve as guidelines, with some compensation allowed among sources and exceptions made on a case-by-case basis; holistic = all relevant sources judged as a set, with applicants dropped on a "red flag" basis.
9 Crossed = each reviewer reviews every application; nested = each reviewer reviews a subset of applications; targeted = promising applications are sent to particular faculty for further review.
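To clarify how the first two strategies defined in footnote 8 differ in practice, the following schematic sketch contrasts a strict multiple-cutoff screen with a compensatory composite. It is not part of the survey instrument; applicant records, field names, cutoffs, and the rescaling are all hypothetical.

```python
# Schematic contrast of multiple-cutoff vs. compensatory combination.
applicants = [
    {"name": "A", "ugpa": 3.8, "gre_q_pct": 30},
    {"name": "B", "ugpa": 3.1, "gre_q_pct": 80},
    {"name": "C", "ugpa": 2.7, "gre_q_pct": 95},
]
CUTOFFS = {"ugpa": 3.0, "gre_q_pct": 50}

# Multiple cutoff: every source must clear its cutoff; no trade-offs.
survives = [a for a in applicants
            if all(a[k] >= c for k, c in CUTOFFS.items())]
print([a["name"] for a in survives])  # ['B']

# Compensatory: sources are averaged (after a crude 0-1 rescaling),
# so a high score on one source can offset a low score on another.
def composite(a):
    return (a["ugpa"] / 4.0 + a["gre_q_pct"] / 100) / 2

ranked = sorted(applicants, key=composite, reverse=True)
print([a["name"] for a in ranked])    # ['C', 'B', 'A']
```

Note that applicant C, rejected outright under multiple cutoffs, tops the compensatory ranking; the heuristic and holistic strategies described above fall between these poles, allowing partial compensation with case-by-case exceptions.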


Results in Table A5 show that the modal process for combining application materials in the overall sample is heuristic in nature (48%), where, for example, a high GPA might compensate for low GREs, and no research experience in an otherwise well-qualified doctoral applicant could be cause for rejection. A holistic approach is second most popular (29%), followed by a purely compensatory approach (14%). The remaining programs (9%) reported using multiple cutoffs with ranking (i.e., top-down selection) or without it (select out). Corresponding test results in Table A6 show no significant differences across programs in this overall pattern.10


10Results in Table A5 are provided for each data combination strategy. An omnibus test including all strategies as the third variable yielded a significant main effect for strategy (partial χ2 = 83.23, p < .01) but nonsignificant main and interaction effects of degree and department types on strategy (min p observed = .33).


A notable feature of the more commonly used strategies (heuristic, holistic, compensatory) is their relative reliance on clinical (i.e., subjective) judgment. Research has shown such judgments, relative to actuarial (i.e., quantitatively objective) methods, to be more error prone (cf. Dawes, Faust, & Meehl, 1989; McCauley, 1991), raising potential concerns with how most I-O graduate programs select their students. Research also suggests that decision makers are reluctant to abide strictly by actuarial protocols, even in light of supportive evidence. The impact of relying on heuristic and holistic strategies in graduate student selection is difficult to assess.

Our results suggest a potential limitation in how I-O graduate students are selected, but they are far from definitive. As I-O psychology identifies personnel selection as a core expertise, the discipline may be better suited than most to offering effective and acceptable guidelines for how data are combined in selecting the most promising students. This question bears discussion beyond that afforded here.

The single line of results in the middle of Table A5 pertains to whether programs screen out applicants in the early stages of review. We did not seek details on the screening procedure, but we suspect the modal case would entail application of GPA and/or standardized test score cutoffs, as these indices are commonly required, easily amenable to sorting, and supported by validity evidence (e.g., Kuncel et al., 2001). Because applications outnumber the acceptances a given program can reasonably extend, early screening lets reviewers concentrate their efforts on the more promising candidates. About 80% of all responding programs adopt early screening, and the rates do not vary significantly across degree and program types (range: 78% to 83%). For applicants, this means that low GPA and/or test scores can seriously jeopardize the chances of being accepted into an I-O graduate program. On the plus side, given that over 90% of programs adopt heuristic, holistic, or strictly compensatory combination methods, a single low score may not be a “kiss of death” in applying to most programs; falling below the cutoff on multiple predictors, however, more than likely is.

Moving down Tables A5 and A6, we consider who reviews application materials. Unlike the earlier process variables, those in this section tend to show greater variability across degree and program types. Program directors, the most common reviewers, are active in 80% of all programs, a rate that is relatively stable across program types (64% to 84%). All program faculty serve as reviewers in around 49% of all programs, but in none of the business master’s programs. This may be because such programs have more core program faculty (mean = 5.8 compared to the grand mean of 4.2; see Table 1 in the October TIP article), sparing some, perhaps the junior-most members, the burden of applicant review. Doctoral programs in both department types have higher rates of all program faculty serving as reviewers (combined rate = 62.3%), reflecting greater need for decision accuracy due to heightened risks in selecting doctoral students. For similar reasons, doctoral more than master’s programs assign applications to faculty reviewers who share applicants’ interests (28% vs. 10%, respectively). Notably, 10% of psychology-based programs, compared to 0% of business/management-based programs, have reviewers who are specifically requested by applicants. Whether this is because business/management applicants are less likely to request specific faculty advisors or because such programs are more likely to ignore such requests is unclear. Within psychology departments, doctoral programs, understandably, showed the highest rate (17%) of reviewing by requested faculty. A small proportion of programs ask nonprogram department faculty to serve as reviewers (13%), a rate that does not vary significantly across program types. No programs use reviewers from outside their departments.

Proceeding further down Tables A5 and A6, we see that about 64% of programs have all reviewers go through all applications surviving initial cutoffs (i.e., crossed strategy) and that this rate varies nonsignificantly across program types (50% to 70%). This relatively high and stable proportion suggests that programs generally take selection decisions seriously. In about 21% of all programs, a given rater reviews just a subset of applications (i.e., nested strategy). Why this rate is higher in doctoral than in master’s programs (30% vs. 8%) is not clear. About 24% of all programs use a targeted applicant review strategy, in which especially promising applications are sent to particular faculty. This rate does not vary significantly across program types (range = 21% to 36%). Although this may seem to be a relatively underutilized strategy, it is rendered moot by the more common “crossed” strategy, whereby all raters review every (prescreened) application.

The last sections of Tables A5 and A6 pertain to the level of consensus sought among judges in deciding whom to admit. The majority (52%) of all programs reported seeking a high level of agreement, a rate that does not vary significantly across program types. In only 13% of programs can a selection decision rest with just a single judge. What proportion of these cases entails a judge prevailing over the opinions of others, versus a judge being amicably granted authority for all selection decisions, is unclear. What is clear is that single-judge student selection is relatively rare, and the rate does not vary significantly across program types.

Normative Comparisons With the Three “Top 10” Program Sets

Comparisons between each of the three top-10 program sets (Gibby et al., 2002; two sets in Kraiger & Abalos, 2004; K&A) and their relevant comparison groups yielded several meaningful significant differences. Before turning to those effects, we note the following. (a) At least one program in each top-10 list did not complete the survey, and some completed only certain items. (b) One of the responding programs in the K&A master’s set and two in the K&A doctoral set reported being in a department other than psychology or business/management (i.e., “other”) and were dropped from the comparisons to avoid confounding. (c) Of the nine available Gibby et al. programs (all of which are doctoral) and the eight available K&A doctoral programs, two are included in both sets. Results involving those two top-10 sets, accordingly, are not independent.

All (remaining) programs in each set are housed in psychology departments. The relevant comparison group for both the Gibby et al. set and the K&A doctoral set is the other psychology doctoral programs; for the K&A master’s set, it is the other psychology master’s programs. Differences on continuous variables were assessed using independent-samples t-tests, and those on nominal variables using χ2.
Significant results involving continuous variables, reported in Table 11, warrant several comments. First, the Gibby et al. and K&A doctoral top-10 programs average 90 and 100 applicants per year, respectively, compared to 61 and 63 in their respective comparison groups. The numbers of students accepted, however, are not significantly different.11 We surmise that top doctoral programs based on productivity and/or student favorability are afforded greater selectivity (i.e., smaller selection ratios) by virtue of attracting greater numbers of applicants. Second, the same two top-10 program sets showed higher mean weights for GRE-Q scores than their respective comparison groups. The K&A doctoral set also weighted GRE-V and undergraduate GPA especially heavily, and the Gibby et al. set weighted performance in undergraduate business courses lower. Third, the Gibby et al. top-10 programs set higher cutoffs on both undergraduate GPA and the GREs. Fourth, the only significant effect to emerge with the nominal variables is that the Gibby et al. programs are more likely to require that applicants submit GRE psychology subject test scores (3 of 9 Gibby et al. top-10 programs vs. 1 of 33 remaining psychology doctoral programs). Given earlier discussion, it appears some of the more productive doctoral programs seek to take advantage of the GRE Subject test’s noted validity (Kuncel et al., 2001). Finally, the K&A top-10 master’s program set yielded no meaningful pattern of significant differences in application materials and process.12


11The apparently high mean of 17 for the K&A doctoral set reflects high values in two of the five contributing programs. The t assuming equal variances yielded p < .05; but significantly higher variance in the K&A set led us to use the unequal variance t, reported in Table 11.
12 A few significant effects that emerged at chance levels would disappear with minor shifts in some of the nominal variable distributions.


Conclusions and a Look Ahead

Wrapping up this second installment of the 2011 SIOP Graduate Program Survey results, we note that the norms presented here offer few if any major surprises regarding what master’s and doctoral programs in psychology and business/management departments are looking for when deciding whom to admit. Doctoral programs look especially for research competence, and master’s programs, particularly in business/management departments, focus on broader, more practical qualities (e.g., writing ability, maturity). Doctoral programs are choosier, setting higher entrance standards and selecting fewer students because the training investments are greater and the risks, accordingly, higher. Undergraduate GPA and standardized test scores are commonly used, with ample empirical support, and are likely the main hurdles set by most programs in the early stages of review. While screening out low-scoring applicants, however, most programs use heuristic, holistic, or otherwise flexible selection strategies. The degree to which subjective biases in such strategies undermine effective student selection awaits research, as does common reliance on reference letters and personal statements, particularly in terms of their incremental contributions over established, empirically validated measures.

In keeping with the survey’s major aims, the norms reported above offer benchmarks for comparing a given program’s application procedures. We see upward potential for the GRE psychology subject test (reasonably, more so in psychology-based programs) as an addition to the more common verbal and quantitative subtests. Perhaps the fact that some top I-O programs are using it will encourage others to follow suit.

For applicants, we note that I-O programs as a whole take the task of finding the best students very seriously, investing considerable time and effort reviewing multiple data sources and valuing agreement among faculty reviewers toward making the best decisions possible. Who is judged a good candidate varies across programs, and students should seek to apply where they expect the best match to their strengths and aspirations.

Looking ahead to the third installment in the series, readers will see what I-O programs offer their students in the way of courses and development of I-O-related competencies. Curricular comparisons among degree and department types (i.e., in the 2 x 2 breakouts) promise further unique insights into the scope and content of graduate training in I-O psychology.

References

Dawes, R. M., Faust, D., & Meehl, P. E. (1989). Clinical versus actuarial judgment. Science, 243(4899), 1668–1674. doi:10.1126/science.2648573
Ewen, R. B. (1969). The GRE Psychology Test as an unobtrusive measure of motivation. Journal of Applied Psychology, 53(5), 383–387. doi:10.1037/h0028092
Gibby, R. E., Reeve, C. L., Grauer, E., Mohr, D., & Zickar, M. J. (2002). The top I-O psychology doctoral programs of North America. The Industrial-Organizational Psychologist, 39(4), 17–25.
Kraiger, K., & Abalos, A. (2004). Rankings of graduate programs in I-O psychology based on student ratings of quality. The Industrial-Organizational Psychologist, 42(1), 28–43.
Kuncel, N. R., Hezlett, S. A., & Ones, D. S. (2001). A comprehensive meta-analysis of the predictive validity of the Graduate Record Examinations: Implications for graduate student selection and performance. Psychological Bulletin, 127(1), 162–181. doi:10.1037/0033-2909.127.1.162
McCauley, C. (1991). Selection of National Science Foundation Graduate Fellows: A case study of psychologists failing to apply what they know about decision making. American Psychologist, 46(12), 1287–1291. doi:10.1037/0003-066X.46.12.1287
Norcross, J. C., Hanych, J. M., & Terranova, R. D. (1996). Graduate study in psychology: 1992–1993. American Psychologist, 51(6), 631–643. doi:10.1037/0003-066X.51.6.631