Has Industrial-Organizational Psychology Lost Its Way?
Deniz S. Ones, Robert B. Kaiser, Tomas Chamorro-Premuzic, and Cicek Svensson
Work is important. It’s how society gets things done, largely through organizations—commercial enterprises, nonprofits, governmental agencies, and more (Hogan & Chamorro-Premuzic, 2013). It’s where people spend much of their lives and establish a big part of their sense of self. Work groups provide social identities, hierarchies provide status, and difficult work problems provide a chance to be creative and innovate. More than any other discipline, industrial and organizational (I-O) psychology is focused on better understanding and improving this important aspect of life.
There is no need to catalogue the historical contributions of I-O psychology—a high-level reminder of a few things like enhancing organizational and individual effectiveness, improving working conditions and enriching jobs, and promoting justice in the workplace more than makes the point. I-O psychology is probably more relevant than ever to work lives, organizations, and society at large. But there is a problem: We see the field losing its way, in danger of becoming less relevant and giving up ground to other professions with less expertise about people at work—but perhaps better marketing savvy and business acumen. Without a fundamental reorientation, the field risks being relegated to minority status in organizations: technocrats who ply their trade when called upon but who neither shape the agenda nor take part in the big decisions.
This article summarizes our concerns with the current state of play in I-O psychology, both academic and applied. Our aim is to make the case that a return to a seemingly forgotten ideal, the scientist–practitioner model, can put the profession back on the path to relevance, respect, and impact in the world of work.1
I-O psychology has been moving in recent years in a direction that we believe may hurt the discipline. Some of the more troubling trends include:
- an overemphasis on theory
- a proliferation of, and fixation on, trivial methodological minutiae
- a suppression of exploration and a repression of innovation
- an unhealthy obsession with publication while ignoring practical issues
- a tendency to be distracted by fads
- a growing habit of losing real-world influence to other fields.
Overemphasis on Theory
I-O psychologists are increasingly focused on testing grand (and occasionally grandiose) theories that have little utility for advancing the discipline. These efforts are often carried out in the name of science, but they are primarily academic. Science is about knowledge; the word itself comes from the Latin scientia, meaning knowledge. Science-generated knowledge arises from repeated measurements, studies, and experiments about a given phenomenon. In this view, scientists…
are technicians collecting and collating information, often in quantitative forms. Paul Meehl (1967, 1978) dispelled once and for all the misconception that we, in what he called the “soft social sciences,” are testing theories in any way even remotely resembling how theory focuses and advances research in the hard sciences. Instead, the mistaken notion that we are theory-driven has, in Meehl's opinion, led us into a worthless pro forma ritual of testing and rejecting statistical hypotheses that are a priori known to be 99% false before they are tested. (Glass, 2000)
There is a need to dial back the overzealous enthusiasm for theory that dominates I-O scholarship. Of course, theory is essential for explanation, and as Kurt Lewin (1943) opined in his famous maxim, “there is nothing so practical as a good theory.” But theory for theory’s sake is a fallacy that some are beginning to call out for the excess that it is (cf. Campbell & Wilmot, in press; Hambrick, 2007).
Fixation on Methodological Minutiae
A fixation on methodological minutiae is causing I-O psychology to become more and more precise in ways that matter less and less. The upside to this trend is that the field has achieved an enviable level of methodological rigor. For instance, if one compares the typical methodological features of top I-O publications, such as the Journal of Applied Psychology or Personnel Psychology, from 50 years ago to today, the higher standards are obvious: larger and more representative samples, multiple studies, meta-analytic reviews, and more refined statistical analyses are now the norm (Shen et al., 2011). The majority of articles published 50 years ago would likely be rejected by reviewers today.
The downside, however, is that this obsession with methodological rigor has widened the gap between science and practice. Many empirical papers are impressive demonstrations of statistical wizardry but are detached from real-world problems and concerns. The past decade’s reviews of common I-O practices (e.g., the reliability of the employment interview, the effectiveness of executive coaching, or the validity of integrity measures) are based on high-quality publications that bear little resemblance to how these tools are used in real-world applications. One wonders whether those papers would have survived the peer-review process had they focused more on practice.
By the same token, the methods prescribed by I-O scholars are increasingly but unnecessarily complex. To be frank, simple statistical approaches suffice for most practical questions (Murphy, 1997). The methodological strength of I-O practitioners comes from the volume and real-world fidelity of their data. A principle of Big Data analytics applies here: More field data beat more complex statistics. But the peer-review process is hopelessly tipped in favor of complex analytics. As anyone who has experienced the review process for the annual SIOP conference will have noticed, methodologically precise, statistically complex proposals on trivial subjects are usually preferred over proposals focusing on innovative but perhaps methodologically imperfect practical applications. As a result, SIOP is increasingly plagued with sessions that celebrate I-O’s statistical sophistication and technical precision while neglecting its practical utility to solve real problems in real organizations. Not surprisingly, the vast majority of HR and talent-management practitioners have never heard of SIOP or I-O psychology, so they seek solutions elsewhere.
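The claim that more field data beat more complex statistics can be made concrete with a standard large-sample approximation for the sampling error of a correlation: no amount of analytic sophistication shrinks estimation error the way a large field sample does. A minimal sketch (the validity value and sample sizes are illustrative, not drawn from any study):

```python
def corr_se(r, n):
    """Approximate standard error of a sample correlation: (1 - r^2) / sqrt(n - 1)."""
    return (1 - r ** 2) / (n - 1) ** 0.5

# The same hypothetical true validity of .30, estimated in a small
# laboratory study versus a large operational field dataset.
print(round(corr_se(0.30, 50), 3))    # small-sample study
print(round(corr_se(0.30, 5000), 3))  # big field data: an order of magnitude less error
```

The point is not that complex models are useless, only that with field-scale samples, a simple estimator is often precise enough for the practical question at hand.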
Suppressed Exploration and Repressed Innovation
There is an absence of innovation and new ideas in the field. The topics and discussions that concern SIOP today are, by and large, the same ones that concerned it 30, 50, and in some cases even 70 years ago. For example, in employee selection, the field is still debating the validity of personality and cognitive ability, refining our understanding of assessment centers, and examining situational judgment tests. “Modern” measures of both personality and cognitive ability have been used in employee selection for over 90 years. Assessment centers were invented in the 1930s. Even the currently de rigueur situational-judgment tests have existed for over 50 years. Where are the new ideas?
There seems to be an increasingly dysfunctional view in I-O psychology that exploration and, by extension, innovation are undesirable. Inductive research is regarded as inherently unsound—just try to publish an exploratory study in a top-tier I-O journal. In fact, our experience is that discovery has become a dirty word in our field’s scientific writing. (More than once we have encountered reviewer comments to the effect that “such a finding cannot exist in your huge database because it is not predicted by theory” or a favorite perennial dismissal, “Not sufficiently grounded in theory.”) This is reminiscent of the widespread disbelief in Galileo's demonstration of the telescope, with some of his colleagues remarking, upon experiencing the power of direct observation, that this would indeed be very convincing evidence if only it did not contradict Aristotle's theory or the doctrine of the Catholic Church. The refusal of some of Galileo’s contemporaries to even look through the telescope parallels the behavior of modern gatekeepers of I-O psychology’s platforms who block the dissemination of inductive research.
Empirical means based on observation and induction; it involves generalizing from observations, and it is often exploratory in an effort to make sense of what you see. Inductive research does not have theories to test; rather, it identifies which questions to examine and sometimes even helps to better define how to frame them. Discovery is the goal and it highlights the functional aspect of serendipity (Locke, 2007; Spector, Rogelberg, Ryan, Schmitt, & Zedeck, 2014). It seems rather arrogant to emphasize theory testing over exploration, as if we already know what the important questions are and how they should be formulated. Curiosity and exploration fuel innovation, which is essential for driving progress in both academic and real-world applications of I-O psychology.
Prosaic Publications Divorced From Practice
Pedestrian, often derivative, work dominates I-O research. Scholarship is increasingly unconcerned with application and applicability (Silzer & Parsons, 2012). As we described, many scholars seem focused on building theories and complex statistical models. Their goal appears to be advancing academic discourse rather than providing solutions or guidance to practitioners. Not surprisingly, practitioners often ignore much of what is in mainstream I-O journals (Blanton, 2000) because these publications exhibit, to paraphrase Lewin (1943) again, a disdain and high-brow aversion to practical problems. It is as if applied work is somehow beneath the rigors of proper scholarship.
This may be a defensive reaction on the part of many academics not really understanding the tradeoffs needed to do applied work. It may also reflect a real desire to stay in the ivory tower of I-O academia precisely because real-world research is difficult. Yes, it’s messy. Yes, the databases are often noisy and incomplete. Yes, there is a lack of experimental control. But field data are contextually rich and often voluminous (in participants, observations, measurements, etc.). Much of these valuable data never make it into journals, remaining hidden in the technical reports and data archives of organizations and consultancies. Bringing such data to bear on I-O science should be a goal of both research and practice. We need to shift greater focus to field-based, application-relevant research.
Scientists will need to look beyond mere theory building, concentrating their work on messy, real-world data to address questions of importance to practice. Practitioners will need to share their data and partner with scientists. We also concur with Campbell’s (1990) pronouncement from many years ago that appears to have gone unheeded: “Given the difficulty of its chosen assignment, psychology has compounded its problem by devaluating teaching and public service in favor of doing research. As a result, more people are conducting research than should be, spreading the available resources too thinly and filling the journals with too much that is unimportant” (p. 46).
Practitioners Distracted by Fads and Fashions
Although I-O research has the potential to provide evidence-based solutions to many real-world talent-management problems—that is, how to hire, develop, engage, and retain employees and leaders—practitioners are much more likely to look to the latest fad or shiny new object than to seminal I-O publications. Some topics—for example, emotional intelligence, transformational leadership, and the dark side of personality—simultaneously capture the interests of I-O scholars and self-proclaimed “gurus.” However, a great many HR trends—for instance, learning agility, strengths-based coaching, the MBTI, digital leadership, HR analytics, and managing millennials—are virtually alien to I-O scholars.
This is problematic for two reasons. First, practitioners—or at least those who are trying to be evidence-based—would benefit from an informed and expert opinion on these topics from the I-O community (in particular, independent scholars). It is not that I-O has nothing to say on these matters but, rather, that it is too focused on its own academic concerns, and when it does turn attention to practical concerns, it often seems to be speaking a different language. Second, I-O’s absence from the party delays or obstructs progress for real-world innovations.
For example, at the latest HR Tech Conference, which attracts around 10,000 delegates each year, Peter Cappelli (2016) asked the audience during his closing keynote address whether they had ever heard of I-O psychology. Fewer than 10% had. Yet the overwhelming majority of sessions and products at this convention concerned traditional areas of application for I-O: employment interviews, performance appraisals, 360s, and selection tools. Although the focus of these sessions and products was largely on emerging technologies rather than I-O research, one would think that the science of I-O would be integral to enhancing such technologies. This trend is consistent with the broader gap between I-O scholars (who appear to be uninterested in new technologies) and technology enthusiasts (who have little interest in I-O research).
Losing Ground to Other Fields
Other fields are taking over I-O research and practice, and they seem to garner more attention and respect: for instance, behavioral economics, neuroscience, education, professional credentialing, and even marketing. They have shinier, not necessarily better, toys, and they sell themselves better. Consider the following examples of how I-O psychologists are getting beaten at their own game.
Much of what is happening with Big Data amounts to little more than identifying covariations and patterns in massive convenience samples of data. Many I-O psychologists have a deep understanding of these statistical methods as well as ways of organizing, cleaning, and structuring databases. Yet economists, data scientists, and even marketers seem to be leading the way in Big Data. How did we get scooped at one of the things we do best (and better than most)?
Professions outside of psychology are getting into the business of predicting and understanding people. Assessment has become a ubiquitous feature in marketing, credit and risk, and online dating, yet few of these industries show much interest in understanding lessons from I-O psychology. Perhaps more strikingly, traditional fields of I-O application, such as training, recruitment, and performance evaluation, are rapidly incorporating methods and tools from computer science, such as gamification, machine learning, and artificial intelligence, while ignoring the vast body of knowledge from I-O on these very matters.
The big accounting and financial-management firms are getting into talent management, especially leadership assessment and development. It’s a nice lateral extension to sell more to customers with whom they’ve had long and deep relationships, all under the core competency of risk management. But the degree of sophistication in understanding people and learning and development is not nearly as impressive as, say, these firms’ understanding of financial-asset management.
Some of the more interesting research on leadership in the last few years has been conducted by economists. Economists have identified common management practices that explain firm profitability, industry profitability, and even the financial success of different nations (Bloom & Van Reenen, 2007). Economists have established links between CEOs, corporate strategy, corporate policy, and financial performance using distinctively psychological explanations like personality, arrogance, and hubris (Bertrand & Schoar, 2003; Malmendier, Tate, & Yan, 2011). Making matters worse, training in I-O psychology does not incorporate knowledge and skills from these fields, which is too bad because this knowledge can expand our students’ understanding and make them better able to compete in the labor market.
These are indeed troubling trends in I-O psychology. The insular, academic thinking that dominates the discipline creates hostility and antipathy toward practice and the applied world that keeps it on the periphery—when it could be center stage in a leadership role.
Getting Back on Track: The Scientist–Practitioner Model
We must remember that we are a field of both scientists and practitioners, united by a desire to make psychology an applied tool for improving the world of work (Silzer & Cober, 2010). At the same time, we must find a way to better integrate these two aspects of the field. In order to achieve this, we recommend a renewed emphasis on the scientist–practitioner model, which at the core simply holds that I-O psychologists should be formally and systematically educated for a comprehensive understanding of the discipline and how knowledge is added to it (scientist) as well as trained to apply this knowledge effectively to real-world situations (practitioner)—akin to clinical psychology’s “Boulder model” (Shakow et al., 1947).
Why will this model help the field find its way? Broadly, it will help because the practice side will focus the field on the problems that concern the people we want to help, whereas the scientific side will ensure that we develop our ideas and applications with a dedication to evidence. Practice without evidence can quickly devolve into fads; science without practice runs the risk of navel gazing. Applying both sides of the model emphasizes the purpose of I-O psychology: to gain a better understanding of people at work and to help them deal with the challenges they face.
On a more existential level, this dual focus can go a long way to stemming the troubling trends we described above. To do this, the practice side of our house needs to be fortified—with better, broader, and more diverse training, both in graduate-school classes and through continuing education. A model of lifelong learning and development should encourage and enable practitioners to publish, present, or collaborate with scientists on field-based research throughout their careers. There may be more wisdom about what really works in solving organizational problems in the heads of reflective practitioners and seasoned consultants than in a research library of I-O journals (Kaiser, 2015). But getting that know-how systematically organized and codified remains a challenge.
The science side of the I-O house must come to better appreciate real-world issues and to learn to communicate better with and influence practitioners. As scientists, we should focus more on discovery and knowledge creation. An important point to stress in all of this is that knowledge is not the same thing as information. Information from organizations and field settings must be transformed into knowledge, testing generalizability of effects and boundary conditions. Theory-driven meta-analyses are ideally suited to this enterprise (Ones, Viswesvaran, & Schmidt, 2017).
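For readers unfamiliar with the mechanics, the core of the psychometric meta-analysis the authors point to is simple: pool observed effects weighted by sample size, then correct the pooled estimate for statistical artifacts such as measurement unreliability. A bare-bones sketch of that two-step logic follows; the study data and reliability values are invented purely for illustration:

```python
from math import sqrt

def bare_bones_meta(studies):
    """Sample-size-weighted mean observed correlation (the 'bare-bones' pooling step)."""
    total_n = sum(n for n, _ in studies)
    return sum(n * r for n, r in studies) / total_n

def correct_for_unreliability(r_bar, rxx, ryy):
    """Disattenuate the pooled correlation for predictor (rxx) and criterion (ryy) unreliability."""
    return r_bar / sqrt(rxx * ryy)

# Hypothetical validity studies: (sample size, observed correlation)
studies = [(120, 0.22), (300, 0.28), (80, 0.18)]
r_bar = bare_bones_meta(studies)                     # pooled observed validity
rho = correct_for_unreliability(r_bar, 0.80, 0.60)   # assumed reliabilities

print(round(r_bar, 3), round(rho, 3))
```

A full psychometric meta-analysis also partitions the variance of observed effects into sampling error versus true variation, which is what permits the tests of generalizability and boundary conditions mentioned above.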
In order to integrate science and practice, I-O psychology also needs to make its society membership, publishing, and graduate-training approaches more inclusive. Dusty old structures and practices must be replaced with forward-looking, contemporary approaches. For instance, SIOP is highly political and hierarchical, guarded heavily by academics, which makes it difficult for practitioners to penetrate. SIOP has a great deal to learn from adjacent professional societies like the Association of Test Publishers (ATP), the Association for Talent Development (ATD), the Human Resource Planning Society (HRPS), and the Society for Human Resource Management (SHRM). Beyond lessons in marketing and brand management, SIOP could learn how to extend its reach. The approachability and inclusivity of these organizations have gone a long way toward expanding their membership and leadership. That expansion has extended their influence, especially with real-world organizations, and has not detracted from their prestige. In fact, it has enhanced their prestige among practitioners.
Relatedly, SIOP Fellowship status places more emphasis on academic pedigree than on professional contribution: One can have produced a body of innovative, high-impact research, founded a global business advancing research-based solutions, or regularly advise top management at large companies—or even all three—but without a PhD one cannot be a SIOP Fellow. Recent changes in membership requirements are a promising move. Becoming a full Member, which is necessary to vote in SIOP elections, hold positions on the Executive Board, and chair committees, has historically required a PhD, whereas a master’s degree qualified one only for nonvoting “Associate” membership. The recent change provides Associate members, who include a great many practitioners, with a path to Member status by meeting certain reasonable requirements (e.g., active involvement in the profession, attendance at annual meetings, nomination by a Member). More reform along these lines could do even more to promote practitioner influence on the profession.
We also need to attract and train strong, quantitatively oriented graduate students for our field. Capable applicants from all sorts of undergraduate majors can be better encouraged to enter I-O graduate programs. Applicants’ choices are not easy when there are monetary consequences to choosing I-O psychology (e.g., an MBA or a graduate degree from a business school may pay better than a graduate degree in I-O psychology), so our appeal must include very strong nonmonetary incentives. Finally, I-O graduate programs should train methodologically sophisticated I-O psychologists who are ready to function in both applied and academic settings. They should be positioned to compete successfully with, and exceed, the competence offered by the fields encroaching on I-O (e.g., behavioral economics, data science, and people analytics).
Our field faces a perverse irony: the psychology of work is more relevant than ever and organizations are becoming much more data-driven and evidence-based, but I-O psychologists are at risk of being marginalized. It is not that the field isn’t acknowledged but rather that its acknowledgement—and, more importantly, its influence—is smaller than it ought to be. As Rahm Emanuel indelicately put it, “If you don't have a seat at the table, you may be on the menu.”
The fault lines between science and practice are deep and getting deeper. I-O psychology is following the course of our increasingly polarized society and specialized workforce. We can claim that seat at the table through a renewed, whole-hearted embrace of the scientist–practitioner model. Transformative change is possible, and it typically starts with individuals. Perhaps our best hope is with the new generation of I-O psychologists.
To the old guard, we say: Be inclusive, train and mentor scientist–practitioners. Do not stand in the way of the scientist–practitioner model. To the rising new generation, we say: Learn from both science and practice, and chart your own path of practical science, discovery, and innovation. You can keep I-O psychology alive and relevant. Its future depends on you.
1. As the authors of this article, we should note that we are scientist–practitioners, carrying out multiple activities in both the scholarly (scientist, author, journal editor, etc.) and practitioner (consultant, advisor, CEO, entrepreneur, speaker, organizational educator, etc.) domains. Working in both worlds gives us a perspective that is increasingly rare in I-O psychology.
References
Bertrand, M., & Schoar, A. (2003). Managing with style: The effect of managers on firm policies. Quarterly Journal of Economics, 118, 1169-1208.
Blanton, J. S. (2000). Why consultants don't apply psychological research. Consulting Psychology Journal: Practice and Research, 52, 235-247.
Bloom, N., & Van Reenen, J. (2007). Measuring and explaining management across firms and countries. Quarterly Journal of Economics, 122(4), 1351-1408.
Campbell, J. P. (1990). Modeling the performance prediction problem in industrial and organizational psychology. In M. D. Dunnette, & L. M. Hough (Eds.), Handbook of industrial and organizational psychology (2nd ed., Vol. 1, pp. 687–732). Palo Alto, CA: Consulting Psychologists Press.
Campbell, J. P., & Wilmot, M. P. (in press). The functioning of theory in industrial, work & organizational psychology. In N. Anderson, D. S. Ones, H. K. Sinangil, & C. Viswesvaran (Eds.), Handbook of industrial, work & organizational psychology (2nd ed., Vol. 1). London, UK: Sage.
Cappelli, P. (2016, October). Taking HR’s influence to the next level. Closing keynote presented at the 19th Annual HR Technology Conference & Exposition, Chicago, IL.
Glass, G. V. (2000). Meta-analysis at 25. Retrieved from http://www.gvglass.info/papers/meta25.html
Hambrick, D. C. (2007). The field of management’s devotion to theory: Too much of a good thing? Academy of Management Journal, 50(6), 1346-1352.
Hogan, R., & Chamorro-Premuzic, T. (2013). Personality and the laws of history. In T. Chamorro-Premuzic, S. von Stumm, & A. Furnham (Eds.), The Wiley-Blackwell handbook of individual differences (pp. 491–511). Oxford, UK: Wiley-Blackwell.
Kaiser, R. B. (2015). The wisdom paper: Introducing the first of a new type of article in an occasional series. Consulting Psychology Journal: Practice and Research, 67, 1-2.
Lewin, K. (1943). Psychology and the process of group living. Journal of Social Psychology, 17, 113–131.
Locke, E. A. (2007). The case for inductive theory building. Journal of Management, 33, 867-890.
Malmendier, U., Tate, G., & Yan, J. (2011). Overconfidence and early-life experiences: The effect of managerial traits on corporate financial policies. Journal of Finance, 65, 1687-1733.
Meehl, P. E. (1967). Theory-testing in psychology and physics: A methodological paradox. Philosophy of Science, 34, 103-115.
Meehl, P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46, 806-834.
Murphy, K. (1997, April). Confessions of a statistical minimalist. Presentation given at the 12th Annual Conference of the Society for Industrial and Organizational Psychology, St. Louis, MO.
Ones, D. S., Viswesvaran, C., & Schmidt, F. L. (2017). Realizing the full potential of psychometric meta-analysis for a cumulative science and practice of human resource management. Human Resource Management Review, 27(1), 201-215.
Shakow, D., Hilgard, E. L., Kelly, E. L., Luckey, P., Sanford, N., & Shaffer, L. (1947). Recommended graduate training program in clinical psychology. Report of the Committee on Training in Clinical Psychology of the American Psychological Association. American Psychologist, 2, 539-558.
Shen, W., Kiger, T. B., Davies, S. E., Rasch, R. L., Simon, K. M., & Ones, D. S. (2011). Samples in applied psychology: Over a decade of research in review. Journal of Applied Psychology, 96(5), 1055-1064.
Silzer, R. F., & Cober, R. (2010). Practice perspectives: The science–practice gap in I-O psychology: A fish bowl exercise. The Industrial-Organizational Psychologist, 48(1), 95-103. Retrieved from www.siop.org/tip/July10/14silzer.aspx
Silzer, R. F., & Parsons, C. (2012). Industrial-organizational psychology journals and the science-practice gap. The Industrial-Organizational Psychologist, 49, 97-117.
Spector, P. E., Rogelberg, S. G., Ryan, A. M., Schmitt, N., & Zedeck, S. (2014). Moving the pendulum back to the middle: Reflections on and introduction to the inductive research special issue of Journal of Business and Psychology. Journal of Business and Psychology, 29, 499-502.