
7 Questions and Answers About AI and I-O

Calista Tavallali, Sarah Reswow, and Jerod White, The George Washington University

Artificial intelligence (AI) is everywhere, from factories to self-driving cars to robotic vacuums. But what is artificial intelligence, really, and how does it influence work? Few I-O psychologists have formal training in computer science, yet most are aware of AI’s growing presence in the workplace. Today’s organizations are continuously adopting advanced technologies, raising questions about the intersection of AI and I-O. If you’ve ever struggled to tell a GitHub from a hubcap, or if deep learning leaves you deeply confused, read on for a jargon-free introduction to AI and its influence on the nature of work.

What Is AI?

AI is not super-intelligent robots teaming up to take over the world (at least we hope not). Rather, the term “artificial intelligence” encompasses a vast range of technologies that enable computers to solve specific problems in ways that at least superficially resemble human thinking. Behind each advanced AI technology are algorithms, or sets of step-by-step rules that determine a machine’s actions in any given situation. Using algorithms, computers process large amounts of data and recognize patterns within the data in order to complete complex tasks (SAS, 2017). Machine learning, deep learning, and neural networks all describe techniques used by scientists to give computers advanced reasoning skills. Importantly, no current AI techniques provide machines with a complete human-equivalent consciousness. Within organizational contexts, this means that today’s AI technologies typically perform only the tasks they are designed to complete.
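The pattern-recognition idea described above can be made concrete with a toy sketch. The data and task here are entirely hypothetical, and the method (a minimal nearest-neighbor classifier) is just one simple illustration of learning from examples rather than following a hand-written rule for every case:

```python
# Toy sketch of "learning patterns from data": label a new case by finding
# the most similar past example. All data below are hypothetical.

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nearest_neighbor_predict(training_data, new_point):
    """Give a new point the label of its closest training example."""
    closest = min(training_data, key=lambda pair: distance(pair[0], new_point))
    return closest[1]

# Labeled examples: (hours worked per week, tasks completed) -> rating
training_data = [
    ((40, 30), "high"),
    ((38, 28), "high"),
    ((40, 10), "low"),
    ((35, 8), "low"),
]

print(nearest_neighbor_predict(training_data, (39, 27)))  # high
```

No rule in the code says what makes a rating "high"; the answer emerges from the patterns in the examples, which is the core idea behind most of the AI techniques discussed in this article.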

What Is the Progression of AI?

One may suspect that AI is a new research area, but scientists have been developing artificially intelligent machines since the 1950s. During AI’s earliest stages, researchers equipped computers to solve simple mathematical problems (SAS, 2017). Several of the AI breakthroughs we see today build on early frameworks developed in the 1970s (e.g., neural networks; Parloff, 2016). For example, many organizations now take advantage of recent advancements in computing power to store large amounts of employee or customer data. Initial research on neural networks recognized that computers solve problems best when trained with immense amounts of data, and today’s researchers have access to such data. Building on the key algorithmic discoveries of the 20th century, current organizations generate large databases to continuously improve their neural networks and solve real-world problems.

AI has more recently progressed to include technologies that mimic—and in some cases exceed—actual human reasoning abilities. Consider human decision making, a critical task within most organizational functions. Today’s intelligent decision support systems (IDSS) use existing performance data to generate large-scale planning suggestions in much the same way that managers do (Yam, Tse, Li, & Tu, 2001). More recently developed AI technologies have outperformed highly skilled humans in a variety of tasks, from comprehending written information (Molina, 2018), to playing chess (Ensmenger, 2012), to selecting highly qualified job applicants (Liang & Wang, 1994).

Given such impressive AI accomplishments, it is surprising that many of today’s fundamental AI techniques are qualitatively similar to those used decades ago. Jen-Hsun Huang, CEO of the leading graphics-processing company Nvidia, refers to today’s AI as “software writing software” (Cohn, 2010), an advancement made possible by early AI research on neural networks. A 2017 McKinsey Global Institute (MGI) report argues that AI technologies of the past decade have progressed faster than those of the 50 years prior. Using innovative machine learning techniques, today’s AI technologies no longer need scripts to complete tasks—they can discover and innovate on their own (Manyika, 2017). Still, these AI discoveries are far from fully replicating the intricacies of human intelligence. In essence, any existing form of artificial intelligence technology is just that: artificial.

How Is AI Applied to Different Kinds of Work?

AI is often thought to influence only industrial and administrative fields, but it affects virtually every industry in the workforce, including the creative and medical sectors. Creative workers use AI to assist them in completing a variety of tasks, whether generating food recipes, drafting sports articles, or selecting songs for specific audiences (Newton-Rex, 2017). What’s more, AI is capable of independently producing paintings, music, logos, movie trailers, and full film scripts. In the 1970s, Harold Cohen created AARON, an art-producing machine driven by an algorithm of Cohen’s own design. Cohen taught AARON the fundamentals of painting, such as distinguishing objects and elements, so that it could create its own work. Although AARON had never been presented with actual images of objects and elements, it was able to create similar items in its paintings on its own (Moss, 2015).

In recent years, IBM has been a trailblazer in creativity, using its Watson technology to propel AI into new fields such as video marketing. By drawing from hundreds of horror film trailers, Watson technology was able to create its own trailer for the movie Morgan (Smith, 2016). However, the question still remains: Can AI be truly creative on its own? Although it is difficult for us to determine whether machines are creatively inspired the same way that we are, AI still shows great promise in constructing original products.

Within the medical field, AI is a powerful resource that professionals can use to effectively assess and treat patients. Consider IBM’s Watson for Oncology, an AI bot that analyzes structured and unstructured clinical notes to provide doctors with treatment pathway recommendations for cancer patients (High, 2012). Bots such as these do not enforce treatment decisions, but they do serve as a critical information source for doctors to consider in high-stakes treatment cases. In addition to generating information, AI technologies also show potential in completing certain medical tasks from start to finish. One recent patent, for example, suggests that robots can successfully perform several of the psychomotor duties of pharmacists, such as retrieving and filling pill containers (Ningombam, Singh, & Chanu, 2018).

Will AI Bring the End of Work?

In a word, no. Even so, this is perhaps the most debated question regarding AI and I-O. Some forecasts for a future tech-centered workforce are dire, such as Frey and Osborne’s (2017) estimate that 47% of total U.S. employment is at risk of automation. Other researchers anticipate a more optimistic future of AI at work; economists Brynjolfsson and Mitchell (2017) used O*NET data to show that of the 30 or so tasks that comprise most jobs, only a few are easily automatable given current technology. Understanding these discrepancies is currently a critical research endeavor in both AI and I-O.

Today’s prominent futurists disagree on whether AI will bring the end of work, but they largely agree that it is a complicated and powerful form of technology. Ray Kurzweil, director of engineering at Google, believes that AI will change the world of work just as prior technologies have: Though some jobs will inevitably become outdated as a result of AI, many new ones will be created in the process (Kurzweil, 2014). Even SpaceX CEO Elon Musk, who fears that AI could eventually take control of us, recognizes that “smart” technologies contribute to society in positive ways when they are carefully designed (Browne, 2018). Other futurists, such as Tim O’Reilly, argue that the decisions of AI programmers—not AI itself—will determine how work will change in the future. For example, programmers who create AIs that emphasize worker efficiency over satisfaction will influence work differently than programmers who balance the two variables (O’Reilly, 2017). With so many conflicting accounts from today’s futurists, I-Os should remember one reason why we work in the first place: to solve problems. If work means solving problems, we won’t run out of work until we run out of problems.

What Is the Future of AI?

Thus far, we’ve introduced a variety of AI forms that learn to complete highly complex, specialized tasks. Some researchers speculate that a single AI technology may one day demonstrate all intellectual processes exhibited by humans. Known as general AI, this form of technology would ultimately provide a machine with a consciousness that mirrors our own. The possibility of a superhuman AI is widely debated across disciplines. Futurist Kevin Kelly (2017) doubts that creating a general AI is possible. Recognizing human intelligence as a multifaceted trait, Kelly argues that our current forms of AI are not actually smarter than us, but simply different from us. There are certainly qualitative differences between human and artificial intelligences today, but future researchers could blur the lines between the two.

Another future direction for AI concerns the issue of biases in decision making. Even without a consciousness to fuel misleading gut-based decisions, AI is far from perfect. Researchers must recognize that their human biases influence the effectiveness of the AI technologies they create (Knight, 2017). Indeed, some have coined the phrase “racist robots” to describe AIs that fall victim to biased forms of reasoning (Buranyi, 2017). One famous example is Tay, a Microsoft chatbot that generated anti-Semitic messages it learned from analyzing data on Twitter (Buranyi, 2017). This is not surprising, as AI learns from human data. As long as racism and bias exist in society, AI will learn from them and unintentionally reproduce similar prejudices. Future AIs must operate from algorithms capable of overcoming such prejudices in order to reason in a truly rational fashion.
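How a model inherits bias from its training data can be shown in a few lines. The scenario and numbers below are entirely hypothetical; the point is that a model whose code contains no bias at all will still reproduce the bias baked into the historical decisions it learns from:

```python
# Toy sketch (hypothetical data): a model trained on biased historical
# hiring decisions reproduces the bias, even though its code is neutral.
from collections import Counter

def train(history):
    """Learn the hire rate per group from past (group, hired) decisions."""
    hires, totals = Counter(), Counter()
    for group, hired in history:
        totals[group] += 1
        hires[group] += hired
    return {g: hires[g] / totals[g] for g in totals}

def predict(model, group):
    """Recommend 'hire' when the learned group hire rate exceeds 50%."""
    return "hire" if model[group] > 0.5 else "reject"

# Historical record in which group B was hired less often for equal work
history = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 3 + [("B", 0)] * 7
model = train(history)
print(predict(model, "A"), predict(model, "B"))  # hire reject
```

The model faithfully learned the pattern it was given; the prejudice came entirely from the data, which is exactly the dynamic behind examples such as Tay.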

What Are I-Os Doing With AI?

AI was a popular topic at the 2018 SIOP Annual Conference, with a number of presentations dedicated to discussing the developments of “smart” technologies in various I-O psychology subfields. One symposium focused on AI in psychometrics, demonstrating that the technology can be used to improve the psychological fidelity and practical utility of assessments (Barney et al., 2018). Another presentation took an interdisciplinary approach to AI, combining the experiences of consultants, a data scientist, and a lawyer to provide insights on the influence of AI in personnel selection (Hense et al., 2018). Yet another presentation focused on the use of algorithms for identifying collaborators and building effective teams (Twyman, Newman, DeChurch, & Contractor, 2018). Today’s I-O psychologists clearly recognize that AI has implications for almost every organizational function. Indeed, AI ranked fourth on SIOP’s Top 10 Workplace Trends list for this year, a sign that current I-Os are actively engaged in AI research (2018).

What Should I-Os Do Next?

While exploring the crossroads of AI and I-O, future I-O psychologists have two primary responsibilities: to learn and to educate. The rise of AI in decision making will undoubtedly support a "partnership between humans and machines," as HealthTap CEO Ron Gutman recently stated at a panel discussion at the World Economic Forum. Moving forward, I-O psychologists must continue to learn about AI, focusing on this partnership and its limitations: How can I-Os deal with bias from robots, and how can they study jobs that have not yet been created? Questions such as these should guide future I-Os as they study AI.

I-Os will find many opportunities to continue learning about AI given its growing organizational uses. Even in the widely cited scenario of workers losing their jobs due to automation, AI can influence several organizational functions in positive ways. AI can just as easily be used to restart the talent management cycle with recruitment, selection, and training efforts for new occupations. Within selection, for example, AI can use decision tree models to code applicant data by splitting information into nodes until a final decision is made (Chui, Kamalnath, & McCarthy, 2018). Similarly, I-O psychologists will soon design training programs for jobs that do not yet exist, and AI will undoubtedly affect how those programs operate. While AI may bring certain jobs to an end, it will simultaneously provide I-Os with a number of valuable learning opportunities.
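The decision-tree idea mentioned above can be sketched in a few lines. The features, thresholds, and decisions here are hypothetical, chosen purely for illustration; real selection models are learned from data and must be validated, but the node-by-node splitting works the same way:

```python
# Minimal sketch of a decision tree for applicant screening: data is routed
# through nodes, each splitting on one feature, until a leaf holds the final
# decision. Features and thresholds below are hypothetical.

# Internal node: (feature, threshold, branch_if_below, branch_at_or_above)
# Leaf: a final decision string.
tree = (
    "years_experience", 2,
    "reject",                          # fewer than 2 years
    ("test_score", 70,                 # 2+ years: split on test score
     "interview",                      # score below 70
     "advance"),                       # score 70 or above
)

def decide(node, applicant):
    """Walk the tree from the root to a leaf decision."""
    if isinstance(node, str):          # leaf reached: final decision
        return node
    feature, threshold, below, at_or_above = node
    branch = below if applicant[feature] < threshold else at_or_above
    return decide(branch, applicant)

applicant = {"years_experience": 5, "test_score": 82}
print(decide(applicant=applicant, node=tree))  # advance
```

Each applicant follows exactly one path through the nodes, which is also why tree models are comparatively easy to explain to candidates and stakeholders.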

In Kelly Stewart’s SIOP podcast, Neil Morelli recently spoke to the same points while also introducing I-Os’ duty to educate. Although Morelli believes there are many advantages to AI technology, he cautions I-O psychologists to understand when AI should and should not be used. If employees are spending excessive amounts of time completing menial, repeatable tasks, AI is an excellent means to improve efficiency (Stewart, 2017). However, adopting AI may not be wise in cases where human judgment is required. For example, consider Uber’s decision to develop experimental autonomous cars. In March of 2018, one of these cars struck and killed a pedestrian after identifying the person as an “unknown object” (Madrigal, 2018). Stories such as these not only urge researchers to develop more reliable AI technologies but also encourage practicing I-Os to think carefully before adopting them. By continuing to learn about AI, I-Os increase their awareness of potential risks and can provide organizational stakeholders with fully informed recommendations. As AI continues its trend toward ubiquity, I-Os’ learning and educating duties will remain critical for the future of work. As Satya Nadella, CEO of Microsoft, recently noted: "It's our responsibility to have AI augment human ingenuity and human opportunity" (McKendrick, 2017).


References

Barney, M., Becker, K. A., Gray, C. J., Lahti, K., Mead, A. D., Riley, B., Russel, C., & Thissen-Roe, A. (2018). The bleeding edge of measurement: Innovations with AI psychometrics. Poster presented at the 33rd Annual Meeting of the Society for Industrial and Organizational Psychology, Chicago, IL.

Buranyi, S. (2017, August 08). Rise of the racist robots – how AI is learning all our worst impulses. Retrieved December 05, 2017, from

Browne, R. (2018, April 6). Elon Musk warns A.I. could create an “immortal dictator from which we can never escape.” CNBC. Retrieved from

Brynjolfsson, E., & Mitchell, T. (2017). What can machine learning do? Workforce implications. Science, 358(6370), 1530-1534.

Caughill, P. (2017, August 14). Elon Musk reminds us of the possible dangers of unregulated AI. Retrieved from

Chui, M., Kamalnath, V., & McCarthy, B. (2018, February). An executive's guide to AI. Retrieved from

Cohn, M. (2010, March 8). Connected: Interview with Nvidia CEO Jen-Hsun Huang. Retrieved from

Ensmenger, N. (2012). Is chess the drosophila of artificial intelligence? A social history of an algorithm. Social Studies of Science, 42(1), 5-30.

Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254-280.

Hense, R., Thompson, I. B., Kaminsky, S. E., Powell Yost, A., Giouard, M., & Trindel, K. (2018, April). Shiny pennies: Influence of AI and neuroscience innovations on selection. Panel discussion at the 33rd Annual Conference of the Society for Industrial and Organizational Psychology, Chicago, IL.

High, R. (2012). The era of cognitive systems: An inside look at IBM Watson and how it works. IBM Corporation, Redbooks. Retrieved from

Kelly, K. (2017). The myth of a superhuman AI. Wired. Retrieved from

Knight, W. (2017). Google's AI chief says forget Elon Musk's killer robots, and worry about bias in AI systems instead. MIT Technology Review. Retrieved from forget-killer-robotsbias-is-the-real-ai-danger/

Kurzweil, R. (2014). Don’t fear artificial intelligence. TIME. Retrieved from dont-fear-artificial-intelligence/

Liang, G. S., & Wang, M. J. J. (1994). Personnel selection using fuzzy MCDM algorithm. European Journal of Operational Research, 78(1), 22-33.

Madrigal, A. C. (2018, May 24). Uber’s self-driving car didn’t malfunction, it was just bad. The Atlantic. Retrieved from

Manyika, J. (2017, December). What is the future of work? McKinsey & Company. Retrieved from

McKendrick, J. (2017, April 30). Artificial intelligence, viewed at its most practical level. Retrieved from

Molina, B. (2018, January 16). Robots are better at reading than humans. Retrieved from

Moss, R. (2015, February 16). Creative AI: The robots that would be painters. Retrieved from

Newton-Rex, E. (2017, March 07). 59 impressive things artificial intelligence can do today. Retrieved from

Ningombam, D., Singh, A., & Chanu, K. T. (2018). Multipurpose GPS guided autonomous mobile robot. In K. Saeed, N. Chaki, B. Pati, S. Bakshi, & D. Mohapatra (Eds.), Progress in advanced computing and intelligent engineering (pp. 361-372). Singapore: Springer.

O’Reilly, T. (2017). Using AI to create new jobs. O’Reilly. Retrieved from

Parloff, R. (2016, September 28). Why deep learning is suddenly changing your life. Retrieved from

SAS Institute Inc. (2017). Artificial intelligence – What it is and why it matters. Retrieved from

Smith, J. R. (2016, August 31). IBM research takes Watson to Hollywood with the first “cognitive movie trailer.” Retrieved from

Stewart, K. (2017, December 11). Artificial intelligence in I-O: It’s not just a fad [Audio podcast]. Retrieved from

Twyman, M. D., Newman, D. A., DeChurch, L. A., & Contractor, N. (2018, April). Inviting your next teammate: Algorithms and acquaintances. Poster presented at the 33rd Annual Conference of the Society for Industrial and Organizational Psychology, Chicago, IL.

Yam, R. C. M., Tse, P. W., Li, L., & Tu, P. (2001). Intelligent predictive decision support system for condition-based maintenance. International Journal of Advanced Manufacturing Technology, 17(5), 383-391.
