
Challenges, Trends, and Opportunities of the Testing Industry: Practitioners' Perspectives

Alex Casillas, ACT, Inc.
Kelly Dages, FifthTheory, LLC
Brandon Ferrell, Hogan Assessment Systems

Tests and their results influence millions of lives every day, whether in school, at work, or in other settings (https://www.testpublishers.org/our-mission). Assessments are an integral part of the employment experience for selection, development, certification, licensure, and workforce skills credentialing. The testing industry faces numerous challenges, changes, and opportunities.

The Association of Test Publishers (ATP) was founded to promote and develop testing and assessment best practices and to facilitate an environment that would benefit test takers, businesses, educational organizations, and society in general. ATP is an international, nonprofit trade association representing providers of tests and assessment tools and/or services related to occupational, certification, licensure, educational, clinical, or other similar uses. ATP is dedicated to the highest level of professionalism and business ethics within the test publishing community. Each year ATP holds its Innovations in Testing conference, which provides an opportunity for assessment professionals to learn about emerging technologies, new methodologies, data analytics, legislation, and related topics.

In this article, members of the ATP I/O Division explore how the industry is adapting to some of these changes and challenges, as well as future opportunities for applied research. The authors are applied researchers who discuss how the testing industry is responding to current shifts in assessment methods, future trends that are shaping the industry, and implications for I-O professionals.

Background

The past decade has seen substantial changes in labor trends, driven in large part by the rapid pace of technological advancement and by globalization (National Academies, 2017). The testing industry is seeing a variety of challenges arise as a result of these trends. With regard to the pace of technological change, the testing industry has shifted its delivery of assessments from majority paper and pencil to majority computer delivered (via the Internet) across a broad range of platforms and devices (Lawrence, 2018). Initially, this shift raised questions about whether computer-delivered assessments were equivalent to paper-and-pencil ones. Although these questions are still being asked (particularly for high-stakes assessments), the industry has accepted that computer delivery has become the primary mode of assessment administration and has started focusing on the testing experience, as well as test-taker behavior and engagement (Kantrowitz & Gutierrez, 2017).

One industry challenge due to technology involves the use and evaluation of unproctored Internet testing (UIT), which has become a common method for screening candidates. However, two issues loom large for UIT: test security and accessibility for diverse populations, particularly those from underrepresented groups (e.g., individuals from rural areas, individuals of low socioeconomic status, English-language learners). An industry trend driven by advances in technology is the increased influence of artificial intelligence (AI) and machine learning in the development, delivery, and reporting of assessment and training applications; in fact, this is the number one workplace trend (SIOP, 2019). As digital learning becomes more prominent, there is an increased need to generate vast amounts of assessment and instructional content, some of which is starting to be done via AI (Gierl, Lai, & Zhang, 2018). Further, there is increased demand for more personalized assessment and feedback that can suggest specific instructional or training actions (Casillas, 2018).

With regard to globalization as a source of challenges, the testing industry has seen a dramatic change in the way that organizations recruit talent, with many industries recruiting globally, not just locally. Given this broader set of users, globalization has implications for the way in which assessments are designed (e.g., increasing complexity of items and tasks) as well as how they are validated (e.g., the need to support claims about equivalency across cultures and languages; Casillas, 2018; International Test Commission, 2017). Globalization has also placed an increased focus on cross-cultural research to help inform how to develop and/or improve organizational assessments, policies, and training programs. Examples of industry applications being affected by globalization include identification of high-potential employees, organizational training programs, strategies for communicating with and motivating a diverse and inclusive workforce, and collaboration among members of virtual teams (which are often composed of workers distributed throughout the globe).

The authors address a few of the trends, challenges, and opportunities that the assessment industry is facing. This is not a comprehensive review but rather a glimpse into how assessment professionals are handling a rapidly advancing industry.

Technology: How is technology, including AI and machine learning, influencing the way in which assessments are designed and delivered?

The assessment industry has seen technology change assessment design and delivery in multiple ways. Recent advances include assessment delivery via mobile devices, high-fidelity assessments (e.g., situational judgment tests, video vignettes, computer simulations), automatic item generation (AIG), and gamified assessments.

Mobile device testing. The ability to deliver assessments on multiple devices improves convenience and access, especially in developing countries and for underprivileged populations. Research has often found equivalence between computer and mobile device delivery, though not all assessments are equivalent across devices (cf. Brown & Grossenbacher, 2017; Dages, Zimmer, & Jones, 2017; King, Ryan, Kantrowitz, Grelle, & Dainis, 2015).
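
To make device-equivalence claims concrete, the sketch below runs two one-sided tests (TOST), a standard way to test whether a mean score difference falls within a preset equivalence bound. The simulated scores and the ±2-point bound are assumptions for illustration, not values from the studies cited above.

```python
# A minimal TOST equivalence sketch; simulated scores and the +/-2-point
# equivalence bound are illustrative assumptions only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
desktop = rng.normal(50.0, 10.0, 400)  # simulated desktop-delivered scores
mobile = rng.normal(49.4, 10.0, 400)   # simulated mobile-delivered scores
bound = 2.0                            # equivalence margin in score points

diff = desktop.mean() - mobile.mean()
se = np.sqrt(desktop.var(ddof=1) / len(desktop) + mobile.var(ddof=1) / len(mobile))
df = len(desktop) + len(mobile) - 2

# Two one-sided tests: rejecting both H0s supports equivalence.
p_lower = 1 - stats.t.cdf((diff + bound) / se, df)  # H0: diff <= -bound
p_upper = stats.t.cdf((diff - bound) / se, df)      # H0: diff >= +bound
print(f"Mean difference = {diff:.2f}, TOST p = {max(p_lower, p_upper):.4f}")
```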

Automatic item generation. AIG uses an algorithm to create new items for specific content based on an item template. AIG has been used for mental ability and knowledge assessments (e.g., reading comprehension, nonverbal reasoning, professional credential testing, and medical and dental examinations; cf. Blum & Holling, 2018; Lai, Gierl, Byrne, Spielman, & Waldschmidt, 2016). Using AIG may reduce the costs and time needed for item writing and increase the security of assessment content.
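
As a minimal illustration of the template idea, the sketch below instantiates a hypothetical item model with sampled variable values. The stem, numeric ranges, and distractor rule are invented for the example and are not drawn from any operational program cited here.

```python
# A minimal sketch of template-based item generation; the item model and
# variable ranges are hypothetical.
import random

TEMPLATE = ("A train travels {distance} km in {hours} hours. "
            "What is its average speed in km/h?")

def generate_items(n_items, seed=0):
    """Instantiate the item template with randomly sampled variable values."""
    rng = random.Random(seed)
    items = []
    for _ in range(n_items):
        hours = rng.randint(2, 6)
        speed = rng.randint(40, 120)  # the keyed answer, in km/h
        distance = hours * speed      # chosen so the key is a whole number
        items.append({
            "stem": TEMPLATE.format(distance=distance, hours=hours),
            "key": speed,
            "distractors": [speed - 10, speed + 10, speed + 20],
        })
    return items

for item in generate_items(3):
    print(item["stem"], "->", item["key"])
```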

Gamified assessments. Gamified assessments use game elements to make assessments more engaging and fun and, in some cases, to approximate job tasks more closely. They often use artificial intelligence (AI) and/or machine learning in their development, and they provide a stream of potentially scorable data points (speed, accuracy, judgment, decisions, etc.) that may help measure multiple job-relevant constructs (Fetzer, 2015). Continued research is needed to determine the predictive validity and best applications of these assessments (Arthur, Doverspike, Kinney, & O’Connell, 2017).
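
As a concrete illustration, the sketch below shows the kind of raw event stream a gamified assessment might emit and a simple aggregation into speed and accuracy indices. The field names and scoring rules are hypothetical, not any vendor's actual schema.

```python
# A minimal sketch of a gamified-assessment event stream; field names and
# aggregation rules are hypothetical examples.
from dataclasses import dataclass
from statistics import mean, median

@dataclass
class GameEvent:
    task_id: str
    response_ms: int  # time from stimulus onset to response
    correct: bool     # whether the response matched the keyed answer

def score_session(events):
    """Aggregate raw events into candidate-level speed and accuracy indices."""
    return {
        "accuracy": mean(1.0 if e.correct else 0.0 for e in events),
        "median_ms": median(e.response_ms for e in events),
        "n_events": len(events),
    }

session = [GameEvent("t1", 850, True), GameEvent("t2", 1200, False),
           GameEvent("t3", 640, True)]
print(score_session(session))
```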

Technology: How do you ensure that unproctored assessments remain secure?

Publishers of high-stakes assessments need to ensure secure administration, which has often meant the use of proctored assessment centers. Increasingly, test publishers are using various technologies to ensure that high-stakes assessments can be securely administered in unproctored environments at every step of the process. These technologies include biometrics and video to confirm identity, data analytics for cheating detection, measurement of eye movements and response times, blocking of web addresses and certain functionality (e.g., copy-paste), computerized adaptive testing (CAT), and AIG. Security technologies used before, during, and after administration, incorporating AI, machine learning, and advanced analytics, can protect high-stakes assessments.
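
As one concrete example of the data analytics mentioned above, the sketch below flags sessions that are both unusually fast and highly accurate for human review, a common data-forensics heuristic. The thresholds are illustrative assumptions, not operational values from any testing program.

```python
# A minimal data-forensics sketch: flag very fast, very accurate sessions
# for review. Thresholds are illustrative assumptions only.
import numpy as np

def flag_sessions(mean_item_secs, accuracies, z_cut=-2.0, acc_cut=0.90):
    """Return indices of sessions that are both very fast and very accurate."""
    times = np.asarray(mean_item_secs, dtype=float)
    accs = np.asarray(accuracies, dtype=float)
    z = (times - times.mean()) / times.std(ddof=1)  # standardized response speed
    return np.where((z < z_cut) & (accs > acc_cut))[0]

times = [42.0, 38.5, 40.2, 12.1, 41.7, 39.9, 43.3, 40.8, 38.0, 42.5]
accs = [0.71, 0.65, 0.80, 0.97, 0.75, 0.68, 0.72, 0.79, 0.66, 0.74]
print(flag_sessions(times, accs))  # session 3 is fast and accurate -> flagged
```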

There will likely continue to be advances, research, and expanded application in mobile device testing, AIG, gamified assessments, and unproctored assessment security.

Globalization: When doing cross-cultural assessment and training work, what are some important considerations to keep in mind?  How is the testing industry ensuring that the assessment, certification, and training solutions that it develops are fair, inclusive, and equitable for the populations for which the solutions are used?

Increasingly, assessments developed for one culture/language are being used in other cultures and languages, so it is vitally important that these assessments have sufficient evidence to support reliable and valid scores, as well as appropriate interpretations. Thus, whenever researchers or practitioners are contemplating the translation or adaptation of an assessment for another population, it is key to ensure that the design, development, and validation of such assessments are consistent with best practices in the transadaptation of tests. Besides the AERA, APA, NCME Standards (2014), one of the most helpful resources is the International Test Commission’s Guidelines for Translating and Adapting Tests (ITC, 2017). These consist of 18 guidelines organized around six broad topics: (a) Precondition underscores the fact that decisions need to be made before the translation/adaptation process begins; (b) Test Development focuses on the actual process of adapting a test; (c) Confirmation includes guidelines associated with the collection of evidence to address the equivalence, reliability, and validity of a test in multiple languages and cultures; (d) Administration highlights the specification of testing conditions and instructions to minimize culture- and language-related problems that can impact validity; (e) Score Scales and Interpretation provides guidance for interpreting group score differences; and (f) Documentation focuses on the need to provide clear technical documentation, as well as documentation that can support good practice for users of transadapted tests. In addition to the guidelines themselves, the ITC provides helpful explanations and suggestions for implementing them in practice. It is worth noting that, even if you are not engaging in a formal transadaptation project, it is helpful to refer to the ITC guidelines when using assessments with (sub)groups of individuals in the U.S., such as when comparing native English speakers with English-language learners.

Future Trends: What knowledge and skills will I-O psychologists need to stay abreast of the changes taking place in the world of work due to the influence of technology and globalization?

Technology is driving most of the changes in our field. It allows us to collect more data, more frequently, from more people, in more locations, and more cheaply than ever before. This is Big Data applied to workplace and organizational settings, and I-O psychologists will need two additional sets of skills (both with origins in computer science) to succeed in this environment.

First, the increasing amount of available data will require I-O psychologists to become more proficient with large databases and with extracting information from them. These data are no longer limited to test scores and supervisor ratings; they may include traditional data alongside social media posts, text from websites, or data collected from wearables. The file types we use to teach statistics (e.g., single Excel files) once resembled the data we worked with after graduate training. That is no longer the case, as data can be too large or too unstructured for widely used programs.
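
For instance, the following sketch processes an event log too large to load into memory at once, using pandas' chunked reader. The file and column names ("assessment_events.csv", "candidate_id", "item_id") are hypothetical.

```python
# A minimal sketch of out-of-core processing with pandas; file and column
# names are hypothetical.
import pandas as pd

totals = {}
for chunk in pd.read_csv("assessment_events.csv", chunksize=100_000):
    # Aggregate each chunk so the full file never sits in memory at once.
    counts = chunk.groupby("candidate_id")["item_id"].count()
    for candidate, n in counts.items():
        totals[candidate] = totals.get(candidate, 0) + n

print(f"{len(totals)} candidates processed")
```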

Second, the increasing number of variables we can measure requires I-O psychologists to distinguish predictive variables from nonpredictive ones. Machine learning techniques were designed to do just this and more, and savvy I-O psychologists are able to run these analyses, often in software such as R or Python (though other programs exist). Many of these techniques require large amounts of data, which connects the two sets of skills. Although big data and machine learning can be incredibly powerful, these innovations still require human experts to contextualize findings relative to existing bodies of theoretical and empirical work and, in turn, to make appropriate interpretations. Compared to other data scientists, I-O psychologists are uniquely prepared to provide this context.
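
As a minimal illustration, the sketch below uses cross-validated lasso regression in scikit-learn to retain only predictive variables from a larger pool. The data are simulated, and no particular selection system is implied.

```python
# A minimal feature-selection sketch with simulated data; only two of the
# fifty predictors truly relate to the outcome.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))  # 50 candidate predictors
y = 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000)

X_std = StandardScaler().fit_transform(X)
model = LassoCV(cv=5).fit(X_std, y)  # L1 penalty strength chosen by CV
kept = np.flatnonzero(np.abs(model.coef_) > 1e-6)
print("Predictors retained:", kept)  # typically [0, 1]
```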

Future Trends: How can early-career I-O professionals become more familiar with the work of the Association of Test Publishers and its members?

ATP is relevant for professionals involved with or interested in test development, test delivery, test security, psychometrics, test use, test program management, distance learning, educational technologies, game design for assessment, and many other areas. You can find out more about ATP on its website (https://www.testpublishers.org), join its LinkedIn group (https://www.linkedin.com/company/association-of-test-publishers), or connect on Twitter (@atpconf).

Dr. Alex Casillas is a principal research psychologist in the Research Division of ACT, Inc.  During his tenure at ACT he has led the research and development of several behavioral assessments that can be used to predict performance and persistence in both educational and workforce settings. Dr. Casillas was one of the thought leaders behind the ACT Holistic Framework, a comprehensive and research-based framework that articulates what individuals need to know and be able to do in order to succeed in education and work settings. 

Dr. Kelly Dages is a senior program evaluation and HR analytics consultant with FifthTheory, LLC. She has approximately 20 years of experience in developing job-relevant assessments, program evaluation, psychometrics, and validation research. Through her program evaluation research, Dr. Dages evaluates the impact of assessment programs on key business metrics such as turnover, safety, and job performance. She has developed psychometrically sound custom assessment solutions for private and public organizations to assist clients with organizational development, selection, and performance management. She frequently presents at conferences and publishes research findings. She received her PhD in I-O psychology from Northern Illinois University.

Dr. Brandon Ferrell is Manager of Product Research at Hogan Assessment Systems. A quantitative psychologist by training, he now oversees and helps conduct product-facing research into Hogan’s suite of personality and values assessments and their uses in the workplace.

References

AERA, APA, & NCME. (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.

Arthur, W., Jr., Doverspike, D., Kinney, T. B., & O’Connell, M. (2017). The impact of emerging technologies on selection models and research: Mobile devices and gamification as exemplars. In J. L. Farr & N. T. Tippins (Eds.), Handbook of employee selection (2nd ed., pp. 967–986). New York, NY: Taylor & Francis/Psychology Press.

Blum, D., & Holling, H. (2018). Automatic generation of figural analogies with the IMak package. Frontiers in Psychology, 9. doi:10.3389/fpsyg.2018.01286

Brown, M. I., & Grossenbacher, M. A. (2017). Can you test me now? Equivalence of GMA tests on mobile and non-mobile devices. International Journal of Selection and Assessment, 25(1), 61–71.

Casillas, A. (2018, July). Leveraging design science to develop high-quality tests. In A. Casillas (Chair), Shifting to a principled design science paradigm. Symposium conducted at the 11th conference of the International Test Commission, Montreal, Canada.

Dages, K., Zimmer, S., & Jones, J. (2017). Pre-employment risk screening: Comparability of integrity assessment technology platforms. International Journal of Selection and Assessment, 25(4), 390–400.

Fetzer, M. (2015). Serious games for talent selection and development. The Industrial-Organizational Psychologist, 52, 117–125.

Gierl, M., Lai, H., & Zhang, X. (2018). Automatic item generation. In Encyclopedia of information science and technology (4th ed., pp. 2369–2379). IGI Global.

International Test Commission. (2017). The ITC guidelines for translating and adapting tests (2nd ed.). Retrieved from www.InTestCom.org

Kantrowitz, T. M., & Gutierrez, S. L. (2017). The changing landscape of technology-enhanced test administration. In J. C. Scott, D. Bartram, & D. H. Reynolds (Eds.), Next generation technology-enhanced assessment: Global perspectives on occupational and workplace testing (pp. 193–216). Cambridge, UK: Cambridge University Press.

King, D. D., Ryan, A. M., Kantrowitz, T., Grelle, D., & Dainis, A. (2015). Mobile Internet testing: An analysis of equivalence, individual differences, and reactions. International Journal of Selection and Assessment, 23, 382–394.

Lai, H., Gierl, M. J., Byrne, B. E., Spielman, A. I., & Waldschmidt, D. M. (2016). Three modelling applications to promote automatic item generation for examinations in dentistry. Journal of Dental Education, 80(3), 339–347.

Lawrence, A. (2018, April). Top technology trends at the Society of I/O Psychology Conference: Assessment delivery [Blog post]. Retrieved from http://www.selectinternational.com/blog/bid/112237/Top-Technology-Trends-at-the-Society-of-I-O-Psychology-Conference-Assessment-Delivery

National Academies of Sciences, Engineering, and Medicine. (2017). Information technology and the U.S. workforce: Where are we and where do we go from here? Washington, DC: The National Academies Press.

Society for Industrial and Organizational Psychology. (2019, January). It’s the same only different: SIOP top 10 workplace trends 2019. Retrieved from http://www.siop.org/article_view.aspx?article=1894
