 Workshop 1

Innovations in Computer-Based Testing: Implications for Science and Practice

 Presenters:   Craig R. Dawson, SHL, and Adam W. Meade, North Carolina State University

Coordinator:   Lorin Mueller, American Institutes for Research
  
Target Audience: This workshop is appropriate for selection professionals of all levels. Participants should be familiar with selection system design and testing, including validation strategies and basic psychometrics.
 
Computer-Based Testing (CBT), including advanced applications such as Computer Adaptive Testing (CAT) and computer simulations, is becoming increasingly common as a selection device for hiring. This workshop will teach those familiar with selection system design and validation best practices for understanding, implementing, validating, and improving CBT programs. The workshop will begin with a discussion of recent innovations in testing that can be leveraged with computer-based testing technology (e.g., avatar-based situational judgment tests, game-like simulations, computer evaluation of text responses), as well as what is on the horizon (e.g., testing on portable devices). The presenters will then describe the required resources, potential roadblocks, best practices, and implications of implementing CBT or CAT in existing programs from both a scientific and a practical perspective.
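 
To make the adaptive part of CAT concrete, the sketch below shows the core CAT loop under a one-parameter (Rasch) model: after each response, the ability estimate is refreshed, and the next item administered is the one most informative at that estimate. This is a minimal illustration only; the item bank, difficulties, and function names are invented for the example and do not reflect any vendor's engine.

  import math

  def rasch_prob(theta, b):
      # Probability of a correct response under the Rasch (1PL) model
      return 1.0 / (1.0 + math.exp(-(theta - b)))

  def item_information(theta, b):
      # Fisher information of a Rasch item at theta is p * (1 - p)
      p = rasch_prob(theta, b)
      return p * (1.0 - p)

  def estimate_theta(responses, difficulties, steps=10):
      # Newton-Raphson maximum-likelihood estimate from the responses so far.
      # Production CAT engines usually prefer EAP/MAP estimation, which stays
      # finite even for all-correct or all-incorrect response patterns.
      theta = 0.0
      for _ in range(steps):
          grad = sum(x - rasch_prob(theta, b)
                     for x, b in zip(responses, difficulties))
          info = sum(item_information(theta, b) for b in difficulties)
          if info < 1e-9:
              break
          theta += grad / info
      return theta

  def next_item(theta, bank, administered):
      # Choose the unused item that is most informative at the current theta
      unused = [i for i in range(len(bank)) if i not in administered]
      return max(unused, key=lambda i: item_information(theta, bank[i]))

  # Tiny demonstration: a five-item bank with made-up difficulties
  bank = [-1.5, -0.5, 0.0, 0.5, 1.5]
  answers = {0: 1, 1: 1, 2: 1, 3: 0, 4: 0}   # fake responses, keyed by item
  theta, administered, responses = 0.0, [], []
  for _ in range(3):
      i = next_item(theta, bank, administered)
      administered.append(i)
      responses.append(answers[i])
      theta = estimate_theta(responses, [bank[j] for j in administered])
  print(round(theta, 2))

Because each item is targeted at the current estimate, a CAT of this kind typically reaches a given level of measurement precision with far fewer items than a fixed-form test.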
 
From a scientific perspective, this workshop will address the following questions:
  • How do you determine if the constructs assessed in your testing program will be affected by a move to CBT? Under what conditions is a new validity study needed?
  • From a validity perspective, what types of assessments are best suited to CBT? What types are inappropriate?
  • How do you assess the psychometric comparability of tests as they move from paper and pencil delivery to computer-based delivery?
  • What are the psychometric considerations and scoring implications of leveraging innovative computer-based items (e.g., when identifying a spot in a picture, how close is “close enough,” and how should it be scored)? Is there a case for polytomous scoring of these items? (A scoring sketch follows this list.)
  • When is CAT appropriate? What are the requirements?
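 
To illustrate the “close enough” question referenced above, the sketch below awards graded (polytomous) credit for a hotspot item based on the pixel distance between the examinee's click and the keyed location, rather than forcing a dichotomous right/wrong decision. The scoring bands and coordinates are hypothetical; in practice, thresholds would be set from pilot data and subject-matter-expert review.

  import math

  # Hypothetical scoring bands for a hotspot item (radius in pixels, points)
  SCORE_BANDS = [(10, 2), (25, 1)]

  def hotspot_score(click, key):
      # Polytomous score from the Euclidean distance between the examinee's
      # click and the keyed coordinates; outside the widest band scores 0.
      distance = math.dist(click, key)
      for radius, points in SCORE_BANDS:
          if distance <= radius:
              return points
      return 0

  print(hotspot_score((105, 203), (100, 200)))   # ~5.8 px away -> 2 points
  print(hotspot_score((118, 200), (100, 200)))   # 18 px away   -> 1 point
  print(hotspot_score((160, 240), (100, 200)))   # ~72 px away  -> 0 points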
 
From a practical perspective, this workshop will answer the following questions:
  • How might your applicant population be affected by a move to CBT?
  • What aspects of employment law (e.g., ADA) need to be considered?
  • What are the technical requirements (hardware and software) of a good CBT program? How do these requirements differ by the type of assessment and item type?
  • How long does it take to implement CBT, assuming you have a paper-based test in place? What factors should you consider? What are the best practices for implementing CBT?
  • What are the factors that you should consider when determining the scoring strategy of innovative items?
 
Participants will use the information provided in this workshop to determine the feasibility and appropriateness of implementing computer-based or computer adaptive administration with existing tests in their testing programs.
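 
One simple way to begin that feasibility check, sketched below, is to screen each item for delivery-mode effects by comparing proportion correct across paper and computer administrations with a two-proportion z test. The function name, threshold, and data are invented for illustration, and the sketch assumes comparable examinee groups took the test in each mode; flagged items would then warrant a fuller invariance analysis (e.g., IRT-based DIF or multi-group CFA).

  import math

  def flag_mode_effects(paper, cbt, z_crit=2.58):
      # paper, cbt: lists of scored response vectors (1 = correct, 0 = wrong),
      # one vector per examinee, items in the same order for both modes.
      # Returns indices of items whose proportion correct differs between
      # paper and computer delivery beyond chance (two-proportion z test).
      n_items, n1, n2 = len(paper[0]), len(paper), len(cbt)
      flagged = []
      for j in range(n_items):
          c1 = sum(person[j] for person in paper)
          c2 = sum(person[j] for person in cbt)
          p1, p2 = c1 / n1, c2 / n2
          pooled = (c1 + c2) / (n1 + n2)
          se = math.sqrt(pooled * (1.0 - pooled) * (1.0 / n1 + 1.0 / n2))
          if se > 0 and abs(p1 - p2) / se > z_crit:
              flagged.append(j)
      return flagged

  # Fabricated data: item 1 is answered correctly far more often on computer
  paper = [[1, 0], [0, 0], [1, 0], [0, 0]]
  cbt   = [[1, 1], [0, 1], [1, 1], [0, 1]]
  print(flag_mode_effects(paper, cbt))   # -> [1]

Because this screen ignores examinee ability, a flagged difference may reflect group differences rather than mode effects; conditioning on a matching variable, as in Mantel-Haenszel DIF procedures, is the usual next step.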
 
As a result of attending this session, participants will be able to:
  • Identify the resources required to start a computer-based testing (CBT) program.
  • Describe the practical implications of adopting CBT, including anticipating roadblocks and managing implementation and support.
  • Select best practices to increase the effectiveness of CBT programs.  
  • Determine if CBT or Computer Adaptive Testing (CAT) is viable in their selection context.
  • Explain appropriate scoring procedures for computer-based tests and innovative items that can be leveraged via CBT.
  • Discuss the psychometric implications for alternative scoring procedures beyond dichotomous scoring.
  • Evaluate alternative CBT formats (e.g., simulations, SJTs, avatar-based formats) for assessing the KSAOs required for the job in valid and reliable ways.
 
Craig R. Dawson is SHL’s Director of Assessment Solutions and Architecture. In this role, he leads a team of research scientists who develop, organize, and manage SHL's extensive assessment library. The team is also responsible for thought leadership surrounding the development of SHL’s online research instruments and systems. Craig holds a Doctorate in Industrial and Organizational Psychology from Clemson University. He has presented research at annual conferences of the Society for Industrial and Organizational Psychology and the American Psychological Society on topics including technology adoption in assessment programs, leading-edge selection practices, and new approaches to job analysis. In addition, his work has been published in several research journals, most recently in Industrial and Organizational Psychology: Perspectives on Science and Practice.
 
Adam W. Meade is an Associate Professor of Psychology at North Carolina State University. His research centers on psychological and organizational measurement, particularly methodologies for investigating measurement invariance. He serves as an Associate Editor at Organizational Research Methods, and his journal articles and book chapters have appeared in the Journal of Applied Psychology, Applied Psychological Measurement, and the Journal of Occupational and Organizational Psychology, among others. He has external consulting experience, primarily in employee selection, computer adaptive testing, and organizational assessment. Adam earned his PhD in Applied Psychology at the University of Georgia. More information can be found at http://www4.ncsu.edu/~awmeade/index.htm.