
Workshop 2 (half day)

Reliability, Ratings, and Reality: Oh My!

Presenters: Dan J. Putka, Human Resources Research Organization (HumRRO)
  James M. LeBreton, Purdue University
Coordinator: Mindy E. Bergman, Texas A&M University

Ratings are ubiquitous in organizational research and practice. Issues associated with estimating interrater reliability and agreement arise in many areas, including job analysis, assessment centers, performance evaluation, climate and culture, teams, and leadership, just to name a few. Despite the prevalence of ratings, the literature on estimating their reliability and agreement has become fragmented and arguably quite confusing. Complicating matters, most textbook examples of estimating reliability and agreement are based on “tidy” data collection (measurement) designs that often do not resemble how ratings are gathered in organizational settings. This workshop will describe a clear process for formulating reliability/agreement coefficients appropriate for the measurement situations confronted in organizational settings, explain the information that such coefficients and their components convey (and do not convey), and illustrate the utility of this process by interactively working through several case studies from applied research and practice. Although advanced psychometric techniques will be discussed, our presentation will not be overly quantitative (i.e., nasty equations will be kept to a minimum); instead, our emphasis will be on explaining key concepts and their implications for practical application. This workshop should be of interest to anyone faced with evaluating the quality of ratings data gathered in the course of their work.
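Although the workshop itself keeps equations to a minimum, the short sketch below may help fix ideas about the ANOVA-based tradition it covers: it computes a Shrout and Fleiss ICC(2,1) from the variance components of a fully crossed ratees-by-raters design. The data, the complete crossed design, and the choice of ICC(2,1) are illustrative assumptions for this page, not the presenters' materials.

import numpy as np

# Hypothetical ratings matrix: rows = ratees (targets), columns = raters.
# The values and the fully crossed, complete design are assumed for illustration.
ratings = np.array([
    [4, 5, 4],
    [3, 3, 2],
    [5, 5, 4],
    [2, 3, 3],
    [4, 4, 5],
], dtype=float)

n, k = ratings.shape                 # n targets, k raters
grand_mean = ratings.mean()
target_means = ratings.mean(axis=1)  # per-target means
rater_means = ratings.mean(axis=0)   # per-rater means

# Two-way ANOVA sums of squares for the crossed targets x raters design
ss_targets = k * np.sum((target_means - grand_mean) ** 2)
ss_raters = n * np.sum((rater_means - grand_mean) ** 2)
ss_total = np.sum((ratings - grand_mean) ** 2)
ss_error = ss_total - ss_targets - ss_raters

ms_targets = ss_targets / (n - 1)
ms_raters = ss_raters / (k - 1)
ms_error = ss_error / ((n - 1) * (k - 1))

# Shrout & Fleiss (1979) ICC(2,1): single-rater reliability, raters treated as random
icc_2_1 = (ms_targets - ms_error) / (
    ms_targets + (k - 1) * ms_error + k * (ms_raters - ms_error) / n
)
print(f"ICC(2,1) = {icc_2_1:.3f}")

With a complete crossed design like this one, the same mean squares also map onto a simple generalizability-theory variance decomposition; the messier, unbalanced designs the workshop emphasizes require more care than this sketch shows.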
 

The workshop is designed to help participants:
• Describe key definitional issues surrounding conceptualizations of error in ratings and how they relate to notions of reliability/agreement
• Describe key similarities/differences between theoretical perspectives on the reliability of ratings; namely, classical test theory and generalizability theory
• Describe key similarities/differences between various reliability/agreement estimation traditions, including deviation-, correlation-, ANOVA-, and factor analysis-based traditions (a brief deviation-based example follows this list)
• Describe and apply a process for determining and estimating appropriate indices of reliability/agreement for measurement situations encountered in applied research and practice
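As a rough illustration of the deviation-based tradition mentioned above (our example, not the presenters'), a commonly used agreement index is rWG (James, Demaree, & Wolf, 1984), which compares the observed variance among raters to the variance expected under a uniform "no agreement" null distribution. The single-item ratings and 5-point scale below are assumed for illustration.

import numpy as np

# Hypothetical single-item ratings of one target from five raters on a 5-point scale.
ratings = np.array([4, 4, 5, 4, 3], dtype=float)
a = 5  # number of response options on the scale

observed_var = ratings.var(ddof=1)   # observed variance across raters
expected_var = (a ** 2 - 1) / 12.0   # variance of a discrete uniform (no-agreement) null

# Single-item rWG: 1 when raters agree perfectly, near 0 when they are as dispersed as the null
r_wg = 1.0 - observed_var / expected_var
print(f"rWG = {r_wg:.3f}")

Interpreting such an index still depends on the choice of null distribution and on whether agreement (consensus) or reliability (consistency) is the question of interest, which is exactly the kind of distinction the workshop addresses.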

Dan J. Putka is a senior scientist in the Personnel Selection and Development Program at HumRRO. He has developed and evaluated numerous types of personnel selection and promotion measures for several clients in federal civilian agencies and the U.S. military. His work spans the full development spectrum, from detailed job analyses to large-scale criterion-related validity studies. In recognition of his contributions to psychological research and personnel management in the U.S. military, Dan received APA Division 19’s 2008 Arthur W. Melton Early Career Achievement Award. In addition to his client-centered work, over the past 5 years Dan has developed a program of research focused on the formulation and evaluation of methods for quantifying and modeling error in ratings. His work has appeared in top-tier journals (e.g., Journal of Applied Psychology) and the Encyclopedia of I-O Psychology, and most recently includes a book chapter on reliability and validity for the upcoming Handbook of Employee Selection. Dan received his PhD in I-O psychology, with a specialization in quantitative methods, from Ohio University.

James M. LeBreton is an associate professor in the Department of Psychological Sciences at Purdue University. He has taught a number of doctoral-level statistics courses, including psychometrics, multivariate analysis, and multilevel modeling. He has published several papers on multi-rater performance evaluation systems, interrater agreement, and interrater reliability. Over the last 10 years he has also conducted research and consulted in the areas of test development/validation and applied statistics. James currently serves on the editorial boards of the Journal of Applied Psychology and Organizational Research Methods. James earned his PhD in I-O psychology from the University of Tennessee, with a minor from the Department of Statistics, and earned his BS in psychology and MS in I-O psychology from Illinois State University.

 
