Speed Benchmarking Event Summary

Jerilyn Hayward, ServiceMaster

At this year’s LEC Speed Benchmarking event, more than 100 I-Os got together to share their experiences around specific topics. Attendees were able to benchmark with each other, and conversations were led by our gracious table facilitators (listed below):

Amber Burkhart

Faisal Ahmed

Lynn Collins

Amie Lawrence

Frank Schneider

Matthew Neale

Charles Handler

Holly Payne

Mike Knott

Daly Vaugn

Isaac Thomson

Mort McPhail

Doug Reynolds

Jurgen Bank

Neal Schmitt

Erin Laxson

Laura Johnson

Neil Morelli

Evan Sinar

Leaetta Hough

Paul Sackett

Below are some highlights of what participants at the event discussed:

Assessments for Coaching and Development

  • Assessments for development require the same rigor of validation as assessments used for selection. 
  • There is a trend toward offering coaching to lower-level leaders as well as executives, and not only when performance issues arise.
  • Scalable assessments for development in large companies would be welcomed (but the need for qualified individuals to interpret the information may be a constraint). 
  • Assessments are worthless if nothing is done with the results. Participants have better outcomes if a debrief is offered as part of the assessment. Keep in mind that assessments are tools to inform development, not the development itself.
  • Understand your organization’s unique situation (culture, level, restructuring, etc.) in order to demonstrate value to leaders.

Explaining the Value of Assessments to Stakeholders  

  • Boiling the data down to essential components is critical; business stakeholders don’t need the details and they don’t need “psych-speak.” 
  • Stakeholders rely on their gut instincts, so explain assessment results as one data point of the many used in the decision-making process, and tie it to their experience. If they feel that a hiring process is working, they won’t always see why it should change.
  • We can’t always count on validity and business results aligning, and this is the chance for I-Os to exert influence at the level required to be strategic.

Gamification, Games, and Simulations

  • The current preference is more toward simulations and gamification (incorporating game elements or features) over true game-based assessments.
  • Face validity and job relevance are critical factors in design and implementation. 
  • The rationale behind adopting gamified assessments needs to be considered (e.g., candidate experience, improved measurement of KSAOs and constructs); “because leadership wants a gamified assessment” is not an acceptable justification.

Development Assessment Centers 

  • Companies are seeking more customization of high-fidelity exercises, with organization-specific capabilities/leadership models.
  • Development Assessment Centers are not an end in themselves and are being more actively connected to other development methods (e.g., coaching, workshops, assignments, micro-learning).
  • Development Assessment Centers are being extended beyond senior leader levels to the full leadership pipeline.

Advances in Measurement Science  

  • The business demand for short assessments can cause measurement problems; no matter how advanced the measurement science, without business buy-in it won’t be accepted.
  • Leaders still challenge measurement capability when an individual they know to be doing well on the job fails an assessment.

Innovation in Mobile Assessment

  • Be aware of who is using mobile devices; look for ways not to lose lower SES candidates. 
  • Will generational differences between texting and typing affect scoring patterns? How should we use simulations with data entry across device types?
  • Can DIF be used to examine which aspects of assessment performance are related to device type versus true competence?
  • Can time-tracking features be loaded into phones in advance so they aren’t reliant on Internet connectivity?   
  • What potential do smartphone features provide to unlock video, voice, and even physical examinations when job relevant (NLP, voice capture and analysis, video feature analysis)? 
  • What potential exists in virtual reality and augmented reality for simulations and RJPs (realistic job previews)?
  • Using mobile is now a necessity to ensure candidate flow and increase diversity. 
  • Focus on features of devices rather than device type (e.g., large versus small screens, processing power, etc.), and design down to the smallest device.
  • Changes in design can change completion time and candidate experience; completion rates are high for short screenings, but completion drops off for longer leadership assessments.
  • Count on needing help desk personnel to resolve issues.
  • Take care to consider reasonable accommodations for individuals with disabilities.

Please join the SIOP Speed Benchmarking event in Austin, brought to you by the Career and Professional Development for Practitioners Committee!
