Good Science—Good Practice: Cognitive Ability Testing in Executive Assessments

Tom Giberson
Oakland University

Suzanne Miklos
OE Strategies

General cognitive ability (GCA) remains the single most powerful predictor of job performance, and I-O practitioners often include GCA in selection processes as part of accepted best practice. Yet when looking at higher-level managerial jobs, many organization partners question the relevance of cognitive ability testing, given the perceived range restriction within this highly educated applicant group. Some organizations push back on cognitive ability testing at the most senior levels because of the academic achievements required to reach them and because of the notion that social and emotional intelligence are arguably more valuable differentiators at this level. In addition, in a market that puts these higher-level candidates at a premium, recruiters consistently raise the issue of improving the candidate experience by minimizing candidates’ time investment in assessment. Although the typical client organization is working to select from two to three top candidates, many candidates have at least one job offer in hand when coming in to interview. The argument boils down to this: They are all smart; we don’t need to measure problem solving.

I-O psychologists often work in settings where we need to demonstrate understanding of the current literature and provide a balanced discussion of cognitive ability testing. To update our own thinking, we can look at several recent articles touching on elements of the cognitive ability discussion. In this article, we make the case for refocusing on job analysis as a way to support the use of specific cognitive ability assessments that align with cognitively laden tasks such as executive decision making.

Schmidt (2012) challenges the notion that content validity is not an appropriate model for cognitive ability testing. An approach that does not include content validity leaves us saying “trust my expertise” to our partners when looking at specific managerial jobs, because sample sizes are often too small for criterion-related validity studies. In our practice, we have often relied on the research supporting g, or generalized cognitive ability, to support the need for cognitive testing. The Schmidt article demonstrates that with the proper job-analytic and content-validity procedures, cognitive ability measures—including tests that are de facto measures of GCA—can demonstrate content validity in addition to criterion-related and construct validity.

Because we have long known that GCA underlies performance of all kinds and that leaders are tasked with challenging decisions, our job analysis attention has turned to other elements of the leader’s role. Local criterion-related validation studies are often not possible because of sample size, so we rely on validity-generalization studies and on validity evidence transported from studies published in test manuals. Schmidt’s description of the observable outputs from cognitive tasks illustrates that problem-solving inputs and outputs are observable. He notes that all tasks, including typing, have an associated mental process, and that treating problem solving as mental rather than observable is a false distinction.
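
For readers who want to see what the validity-generalization arithmetic behind such evidence looks like, below is a minimal “bare-bones” sketch in the Hunter–Schmidt tradition. The study values are invented for illustration, and a real analysis would also correct for criterion unreliability and range restriction.

```python
# A minimal sketch of a bare-bones validity-generalization (meta-analytic)
# calculation. Study sample sizes and validities below are hypothetical.
import numpy as np

def bare_bones_meta(ns, rs):
    """Sample-size-weighted mean validity and residual (post-artifact) variance."""
    ns, rs = np.asarray(ns, dtype=float), np.asarray(rs, dtype=float)
    r_bar = np.sum(ns * rs) / np.sum(ns)                    # weighted mean r
    var_obs = np.sum(ns * (rs - r_bar) ** 2) / np.sum(ns)   # observed variance of r
    var_err = (1 - r_bar ** 2) ** 2 / (ns.mean() - 1)       # expected sampling-error variance
    var_res = max(var_obs - var_err, 0.0)                   # variance left after sampling error
    return r_bar, var_res

# Hypothetical local studies: Ns and observed test-criterion correlations.
n_i = [85, 120, 60, 150]
r_i = [0.28, 0.33, 0.22, 0.31]
r_bar, var_res = bare_bones_meta(n_i, r_i)
print(f"weighted mean r = {r_bar:.3f}, residual SD = {var_res ** 0.5:.3f}")
```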

Kehoe (2012) further clarifies Schmidt’s argument by suggesting that all KSAOs have a cognitive element to them; that is, KSAOs and work behavior are manifestations of specific cognitive capabilities. Executive roles can thus be analyzed to identify the work behaviors critical to successful analysis and decision making, and “appropriate experts” should be able to identify the specific, measurable cognitive abilities that underlie KSAOs and, ultimately, job behavior. Kehoe summarizes the series of arguments required to support content validity evidence for cognitive ability tests:

  1. Identify the cognitive skills and aptitudes associated with the work domain, and establish the link between these skills and job performance, through job analysis and expert evaluation.
  2. Identify an appropriate sample of the work content domain for testing; this should include the work content that is most important for job performance.
  3. Have experts match the test content to the work content, a key part of the content validation argument (a worked sketch follows this list).
  4. Based on 1–3, content evidence supports the inference that the test content represents the cognitive skills and aptitudes required for successful performance on the job.
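
Kehoe does not prescribe a particular statistic for step 3, but expert content-to-work matching is often quantified. Below is a minimal sketch, assuming a Lawshe-style panel in which each SME rates every test item as essential, useful, or not necessary for the work content it is meant to sample; Lawshe’s content validity ratio is CVR = (n_e − N/2) / (N/2), where n_e is the number of SMEs rating the item essential and N is the panel size.

```python
# A minimal sketch of quantifying expert test-content/work-content matching
# with Lawshe's content validity ratio. The panel data are hypothetical.
def content_validity_ratio(ratings):
    """CVR ranges from -1 (no one says essential) to +1 (everyone does)."""
    n = len(ratings)
    n_e = sum(1 for r in ratings if r == "essential")
    return (n_e - n / 2) / (n / 2)

# Hypothetical panel of 8 SMEs judging one analytical-reasoning test item.
panel = ["essential"] * 7 + ["useful"]
print(f"CVR = {content_validity_ratio(panel):+.2f}")  # (7 - 4) / 4 = +0.75
```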

Kehoe notes that although the Principles (SIOP, 2003) accept content validity as support for using tests as predictors in a selection process, we can do better. He suggests that, when working with cognitive ability tests in the selection context, an additional step be taken in the job analysis process: Experts should “rate the extent to which each operationalized skill/aptitude included in the test is identified with more GCA factors.” When job descriptions contain terms such as learning agility, strategic thinking, and risk management, there is an opportunity to define the tasks relative to GCA by linking the underlying skills/aptitudes.
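
This quoted step lends itself to a simple rating exercise. Below is a hypothetical sketch; the skill labels, the 0–4 scale, and the cutoff are our illustrative assumptions, not part of Kehoe’s article.

```python
# A hypothetical sketch of Kehoe's additional rating step: SMEs rate, on a
# 0 (not at all) to 4 (entirely) scale, the extent to which each skill or
# aptitude operationalized in the test is identified with GCA.
import statistics

LINKAGE_CUTOFF = 3.0  # assumed minimum mean rating to claim a GCA linkage

sme_ratings = {  # skill -> one rating per SME (invented values)
    "deductive reasoning":   [4, 4, 3, 4],
    "quantitative analysis": [3, 4, 3, 3],
    "written persuasion":    [2, 1, 2, 2],
}

for skill, ratings in sme_ratings.items():
    mean = statistics.fmean(ratings)
    verdict = "supported" if mean >= LINKAGE_CUTOFF else "not supported"
    print(f"{skill:22s} mean = {mean:.2f}  GCA linkage {verdict}")
```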

Strategic decision making is an example of a job task that relies, in part, on cognitive ability. It is a common KSAO in executive-level job descriptions, and it carries different implications and meanings depending on the context in which it is required; there may be task differences based on the maturity or degree of complexity of the industry space in which the executive operates. In an interesting read for practitioners, McKenzie, Woolf, van Winkelen, and Morgan (2009) describe strategic decision making and propose a model that essentially identifies the cognitive skills required to deal with paradox. The authors emphasize the need for complementary thinking strategies across conditions of uncertainty, ambiguity, and contradiction. As a form of job analysis of strategic decision making, they interviewed six CEOs to assess their proposed model and identified three key tasks: framing the problem space, evaluating contradictory requirements, and committing to meaningful choices. These are examples of cognitive tasks that may appear in a job analysis. The authors blended the emotional elements of decision making, such as comfort with ambiguity and the ability to manage the anxiety that can come from holding contradictory positions, with its cognitive elements; postformal thought allows adults to synthesize competing views rather than settling on an either/or solution. Similarly, in a Harvard Business Review article entitled “Hiring Smart,” Menkes (2005) described the concept of “executive reasoning” as comprising both intellectual and social reasoning. Together, these pieces provide a solid description of the thinking and emotional tasks within strategic decision making, which can enrich our job analyses for senior-level jobs.

Finally, Reeder, Powers, Ryan, and Gibby (2012) explored several individual-difference variables and their relationship to how job candidates perceive selection assessments. Of the experience-based predictors examined (previous experience with assessments, job experience, past success in similar assessment situations, and knowledge of the job), knowledge of the job emerged as an important predictor of candidates’ opinions, positive or negative, of the selection assessments.

Keeping a fresh perspective opens up opportunities to enrich the leadership selection conversation with our HR peers. By spending more energy conceptualizing and building the job analysis and the specific cognitive abilities that support content validity, we draw a clearer picture of why we recommend GCA measures for a particular role. Richer job analysis information also allows us to enhance case studies and simulated decision-making assessments that tap specific cognitive abilities. We must continue to become better describers of jobs and skills in order to support the organization’s thinking. In rapidly changing industries, such as healthcare, I-O psychologists have an important role to play in helping organizations understand the implications for the job duties, behaviors, and specific abilities that lead to success.

Helping hiring managers understand the content validity evidence for cognitive abilities, and giving them a shared understanding of the cognitive skills that underlie successful performance, equips them for richer candidate discussions during integration sessions. The case for face validity is also strengthened when clear connections are drawn, so that recruiters and candidates see the relevance of the assessments. Leveraging this group of articles heightens transparency into the link between the cognitive components of managerial jobs and the selection tools.

References

Kehoe, J. F. (2012). What to make of content validity evidence for cognitive tests? Comments on Schmidt. International Journal of Selection and Assessment, 20(1), 14–18.
McKenzie, J., Woolf, N., van Winkelen, C., & Morgan, C. (2009). Cognition in strategic decision making: A model of non-conventional thinking capacities for complex situations. Management Decision, 46(2), 209–232.
Menkes, J. (2005). Hiring smart. Harvard Business Review, 83(11), 100–109.
Reeder, M. C., Powers, C. L., Ryan, A. M., & Gibby, R. E. (2012). The role of person characteristics in perceptions of the validity of cognitive ability testing. International Journal of Selection and Assessment, 20(1), 53–64.
Schmidt, F. L. (2012). Cognitive tests used in selection can have content validity as well as criterion validity: A broader research review and implications for practice. International Journal of Selection and Assessment, 20(1), 1–13.
Society for Industrial and Organizational Psychology. (2003). Principles for the validation and use of personnel selection procedures (4th ed.). Bowling Green, OH: Author.