
Translating SIOP’s Guidelines for Graduate Education Into a Robust Competency Development Process for Master’s-Level I-O Professionals

Christopher J. L. Cunningham and Bethany Whitted, The University of Tennessee at Chattanooga

Despite the preprofessional nature of graduate programs in I-O psychology and the general orientation of our broader profession, there is currently no clear and scalable mechanism for ensuring consistent and comprehensive educational quality in graduate education programs for I-O psychology. This is particularly true for master’s programs in I-O psychology that are increasingly competing against alternative degree programs that promise quicker and more “business-relevant” career paths. To be clear, efforts to create and sustain any sort of formal accreditation of graduate programs can create more problems than solutions; this article is not intended as a call for anything so formal. Instead, our goal is to provide an example of how to build a competency-focused curriculum and associated evaluation process around SIOP’s most recent Guidelines for Graduate Education and Training (Guidelines; 2016). Specifically, we focus on how this is possible in the context of a terminal master’s degree program in I-O psychology, although our process can generalize well to any form of I-O graduate training.

First, a little background: The original Guidelines were created in 1985, with revisions in 1999 and most recently 2016. Their purpose is to direct I-O psychology curriculum planning efforts in all I-O graduate education programs (master’s and doctoral level) and provide students with a clear idea of the competencies they are responsible for mastering during their graduate education (SIOP, 2016). These Guidelines highlight and generally define the breadth of knowledge, skills, abilities, and other qualities needed to succeed as an I-O psychology professional but do not clearly delineate master’s- versus doctoral-level education or address the discipline in which the program is situated (e.g., psychology department, business school). In this article, we outline the method we followed to translate and structure the current Guidelines into an end-to-end curricular model tailored to a comprehensive terminal master’s degree program in I-O psychology. We then describe how we are using this model in an ongoing manner to guide student development and evaluate and improve our course offerings.

We completely understand that this may not seem like the most exciting or cutting-edge article you have seen in TIP. However, we challenge you to identify a topic that is more important to the continued viability of our profession. If we do not understand how to properly develop competency-focused education models and demonstrate actual competency development in our students, how can we expect to continue to be taken seriously as professionals in the increasingly crowded professional services spaces in which we operate? With this serious question in mind, our goals for this project were to

  1. Critically evaluate and align our own master of science (MS) degree program’s curriculum with the Guidelines to ensure a comprehensive educational opportunity for our students.
  2. Create an evaluation framework, tools, and associated process that can help us to meet ongoing university-level program evaluation and review requirements.
  3. Provide students and program alumni with a clear sense of what is required to be a competent I-O psychology professional, where they are demonstrating proficiency and mastery, and where they need to continue to work on building and sustaining competence.
  4. Provide students with competency-related language and tools to help reduce the mystery of what an I-O professional can do for employers and for others who are not familiar with the I-O value proposition.

Method

We admittedly did not set any speed records with this work; those of you regularly teaching the equivalent of a 4:4 load will understand why. We hope, though, that our efforts can help other programs make progress more rapidly. We started wrestling with a draft of the current Guidelines back in early 2016. In the summer of 2021, we landed on a framework that we believed would work for our graduate program, which provides a comprehensive master’s-level education for I-O professionals working in pretty much every area of I-O practice. Although we recognize that our model may not fit all other programs, we believe some variation on this process could be useful for any graduate program interested in leveraging the SIOP Guidelines to improve their program’s comprehensiveness, quality, and overall educational impact. For clarity, we split our narrative into two sections: condensing the competency model and evaluating competency development.

Condensing the Competency Model

The Guidelines are very informative but unwieldy for use in any setting outside of academia. Working through the following steps helped us condense the full competency set outlined in the Guidelines to a more manageable set of meta-competencies around which we have aligned our curriculum and broader educational model:

  1. With a team of talented graduate student assistants and the four core I-O faculty in our program as content experts, we reviewed the original competency definitions provided in the Guidelines and generated a set of three to four Student Learning Outcome (SLO) statements linked to each competency. As the Guidelines do not clearly differentiate between doctoral- and master’s-level competence, we drafted these initial SLOs to apply to terminal master’s students and MS program graduates (e.g., for Research Methods: “Apply scientifically oriented procedures, techniques, and tools to conduct empirical investigations in I-O psychology.” Similarly, for Job Evaluation and Compensation: “Recognize and monitor developing legal and social issues surrounding compensation”).
  2. Using the Guidelines’ competency definitions and our initial master’s-degree-specific SLOs, we cross-mapped each of the Guidelines’ 26 competencies against the learning targets for each of our core MS degree program courses at the time (2016–2017). We quickly identified a few competency areas for which our course-related coverage was thinner than desired. As a result, we revamped about a third of our course offerings to make them more comprehensive and to ensure that our core curriculum supported development of at least introductory proficiency for all the competencies in the Guidelines. We revised our competency-by-course cross-mapping and then identified which competencies from the Guidelines would be targeted most strongly in each of our courses (a simple illustration of this kind of cross-mapping appears in the sketch following this list).
  3. We started exploring ways that we could group or otherwise restructure the 26 competencies from the Guidelines into meaningful meta-competencies. This process was driven by careful consideration of the competency definitions in the Guidelines and our master’s-level SLOs. We first arrived at a condensed framework involving seven meta-competencies: (a) foundational knowledge of I-O psychology; (b) theoretical bases for I-O psychology research and practice; (c) applied research and evaluation KSAOs; (d) essential I-O practitioner KSAOs; (e) influential person-level factors; (f) influential organizational and environmental factors; and (g) influential interpersonal and social phenomena.
  4. At this point, we began laying groundwork to facilitate ongoing evaluation and tracking of student competency development through these courses (more on this in the next section). Each instructor selected one to three of our master’s-level SLOs for each of the 26 competencies that they felt were most aligned with topics emphasized in their courses. In each course syllabus, these course-specific SLOs are emphasized as part of the stated learning objectives for the course, helping to frame each course in terms of the I-O competencies it is designed to develop. At the end of each semester, students report on the extent to which they feel confident in their abilities to address, answer, and/or do each of the things identified in these core SLOs.
  5. As we discuss in the next section, the general vision for this ongoing competency-focused evaluation has not changed; in many ways it has been strengthened and made more concrete by positive feedback we have received from students, alumni, and internship-site supervisors. However, after a couple of semesters of doing what we outlined in the preceding two steps, we realized that our approach was still unnecessarily complex for our intended purposes: There were simply too many separate meta-competencies to be useful in real-world conversation, especially between students and prospective employers. After further evaluation, we reduced our framework to four core I-O meta-competencies that are now at the heart of our program’s curriculum and competency development evaluation efforts: (a) psychology of work, (b) professional practice, (c) talent management, and (d) work design. You can see the details of this breakdown, including how we organized the competencies from the Guidelines, in “The Details” section of this part of our graduate program’s website: https://bit.ly/3HrAjlN. We also refined the SLOs associated with these meta-competencies to better align them with the types of work we were seeing master’s-level professionals tackling in practicum and postgraduation work assignments.
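
To make the cross-mapping in step 2 a bit more concrete, here is a minimal sketch in Python of how a competency-by-course matrix can be represented and scanned for thin coverage. The course titles and the handful of competency labels are hypothetical placeholders, not our actual mapping, and the "fewer than two courses" threshold is just an illustrative choice.

```python
# Minimal sketch of a competency-by-course cross-mapping (hypothetical
# course titles and a small subset of the Guidelines' competencies).
from collections import Counter

course_coverage = {
    "Research Methods":          {"Research Methods", "Statistical Methods", "Ethics"},
    "Personnel Selection":       {"Personnel Recruitment & Selection", "Criterion Theory", "Ethics"},
    "Organizational Psychology": {"Work Motivation", "Leadership & Management", "Groups & Teams"},
}

# Count how many courses touch each competency.
coverage_counts = Counter(
    competency
    for competencies in course_coverage.values()
    for competency in competencies
)

# Flag competencies whose course-related coverage looks thinner than desired
# (here, competencies covered by fewer than two courses).
all_competencies = set().union(*course_coverage.values())
thin_coverage = sorted(c for c in all_competencies if coverage_counts[c] < 2)
print("Competencies needing more coverage:", thin_coverage)
```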

Evaluating Competency Development

In addition to aligning our curricular and educational experiences to the Guidelines, we also endeavored to develop a robust and multifaceted student, curriculum, and broader program-level evaluation framework and process. The information gathered through this process helps us to track the extent to which our courses and other learning-related experiences (e.g., graduate assistantship positions, practicum placements, mentoring programs) collectively contribute to the development of student proficiency or mastery in the core competency domains. This type of outcome and impact information is increasingly important to prospective students, employers, and university-level accreditors.

Over the course of this project, we experimented with several approaches for gathering pertinent evaluation data, settling this past semester on a multipronged method that includes consideration of (a) student work products from specific courses, (b) a written reflection paper and presentation in which students connect their practicum experiences to a subset of core competencies, and (c) an evaluation of on-the-job performance from students’ practicum supervisors. As a result of this competency-alignment project, we also added two evaluation elements we believe are worth sharing: a recurring evaluation of student confidence and a competency-focused electronic portfolio (e-portfolio).

Recurring Evaluation of Student Confidence

We developed a recurring end-of-semester evaluation in which students respond to a series of behavioral statements modeled around the revised master’s-appropriate SLOs noted earlier. Data from these assessments can be linked to course instructor ratings of demonstrated competence against the SLOs for a given course. This information could further be used to populate a student competency development record (i.e., a fancy spreadsheet) to track personal competency development. This information could also be aggregated within a specific course, across a cohort of students, or across a set of courses to help our faculty identify strengths and weaknesses relative to these competencies.
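
As one illustration of what such a competency development record could look like, the sketch below (Python with pandas) aggregates confidence and instructor ratings by meta-competency and by course. The student IDs, courses, and rating values are hypothetical, and the aggregation shown is only one of many ways such a record could be summarized.

```python
# Hypothetical sketch of a student competency development record:
# aggregate self-reported confidence (1-5) and instructor ratings
# by meta-competency and by course.
import pandas as pd

records = pd.DataFrame(
    [
        ("S01", "Personnel Selection", "Talent Management",     4, 4),
        ("S01", "Research Methods",    "Professional Practice", 3, 4),
        ("S02", "Personnel Selection", "Talent Management",     5, 4),
        ("S02", "Research Methods",    "Professional Practice", 2, 3),
    ],
    columns=["student", "course", "meta_competency", "confidence", "instructor_rating"],
)

# Cohort-level view: mean ratings per meta-competency.
cohort_summary = records.groupby("meta_competency")[["confidence", "instructor_rating"]].mean()

# Course-level view: mean confidence per course, useful for spotting strengths and gaps.
course_summary = records.groupby("course")["confidence"].mean()

print(cohort_summary)
print(course_summary)
```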

Our decision to evaluate student learning in terms of both course grades and self-perceived confidence is driven by the understanding that demonstrated competence emanates from a combination of knowledge and the confidence to effectively apply that knowledge. This notion is supported by a large and diverse research literature, which also suggests that confidence, not just knowledge, predicts ability and motivation. This is perhaps most noticeable in much of the research pertaining to self-efficacy (e.g., Bandura, 1977; Stajkovic & Luthans, 1998; Weinberg et al., 1979). Confidence is especially important to success in the predominantly practitioner-oriented roles pursued by most master’s program graduates.

Because multi-item confidence assessments more accurately reflect competence than a single-item rating of overall confidence for a broad topic (Favazzo et al., 2014; Nuhfer & Knipp, 2003), we strengthened our semesterly confidence assessment by breaking the competency-specific SLOs into behavioral questions. For instance, the question “What are the steps in conducting a needs analysis?” is one of a broader set of questions in the section of our assessment targeting our talent management meta-competency. We used specific questions because confidence ratings tend to be inflated when they are not paired with specific questions (Favazzo et al., 2014). We further strengthened our assessment by following the advice of Nuhfer and Knipp (2003) and allowing students to complete the assessment at home, with plenty of time for self-reflection. Some practical benefits of confidence-focused assessments are that they (a) are time efficient (i.e., students do not have to generate a complete response to such questions but instead can cognitively process how they would respond), (b) make it possible to avoid creating another typical, high-stakes “testing experience,” and (c) can be motivating to students (i.e., encouraging them to keep working on competence development in areas where reported confidence is low; e.g., Bell & Volckmann, 2011).

We currently use an Internet-based survey (via Qualtrics) with branching logic to ensure that each semester’s evaluation is tailored to each student’s learning and development experiences that semester. Students identify themselves (for program-level tracking) and then select the learning and development experiences in which they were engaged during the semester from a master list of the available options. Selected experiences are then pushed forward through the rest of the assessment, and students reflect on these experiences as they rate their confidence in their ability to respond to a series of behavioral questions that target each of our four meta-competency domains. Students are also asked to reflect on and share examples of how their learning and development experiences contributed to growth in each of the four meta-competency domains.
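
The survey itself lives in Qualtrics rather than in code, but the underlying branching logic is simple enough to sketch. The snippet below is only an illustration of that logic under our stated design (selected experiences carried forward, confidence questions per meta-competency); the experience list and question bank are hypothetical placeholders, not our actual items.

```python
# Illustrative sketch of the branching logic only (the real survey is built
# in Qualtrics); experiences and questions below are hypothetical.
EXPERIENCES = ["Practicum placement", "Graduate assistantship", "Mentoring program"]

QUESTION_BANK = {
    "Psychology of Work":    ["How confident are you explaining major theories of work motivation?"],
    "Professional Practice": ["How confident are you scoping an applied project with a client?"],
    "Talent Management":     ["How confident are you outlining the steps in a needs analysis?"],
    "Work Design":           ["How confident are you conducting a basic job analysis?"],
}

def build_survey(selected_experiences):
    """Carry forward only the experiences a student selected, then attach
    the confidence questions for each of the four meta-competency domains."""
    blocks = []
    for experience in selected_experiences:
        if experience not in EXPERIENCES:
            continue  # ignore anything not on the master list
        for domain, questions in QUESTION_BANK.items():
            for question in questions:
                blocks.append((experience, domain, question))
    return blocks

# Example: a student who only had a practicum placement this semester.
for item in build_survey(["Practicum placement"]):
    print(item)
```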

Implementing this system helps our students understand what they should expect to learn, monitor their progress toward these learning objectives, and identify areas for further development. Accumulating data through this process helps us provide students with competency-related goal-directed feedback that they can use to work toward addressing discrepancies between their current and ideal future levels of competence. In essence, we are using the Guidelines and our condensed meta-competencies as the structure for an I-O education performance management system, minimizing ambiguity in the educational process and maximizing our ability to monitor competency development.

Competency-Focused e-Portfolio

To take this a step further, we created a customizable e-portfolio template for our students, designed within Google Sites for ease of use. We encourage students interested in being considered for “graduation with distinction” honors to build and maintain this e-portfolio beginning in their first semester. These e-portfolios can become a visible and searchable record of competency development, showcasing actual work samples and work products as proof of competence. We challenge students to identify three to five pieces of evidence demonstrating their competence in each of our four meta-competency areas. This might include sharing outcomes or deliverables associated with class projects, papers, internships (as permitted by employers), or independent research. Here is an example: https://sites.google.com/mocs.utc.edu/bethanywhitted/

These e-portfolios help us accomplish at least four goals associated with our broader competency-focused redesign and evaluation undertaking (cf. Knight et al., 2008). First, we provide students a structured way to showcase their personal and professional competencies in terms of the overarching I-O psychology competency framework, providing actual work samples to future employers (a sort of interactive résumé). Second, these e-portfolios highlight the value of these I-O competencies to potential employers and prospective graduate students who may be unfamiliar with our field of research and practice. Third, by regularly updating and maintaining these e-portfolios, we provide students a mechanism for integrating their various learning-related experiences each semester with the core competencies that are at the heart of graduate training in I-O psychology. We have already heard a great deal of positive feedback from our students about how this process makes coursework more meaningful when they understand how it is associated with their developing mastery of essential I-O psychology competencies.

Fourth, these e-portfolio sites provide our program faculty with another resource that can be used to establish students’ competence and readiness for graduation. We currently factor these e-portfolios into our process for recognizing top students, also considering course-related performance, practicum performance and impact, participation in general program activities and operations, and other evidence that a student has developed mastery in one or more of the four meta-competency domains. We are exploring ways that we may eventually integrate the evaluation elements described here into our annual comprehensive examination.

Discussion and Conclusion

The process we have taken within our graduate program to translate and apply the Guidelines for curriculum revision and ongoing development and program evaluation can be generalized to any I-O graduate program (including in-person and online offerings). We hope the information provided here will help you see the full potential of this type of competency-focused alignment much more quickly than was the case for us. Although this work took us several years to complete, that timeline was mainly due to competing priorities and a fair amount of trial and error. Graduate program faculty following a process similar to what we have outlined here could realistically complete this work in 2 to 3 months with the engagement of key program stakeholders.

We believe our progress over the past few years has improved our ability to provide, and hold students accountable for, a well-rounded educational experience that makes it possible for them to develop competence in the critical areas that I-O psychology professionals need. We believe this combination of clear, competency-related learning objectives and ongoing competency and performance monitoring and management will lead to further developments in our curriculum, stronger motivation among our students to persevere, and a clearer and more complete understanding within the business community of the value our graduates can bring to organizational settings.

For our program, as we suspect will be the case for most master’s-level programs, the most difficult challenge was determining how to best make use of the Guidelines, which are clearly drafted for traditional doctoral-level training programs. Our experience leads us to believe it is overly simplistic to assume that doctoral I-O training leads to mastery, whereas master’s I-O training does not. Consequently, we will continue to examine how these core I-O competencies manifest themselves in different ways for master’s- and doctoral-level I-O professionals.

There are many future directions to take with this type of work. For example, we are currently preparing to test the validity of our recurring confidence assessments by correlating students’ ratings on these assessments with actual course grades, workplace accomplishments, and comprehensive exam scores. Evidence of relationships would provide further support that confidence assessments can help us understand developing student competence. Other future extensions may include (a) examining the ways in which specific courses meet specific competencies, (b) determining the perceived value and utility of competency-focused e-portfolios to I-O employers, and (c) exploring whether our more robust evaluation process reduces the value of traditional written comprehensive examinations.
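
A simple version of the planned correlational check might look like the sketch below. All values and variable names are hypothetical, and in practice we would use real course grades, workplace outcome data, and comprehensive exam scores rather than the toy data shown here.

```python
# Hypothetical sketch of the planned validity check: correlate end-of-semester
# confidence ratings with course grades and comprehensive exam scores.
import pandas as pd

data = pd.DataFrame({
    "confidence_rating": [3.2, 4.1, 2.8, 4.5, 3.9],  # mean self-rated confidence (1-5)
    "course_gpa":        [3.4, 3.9, 3.1, 4.0, 3.7],  # course grades on a 4-point scale
    "comp_exam_score":   [78, 91, 70, 95, 88],       # comprehensive exam percentage
})

# Pearson correlations among the three indicators; positive, nontrivial
# correlations would support confidence ratings as indicators of competence.
print(data.corr(method="pearson"))
```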

There are also limitations and challenges to what we have done so far and will be able to do in the future. We are aware of many of these and are confident you thought of several while reading this article. We look forward to discussing these limitations and working to address them with input from interested readers of this journal and from engaged I-O graduate program directors who are participating in the relatively new and increasingly valuable I-O program directors alliance (https://groups.io/g/iopds).

We encourage other I-O graduate programs to consider working through some variation of the process we have outlined here. The goal is not to create homogeneity across all graduate programs but rather to help graduate programs identify the strengths they can build on to ensure the best possible education for their students. Ultimately, we would like to see an increase in consistent quality of educational experiences across graduate programs in our field. One of the beautiful realities of working with competencies, however, is that there are many ways to achieve similarly good outcomes. Please feel free to contact us if you have any questions or want more information about anything we discussed in this article.

Acknowledgments: The authors sincerely appreciate the help and insights provided by the following individuals over the course of work described in this manuscript: Brian O’Leary, Alexandra Zelin, Kristen Black, Chris Maples, and Ellie Risinger.

References

Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84, 191–215. https://doi.org/10.1037/0033-295X.84.2.191

Bell, P., & Volckmann, D. (2011). Knowledge surveys in general chemistry: Confidence, overconfidence, and performance. Journal of Chemical Education, 88, 1469–1476. https://doi.org/10.1021/ed100328c

Favazzo, L., Willford, J. D., & Watson, R. M. (2014). Correlating student knowledge and confidence using a graded knowledge survey to assess student learning in a general microbiology classroom. Journal of Microbiology & Biology Education, 15, 251–258. https://doi.org/10.1128/jmbe.v15i2.693

Knight, W. E., Hakel, M. D., & Gromko, M. (2008). The relationship between electronic portfolio participation and student success (Professional File No. 107). Association for Institutional Research.

Nuhfer, E., & Knipp, D. (2003). The knowledge survey: A tool for all reasons. To Improve the Academy, 21, 59–78. https://doi.org/10.1002/j.2334-4822.2003.tb00381.x

Society for Industrial and Organizational Psychology. (2016). Guidelines for education and training in industrial-organizational psychology. Author.

Stajkovic, A., & Luthans, F. (1998). Self-efficacy and work-related performance: A meta-analysis. Psychological Bulletin, 124, 240–261. https://doi.org/10.1037/0033-2909.124.2.240

Weinberg, R. S., Gould, D., & Jackson, A. (1979). Expectations and performance: An empirical test of Bandura’s self-efficacy theory. Journal of Sport Psychology, 1(4), 320–331. https://doi.org/10.1123/jsp.1.4.320

 
