
Max. Classroom Capacity: On Student Self-Assessment of Personality

Loren J. Naidoo, California State University, Northridge

Dear readers,

The theme for this issue of TIP, I-O in the Classroom: Sharing Our Science via Pedagogy, is a perfect match for Max. Classroom Capacity! When I reflected on what would be an appropriate column topic, my mind balked and wandered off, reminiscing about some of my experiences as a graduate student at Akron U. I had some great classes! One of my favorites was Dan Svyantek’s Organizational Change and Development. Dan asked us to complete some playful, web-based personality surveys, which must have been quite new at the time. One purported to identify your spirit animal. My first result was something uninspiring like a rabbit, so I retook the test until I got an animal that I liked better (The tiger! Grrrr!). I also remember a Star Wars personality test that I was appalled (and a little proud) to discover had classified me as Emperor Palpatine. My kids love Star Wars and were delighted by this anecdote, and thus, a gimmick was born for this column![1] Anyway, I think Dan’s main point was to illustrate that we need to make sure to use our expertise as scientists in our work.

Everything is proceeding as I have foreseen.

This made me think about a recent episode in my own teaching. This semester I was asked to teach a first-year master’s course in which the past practice had been to have students purchase and self-administer the Myers-Briggs Type Indicator (MBTI) and discuss the results in a half-day workshop. I didn’t love the idea given the uncertain reliability and validity of the MBTI (e.g., Randall et al., 2017) and because I thought “I’m an I-O psychologist—I can make my own personality test that students can use for free!”

Always in motion is the future.

Below I describe the steps I took to develop this new measure, administer it to my students, and run a developmental workshop on it. Toward the end, I discuss how to run this as an experiential learning activity for a graduate-level class in personnel psychology. In the age of ChatGPT, where written assignments are increasingly difficult to use as assessments of learning (see my previous column, “ChatGPT Shakes Up I-O Psyc Education”), I anticipate a shift toward experiential activities—“doing stuff”—as a means of assessing student performance. Engaging students in the process of creating and validating a self-report survey of personality seemed like a great way for students to build I-O psychology skills. Going back to the theme for this issue of TIP, what better way is there to share our science than to “do” I-O psychology in the classroom with our students?

The Death Star will be completed on schedule…

The goal was to develop, administer, and interpret a self-report survey measure of personality to serve as the basis for a developmental workshop that was a mere 2 weeks away. As the dominant theory of personality, the five-factor model (FFM) seemed like a good framework for the new measure. However, I also wanted a tool modeled after commercial personality measures that are more prevalent in the work world. Therefore, I decided to model the new measure after the Hogan Personality Inventory (HPI), which is based on the FFM but splits Openness into two factors (inquisitiveness and learning approach) and Extraversion into two factors (ambition and sociability), and includes various occupational scales (i.e., service orientation, stress tolerance, reliability, clerical potential, sales potential, managerial potential).

Do or do not. There is no try.

If you’ve ever developed an initial set of survey items from scratch, you know that this can be a lot of work. Writing items can be tedious and time consuming. Having a deep understanding of the construct of interest is necessary. Given the short time frame, I realized that I had to streamline my process. Therefore, I examined the existing, free personality scales from the International Personality Item Pool (IPIP) and wrote a new set of items that were aligned with the dimensions of the HPI.[2] These were added to Qualtrics, and voila, that thing’s operational! It was administered to students 1 week in advance of the personality workshop.

Now, you will pay the price for your lack of vision!

The next step was to develop a template for a report in which each student would receive their personality scores, as well as descriptions that would help them to interpret their scores, ideally providing some implications for their work performance as well. I had envisioned at least three “buckets” for each dimension: “high,” “low,” and “average.” This is where I realized my first mistake: Developing items from scratch meant that there were no norms that I could use to help students interpret their scores. As a proprietary measure, the HPI doesn’t appear to make its normative data available to the public. So that left two choices: use arbitrary scale-score cutoffs to define the high/low/average groups (e.g., averages below 2 and above 3 on a 4-point scale), or use the 33rd and 67th percentiles of the data collected from the students themselves. I chose the latter option.
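For readers who want to try the same norming step, here is a minimal sketch in Excel; the sheet name and range are hypothetical and assume 50 responses with one scale’s averages in column B:

=PERCENTILE.INC(Scores!B2:B51, 0.33)
=PERCENTILE.INC(Scores!B2:B51, 0.67)

Scale averages below the first value are labeled “low,” those above the second “high,” and everything in between “average.”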

It's not impossible. I used to bullseye womp rats…

The next major task was creating a template for the individual reports that would be sent to students. I wanted something that would look a bit like actual assessment reports that I’ve come across over the years. Commercial reports tend to include a few key features that I thought I could replicate. There were two aspects of the report template that needed attention: its functionality and its appearance. I needed a report that would automatically generate visuals based on each respondent’s results. I also wanted the reports to look professional. With these goals in mind, I used MS Excel to generate a set of horizontal bar charts to depict scale scores and formatted the spreadsheet to look like a report when exported to PDF. I’m sure there are software solutions that would produce better looking reports than Excel, but I felt confident that I could get the functionality right, and that seemed more important.

The report template started with a description of the measure, a discussion of how scores are displayed and interpreted, and definitions of each dimension. Next, the individual’s scores on all scales were displayed as a set of bar charts showing their raw scores within the full ranges, along with a percentile rank. Then came “insights” pages where an “interpretation” blurb described what each scale score generally means, and an “implications” blurb described potential associations with work behaviors. For the occupational scales, a third blurb listed potential jobs/careers based on their scores.

The report template was populated by copying and pasting the respondent’s name into the cell next to the “Respondent:” label. That name then served as the lookup value in a set of VLOOKUP formulas that pulled the respondent’s scores and percentiles into the template from a master database Excel file containing the raw survey data, which in turn fed the bar charts. By the way, if you’re not sure what this formula is or how to use it, ChatGPT is a fantastic resource for explaining (and proposing) Excel formulas that solve common problems.
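To make the mechanics concrete, here is a minimal sketch of what one score cell in the template might contain; the cell references and sheet name are hypothetical, and the formula is written as if the master data sat in the same workbook (a cross-workbook reference would add the file name):

=VLOOKUP($B$1, Data!$A$2:$M$51, 3, FALSE)

Here $B$1 is the template cell holding the respondent’s name, Data!$A$2:$M$51 is the master database range with names in its first column, 3 is the column containing that scale’s score, and FALSE forces an exact match on the name.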

Your feeble skills are no match for the power of the Dark Side…

One challenge was to write statements to help respondents interpret high, low, and average levels of each score. I took an initial stab at this and found that the high- and low-score statements were much easier to write than the average-score statements. But even so, after a while the text that I wrote sounded too formulaic and uninteresting. Being trained to write peer-reviewed journal articles does not necessarily prepare you for writing for a more general audience!

So, I turned to ChatGPT to generate some content that I hoped would enrich my own writing. I asked it if it was familiar with the HPI (it was) and to generate interpretations of high, low, and average levels of each of the dimensions based on the extant literature concerning the HPI. With some more prompting around length and tone, it eventually produced content that gave me ideas for editing my own work. It was especially helpful in generating lists of potential jobs/careers based on occupational scale scores. An important caveat here is that it’s not entirely clear where ChatGPT sourced this content (e.g., from copyrighted material?), so I was very careful to use it as a source of ideas for what to write about rather than copying and pasting its content word for word. The final statements were uploaded to the master database Excel file and displayed in the template, again using a combination of IF and VLOOKUP formulas (e.g., IF the percentile rank < .33 then VLOOKUP the “low” interpretation text for that scale, etc.).
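As a sketch of how one interpretation blurb might be pulled in (again, the cell references and sheet name are hypothetical), the formula behind an “interpretation” cell could look something like:

=IF(C5<0.33, VLOOKUP(A5, Text!$A$2:$D$20, 2, FALSE), IF(C5<0.67, VLOOKUP(A5, Text!$A$2:$D$20, 3, FALSE), VLOOKUP(A5, Text!$A$2:$D$20, 4, FALSE)))

where A5 holds the scale name, C5 its percentile rank, and the Text sheet stores each scale’s low, average, and high blurbs in columns 2 through 4.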

Young fool... Only now, at the end, do you understand...

Recall that I started working on the report template about a week before the workshop. A few days before the workshop, I realized that manually generating a report for each of my 50 students, saving the template as a PDF, attaching it to an email, and sending each email might take more time than I had left. Again, I turned to ChatGPT.

It's an older code, sir, but it checks out.

I carefully drafted a detailed prompt for ChatGPT that specified what needed automating (it’s often worth the effort to be exceedingly specific to avoid troubleshooting later). I asked ChatGPT to write VBA code (Visual Basic for Applications, the macro language built into Microsoft Office applications) that would (a) copy the next name in the master database into the report template’s name field; (b) save the file in PDF format, including the student’s name as part of the file name; and (c) repeat for the next student until none were left. This generated a report for each student in PDF format. Then ChatGPT generated VBA code that would compose an email in MS Outlook to each student’s email address as provided in the survey, attach the corresponding report, fill in a subject line and email body, and repeat for all students. Then it helped me combine the two sets of code. Some troubleshooting was needed, but it was up and running in no time. Perhaps for some readers, writing such code is easy. However, as someone with close to zero knowledge of coding, being able to easily automate this process felt so empowering! It might not save a lot of time the first time you do it, but with some forethought and practice, many tedious and time-consuming tasks can be reduced to a few clicks of the mouse!
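For the curious, here is a minimal sketch of the kind of macro ChatGPT produced, not the exact code; the sheet names, cell references, columns, and email text are hypothetical, and for brevity it assumes the master data and the template live in the same workbook:

Sub GenerateAndEmailReports()
    ' Master data: names in column A, email addresses in column B, row 1 = headers
    Dim wsData As Worksheet: Set wsData = ThisWorkbook.Worksheets("Data")
    ' Report template: respondent-name cell at B1 feeds the VLOOKUPs and charts
    Dim wsReport As Worksheet: Set wsReport = ThisWorkbook.Worksheets("Report")
    Dim olApp As Object: Set olApp = CreateObject("Outlook.Application")
    Dim olMail As Object
    Dim lastRow As Long, r As Long, pdfPath As String

    lastRow = wsData.Cells(wsData.Rows.Count, "A").End(xlUp).Row
    For r = 2 To lastRow
        ' (a) Copy the student's name into the template so the lookups refresh
        wsReport.Range("B1").Value = wsData.Cells(r, "A").Value
        ' (b) Export the refreshed template as a PDF named after the student
        pdfPath = ThisWorkbook.Path & "\Report_" & wsData.Cells(r, "A").Value & ".pdf"
        wsReport.ExportAsFixedFormat Type:=xlTypePDF, Filename:=pdfPath
        ' (c) Compose an Outlook email with the PDF attached and send it
        Set olMail = olApp.CreateItem(0) ' 0 = olMailItem
        With olMail
            .To = wsData.Cells(r, "B").Value
            .Subject = "Your personality feedback report"
            .Body = "Please find your individualized report attached."
            .Attachments.Add pdfPath
            .Send
        End With
    Next r
End Sub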

I'm looking forward to completing your training.

The workshop itself went well. Students appreciated receiving individualized feedback on their personalities and seeing how they compared with their peers. Describing the normative basis for high-/low-/average-score categories led to a stimulating conversation about the advantages, disadvantages, and implications of this approach. This progressed into a larger discussion of the principles of external validity, reliability, and construct validity. The bulk of the workshop revolved around discussions of personal strengths and areas for development, and the implications of their scores for their semester-long work in small teams, as well as their work lives more broadly.

Perhaps I can find new ways to motivate them.

Although the exercise was a success, it would work even better as a semester-long project in a graduate class in personnel psychology or something similar. This kind of experiential learning exercise provides the opportunity for students to develop specific skills and experiences that will prepare them for aspects of I-O psychology work. With some guidance, master’s students could carry out each step of the process. Readings could be assigned on item writing and personality (or whatever other construct they are interested in). Students could individually generate a pool of items and work together to pare them down. Students could help recruit a pool of undergraduates and/or work colleagues to take the survey. The question of how to norm scores and provide feedback is a great basis for a discussion on important psychometric principles. I’m certain students would create much more visually appealing and functional report templates than I did! Managing the survey data, calculating scores, running basic data cleaning and reliability checks, and figuring out how to pipe data into reports all would help them develop valuable data management and analytics skills. I think it’s also important to teach students appropriate ways of using ChatGPT (e.g., coding, content generation). Validating the new measure might be difficult to do in one semester but would make an excellent project for subsequent semesters.

Your work here is finished, my friend.

As always, dear readers, if you have any ideas, comments, critiques, or just want to make a new connection, please email me at Loren.Naidoo@CSUN.edu.

Notes

[1] If you’re not familiar with the Star Wars movies, my apologies—please assume everything that doesn’t make sense (e.g., the updated headshot photo that my kids helped me photoshop) is an obscure reference to the movies. And it’s probably time you watch the movies. At least the first three.

[2] OK, sure, this isn’t the best way to write items, but it made item-writing doable within my time frame and also raised the odds that I would end up with something usable.

Reference

Randall, K., Isaacson, M., & Ciro, C. (2017). Validity and reliability of the Myers-Briggs Personality Type Indicator: A systematic review and meta-analysis. Journal of Best Practices in Health Professions Diversity: Education, Research & Policy, 10(1), 1–27.
