
Max. Classroom Capacity: A Conversation With Marcus Dickson, the Creator of Max. Classroom Capacity

Loren J. Naidoo, California State University, Northridge

Dear readers,

In honor of the “Memorable Moments in TIP History” theme of this issue, I’m delighted to welcome my mentor and friend, the creator of the Max. Classroom Capacity column, Dr. Marcus Dickson! Below we discuss AI in the context of the recent history of I-O psychology instruction.

 

Loren Naidoo: What fears do you see faculty having about AI?

Marcus Dickson: I think many faculty members see AI as Chegg on steroids. I remember, long, long ago, there was always a concern that student organizations (stereotypically, often fraternities) would keep file cabinets full of old tests that members could consult before exams. More recently, faculty have been concerned about students getting test or essay answers online from sites like Chegg or CourseHero, so that students receive grades they didn’t earn. The remedy was seen as developing customized and ever-changing assessments: making essay questions or paper assignments highly contextualized so that the answers weren’t already out there, and changing assignments and test questions every semester so that access to prior exams wouldn’t help. But the fear with AI is that it doesn’t matter whether the questions have been used before, because the system can generate the answers without that prior information. Add in the revelation that some AI tools will completely make up plausible-sounding references, and it seems like an impossible task to guard against students cheating with AI tools.

A second fear, which I see more often at the graduate level (where students are more likely to present or publish original work), is the inability to tell whether the student actually did the work or even understands how to do it. This concern comes up especially around things like writing code in R. In some ways, I see it as analogous to whether students should learn to do matrix algebra by hand. The argument was always that if you can’t do matrix algebra, you don’t really understand what is happening in some analyses. Nonetheless, very few programs that I know of still teach matrix algebra that way.

What do you think about those issues, and do you see others?

LN: By the way, I use ChatGPT to write VBA code all the time now! One additional concern, more related to curriculum design than instruction, is making our students “robot proof”: let’s NOT prepare our students for jobs and careers that will cease to exist because of AI. But let’s start with your first concern: AI being used (let’s call it misused) to cheat.

I agree that AI makes plagiarism more difficult to detect. But it’s not impossible. If you know your students’ actual writing styles well enough, then AI content can really stand out. There are also various AI detectors available (e.g., Turnitin has one integrated into its platform). The validity of AI detectors is unclear, but their findings are a useful conversation starter with students. In a recent class, I informed students that they were allowed to use AI on two written assignments (but not others) provided they tell me how they used it. A handful of students reported using ChatGPT only to edit what they had written to make it sound more professional. AI detectors failed to identify these cases, so detecting human–AI collaborations may be challenging. However, I did find hints of prohibited AI use in several short discussion board assignments. The posts in question had varying content but identical structures: short intro paragraph, bulleted list, short concluding paragraph. When I entered my prompt into ChatGPT (always do this!), I saw the same pattern recur. As a grader, I had a visceral response to these posts: They were meaningless! Boring! Forgettable! And I shared this feedback with students: “I don’t know if you used AI, but it looks like you did, and either way, in the real world, this style of writing is ineffective!” So, at a surface level at least, I think AI misuse is generally detectable. Have you seen any examples of students using AI yet in your undergraduate or graduate classes? Have you developed any (other) ways to detect student use of AI? Are you worried about AI plagiarism?

MD: I regularly teach a large-lecture version of Intro Psych that has a lab associated with it, so it is usually my TAs who encounter plagiarism in all its forms. We have had a few students turn in lab assignments that appeared to be AI generated, and it’s a challenge because it isn’t always clear what the AI was used for: Was it a tool for “cleaning up” the student’s writing, or was it actually doing the writing? As for the boring part, I think part of the challenge is that we as teachers are still discovering the various ways that AI can be used in the classroom. I have some colleagues who put a blanket ban in their syllabi on using AI in any way in the class; others encourage its use in creative ways or allow it in specific contexts, as long as the use is disclosed and described. Pedagogically, I think any of those are defensible responses, depending on the learning objectives and content of the course. It’s definitely something I continue to wrestle with, especially when moving between a large-lecture 1000-level class and a small doctoral seminar, for example.

LN: I have colleagues who are contemplating going back to paper-and-pencil exams to avoid AI. Plagiarism using AI is complicated. Using your example, is the student who uses AI to clean up her writing cheating? From my perspective, if AI allowed her to express her own ideas more clearly and succinctly, then as an instructor, I’m thrilled! Her thoughts are less obstructed by the barrier of writing, and I get to assess her ideas rather than her grammar and spelling. However, if AI is generating ideas for her that she didn’t have, we are getting into trouble both from an academic integrity standpoint and from an assessment validity standpoint. Academic integrity rules may require students to cite AI-generated content, but it’s not clear exactly how this should be done. Even if properly cited, how much AI content is too much? From an assessment validity standpoint, if a student has used AI to help generate their answer, then the assessment may not be a valid indicator of the student’s knowledge (or whatever we are trying to assess), which may suggest that the assessment itself is no longer useful. Alternatively, perhaps we should think more about what we want our students to DO rather than what we want them to know. If a student can do the work that we are preparing them to do (using whatever tools they would like), then does it matter that they’ve used AI? Moreover, if using AI alone is sufficient for the task, is that a task we should be preparing our students to do in the first place?

MD: Interesting point about assessment and whether it is accurate if AI facilitates a better answer. This is bread-and-butter I-O! Here’s an analogy: I was once responsible for developing the driving course for firefighters testing for promotion to “apparatus operator” (lots of duties, but driving the fire engine is one of them). In developing the test, we tested them on backing the engine into a space using a spotter. That was new—in previous tests, there had never been a spotter. And many people felt that having a spotter diluted the validity of the assessment. However, department policy stated that fire engines should never be backed without a spotter. So the “more stringent” test was actually less valid relative to the work to be done. In the same way, whether the use of an AI tool threatens the validity of our assessments of students really depends on what the actual work environment we’re trying to assess would be. Just as I was assessed in school on writing in cursive but my son was not, because it wasn’t seen as part of his future, it’s likely that there are lots of places where AI-facilitated work will be the norm. So how do we test people on that?

I love your point about what students should be able to do rather than what they should know. Learning objectives should have action verbs: At the end of this course, the student should be able to do X, Y, and Z. Sometimes knowing things is a step on the way to doing things, but it isn’t where we should stop.

One last point here: I have been opposed to grading undergrads on APA style for years. The vast, vast majority of students are not going into careers where they will need APA, and in their other classes they are likely being required to use MLA, Chicago style, or whatever. I think in psychology we grade on APA style a lot because it can be more objective than many of the other things we grade on in a paper. We already have lots of tools in place to help students cite references and create reference lists, and I always encourage students to use them. Evidence is emerging that some AI tools, when given writing prompts, will create realistic-sounding references that simply don’t exist. Students could always make things up, I suppose, but this seems like a new level of challenge in AI-related writing: The tool isn’t just helping achieve a better product but is actively working in a deceptive way that the student may not even be aware of.

LN: I love the fire engine story! That’s exactly right. AI is already a “spotter” for our students, albeit one that might actually increase the odds of an accident under certain circumstances (e.g., by inventing fake references for research papers). AI is also a spotter who, when ordered by the driver, can grab the back of the fire engine and park it by itself (as I wrote in a prior column, AI performed more than adequately on some of my multiple-choice and written exam questions). So, if using AI makes it difficult to assess how much students know, what is the solution? Where do we go from here? But also, getting out of the threat framing that we started with, how do we use AI to get to our max. classroom capacity?

MD: Hey, I see what you did there! (I have always loved the title of this column—readers should go back to the very first one years ago to see where it came from!)  I think the first step in any given class is to be clear about what the expectations and norms are, whatever they are. That’s true for anything, whether it is “Can we work together?” or “What are the parameters for writing a research proposal in this class?” or “Can we use laptops/phones in class?” or anything else. I can definitely see some cases where the instructor would establish that expectation related to AI, as noted above where colleagues have said “using AI is fine, but you need to disclose and describe its use in writing your papers.” I can also see other situations where establishing that expectation could be based on in-class discussion. I regularly find that my students have ideas on how to use different tools that I would never have thought of, and if I establish expectations a priori, without the benefit of those conversations, I could close off some really creative and appropriate ideas.

For my second thought, here’s another quick story. I very clearly remember the day that my colleague Brent Smith and I were sitting in the I-O computer room at the University of Maryland. We got our hands on a very early version of EQS, which was (one of?) the first software packages to offer a graphical user interface for structural equation modeling. We both said, “This is so cool!” and then we both said, “This is really scary.” It was cool because it would make SEM so much easier, and it was scary because, by making SEM so much easier, it would invite people who didn’t understand the technique to use it in ways that would ultimately be problematic, misleading, wrong, and so on. I think both our enthusiasm and our fears proved correct. In much the same way, I think AI in its various forms is really cool and really scary. It will allow us to do so much more than we can do now, so much faster than we can do now, with so much more potential for error and misunderstanding. It is going to move more quickly than we anticipate, with new applications emerging so fast that it will be very hard to keep up. We as instructors will have to revise what we do, but I hope we can do it in a way that asks “What can I do to teach in the world we now live in and will live in?” rather than only “How do I guard against ethical violations that are ever harder to find?” That won’t be easy, but it’s going to make the latter part of my career fun, I’m sure of that.

LN: Cheers to that! Thanks, Marcus!

Readers, as always, please email me with comments, feedback, complaints, or just to say hi!

Loren.Naidoo@csun.edu
