LLM is the New Calculator
Integrating ChatGPT, Bard, Duet, and other LLMs into the Classroom
This fall I am teaching a Data Management course as part of Saint Louis University’s M.S. in Health Data Science program. Over the summer, two other professors and I had a brainstorming session where we tried to think through what seemed like a hard problem in academia: the proliferation of LLMs like ChatGPT that can take relatively simple prompts and turn them into code, explanations of results, and even the full paper write-ups that typically make up assignments in a number of classes.
Plagiarism in academia is a longstanding issue. Search engine results, Stack Overflow posts, and even sites billed as “homework aids” have long supplied code solutions that get copied and pasted into assignments, and passing off lightly modified papers or articles as original work is nothing new. But tools like Duet (shown above), ChatGPT, and Bard make it easier than ever to produce higher-quality, seemingly original work with very little effort.
However, it is only in academia that this is perceived as a problem. In the corporate world, if an LLM could reduce the work needed to finish a project by 90%, we would celebrate it as a productivity tool. So the question my fellow professors and I asked ourselves was, “Is there a way to embrace LLMs and search results while still upholding academic integrity and ensuring actual learning is taking place?”
This is not unlike the advent of the cheap, widely available calculator in math education. If you’re a high school math teacher and you send students home with 100 simple math problems as homework, how would you know if they used a calculator to do them? You probably wouldn’t. So is math education ruined? Of course not. You do, however, have to adapt. You simply assume every student has access to a calculator (and today, probably a graphing calculator with complex functions built in). And if that’s the case, here’s the beautiful part of the brave new world: you get to ask harder and, I would argue, more meaningful questions.
Calculators did not ruin math education; they made it infinitely better. They allow us to pose challenges that force students to demonstrate conceptual understanding rather than computational prowess. While there is some real-world value in knowing the times table (or whatever they call it nowadays), there is far more value in being able to apply complex math concepts to an engineering problem. It’s not that we weren’t doing good engineering work before calculators, but with them, far more people do far more of it in dramatically shorter timeframes.
So what if we embrace LLMs and their capabilities in the same way? What does this look like?
Well, for starters (and with sincere condolences to my students this semester) it means you can ask a lot more of learners. If you can construct the bones of a 2-page paper on any given topic in 30 seconds or less, then I am going to ask you to do so on a wider range of more complex issues, and a lot more frequently than I otherwise would. If you can produce code that solves a simple problem with no effort, I’m going to ask you to solve significantly harder problems. I’m going to assign you real-world, open-ended tasks that will force you to research and experiment. I’m going to assume you’re using LLMs, and I’m going to not only allow it but enthusiastically embrace it!

In Data Management, for example, I’m not only going to ask you to solve a coding challenge, I’m going to give you a complex scenario that forces you to explore a challenging dataset and provide real-world value, using code as the tool to explore. I don’t care if that code is generated by an LLM or something you found on Stack Overflow; as long as you’re following the instructions (and assuming I’ve crafted them well enough), you’re going to have to put in significant effort to figure out how to apply what you gained from those sources in a meaningful way to your own data. And I get to assume you can do a lot more of this.
Will this completely deter determined cheaters? Of course not. Someone will undoubtedly try to turn in LLM-generated content like my example above as their assignment, and they will receive a 0 when they do. (Work was too generic and displayed no original thinking? Sorry, no points for you…) I will be merciless toward anything that even smells like plagiarized work, and I have LLMs and other tools at my disposal to help me detect it. I will expect my students to bring *real* value and *real* original thought and research to every assignment. They can use LLMs as idea generators, starter material, and even code solutions. But in return, I expect high-quality, unique results from each and every one of them, for each and every assignment. It’s going to be a beautiful thing. Or a terrible one, if you aren’t used to performing at that level.
Welcome to the brave new world. LLMs are the new calculator, and I can’t wait to see the creativity and productivity they are going to unleash.