Large Language Models and Higher Education
A CS Teacher's Perspective on a Bad Mix Incentivized by Corporate Funding
After years of buzz, I am finally compelled to throw an “AI” essay of my own into the fray. But first, a couple of ground rules. “Artificial intelligence” is an intentionally vague moniker, coined back in 1956 to secure a grant by unifying disparate interests in computing. Since this often makes “AI” discourse unhelpfully general, I want to be more specific up front: this essay focuses on large language models (LLMs). I will also assume in my discussion that large language models are as accurate and objective as possible (though there is well-documented evidence to the contrary). This will allow me to focus on issues occurring at the interface of higher education and corporate capture.
As a CS/Math professor at a small college, I've had the rare experience of teaching not only artificial intelligence itself, but also writing. I also grade all of my own assignments, so I am on the front lines observing how students have adapted to these tools. Since ChatGPT was released, my basic observation has been simple and unsurprising: in classes where the outputs of LLMs are the final product of what’s supposed to be an iterative process central to the class’s learning goals (e.g. writing and coding), unfettered access to them short-circuits important critical thinking development. I have seen this both unfairly elevate low-effort students and unfairly penalize hard-working, risk-taking students. It undermines the core mission of higher ed, which is to challenge students through a productive struggle to grow and change, and it can only benefit corporations in the long run.
Since code is the central output of the vast majority of work I assign in my courses, I have always prohibited the use of LLMs in my classes. Unlike spell check (a false equivalence drawn by corporate types and academics alike), LLMs take the place of critical thinking in an iterative process. Perhaps they are of use later, once students already know how to do the thinking themselves, but I've always seen it as my job to help students develop their own style and to create a safe space to grow away from LLMs at first, much as many K-12 schools disallow phones during the day.
When my students violate my policy, they are not using the tools critically and incrementally, as utopian visions would suggest; rather, they are pasting entire assignments into the models at once and turning in the outputs, usually when they are desperate. In my CS classes, I'll get perfect submissions with advanced features I never taught from students at the bottom of the class who start working the night an assignment is due. More disturbingly, I'll sometimes get submissions that are perfect save for 4-5 lines of code with syntax errors. This is a telltale sign of LLM use: live coding happens incrementally, and a perfect product with a few syntax errors is akin to a perfectly constructed building with chunks of the foundation missing in key places. It sends the wrong message and undermines one of my main learning goals: teaching incremental development and thought.
It's all too easy for students to slip into these bad habits given corporate practices. Thanks to current investment priorities, startups churn out endless frictionless systems that enable students to cheat, and even well-established companies like Microsoft and Google integrate LLMs into all of their products by default, with popups suggesting their use on cloud services while students are working (for instance, I have seen Copilot popups on the computers of even my most intrepid students who attend office hours).
It’s also difficult to figure out how to handle this in the classroom. Even when there is nearly conclusive evidence of LLM use, the models' stochastic nature makes it difficult to prove, and I do not see myself as a cop. Sometimes I'll have a discussion with a student, they will admit to using LLMs, and I will have them redo the assignment. But sometimes I have to let it slide, as the stakes of a false accusation are high given the trust we build in the classroom. The current state of affairs and its incentives make me question my effectiveness in my role, and it is particularly heartbreaking to dock points on a flawed but honest submission from a student who has been coming to my office hours week after week, learning and growing as intended.
There are some things we can do as instructors to preempt this: we can have students do more work in class where we can engage with them directly, split assignments into smaller pieces with early feedback, give more paper exams, and give oral exams. But overall we are in an arms race, and educators who attempt to cultivate growth environments are perpetually on the back foot. That is a difficult position when we are already overburdened.
Why is this happening, and why are so many educators and administrators falling for the notion that we must integrate these tools into our teaching, no matter the subject? I believe it's all about corporate incentives. It is in the interest of companies like OpenAI to get students hooked, not only to cultivate lifelong users, but also because many of those funding the “AI bubble” share an open disdain for the higher education project. Being able to prove that higher ed “doesn’t work” will not only vindicate them, it will also allow them to lay more people off, and they will be justified in doing so, since students will not have developed the skills to compete with these tools! Given this, I challenge educators and administrators to keep this stance in mind when dealing with these companies (unlike the University of Michigan, for instance), and not to compromise our mission of shepherding students through their own learning process.
In the meantime, I am grateful to my institution, and to other schools, for supporting faculty who challenge corporate dogma and hype on this issue and for letting faculty draw their own boundaries. And I hope my example, as a CS teacher who understands these technologies well but who draws firm boundaries, offers a counterexample to the idea that everyone who disallows LLMs in the classroom is simply “anti-technology” or “burying their heads in the sand.”
Update 6/30/2025
I got an intriguing email from someone I know with a “rebuttal” (in their words) to this post. They had some good points, and they made me realize I need to clarify that I’m not saying LLMs have no place in any class in higher ed, though I can see how it may have come off that way. This is why I was careful to highlight classes where LLM output replaces the critical thinking behind the primary learning goal of the class, like intro computer science or intro writing, where the goal is to find one’s own voice and to cultivate one’s own style. By contrast, I think it would be fine to use an LLM in a statistics course, for example, to generate R code, since the primary focus there is not the art of coding. Even in that case, I still have concerns about the corporate capture and environmental costs of these tools, though perhaps models with open weights and the continual improvement of the technology will eventually address both concerns.
To throw some uncertainty into this, though, high-profile CS schools like CMU and Princeton are embracing LLMs even in intro CS courses. Also, a colleague at Ursinus had a very cool idea: intro assignments in which students develop tests for the outputs of LLMs to make sure they work (see the sketch below), which definitely does address an important learning goal of the course even with LLMs in the mix. I have also heard of LLMs being useful as an “around the clock tutor,” though I worry a bit about their accuracy, and also about students not pushing themselves to engage with their peers and with faculty as much as they should.
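To make my colleague's idea concrete, here is a minimal sketch in Python of what such an assignment might look like. Everything in it is invented for illustration: the median function stands in for whatever code an LLM actually produces, and its deliberate bug stands in for the subtle errors students would be asked to catch with their tests.

    # A student is handed a candidate implementation (imagine it came from an
    # LLM) and must write tests that pin down the correct behavior.

    def median(values):
        """Candidate implementation of the median (hypothetical LLM output)."""
        ordered = sorted(values)
        # Subtle bug: this is only correct for odd-length lists.
        return ordered[len(ordered) // 2]

    # The student's job is to specify correct behavior, not to write the code.
    def test_odd_length():
        assert median([3, 1, 2]) == 2

    def test_even_length():
        # The median of [1, 2, 3, 4] is 2.5, but the candidate returns 3.
        # Catching this is exactly the learning goal of the exercise.
        assert median([1, 2, 3, 4]) == 2.5

    if __name__ == "__main__":
        test_odd_length()
        try:
            test_even_length()
            print("All tests passed.")
        except AssertionError:
            print("Even-length test failed; the candidate implementation is wrong.")

The point is that writing the even-length test requires the student to reason about edge cases themselves, which is exactly the critical thinking such an assignment is meant to preserve even with an LLM in the loop.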
So, as usual, it is quite confusing, and all of my opinions are in flux. I just want to make sure we slow down and have a careful, honest discussion about how it’s going. Luckily, it sounds like many people are doing that.