I'm a uni prof and here's what your kids are actually "learning" with AI
What happens when kids don't learn to think for themselves?
Hi new readers! You can subscribe to MR below and get our posts every week. Please “like” this post via the heart below and restack it on Notes if you get something out of it. It’s the best way to help others find our work.
I wrote this essay using my real human brain and real human fingers—and if that kind of effort is useful to you in these robot/bigtech/broligarch times, it would mean a lot if you’d support my work with a paid subscription. It only costs $5—the cost of a coffee! Almost as free as ChatGPT, but like way better.
Book Club Note: MR Summer Book Club is reading “Atmosphere” by Taylor Jenkins Reid and meeting on Zoom this Sunday, July 13th, at 7 pm Eastern. RSVP here to attend, and see details at the end of this post! Book Club is open to paid subscribers at any level.
Note: Hi! This is a bit different from our usual programming, but as a writer and writing prof I’m offering some (messy, human) thoughts on recent research on AI and student learning—specifically literacy and reading and writing education.
I offer this because 1) reading, writing and critical thinking are major parts of my personal nerd life, and I think that’s true for a lot of readers/parents/citizens here too who care about freedom of thought, freedom of speech, and democracy, and 2) I think a lot of experiences in our homes and classrooms are missing from these public conversations about AI, so I’ll bring you into my home/classroom. I’d love to hear your experiences too in the comments, if you’ll share them.
In our house, we have a Tale of Two Cities unfolding. My partner is a director of an undergraduate writing program at an urban state-funded college. I’m on the writing faculty at an exclusive “top tier” private university.
Between the two of us, at a conservative estimate I’d say we have read and graded about a bajillion college student essays. When we are grading, it is our custom to shout to each other from the other room when one of our students writes something fantastic, or when they write something especially…not fantastic.
You complained about your profs? They complained about you too. To their partners, probably, while on the couch surrounded by tons of pillows, consuming copious amounts of Diet Coke and Pop Corners chips (maybe that’s just me). Grading is one of the worst parts of the job, so we indulge in this ritual.
But, we also love celebrating when a student improves or grows. We (literally) cheer when a student puts in the work to master a difficult rhetorical skill, writes something moving, or otherwise does great work and gets something original and insightful down on the page. We shout to each other from the couch to the kitchen: “Ha! She did it!! Listen to this, this is so smart!!!” This, of course, is one of the best parts of the job.
But in the last 18 months or so, these exchanges have changed dramatically. For my partner, who teaches at a state-funded school that largely serves first-gen and non-traditional students, the comments have gone from the usual highs and lows that we share to comments that are 90% about his students’ use of ChatGPT and other AI.
He’s not commenting on his students’ writing anymore. He’s commenting on robot writing, or as it’s accurately known: AI slop—the bland, low-quality writing created by chatbots and AI.
This is definitely a new low for the job. Even as I wrote that sentence (which I wrote with my actual human brain and fingers!) it bums me out on an existential, what-dystopian-sh*t-are-we-living-through level.
My partner estimates that at least 50% of the time, he’s not engaging with his students and their ideas anymore. Instead, when he opens a document, he’s often faced with a wall of generic AI drivel. Sometimes this is so obvious and egregious that it’s comical.
For example: In a creative writing class, several of his students turned in the same story (which had glaring logical inconsistencies—one of the characters died and then somehow magically returned at the end of the story) because they had all entered the same assignment as the prompt and gotten the same story from ChatGPT. Apparently, they had not even bothered to read over “their” story before turning it in.
In his comp classes, research essays are littered with fake sources generated out of thin air by AI (known as hallucinated sources). My partner discovers this by clicking through the links on the Works Cited page and seeing that the sources don’t exist. But the real crushing thing, again, is that the students didn’t bother to click through and check for themselves—even if just to cheat well!
When confronted, some of his students didn’t realize that their sources weren’t real, because they were just copy-pasting the AI-generated results from Google’s Gemini that pop up at the top of a search.
What’s most alarming to both of us is that we suspect not all students are doing this purely out of laziness (though some certainly are). Some do it because they no longer know the difference between what a real source and citation look like and what fake ones look like.
Put another way: They don’t know anymore what’s fake, or when they are being duped.
There have been rising concerns that teachers are facing pressure to meet graduation rates at the same time that school funding is being cut, with the result that more students are pushed through the system and arrive at college unprepared. Obviously, AI stands to exacerbate this problem exponentially: it can allow students to “complete” their work having learned very little, while teachers are saddled with classes too large to give students individual attention.
It stands to reason, then, that we are seeing the first wave of students to arrive at college (and into adulthood) having relied on AI to do their work, and having learned very little about writing, research, and critical thinking. (See this alarming story about a Hartford student who graduated high school last year literally not knowing how to read or write, and is now suing the state).
This raises the question: How long until AI makes students functionally illiterate? It seems we may already be well on our way.
Whatever AI is meant to be a “tool” for, it’s not a tool for understanding, or learning, or even having a firm grasp of reality—based on our experience of reading a bazillion college papers. In fact, when it comes to literacy it’s leading to the opposite of those things, and I’m not convinced it was ever meant to do otherwise. I don’t think students are using AI “wrong.” I think this is what AI was designed to do.
I want to pause here and say, to be clear, that my point isn’t to blame students or teachers—quite the opposite. I’m saying that the systems that are already failing students and teachers, and the problems and inequities those systems create, seem to be exploited and made much worse by AI.
For example, I mentioned that I teach at an “elite” private university that’s one of the most expensive colleges in the country, and that my partner and I feel like we are experiencing a “Tale of Two Cities” as the impacts of AI unfold in our classrooms. I have some of the same complaints about AI in my classes too, but they are not nearly as severe as my partner’s.
Do my students use AI? Of course. Is it possible for me, or anyone, to know exactly how much they are using it? No. But when I’m reading their work, it still usually engages with the texts and ideas we are wrestling with in class, and it is often still striving to say something personal and interesting.
In other words, it’s not all just AI slop. And when they are using AI they apparently still know when it looks and sounds fake. They know the difference between real and fake.
In our class discussions, there are definitely blank stares and other states of unpreparedness, but most of the students, most of the time, can engage with and speak about the ideas critically. (Another teacher recently told me that some of his freshman students seem to have lost the ability to speak in class, because they apparently can’t answer a question anymore—even a question about their own experience—without consulting a chatbot. Which…ahhhhh!)
This is worrying because my students came from some of the best K-12 schools, and one thing that their lives of relative privilege seem to be buying them right now is the ability to still write, think for themselves, and discern what’s real from what’s not.
One of the beautiful aspects of my partner’s job at his public college is that it offers the privilege of education—the ability to learn and think for oneself—to a student population that’s largely first-gen and working class. In fact, one of the reasons my partner took the job is that his school prides itself on the reputation it has earned over the decades as a “passport to the middle class.” In the past, it led the nation in the competition for the prestigious Lehman Graduate Fellowships.
It is not situated in Manhattan—where my school is—but in the borough of Queens: the borough famed as the most diverse place on earth, with more nationalities packed into a few square city miles than anywhere else. The borough of AOC and Zohran Mamdani.
The work that my partner is seeing right now is not the kind of work that leads to a Lehman Fellowship, much less to free-thinkers and leaders like AOC and Mamdani.
I understand that not everyone needs to be a writer or a revolutionary leader, and that some students just need to get their accounting degrees and move on with their lives. But an education—and functional literacy—is a basic right protected by law in the U.S. Everyone deserves that, including the accountants.
But isn’t AI just another “tool”? And we just have to learn to use it properly?
When ChatGPT was released a couple of years ago, the notion that it’s just another “tool” quickly became the prevailing narrative. The reaction from a lot of universities was to first lose their collective shit and panic, and then quickly fall into this soothing narrative: AI is merely another tool and we don’t need to be afraid of it! We just have to help our students know how to use it the right way!
At first I, like a lot of educators, was like, “Suuure. Let’s see how this goes.”
Now I think a lot of us would like to call bullshit.
Given the mountain of evidence of what our students are bypassing and not learning, it appears that AI is a “harmless tool” for our kids’ learning in the same way that social media is “harmless entertainment” for our kids.
Casting the technology as harmless and inevitable serves our tech overlords to the tune of trillions of dollars, at the expense of our children.
I think more of us are confident in calling this out now that damning evidence about the effects of AI use on learning is starting to roll out.
Last week, MIT released a study on ChatGPT’s impact on students’ brains. Researchers used an EEG to record the writers’ brain activity and found that, compared to control groups, ChatGPT users had the lowest brain engagement and “consistently underperformed at neural, linguistic, and behavioral levels.”
In other words, it’s harming their development and actively making them less smart.
It also found that when students use AI “to help with certain tasks like idea generation” (something it has been touted as a “tool” for), users very quickly shift to using it to generate all their work (“copy-paste”) and completely bypass the writing and learning process. Which is surprising to no one who has ever met a teenager, or anyone who has ever had to write a paper or do homework!
Microsoft released similar research on AI and cognitive development in April, finding that AI use “deteriorates the quality of human thought.” They found that AI use results in a loss of cognitive faculties, and that regular AI use “deprives the user of the routine opportunities to practice their judgment and strengthen their cognitive musculature, leaving them atrophied and unprepared."
This is pretty much exactly what educators are seeing in the classroom.
"Specifically, higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking." -Microsoft study
Which is to say—AI is not harmless at all, and attempts to cast it as such are false. I have come to believe that our children and young people are now guinea pigs for the capitalist experiment of AI the same way that they were guinea pigs for the experiment of unregulated social media a decade ago.
This quote from one of the MIT researchers haunts me:
“What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, ‘let’s do GPT kindergarten.’ I think that would be absolutely bad and detrimental,” she says. “Developing brains are at the highest risk.”
Yikes!!! If we find AI-assisted high school education worrying, wait until the AI kindergartners hit adulthood.
Meanwhile, Elon Musk’s AI chatbot, Grok, was shut down this week after it was tweaked to basically allow for more right-wing views and went on a racist tirade, calling itself “MechaHitler” and answering prompts in what sounded like the voice of Joe Rogan crossed with Charlie Kirk and, well…Hitler. So no, chatbots are not harmless, and yes, they can clearly be manipulated and in turn manipulate users.
All of this gives me the same vibes as when Mark Zuckerberg testified before Congress that Facebook/Meta was just another form of media—just a tool, guys! He’s really sorry about all the bad things happening, but please stop over-reacting like social media is to blame, you silly luddites!
Then a whistleblower from inside his own company leaked explosive documents detailing that the company knew, in acute detail from its own research, that its products were causing widespread harm. Facebook knowingly allowed itself to be misused to sow misinformation and degrade democracy during the 2016 presidential election, and Facebook also knew that its algorithms and techniques were fueling an epidemic of teen depression and anxiety and a rise in suicides. But Facebook/Meta not only continued to use our kids as an experiment, it started developing a program to addict, I mean—attract—kids under the age of 13.
Big Tech knew it was doing widespread harm, and it buried it and didn’t change course because, well, it made them a LOT of money. You understand, right guys? It’s just business!
Now it seems they want to sell us the same story on AI.
Is AI just an “inevitable” part of life now? Is AI writing as good as human writing anyway?
No, AI use—especially in schools—is not completely “inevitable.” I don’t think it’s completely avoidable, either, but that’s not the point. The point is that if we can regulate and pivot to help our kids avoid the harm of social media, we can do it with AI, too.
Where does the narrative that it’s “inevitable” come from, and who does it serve? It’s part of the manipulation of Big Tech. As author Vauhini Vara says: “Sam Altman and Mark Zuckerberg and the other CEOs of those big technology companies describe a future that serves their interests as if that is the only possible future.”
Remember how tech executives famously did not allow their children to use social media, or get smart phones when they were young, even as they peddled them to our kids—because they knew the real harm?
I wonder if a similar thing may be happening with AI. Look, maybe some educators will learn how to “integrate AI responsibly” into their writing curriculum. But if the elite are any indicator as they have been in the past, that may be a red flag.
At the university where I teach, where the global elite send their children, there’s no big movement to integrate AI into writing and critical thinking courses. Instead, some instructors have elected to have new equipment delivered to the classrooms. They are crates of lined notebooks.
We are learning and re-learning to write in them with our crabbed handwriting, and weakened-from-typing fingers. The room has the smell of fresh paper, and there’s the whisper of pencils moving over crisp parchment. The penmanship is often bad—but the writing is human, and between the lines are only human thoughts—which is to say—actual thoughts.
I could write a whole section on Writing Studies pedagogy and the habits of mind and the cognitive and intellectual goals and practices of those courses (in short, no, they are not merely about grammar and syntax; in fact, that’s not even taught in departments like mine). I’ll just say—the emphasis in these courses is on understanding public and intellectual discourse, and learning what your contributions—your experience, your voice, your point of view—could add to it.
Chatbots cannot do any of that because they do not have bodies and they do not have experiences, and they cannot say anything new. They are the most advanced predictive text devices ever made—which means they can make a facsimile of human thoughts and experiences based on a bazillion documents they are drawing from as data points. But data points are not experience, and predictive text is not inspiring, and cannot create something novel. By definition, AI is drawing only from what’s already out there (which, by the way, is mostly the viewpoints and experiences of straight white men! But I digress, that’s a separate essay). 1
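For the fellow nerds who want to see what “predictive text” means under the hood, here is a deliberately tiny, hypothetical sketch of my own (a toy example, not anything from the studies above, and not how any particular chatbot is actually built): a little program that counts which word most often follows which in some training text, and then “writes” by always picking the statistically most likely next word. Real chatbots swap the counting for enormous neural networks trained on a bazillion documents, but the underlying move is the same: predict the next word from what someone else has already written.

```python
# Toy "predictive text device": count which word most often follows each word
# in some training text, then "write" by always predicting that next word.
# (A deliberate caricature; the point is that prediction draws only on the
# patterns already present in the training text, not on any lived experience.)
from collections import Counter, defaultdict

training_text = (
    "the students wrote the essay and the students read the essay "
    "and the professor graded the essay"
)

# For each word, count the words that follow it.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word that most often followed `word` in the training text."""
    if word not in follows:
        return "..."  # never seen this word before, nothing to predict
    return follows[word].most_common(1)[0][0]

# Generate a "sentence" by repeatedly predicting the next word.
word, output = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    output.append(word)

print(" ".join(output))  # prints: "the essay and the essay and the"
```

Run it and you get a fluent-ish recombination of its training sentence, which is AI slop in miniature: it sounds vaguely like writing, it is sourced entirely from what it was fed, and there is no experience anywhere behind it.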
In my classroom, students bite their lips trying to come up with the next word—not a robot prediction of the next most likely word to follow that phrase based on a meaningless ocean of data—but the word that best describes the unique experience they are having, right now, in their unique human bodies, fired by electricity not from an outlet, but from their beating, human hearts.
Maybe that sounds corny. Maybe being “human” is becoming corny—that’s what the current Big Tech-Fascist-Oligarchy movement’s message is. And if that’s the case, and writing makes us too human—I say let’s write our hearts out, and teach our kids to do the same.
What’s your experience with your kids, your work, your teaching, your own writing, in regards to AI? I would love to hear more from people’s real-life experiences, please feel free to share.
Would you like more writing on AI and education? Because I could definitely nerd out more.
If you’re loving MR, give us a like and leave a comment so new readers can find us!
For the price of a coffee, you can be part of something in the world that’s still real and not owned by a tech bro-oligarch! Win!
MR Reading Women Book Club is reading “Atmosphere” by Taylor Jenkins Reid. We will meet on Zoom Sunday, July 13th at 7 pm Eastern—you need to register here to attend and get the Zoom link. Looking forward! Book club is available to paid subscribers at any level.
If you can’t make the Zoom call and would like to do an online book club in post form, we can do that too. Let me know here.
I could write a whole section about how no one wants to read AI writing, so why would we teach it? The way I know that no one wants to read it is that I don’t know anyone who reads it except for ME and other profs, and we get paid to read it.
My other point is that if people did want to read AI writing, I would just put in a prompt every single day, churn out something like this, and let the robots do all the work for this newsletter while I sit back eating Pop Corners! But so far no one wants to read that, much less pay for it. Also, I had ChatGPT write a version of this essay just to see what I would get—maybe I’ll show part of it in an upcoming essay! Suffice to say, it did not produce a publishable piece, and I still spent hours writing this one.