This essay is hopefully just the beginning of readers and writers talking about the impacts of AI. This part is crucial — “I’m saying that the systems that we have that are failing students and teachers already, and the problems and inequities that they create—seem to be being exploited and made much worse by AI.”
Students use AI because it’s there — a thing no one asked for!
Thx Allison! Nobody asked for this 😅
Thanks for taking the time and effort to write such an awesome piece on AI. Every single point you made resonated deeply. One thing I was wondering was whether you are allowed to give oral exams at your universities? Many decades ago when I did a year abroad in Denmark, I had both lengthy in person written exams and oral exams.
Thx Dina and thx for reading! This is such a good question! Another low-tech option just sitting there. There’s lots of talk about going “analog” in my dept but I haven’t heard of this one yet. Great idea tho :)
In my son's senior HS AP gov class, if you fail a test, or score below 80 and want to retake it for a chance at a better score, all retakes are oral exams. This is at a massive public HS (3800+ students). The teacher, a former attorney, believes that to know if a student truly understands the material, he needs to engage them 1:1 in conversation about it, so if you want a retake of the paper exam, you do it orally. I love this idea, and my son used it as an option on more than one occasion (and always increased his scores as a result).
Wow! Love it. You’d have to hire more teachers to scale this bc meeting w each student is so time intensive, but all the research says this 1:1 time has amazing outcomes.
Brilliant! In my American education, I only ever did oral exams in foreign language classes in K-12, and then they only came up again at the college level in thesis defense and dissertation defense sessions. It seems like a very effective exam method for certain subjects!!!
You're right! If we invested in this kind of small group 1:1 learning in schools it would be a game changer.
Excellent article! I just said to my boss this morning that they knew the deleterious effects of social media on kids and now they are shamelessly doing the same damn thing with AI. All to increase the already obscene levels of wealth hoarded by the broligarch class.
We are on the same page! 😅
I'm not going to lie: I hadn't thought of using notebooks as a solution to AI in schools 😅 I am not an educator, but I tend to think of complex solutions before simple ones, hence my dismay at the simplicity of "just" going back to notebooks.
Loved this piece. Thank you for the time and energy you put into it 🙏
Thx so much Mike, and thanks for taking the time to read :)
I know, right? I just started using notebooks myself! There is a lot of talk about going “analog” in various ways. Another reader here just brought up oral exams. Duh!
I saw that comment about oral exams and loved it too! I'm going to get a notebook and journal in it to see how I like it. I've always typed things out since computers came along but I might want to hop on this trend too! Only one way to find out!
Same! Obviously I have to write on a keyboard here lol but I’m trying notebooks in class along w my students :)
I am obsessed with the idea of requiring students to turn in their final essay in handwritten form 🤩 notebooks for all!!!
There are people bringing back the old blue books. Do you remember those??
YAAAAAAAS. I mean obviously word processors have their use and their place, but writing things out by hand uses a totally different part of the brain. Plus, blue book handwritten tests would force students relying on AI slop to at least read their own assignment a little bit before submitting it.
VERY true. Students really don't read prompts/instructions closely anymore, either!
It is refreshing to see a different perspective. This is exactly why I enjoy being here on Substack. Loved reading this and it gave me lots of food for thought. As a scientist in Silicon Valley, I constantly hear, "Use AI for everything." But when it comes to learning and critical thinking, we shouldn't outsource that to AI.
Thx and thx for reading! Would be interested to hear what the folx in Silicon Valley are using AI for when you say “everything.”
I’m not a techie but I used it to plan my family vacation and honestly it was great!
Planning the family vacation with AI is a great idea. At home, I use it mainly for offloading repetitive tasks: meal planning, grocery shopping, booking reservations, organizing calendar activities, etc. Recently I used it to brainstorm ideas and organize activities for my daughter's birthday party, and it was pretty good actually.
Regarding Silicon Valley and what I mean by everywhere, here are some examples:
* Productivity: Definitely for enhancing productivity and coding. Developers often use AI to write, debug, and review code. PMs use it to summarize meetings, organize information, and easily find their notes/emails.
* Startups: Tons of founders are using AI to build MVPs or even automate entire parts of their businesses (customer service bots, internal tools, etc.).
* Transportation: Self-driving cars are in full operation in San Francisco and now elsewhere in the Bay Area (Mountain View, Palo Alto, etc.).
* Healthcare: AI is being used for faster and more accurate diagnostics (analyzing medical images like X-rays or MRIs).
But we also use AI in daily life without even realizing it. This includes spam filters in email, personalized content recommendations on streaming services, smart home devices that learn our preferences (like thermostats), and even the autocorrect on our phones.
Respectfully, you give a lot of examples that are not correctly classified as AI. When people talk about generative AI, they are referring to LLMs, large language models that build responses based on huge databases of text or images. Not everything that uses an algorithm (personalized content recommendations, spam filters, thermostats) is AI.
Everything that I said is an example of AI. The question was about the use of AI and not particularly LLMs in Silicon Valley.
Not all AI is LLMs. LLMs are a specific type of AI, particularly those designed for natural language processing tasks like text generation and understanding. AI is a broader field encompassing many different types of models, while LLMs are a subset focused on language-based applications.
This is great, Lane. I’m actually going to print and share this with my students and incorporate it into our discussion on AI use. There’s a lot in here I bet they don’t know (should know), and will be surprised to learn!
Thanks so much, Erin! I'm really flattered that you'd share it with your students. I think I'm going to share some of it with mine too, which feels kinda funny, but also I agree they need to know this. It's their education and their minds at stake!
Would be v interested to hear your experiences w your students if you're up for it :)
Already we are seeing a lack of critical thinking skills and increased reliance on not just AI, but the ask-and-answered features of Google. It's a bit terrifying to think about how much worse it can get.
Plus they are engaging in destructive scanning of print books to train the chatbots!
Ahhhhh
Author @carley Moore here has written about having her books used to train AI. So gross! It’s also training itself on what I write here 🤪
Powerful. Thank you for this.
Thx and thank you for reading, Katie!
Lane! Thank you for writing the truth here and giving me even more courage to resist! So much brilliance and honesty. Much needed too!!
Thank you Carley!! A brilliant teacher and writer.
🩷🩷🩷🩷 I have overposted your essay lol but it’s so good!
Thx babe!! I’ll take all the love I can get ❤️❤️
Great piece! *typo alert* Search for the word "sew" - should be "sow". I teach creative writing to children and I don't have to worry about citations, so I don't quite have the same concerns as you. But I did have a student turn in something that no child ever would have written. It was so robotic and generalized and uninteresting. If nothing else, kids' brains are amazingly weird. This child denied that he had used AI and his parents stood behind him, but I hope they both learned a lesson: he can produce more with his own weird brain than an AI ever could with all its so-called access to knowledge.
I've also been writing about AI, but from the point of view of what makes work worthwhile. Typing a prompt into a chatbot does not make your work worthwhile in any way. It devalues human thoughts, ideas, and values. It presupposes that the most obvious next word is the best next word, which is absolutely untrue. As any writer knows, the best next word is the one that no one expected to read.
Here's my latest: https://babblery.substack.com/p/i-dont-want-my-work-to-be-easier
Thx!! We humans tend to make typos, so I appreciate the heads up :)
I'm glad that you are, indeed, a real human! 😺
So much this: "I've also been writing about AI, but from the point of view of what makes work worthwhile. Typing a prompt into a chatbot does not make your work worthwhile in any way. It devalues human thoughts, ideas, and values. It presupposes that the most obvious next word is the best next word, which is absolutely untrue. As any writer knows, the best next word is the one that no one expected to read."
And I have not thought about how it devalues work! Such a good point. Would love to hear more what you're thinking along these lines, if you're up for it.
I've been writing a series of thoughts on how AI degrades our work and creativity. So far they're here:
https://babblery.substack.com/p/i-dont-want-to-work-faster
https://babblery.substack.com/p/i-dont-want-my-work-to-be-easier
(coming tomorrow) https://open.substack.com/pub/kidslearn/p/ai-and-the-modern-student
The way I look at it, the problem here is not just the AI. These tools are tools, and as I tell my students, you can use a hammer to build a nice box or to hit someone over the head. It's your choice how you use the hammer.
But that's not completely true: We don't make choices in a vacuum. Right now, we are literally having AI shoved in our faces. I don't get an opt-out for AI from Google (search). The rest of the tools I use, such as Docs, Word, Adobe products, etc., keep shoving it at me and making me turn it down over and over.
On top of that, we are facing a cultural onslaught that I'm capable of resisting. But I know a lot of people aren't. For example: I am self-publishing a children's novel soon, and I could have used AI to design the cover image for free. But instead, I'm paying a real artist to do it. I am doing that because, first of all, I have the means to pay her, and second, I deeply value artistic work. Other people who need, for example, a piece of art for a project they're doing don't have the financial means and/or don't have the strength of values behind their decision. And then that AI keeps being shoved in their faces.
I personally think that any creative person who would try to fob off AI output as their own creative work is just despicable. The value of creative work is almost completely in the humanity and the effort, not in the product. Our culture, however, keeps telling us it's the product. And that suddenly "anyone can be an artist" or "anyone can be a writer." Now, I happen to agree with that, but I don't think that using AI for creative work makes you an artist—it makes you the very opposite of an artist because you shirked the part of the job that actually IS the job. The product is not the job; the process is the job.
I can look at my walls and prove that statement, because alongside art that I paid for and some posters of famous artists' work, I display art that my children made, art from long-dead relatives, and art from people I know who are not as skilled as a famous artist. But to me, all those pieces of art (except the posters—they're just a reminder of art) occupy the same level of respect on my walls. Because it's the person and the process that make creative output worthy.
When it comes to writing, I understand that there are some people who need to write an email, say, for a job application and can't quite get the words to sound right. They're not writers. So fine, in that case I guess AI could be a useful tool. And in this modern world, one could argue that since they can also use AI to do their writing at work, it's not cheating to use it for the email. But still, all of us writers know that the process of writing makes us better writers, more thoughtful people, more able to understand our own opinions and the opinions of others.
So yes, I believe that on every level, use of AI will degrade our abilities and creativity. Just as penmanship has suffered since the typewriter and memorization has suffered since we stopped depending on our memories to recall information, so will we suffer degradations of skills with AI. But I believe it will be more. I believe many of us are going to lose confidence in our own limited abilities to get through this life, and our own infinite capacity to expand those abilities. And if AI steals that from us, it steals the essence of what it is to be a productive human.
I guess I have a few thoughts.... 😺
AI definitely (dramatically) accelerated all the things that I hate about college, but I think the underlying issue is that we’ve collectively broken the University.
Accreditation attempts to standardize and commodify. Focus on job preparation has removed the soul of education. Well-intentioned efforts to get more people into college have turned it into a requirement that many students see as an obstacle to be beaten (or skipped) rather than a challenge that’s valuable in its own right.
I was ready to leave academia by the time ChatGPT came out. By now it’s unbearable. But without ChatGPT we would have gotten to almost the same place within a generation—except the first-gen students might have the least access to the tools to deny their own education (i.e. a Chegg membership and someone to write their essays for them).
Thx for commenting! Would be interested in hearing more about what you teach/study and what shifts have felt most damaging, if you’re up for it.
I've been teaching economics at a regional state school for about a decade.
The two big forces are students and administrators. Admin is too distracted with high-level things (enrollment, compliance, etc.) to know what's happening on the ground. Students are less prepared than ever just as the stakes seem more impossibly high than ever--resulting in a heady mix of anxiety and apathy.
There's a lot to say, so I'll just dump out an incomplete and unordered list of some of the big things that come to mind:
- Culture. Our provost once announced that we should try to encourage "a culture of assessment" (notably *not* "a culture of learning"). Despite having lots of evidence of things getting worse, there's nothing we can do about it because administrators want to make sure the curriculum is adequately "flexible" so transfer students can graduate quickly. (There's a lot more to say about the culture outside of the academy, but if you look too closely at that problem it morphs into some sort of H.P. Lovecraft monster, so I'll leave that alone.)
- Education vs. Training. Everything has to be "practical" and valuable to employers. So long humanities, I guess students are going to have to find depth, humanity, and perspective from TikTok and network news. There's no room for genuinely liberating education when all the oxygen in the room is taken up by training students to be office drones. I once taught a class on Agent Based Modeling which, I think, provides a way of thinking about the world that could dramatically expand students' ability to understand complex systems (you know, like society!). But what my chair wanted to know is what sort of job skills students would pick up. Wisdom takes a back seat to "measurable learning outcomes".
- Commodification. Three units of knowledge (i.e. 3 credits) must be clearly defined in advance. Once the semester is over and students have purged their memory, we assume they "know" Algebra I, or Public Speaking, or whatever. But there isn't time to reinforce that knowledge in the next class because they're on to a new bundle of knowledge. This structure and bureaucratization is a double-edged sword--but I think it hurts more than it helps once you scale it up too far. I think this is especially true for the many undiagnosed-ADHD students whose brains bristle at the bureaucracy of school but *could* be learning things so much more deeply than their peers in the right environment. I see a lot of these students because the most ambitious non-ADHD students tend to end up in "better" schools.
- Primacy of signaling. At the end of the day (at least in the part of the industry I've been living in) higher ed is primarily in the business of ritual human sacrifice. I earned tenure by demonstrating that I've suffered through committee work and academic publishing, not by actually accomplishing anything. *Some* signaling is inevitable, particularly when you're in a system that involves more people than Dunbar's number. But when the signal matters more than what it's supposed to represent (i.e. hard work that was worth doing) we end up with lots of make-work for its own sake.
- Common pool problems. It's easier (especially when you're already overworked) to pass a student who is *almost* there--then they're someone else's problem. It's easier to give too many degrees than too few such that the signaling value is diminished. Grade inflation and credential inflation result in students feeling like they've got to go get a masters degree to distinguish themselves from everyone else--not because they want to go that much deeper into some topic. There are so many steps where some decision maker is incentivized to make things slightly worse in the long run. Software engineers use the term "technical debt" for similar situations--not only are we racking up $bajillions in student loans (resulting in students who are financially trapped and thus unable to take risks and innovate), but there's this sort of psychic debt that erodes the epistemological foundations of our culture.
- Waste of human potential. Students' neural plasticity slows right down around age 25 (mind you, I'm not a neuroscientist, so check my work). We take 4+ years of prime learning time and have students use that time learning how to navigate a bureaucracy while pretending to learn economics (the assessments show that many of them don't even pretend any more). There's so much learning-by-doing that students *could* be doing, but they've got to use their scarce time temporarily learning content they'll forget immediately after the final exam. Similar problems plague faculty--we spend so much time documenting all our hard work that we don't have much left to actually do hard work. And we certainly don't have space to take the risk of doing anything properly new.
- Support. The general public doesn't care enough to support state schools. There's only so much we can do when budgets don't allow instructors enough time or support (e.g. tutors, TAs, functioning technology, etc.) to focus on students.
If I felt like these were just issues of mismanagement, I'd be looking to go to a different school. But I don't think my school is particularly bad relative to the industry as a whole. I see these as systemic issues that are likely to get worse across the industry over the next decade.
Do you think that degrading critical thinking and cognition is the deep-down point of AI? I sound like a conspiracy theorist. But really! Look at what social media has done. And now this? In another decade… scary to consider. Maybe it’s what ‘they’ want… 🤔
I mean, you’re definitely not the first to ask this question! Under-educated population, easily manipulated by chatbots? Kinda perfect for a fascist agenda…
Check out writer Meg Conley; she has a lot of smart things to say along these lines!
Do you think AI can be used ethically, and if so, how?
Hmmm, I guess I’m less interested in/qualified to speak to ethics, and more concerned about the effects on the user re: critical thinking/education. That’s kind of an ethics question, but probably not what you mean?
What do you think? I will say we haven’t even touched on the ways that AI contributes to climate/environmental issues. Which is def a question of ethics!
I was getting at the effects of AI on the user. We are meant to be teaching students how to use AI ethically in their completion of coursework: not using it to plagiarise, but to generate ideas and critically engage with those generated responses. That was the context for my question.
Thx for clarifying. I think the substance of my piece and the MIT research is to point out that what you describe is not how AI is used, and it’s not how it was designed to be used. So I'm arguing that the notion of using AI the way you describe here is a farce.
Also tho, I’m glad you raised this bc I think the framing you point out here is important. Namely, some people invented a cheating machine to do students’ homework for them in order to make $$, and now they want the schools to figure out how to teach it…and make it not a cheating machine.
It’s silly. We should be asking the creators to justify the ethics/purpose of the cheating machines. No one asked for it!
That’s a bit simplified but I think it’s pretty close to reality!
Yes, I tend to agree. The genie is out of the proverbial bottle, and there is no going back. AI is here to stay, and it may have been conceived as a cheating machine, but the problem remains: how do we mitigate its negative effects? Certainly, it requires some regulation.
I read and think and know a lot about AI, and I agree with you completely. But I also know that, as the focus of most of the investment money and talent in the US now, it's not going anywhere. People are using it now as a "tool" to do things the same way as they did them before, so we need to ask them to do different things. Writing essays and papers makes as little sense now as learning the multiplication tables, so how can we teach kids what's really important using different methods? How can we help them think and act independently in the face of technology designed to co-opt not just their minds, but their hearts and souls?
I agree that it’s not going anywhere. Say more about “writing essays makes little sense now” based on your own expertise. I find it interesting when people say that, given that here we are…you as a reader and me as a writer, having a convo about it.
And as far as I can tell there are still plenty of readers (on this site, for example) and yet no one who wants to read AI writing, much less pay for it :)
I would argue that writing essays, papers, and learning to do math in your head are actually very much needed for the health of our brains. I'm not sure how rote memorization (multiplication tables) comes in regarding brain health, but I would be hesitant to throw the proverbial baby out with the bathwater. Like the author, I also work in higher ed, and it is shocking to see how quickly the deterioration of basic "daily life" critical thinking has impacted our incoming freshmen these last few years. It is alarming and I fear we are rapidly progressing to a dystopian hell future where thinking is outsourced to AI.
Thx Noelani! Would be interested what you’re seeing from your students, if you’re up for it :)