Resisting GenAI in Education.
It's time for mass refusal, folks.
Note from Emily: This essay first appeared on my website on May 14, 2025. I’ve since updated it slightly, and the new version is the one you see here.
A few months ago, I overheard my daughter, a seventh grader, on FaceTime imploring her friends not to use ChatGPT to write their history essays.
When I asked the district how they justified having Generative AI (widely acknowledged as highly problematic) available on school-issued devices, I was told that students “need access to this transformative technology.” (Since I first wrote this essay in May, the district has just blocked access to ChatGPT on K-12 devices. A small victory, but my daughter’s friends have informed me that students now just use Microsoft CoPilot instead. Digital whac-a-mole, I call this.)
A few weeks later, one of my daughter’s teachers told me she was overwhelmed by her large class sizes (36 students per class) and falling behind in grading student essays. When she asked the district for support, they recommended she use AI to assess student writing.
When I asked the district about this recommendation, I was told they are “unaware” of such guidance. I will continue to ask for more information.
But at the end of the day, all of this is ridiculous. There are plenty of reasons to be concerned about the safety of Generative AI tools for children, of course, but we also have to ask why computer scientists working for powerful and wealthy technology companies get to have any input into how teachers should teach and children should learn.
When did we abdicate this responsibility?
When did we consent to treating children as widgets and teachers as automatons?
When did we decide that this time around, the snake oil will be different?
It is time for us to refuse the use of Generative AI tools in schools, especially by children.
I have been quoting Jeff Goldblum from Jurassic Park a lot these days: “Your scientists were so preoccupied with whether or not they could, they never stopped to think if they should.”
In 2025 we are battling monsters of a different sort: Generative AI tools shoved hurriedly into K-12 education with scant evidence of safety, efficacy, or even a clear understanding of what problem they are attempting to solve. (When teachers ask for more support in reducing class sizes, for example, I hardly imagine they are seeking additional technology-based solutions.)
First, a clarifying point: when we talk about “AI,” we’re really talking about “Generative AI,” and that in and of itself is a difficult concept to understand. “Generative AI” includes products like ChatGPT (made by OpenAI, with Microsoft as a major investor); Gemini (Google); and Claude (Anthropic).
So…What is Generative AI and What Does It Do?
I give full credit to Benjamin Riley of Cognitive Resonance for helping me attempt to explain this next part. Go follow him, subscribe to his Substack, and read his words. He’s really brilliant.
Generative AI works like “autocomplete on steroids”: it predicts the next most likely word in a sentence, based on statistical patterns in how often words appear together in its training data. Think of it as a tool that uses probability to predict what comes next (or math to predict language).
Generative AI predicts text based on what it has been “trained” on. The systems underneath these products are called LLMs, or “large language models,” and their training data includes copyrighted material, intellectual property, and likely your own LinkedIn posts. An LLM does not “know” or “think” or “search”; it simply “predicts.”
We hear the word “hallucinations” to describe what happens when AI provides an answer that we humans do not consider true. But GenAI tools are not trying to “find” anything; they don’t function like search engines. Nothing different happens when a model hallucinates versus when it produces something we consider true: the process is the same, and it is all prediction.
When we enter a “prompt” into Generative AI, it makes a prediction based on statistical weights derived from the data it was trained on. Often, the predictions GenAI makes are untrue, but they are delivered with great confidence (a problem especially for a young child using the tool). What GenAI is genuinely good at is producing fluent statements, regardless of their veracity.
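For readers who want to see the “autocomplete on steroids” idea made concrete, here is a toy sketch of next-word prediction. This is my own illustration, not how any real LLM is built (real systems use enormous neural networks trained on trillions of words), but the core move is the same: count which words tend to follow which, then emit the statistically likeliest continuation. Notice that truth never enters the process at any point.

```python
from collections import Counter, defaultdict

# A tiny "training set" (real models train on trillions of words).
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat ate the fish ."
).split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the statistically most frequent next word; no meaning involved."""
    return next_word_counts[word].most_common(1)[0][0]

# The model "completes" text purely from frequency, fluent but unchecked.
sentence = ["the"]
for _ in range(4):
    sentence.append(predict_next(sentence[-1]))
print(" ".join(sentence))
```

The output is grammatical-sounding simply because frequent word pairs tend to be grammatical; whether the resulting sentence is true is never evaluated, which is exactly why “hallucination” is the normal behavior of the system, not a malfunction.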
However, what Generative AI can do is very, very different from what you and I do when we are thinking. When humans think, we don’t scan past knowledge to predict what thoughts will come next. We use context, previous experiences, feelings, and abstract thoughts to come up with new ideas. As a result, GenAI is not “thinking.” GenAI is a tool of “cognitive automation”: it can indeed be used to automate certain tasks, but it can never operate as a thinking human.
Generative AI is a tool, and much like a car is a tool, we should understand how it works before we operate it. Currently, that is not what is happening in schools. Children are given access to ChatGPT and told to “use it responsibly.” We would never hand children a drug-filled syringe to teach them about drug use, yet that is exactly what we’re doing with GenAI.
Why Should We Be Concerned About GenAI Tool Use in Schools?
There are numerous reasons to be concerned about Generative AI use in education. Here are a few. Another outstanding and brilliant person to follow for more on this is Anne Lutz Fernandez; see her regularly updated “Help Sheet: Resisting AI Mania in Schools.”
The goal of education is to engage students in a cognitive process. Generative AI displaces the cognitive process: it does the “thinking” for students. (As Ben Riley describes, this would be like going to the gym to work out and then letting a forklift move the heavy weights around for you. ChatGPT is a “cognitive forklift.”)
“ChatGPT is a “cognitive forklift.” -Ben Riley, Cognitive Resonance
Our brains are not fully developed until we are well into our twenties or even thirties. Thus the temptation for students to use ChatGPT to do their work for them is significant, and their executive function skills are not developed enough to “resist” it. Students will use it to do their homework, write their essays, and cheat on their tests. Telling them to “use it responsibly” is irresponsible and completely misunderstands brain development. Any time someone uses the phrase “kids need to learn how to use ____ responsibly,” just replace ____ with “cigarettes” to put things in perspective.
The use of Generative AI tools in school is an extremely slippery slope. Using Generative AI embedded in other EdTech tools is problematic enough. When those tools are offered to students as “tutors” or “buddies” (or worse, “therapists”), any adult concerned about the welfare of children should recognize the peril this presents. Adults are regularly duped by AI companions; children are far more vulnerable.
As Ben Riley points out, if the world’s best computer scientists at OpenAI haven’t been able to solve ChatGPT’s hallucination problem, then why on earth would we expect the far weaker Generative AI tools marketed to teachers not to hallucinate? And why would we accept in education a “tool” that is frequently “wrong”? Schools, as usual, get the rejected dregs of technology’s loser products (see: Chromebooks).
“[With GenAI products] schools, as usual, get the rejected dregs of technology’s loser products (see: Chromebooks).”
We cannot keep attempting to solve every problem with technology. GenAI is a hammer in search of a nail. What problem is it really attempting to solve? Is this what teachers are asking for? Previously, “Bring Your Own Device” and “Flipped Classrooms” were attempts to “help” teachers via technological solutions and those failed miserably. Why should we expect Generative AI tools to be any different, especially if they’re made by the same people? Fool me once…
If we are increasingly recognizing that student personal devices (like smartphones and smartwatches) are distracting to the learning experience, then the next step is recognizing that EdTech tools like GenAI and 1:1 devices are equally distracting to the learning process. How absolutely confusing it is to tell students, “You can’t bring your personal internet-connected device to school because it’s distracting” but then hand them an internet-connected device for school and tell them, “Use this but don’t get distracted”?!
Personalized learning has done more harm than good. Schools may claim that GenAI allows for more personalized learning, but this isn’t a benefit. Learning happens in a communal environment, with other people and in the context of relationships with teachers. GenAI and “personalizing instruction” siphon students into individual silos of thinking and prevent the cognitive conflicts and social struggles necessary to become thinking adults. In my view, interfering with the development of these skills is a direct threat to democracy.
The way Generative AI is currently being deployed in schools is neither thoughtful nor intentional. If we truly want to explore what Generative AI tools might offer, then we need teachers who not only understand how these tools actually work, but who think about thinking and want to cultivate critical thinking in their students. GenAI is a woefully incomplete tool for teaching and dangerous for use by children.
At the end of the day, the goal of education is (or should be) to engage learners in the cognitive process. GenAI displaces the cognitive process. Teachers are the experts of their students. Truly effective and meaningful learning is collaborative, full of struggle, occurs in the context of human relationships, and cannot be standardized and measured.
Questions to Ask Schools About Generative AI
A mass refusal of Generative AI is the quickest way to compel change, given how rapidly technology changes (and how slowly regulation and policy follow). We only need to look at where the Tech Elite send their own children to school (low-tech, nature-based schools) or at countries like Sweden and Finland, which are rolling back their use of EdTech tools in the classroom, to see the writing on the wall: unless your school is making changes now to move away from internet-connected, Generative AI-enabled devices and tools, you’re moving in the wrong direction.
While mass refusal is a needed tool in this fight, it is only one tool, and I do believe strongly in and advocate for the value of building relationships and working together to make change. School administrators who are open to conversations about the role of technology and Generative AI tools in school offer a starting point for change.
If your school administrators are open to the conversation, here are some questions to pose:
What problem is our school trying to solve by using Generative AI products?
Why is there such a sense of urgency to implement and use these tools? What is the risk if we decide to move slower?
What evidence-based research did you use to make a decision to provide young children with such powerful and potentially dangerous tools?
How is the use of Generative AI products in our district or school in alignment with our school mission statement and goals?
What measures are in place to identify hallucinations and ensure that when they occur they will be countered with factual information, without further increasing teacher burden or relying on additional technologies?
How are teachers encouraged to use Generative AI tools? Do they use it to assess student writing?
So many of these GenAI products are embedded in existing EdTech products our school already uses. Is it even possible to refuse or opt out?
What Gives Me Hope
It seems like there are suddenly a lot of computer scientists who are now also experts on teaching and learning and want us all to use Generative AI in our classrooms. I liken this to designing a surgical tool and then telling the surgeon that, because I designed the tool, she should let me perform the surgery.
That would be completely ridiculous, and that is how I feel about the sudden influx of “educational consultants” from Generative AI and EdTech companies who claim they can fix education with their tools by turning me into a robot and my students into widgets.
That’s not how teaching works, and the implication that someone who may be brilliant in his own field (computer science) can step into my profession (teaching) and tell me how to do it better really makes my blood boil. I am the expert of my students, and each year, with each subsequent group of students, my teaching shifts and adapts to the needs and personalities of each new class. Remember, the computer scientist building these products likely attended school in an era of physical books and paper and pencils, with a teacher helping him to think about thinking itself and apply those ideas to the world around him, so he could grow up and become a brilliant computer scientist. He didn’t get handed crappy internet-connected Chromebooks and get told to “stay on task.”
Today’s college students likely spent the first five or ten years of their lives in a relatively low-screen world, where they experienced play and social interactions in the real world and tactile, three-dimensional experiences at school. When college students today use ChatGPT to write their essays, we are seeing only the beginning of a wave of students who increasingly received and interacted with digital technologies earlier and earlier in childhood.
Today’s “iPad kids” won’t hit higher education for another ten years. With declining functional literacy rates and critical thinking skills, how will this upcoming generation be prepared to invent and innovate like today’s computer scientists, who invented the “education software” being foisted upon our children?
In spite of how dark things can seem about the world right now, I do have hope for a better future. Here are three areas that give me hope:
Children are starting to rebel against all this technology in schools, for example by sticking pencils or paperclips into the USB ports of their Chromebooks. I don’t condone dangerous activities, but Chromebooks are internet browsers, not learning tools. If this is how children can express their frustration with these products, then I take that as a signal that they want change too.
My own university students are shocked by the things I tell them about modern-day education. They cannot believe kindergarteners have iPads or that middle schoolers are given access to ChatGPT by their own school administrators. They want things to be different for their own future children some day, and that speaks volumes.
We don’t know what the future holds. It’s possible that Generative AI efforts will crumble and fall (go listen to Ed Zitron’s podcast “Better Offline” for more on this bubble), but even if that’s the case, today’s students still shouldn’t serve as collateral damage. Harms are being done as we speak. So today, until things change, we have to fight back by refusing to allow our children to use any Generative AI tools in school. If enough parents say “No,” then schools will be forced to look again.
At the end of the day, we are fighting for teachers, teaching, and an educational system that raises critical thinkers, who will then grow up to be active participants in a democracy.
That is worth fighting for.
Resources:
Anne Lutz Fernandez’s excellent document, “Help Sheet: Resisting AI Mania in Schools”
Ben Riley of Cognitive Resonance can be found here on Substack or here on his website.
Many resources, including a template for opting out of GenAI tools in school, are available to my paid subscribers. There are also numerous free resources available too.
To watch my full interview with Ben Riley of Cognitive Resonance, you can view that webinar recording here.