AI Is Already in Our Classrooms. Now What?
The question isn’t whether AI belongs in schools—it’s how teachers can engage with it thoughtfully, critically, and on their own terms.
Ask 10 people what AI is and you will get 12 answers. It is a tool. It is a threat. It is a plagiarism machine. It is the future of learning. It is full of knowledge. It is full of lies. It is global surveillance. It is a personal tutor. It is coming for our jobs. It is going to save education.

Everyone has an opinion. Almost no one agrees on what they are actually talking about. The frustration is earned. The hype is real. And educators are caught in the middle.
As Justin Reich puts it, “AI is not invited into schools through a process of adoption, like buying a desktop computer or smartboard—it crashes the party and then starts rearranging the furniture.” Most ed tech gets adopted through a structured process: Someone forms a committee, runs a pilot, makes a purchase. AI skipped all of that. It just showed up.
Recently, a local school board member texted me that teachers at her high school are using AI to grade student work, and she’s concerned the district has no policy for it. That is one conversation, but it represents hundreds happening right now across the country. By multiple estimates, roughly half of schools and districts in the U.S. still have no AI policy at all.
AI is already in our classrooms. The question is not whether to engage with it but how. And right now, most educators are navigating it alone. They didn’t choose this. AI showed up in their classrooms whether they were ready or not.
I chose to focus my work here because the educators I work with across the country asked for it. Through our Center for STEM Professional Learning at Scale at BU Wheelock, we work with teachers and districts in 29 states, and the message we keep hearing is the same: We need help making sense of this, and we do not know who to trust. That is not a technology problem—it is a professional learning problem. And while there is no shortage of frameworks and guidance documents, most educators do not have the time to dig through them while also doing their jobs. Parents are asking questions, too, and districts often have nothing to tell them.
And it is not just the tools themselves. In the dozens of state guidance documents, organizational frameworks, and professional development materials I have reviewed, the same pattern appears: Classroom activities suggested as best practices that have rarely been tested with actual students. AI-generated science simulations are recommended as replacements for hands-on investigation, as though watching a model of a phenomenon is the same as figuring one out. These ideas sound plausible enough that no one stops to ask whether anyone has actually done this in a classroom and observed—let alone measured—what happened. When the guidance itself is untested, we are asking educators to build on a foundation that does not exist.
The Negatives Are Real and Must Be Named
The harms are not hypothetical. They are documented, and they start with learning itself.
A large-scale field experiment published in the Proceedings of the National Academy of Sciences studied nearly 1,000 high school math students in Turkey and found that unrestricted access to GPT-4 improved performance by 48% while the tool was available, but when it was taken away, students performed 17% worse than peers who never had AI access at all (Bastani et al., 2025). The learning looked like it was happening. It was not.
The students who had the most help performed the worst on their own. The most common message students sent to the unguarded AI tool was simply asking for the answer. They copied solutions without reading or understanding them. And perhaps most telling, the study found a significant mismatch between perceived and actual performance. Students believed they had learned more than they had. That should concern all of us.
However, a version of the tutor designed with pedagogical guardrails, one that gave hints instead of answers and asked students to do the reasoning themselves, largely mitigated those negative effects. Students using that version increasingly engaged in substantive ways over time, attempting their own answers and asking for help rather than asking for solutions.
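To make that design difference concrete, here is a minimal sketch, in plain Python, of what a hints-not-answers guardrail could look like. It is not the tutor from the study, and every name in it (GuardedTutor, ANSWER_SEEKING_PHRASES) is invented for illustration; real systems build this logic into the model's instructions rather than relying on keyword matching.

```python
# A minimal, illustrative sketch of "pedagogical guardrails." This is NOT the
# tutor studied by Bastani et al.; all names here are hypothetical. The idea:
# intercept answer-seeking requests and respond with prompts that push the
# reasoning back to the student, while logging exchanges for teacher review.

ANSWER_SEEKING_PHRASES = (
    "what is the answer",
    "give me the answer",
    "just tell me",
    "solve this for me",
)

class GuardedTutor:
    def __init__(self):
        self.log = []  # audit trail a teacher can review later

    def respond(self, student_message: str) -> str:
        text = student_message.lower()
        if any(phrase in text for phrase in ANSWER_SEEKING_PHRASES):
            # Redirect answer-seeking toward the student's own reasoning.
            reply = ("I won't give the final answer, but I will help you get "
                     "there. What is the first step you would try, and why?")
        else:
            # Default to a hint that asks the student to articulate a plan.
            reply = ("Here's a hint: restate what the problem is asking for "
                     "in your own words, then tell me your plan.")
        self.log.append((student_message, reply))
        return reply

tutor = GuardedTutor()
print(tutor.respond("Just tell me the answer to problem 4."))
```

The point of the sketch is the shape of the interaction, not the implementation: the system's default behavior is to return the thinking to the student.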
This is one study in one context, and the authors are careful to note its limitations. But it is also the most rigorous field evidence we have so far, and the pattern it identifies is consistent with what educators are reporting in their own classrooms every day. Educators and policymakers around the world are grappling with the same questions, often with even fewer resources.
There is also emerging evidence that well-designed AI tools can support educators in specific contexts, particularly when they work alongside teachers rather than replacing their judgment. The lesson is not that AI cannot support learning. It is that design matters enormously, and right now most of what students are accessing has no guardrails or built-in design principles at all.
The harms extend well beyond the classroom. These models are trained on massive datasets that include copyrighted and uncredited work. The technology is overwhelmingly being built and pushed by tech companies chasing profit, not by educators trying to improve learning. AI reinforces and amplifies existing biases around race, gender, disability, and socioeconomic status. The communities most impacted by those harms are the ones least likely to benefit from the technology.
And the human labor that makes AI possible—the data labeling, the content moderation—is frequently exploitative work outsourced to people in the Global South, with investigations documenting data labelers in Kenya, India, the Philippines, and Venezuela earning $1 to $2 an hour, often without job protections or benefits. The equity concerns do not stop there. If well-resourced schools adopt AI while under-resourced schools avoid it entirely, we risk widening the very gaps we are trying to close. Disengagement is not a neutral choice—it has consequences, too.
And Yet: The Case for Engagement
When it comes to education specifically, AI threatens to strip away the things that matter most about learning: the creativity, critical thinking, and curiosity; the messy process of struggling with an idea and coming out the other side understanding it. When we reduce learning to optimization and efficiency, we lose the productive friction that builds real understanding.
When AI is used thoughtfully, grounded in what we know about learning, it can be a genuinely powerful tool for educators. The harms will not fix themselves, and they will only get worse without educators in the room.
What does thoughtful engagement actually look like? In the science classrooms where I work, it looks like a teacher using AI to quickly surface patterns in student responses after a lab so she can spend the next day in real conversation about what her students actually understood and where her students’ sensemaking was headed. The AI did not teach the lesson—it gave the teacher better information to teach it herself. The students did not interact with AI at all, but they got a better class because their teacher did.
In my own research, I am piloting an AI coaching system for science teachers that uses structured curriculum data rather than a large language model, so that responses are deterministic, traceable, and auditable. The system cannot recommend something the curriculum does not support.
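As a rough illustration of what "deterministic, traceable, and auditable" means here, consider the sketch below. It is not the pilot system itself; the curriculum entries, keys, and function names are invented for illustration. The property it demonstrates is the one that matters: every recommendation is looked up rather than generated, and each one carries a citation back to the curriculum entry it came from.

```python
# A minimal sketch of a curriculum-grounded coaching lookup. This is NOT the
# pilot system described above; the data and names are hypothetical. The key
# property: recommendations are retrieved, not generated, and each one cites
# the curriculum entry it came from, so responses are reproducible.

CURRICULUM = {
    # (unit, observed classroom need) -> (recommendation, source reference)
    ("forces", "students conflate speed and acceleration"): (
        "Revisit the cart-and-ramp investigation; have students graph "
        "position and velocity side by side before introducing acceleration.",
        "Unit 3, Lesson 2, Teacher Guide p. 14",
    ),
    ("forces", "students struggle to identify forces in diagrams"): (
        "Use the free-body diagram card sort from the unit resources.",
        "Unit 3, Lesson 4, Teacher Guide p. 22",
    ),
}

def coach(unit: str, observed_need: str) -> str:
    key = (unit, observed_need)
    if key not in CURRICULUM:
        # The system cannot recommend anything the curriculum does not support.
        return "No curriculum-backed recommendation; flag for a human coach."
    recommendation, source = CURRICULUM[key]
    return f"{recommendation} (Source: {source})"

print(coach("forces", "students conflate speed and acceleration"))
```

Because the same input always returns the same recommendation with the same citation, a teacher or district can inspect exactly why the system said what it said.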
And in a recent study, we surveyed educators spanning pre-K through higher education, including teachers, coaches, curriculum coordinators, and consultants representing nearly 900 years of collective classroom experience. The answer was clear: They want tools that support disciplinary reasoning and keep humans at the center.
They are not asking for AI to do their jobs. They are asking for AI that respects their expertise.
The question is never “How do we get AI into the lesson?” The question is “What does my professional judgment tell me about where this helps learning and where it gets in the way?” Every teacher in every subject is capable of answering that question. Most have not been given the space or the support to do so.
In every case, the teacher is the one making the decision. The student is the one doing the thinking, with support and collaboration. And the relationship between them remains at the center of the learning. The point is that educators and students have agency over AI, not the other way around.
Why This Work Matters
The school board member I mentioned is not an outlier. Tech companies move fast and higher education moves slow. If we do not find a way to keep up, the only voices shaping how teachers understand AI will be the ones driven by profit, not learning.
Teachers are the experts in their classrooms. They know their students, their communities, and their contexts better than any technology company or policy document ever will. What they need is access to honest, research-grounded information, time for collaborative reflection and teacher research, and the space to draw their own conclusions. And students deserve to be part of this conversation, too. They are building their own relationships with these tools, and they need the critical thinking skills to do that well.
We can hold two things at once: AI is doing real harm, and we still need to help each other navigate it. If we walk away because the technology is flawed, we leave educators and students with nothing but marketing hype. That is not an acceptable outcome.
That is why we are building what we are building at BU Wheelock: research, partnerships with schools and districts, public conversation, and new graduate programs designed not to train people to use AI tools but to prepare educators who can make good decisions about their classrooms. Because the educators doing this work every day deserve more than a webinar from a tech company. They deserve a real intellectual home for the hardest questions in education right now. And they deserve to be the ones shaping the answers.
The AI & Education program at Boston University Wheelock College of Education & Human Development centers student learning and human connection. Applications for fall 2026 enrollment are now open. Interested in learning more? Visit the EdM in AI & Education and the graduate certificate in AI & Education.
TJ McKenna is the program director for AI & education and a clinical assistant professor in science education at BU Wheelock College of Education & Human Development. He is also the director of the Center for STEM Professional Learning at Scale and associate director of educator engagement and impact at the AI & Education Initiative.