Gotcha Culture
Why AI Detection is Teaching Students the Wrong Lesson
If you follow AI in the media, even casually, you’ll notice that educators are still deeply divided on how to manage its use. Some believe that even if AI detection tools are flawed, they must be part of every assignment submission.
The reasoning is simple enough: if we don’t have some way of monitoring AI use, what’s to stop students from submitting AI-produced work as their own? That’s not a reckless or punitive position. It comes from a real concern for academic integrity and learning.
On the other side are those who argue that any use of AI monitoring tools, and the follow-up conversations they trigger, risks harming students because of false positives. Once a flag is raised, they say, the interaction becomes accusatory. Monitoring tools are flawed, often biased against non-native speakers, and poor evidence on which to jeopardise trust. We are teachers, not police.
The truth, as usual, probably lives somewhere in between.
Anything to Declare?
I recently experienced my own version of AI policing, but this time I was on the receiving end.
After publishing my last slantwise article, The Human in the Loop, a reader ran a paragraph of my writing through Pangram, a new AI detection tool that claims extremely high accuracy. In the comments, they told me Pangram had flagged the paragraph as AI-written.
I was horrified. Not because I don’t use AI (I do), but because I teach ethical AI use and hold myself to the same standards I ask of students. When I published the article, I felt confident in my process and fully believed the ideas, words, and structure were 100% mine. The accusation made me second-guess all of that. I felt guilty and defensive all at once.
The feeling was akin to being asked by a customs agent if I have anything to declare. Even when I’m fairly sure the $15 t-shirt I bought is well within the spending limit, the question alone makes me doubt what I know to be true. When authority asks, certainty wobbles.
It turned out the commenter was testing the screening tool as part of his own work as an educator and wanted to discuss what might have triggered the flag. It wasn’t a hostile comment, but it felt like it. And the emotional response lingered.
What stayed with me wasn’t the misunderstanding; it was the feeling it triggered. Being suspected is humiliating. And even when you believe you’ve acted ethically, being asked if you did makes you doubt yourself.
And then I thought: this is how students must feel.
The Chilling Effect
Fear of being challenged about AI is having a chilling effect on students. One student I spoke with described her AI use as a careful, deliberate process—more restrained than many educators assume. She used AI for feedback, gave it explicit restrictions, and made sure it didn’t do the work for her. AI helped her plan, organise, and keep moving, but she did the thinking and writing.
Even so, she didn’t declare her AI use.
What struck me most wasn’t how she used AI. It was why she wouldn’t disclose it. She was afraid that drawing attention to her AI use would trigger a suspicion of academic malpractice: not because she was hiding anything, but because “it’s not normal” to include a declaration, and because she didn’t want to admit to doing something she hadn’t been explicitly permitted to do.
In other words, she was navigating invisible rules. She trusted her own integrity, but not the system’s response to transparency.
That’s the quiet harm of gotcha culture. Even ethical students learn that non-disclosure feels safer than openness.
Teachers Aren’t Okay Either
On the other side of the equation, teachers are feeling increasingly frustrated by ubiquitous AI use.
In my neck of the woods, the term is wrapping up and grades are being prepared. As final assignments trickle in, an alarming number of students are being flagged by Turnitin for high AI use, use that isn’t cited or discussed with teachers. This, even after students had taken AI literacy lessons where transparency was explicitly framed as essential to ethical use.
So how are teachers supposed to know whether AI was used ethically if students don’t feel safe being transparent?
A high AI flag has to trigger a follow-up conversation, but many teachers feel deeply uncomfortable initiating one. One colleague described it as an “adversarial approach to our responsibilities,” potentially compromising the relationships at the heart of good teaching. Others feel frustration, even betrayal, believing their students have crossed a line into cheating.
Almost all feel overwhelmed as yet another responsibility is layered onto teachers, further shifting accountability away from students and onto those assessing them.
What if everyone is right?
The Problem Isn’t AI. It’s Assessment.
I know I’m joining a chorus when I say this, but we have to change the way we assess.
In Five Principles for Rethinking Assessment with Gen AI, Leon Furze argues that reform shouldn’t start with policing AI use at all. It should start with good pedagogy.
Assessments must be valid and generate trustworthy evidence of learning. If they don’t, no amount of surveillance will fix that.
We need to design for reality. Students will encounter AI in real-world workflows. Pretending assessment exists in an AI-free bubble isn’t rigorous—it’s nostalgic.
Transparency and trust matter. Students want guidance. They want to do the right thing. But they can’t meet expectations that haven’t been clearly articulated.
Assessment must be understood as a process, not a moment. High-stakes, one-shot tasks are poorly suited to an AI-saturated world. Drafting, conferencing, reflection, and iteration aren’t add-ons anymore; they’re evidence.
And finally, we must respect professional judgement. Blanket rules and surveillance tools don’t just erode trust; they create unsustainable workload. Teacher expertise, exercised over time, still matters more than any detector ever will.
What We Can Do Now
These changes won’t happen overnight. Educational systems are notoriously rigid, and many schools are bound to external assessment frameworks like the IB or AP that limit radical redesign.
But there are changes we can make now.
We can prioritise live learning: experiential, inquiry-driven work that students are invested in. When students care about their thinking, they’re less likely to offload it. Assessments that require students to reflect on lived classroom experiences, or build on them, remain harder to outsource to AI.
We can integrate learning portfolios across subjects, giving real status to reflections, drafts, artefacts, and oral defences of learning. Process evidence should stand alongside tests and timed tasks, not beneath them.
We can be explicit, again and again, about when AI is allowed, when it isn’t, and how to use it well for a specific task. Statements of AI use should be required for all assignments, so students get used to explaining how ideas, structure, and language were built, with or without AI.
Monitoring vs. Policing
None of this means abandoning accountability. Students are human. They’re under pressure. Sometimes they’ll make poor choices. Until assessment practices catch up, screening tools may still have a place, but only as conversation starters, not gotchas.
For that to work, teachers need training and support. School leaders must help teachers approach AI conversations with curiosity rather than accusation. I’ve seen this work. In a recent conversation with a student flagged for high AI use, I asked whether he noticed a pattern. He quickly identified that the flagged sections were those he felt least confident about. Once unpacked, he could see that while his overall process was ethical, he leaned more heavily on AI for language and ideation where his confidence dipped. Because we had rich process data, the conversation felt safe and productive.
The key is to shift the culture from policing, which is rule-focused, to monitoring, which is learning-focused.
All of this takes time. But the first step is moving away from product-heavy assessment. When assessment is designed well, when it’s valid, authentic, transparent, and process-oriented, an AI flag becomes just another data point for learning, not a verdict.
Until then, the real danger of gotcha culture isn’t that we’ll fail to catch misconduct. It’s that we’ll teach students that honesty is risky, trust is conditional, and learning is safer in silence.
And that’s an assessment failure we can’t blame on AI.

