The Human in the AI Loop
Motivating students to do hard things when AI makes everything easy.
In AI education we talk a lot about keeping “the human in the loop.” It has become the mantra of AI literacy—a kind of moral seatbelt meant to reassure students and teachers alike. The human should generate ideas; the human should revise AI’s output; the human should remain responsible for anything that’s produced. This, we tell students, is the best way to ensure ethical AI use.
But I find myself wondering: Who is the human we are keeping in that loop? How does that person react to confusion, friction, or doubt? How do they view themselves as thinkers in a world where production has become AI-turbocharged? How do they respond to temptation or delayed gratification? What pressures do they face in this new landscape?
When you shift the focus from the workflow to the person inside it, motivation suddenly becomes the whole story. Even when a student dutifully keeps themselves “in the loop” while using AI, there is still ample room for cognitive offloading and deskilling. Motivation is often what determines whether that happens. And motivation isn’t a switch; it’s a narrative students use to make sense of struggle: “This is too hard,” or “I’m not good at this.” These stories don’t just influence behaviour; they sometimes give permission to avoid the discomfort that learning requires.
Students do not turn to AI for a single reason. Some are perfectionists, frightened that their own ideas won’t measure up. Some are impulsive, overwhelmed, or simply exhausted. Others are carrying far heavier things than schoolwork, and AI becomes a way of lightening the load. AI didn’t create these tendencies, but it meets them: smoothly, instantly, and without judgment. For the student who fears getting it wrong, AI offers polish. For the student who feels behind, it offers speed. For the student who is overwhelmed, it offers relief.

I’ve seen this play out repeatedly. Recently, the grade 12 students at my school were given time off timetable to work on a research project required for graduation. They had spent months with supervisors, revising drafts and refining ideas. This day was set aside for final polishing. Late in the afternoon, a student writing a math research paper began walking through hallways and common spaces, laptop open, asking every passing adult with any math background to review his work.
His supervisor had already told him the paper was strong and had offered some suggestions. He had worked diligently for months and used every support available. Yet in the final moments, doubt overwhelmed him and his anxiety crowded out his confidence. Eventually he made a series of changes to the paper based largely on ChatGPT’s revisions.
We’ve all encountered students like him: bright, capable, conscientious, and undone at the last minute by the fear that their best might not be good enough. Tendencies like this make students especially vulnerable to using AI in ways that undermine learning. Not out of deceit, but out of self-protection. The work is not to tell students to “try harder” or to imply that using AI is a moral failing. Rather, it’s to help them recognize their own patterns with honesty and curiosity. To give them the language to notice when they’re leaning on AI out of fear, or fatigue, or the simple human urge to avoid discomfort.
This kind of self-discovery begins with simple metacognition. Naming the feelings that accompany challenge, such as frustration, self-doubt, and anxiety, can validate student experiences and foster an environment where transparency is the norm. This creates openings for conversations about how and why to make good choices when it comes to AI use.
Recently I’ve been reading 10 to 25 by David Yeager. He critiques the “neurobiological incompetence model,” the idea that adolescents are half-formed humans limited by immature brains. Instead, he argues that young people aren’t defined by poor self-control; they’re driven by different motivations than adults: social standing, respect, and belonging. When we understand this, the problem of student motivation looks less like a deficit and more like a mismatch. Students aren’t unmotivated or resistant to effort; they’re motivated differently.
Yeager proposes that the most effective way to support young people is to pair high expectations with genuine respect and support. His research offers practical strategies: be transparent; ask questions rather than issuing directives; offer “wise feedback” that conveys belief in a student’s ability; make purpose and belonging visible; and frame stress as something that can fuel growth rather than signal failure.
Building on Yeager’s work, one of the most powerful things we can do is frame productive struggle as a source of pride. When students wrestle with an idea and arrive at clarity, they feel the satisfaction of having earned their thinking and of joining a community of young people who take intellectual challenge seriously. Our job is to pair high expectations with steady support that communicates, “You can do this, and the struggle is the point.” By valuing difficulty, normalizing frustration, and offering feedback that highlights growth, we help students see themselves not as avoiders of hard things but as thinkers capable of meeting them.
One tool I’ve been experimenting with is the use of AI intention statements. At the start of a task, after I’ve clarified what forms of AI use are permitted, I ask students to write a brief statement of intention: How and why do you plan to use AI? At what point in your process? For what purpose? They can opt out of AI entirely or use any of the approved strategies. After completing the task, they add a declaration of their actual AI use as part of their citation practices.
The learning lies in the comparison. Where did their intentions hold? Where did they shift? What triggered that change: confusion, time pressure, perfectionism, impatience? These reflections help students anticipate their own patterns and develop the self-awareness needed to make thoughtful choices.
Keeping “the human in the loop” has never been about inserting a person into a process that involves AI. It’s about cultivating the kind of human who wants to be in the loop—who still feels the tug of curiosity, the satisfaction of an earned insight, the dignity of figuring something out. And that has never been more challenging, or more necessary, than it is now.