Generative AI and Hybrid Intelligence for Fostering Teacher Noticing
The project explored whether interacting with an AI chatbot acting as a student could support prospective mathematics teachers in understanding and responding to students' mathematical thinking. Implemented in a mathematics teacher education course within the master's programme Science Education and Communication, the intervention asked seven participants to identify "critical events" (moments where students' mathematical thinking becomes visible and can be built on by the teacher) from their experiences during their school internships and from academic literature. They addressed these events in a dialogue exercise with Copilot, an AI chatbot: they uploaded written descriptions of the critical events and engaged in guided dialogue with the AI system to interpret student thinking from multiple perspectives, drawing on feedback from their peers, the instructor, and the AI system. In general, the prospective teachers felt that the dialogue with the chatbot helped them learn and practice their noticing skills. In follow-up interviews, they reported seeing it as useful preparation for teaching in real classrooms, although few were sure about fully trusting it, as the chatbot differs considerably from actual students.
Background information
This project originated from the recognition that prospective teachers often have limited opportunities to gain practical classroom experience, due in part to the limited availability of internships and opportunities to teach in real settings. Even during real classroom practice, teachers rarely receive immediate and detailed feedback on their teaching decisions, which makes it difficult to reflect on and improve their teaching practice. To address this gap, the AI-supported dialogue task was designed to offer additional practice in a safe and accessible environment, without the time constraints that real classrooms entail.
The dialogue exercise is built around a prompt to be copy-pasted into an AI system such as Copilot, which sets up a scenario in which prospective teachers can practice their noticing skills. The scenario includes a suggested critical event, but the prospective teachers are encouraged to use their own critical events based on their internship or other experiences.
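As an illustration only, a prompt of this kind can be assembled programmatically so that each prospective teacher's own critical event is slotted into a fixed role-play frame. The template wording and the sample event below are hypothetical, not the project's actual prompt:

```python
# Hypothetical sketch of how a student-role-play prompt could be assembled.
# The template wording and the example critical event are illustrative only;
# they are not the prompt used in the project.

PROMPT_TEMPLATE = """You are a Grade 9 student working on an algebra task.
Stay in character: answer as the student in the critical event below would,
including their misconception. Do not reveal the correct method unless the
teacher's questions lead you there.

Critical event:
{event}

When the teacher ends the dialogue, step out of character and give brief
feedback on how well their questions probed your thinking."""


def build_prompt(event: str) -> str:
    """Insert a teacher-supplied critical event into the role-play template."""
    return PROMPT_TEMPLATE.format(event=event.strip())


if __name__ == "__main__":
    # Example critical event (hypothetical), e.g. taken from an internship.
    sample_event = ("A student simplifies (x + 3)^2 to x^2 + 9, "
                    "distributing the exponent over the sum.")
    print(build_prompt(sample_event))
```

The resulting text would then be pasted into the chatbot as the opening message, after which the prospective teacher responds as the teacher in the scenario.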
This approach allows participants to prepare better for real teaching situations while experimenting with different teaching strategies, especially when opportunities for role play and intervision sessions are limited. The AI environment provides a safe space for making mistakes and trying out new ideas. The project aims to support teachers by giving them an opportunity to practice, evaluate, and refine their teacher noticing skills.
The workshop
We conducted a post-course workshop with eight prospective mathematics teachers to deepen their ability to notice and interpret critical classroom events. Participants, all with strong mathematical backgrounds, engaged with a carefully selected algebraic misconception from a Grade 9 lesson. They copied the event and a pre-designed prompt into Microsoft Copilot to analyze student thinking, explore possible misconceptions, and consider pedagogical responses. Copilot acted as a student agent that interacted with the teacher. GenAI was used to surface alternative explanations, generate potential student reasoning, and support comparison of interpretations. We chose this structured prompting approach because research on teacher noticing highlights the value of examining multiple perspectives, and studies on GenAI in teacher education indicate that AI can serve as a reflective aid when paired with targeted prompts (Rotem & Ayalon, 2024). Using the university-licensed Copilot ensured a safe environment while enabling controlled exploration of AI-generated insights.
Following the AI exercise, a plenary discussion was held in which participants shared their experiences. Finally, a one-on-one follow-up interview was conducted to reflect on the interaction and the insights gained. The whole intervention lasted 60 minutes, with the dialogue exercise taking up 40 minutes of that time.
The results
Overall, participants reported that they found the AI-supported dialogue useful for practicing their teaching skills. Many noted that the tool would be especially valuable for beginning teachers, as it allows them to practice in a safe environment without time constraints. More experienced participants felt that the AI was less realistic than real classroom interactions, since the simulation was mainly a text-based exchange between the prospective teachers and the AI system acting as a student agent; even so, they still viewed it as a useful tool for newcomers. Some participants described the dialogue as realistic and comparable to classroom situations, while others felt it could not fully capture the dynamics of real student interactions.

Across the group, the feedback and evaluation component was seen as the most beneficial aspect: receiving feedback from the AI system, their peers, and the instructor expanded participants' perspectives and helped them identify aspects they might otherwise have missed. In contrast with the majority of the seven prospective teachers (PTs), one participant expressed skepticism about the role of AI in teacher preparation, emphasizing that teaching relies deeply on authentic human interaction.

A key challenge was that PTs consistently disregarded more conceptual ideas, especially the student's transfer of strategies from prior lessons, even though these ideas are pedagogically significant. This pattern mirrors research showing that PTs often struggle to engage with the conceptual roots of student misconceptions and tend to respond procedurally. Another outcome was the limited use of shaping moves: most PTs guided the student toward the correct method rather than probing their thinking. This was unsurprising yet noteworthy, as it reflects known difficulties in responding in ways that build on students' reasoning rather than correcting it.
Lessons learned and tips
Central AI policy
All AI-related activities on this page must be implemented in line with Utrecht University’s central AI policy and ethical code.
Responsibility for appropriate tool choice, data protection, transparency, and assessment use remains with the instructor.