This project looked at how GenAI can support learning in Industrial Organisation and Competition Policy, a bachelor-level microeconomics course. We explored how 27 students perceived the ease of use and usefulness of ChatGPT when asked to use it during a mathematics learning activity. The team collected open-ended surveys, which showed that students found ChatGPT a helpful learning tool for breaking complex problems down into smaller, solvable steps. A few students were cautious about relying too heavily on AI and potentially weakening their own understanding. Still, the general attitude toward using AI tools was largely positive.

Background information

We initiated this project to address the growing use of AI tools in students’ day-to-day study routines, particularly in mathematics subjects, where guidance on responsible use is limited. The core problem concerned the lack of evidence on how GenAI can meaningfully support microeconomic problem-solving without undermining conceptual understanding. Here, we explore its potential as a scaffold for clearer reasoning. While the application of language-processing AI to, for example, written essay assignments has produced first insights, applications in the context of mathematics education, as required in microeconomics, have received limited attention (Crompton & Burke, 2023; Koehler & Sammon, 2023). Because students increasingly rely on these tools, it is essential to examine how they can be responsibly integrated into academic practice, ensuring transparency, verification of outputs, and support for independent reasoning. We worked with higher-education students to gain insight into their learning experiences and to identify the benefits and challenges that emerge when AI tools are integrated into mathematics learning activities. To explore this, we developed surveys focusing on key aspects of the Technology Acceptance Model (TAM), a framework that describes how users accept and use technology in terms of perceived usefulness and perceived ease of use.

 

The project

In the context of a mock exam for their microeconomics course, students had the opportunity to practice solving exercises using AI systems such as ChatGPT. Students were instructed to use AI to fill gaps left by traditional materials, especially missing steps in worked examples. Some of the students also attended a session on how to use AI systems such as ChatGPT as a learning aid, e.g., for explaining steps of a solution they did not understand.

The intervention design focused on the following key elements:

  • Structured guidance and self-regulated learning (partial, as not all students attended the separate session on using ChatGPT): teaching students to use AI for feedback and reflection, not just for providing solutions (Chang et al., 2023)
  • Personalization and adaptive support: as the bachelor students had different knowledge gaps and struggled with different aspects of the exercises, GenAI can provide scalable, individualized support for learners in higher education (Chan & Hu, 2023).
  • Reflection and collaborative activities: incorporating open-ended surveys, peer discussions, and critical reflection on AI-generated responses helps students develop evaluative skills and deeper understanding, while also surfacing challenges and ethical concerns (Chan & Hu, 2023)

First, students engaged with ChatGPT through prompts related to a microeconomics learning activity, uploading documents with the exercises to an AI system and asking for solutions and for explanations of parts they did not understand. When they encountered problems, such as divergent solutions, unexpected results, or gaps in their content knowledge, they asked the AI for explanations. Afterwards, the students completed open-ended surveys reflecting on their experiences.

The results

Outcomes showed clear benefits: faster problem-solving, step-by-step explanations, and immediate feedback, but only when students had sufficient foundational knowledge and strong prompt-engineering skills. For example, many students mentioned in the surveys that their prompts were not effective at producing the intended results, while for others this worked fine. Those lacking prompting experience received inaccurate or overly complex responses, particularly on advanced computations or graph-based problems. A major challenge was therefore heterogeneous digital literacy: some students quickly adapted and refined prompts, while others struggled, leading to mistrust or ineffective use. The limited use of domain-specific tools (e.g., plugins, extra tools that give a program new abilities) also constrained accuracy on technical tasks. Many students reverted to passive strategies such as copying entire study guides into ChatGPT, which reduced engagement and weakened learning outcomes.

The survey results showed that students simultaneously recognized both the risks of over-reliance and the tool’s productivity value. This duality aligns with research (Johnson, 2023; Chang et al., 2023) showing that generative AI is most effective as a complement to, not a substitute for, conceptual understanding and active learning.

Minor differences in prior knowledge and prompting skill shaped success more than expected. The successes stemmed from students who could iteratively refine prompts and critically interpret AI feedback, while the challenges reflected unclear inputs, weak conceptual grounding, and limited digital literacy, conditions that amplified ChatGPT’s tendency to produce misleading or overly complex answers.

 

The next step

The findings show the value of scaffolding AI use rather than assuming students will intuitively adapt. They can help colleagues design clearer guidelines for AI-supported learning, strengthen prompt-engineering instruction, and anticipate disparities in digital literacy among students.

The project had several limitations, including a small sample size, inconsistent documentation of students’ prompts, and variation in the ChatGPT versions used. In the future, we want to explore how students perceive generative AI over a longer period of time in a more structured and systematic manner. A longer observation period could reveal how students’ strategies, trust, and critical judgment evolve over time, offering richer evidence for institutional policies on responsible AI integration and for more durable support structures across courses and programs. We would like to integrate AI support throughout the semester and track students’ learning experiences with regular surveys and daily reflection journals.

We also aim to conduct interviews in addition to the surveys to better understand how students experience learning with AI. It would further be useful to ask students to record their prompts and AI responses in a more structured way, and to give them guidance on prompt engineering. We found that unclear prompts often led to vague or wrong answers from ChatGPT, so better prompting would help reduce these mistakes and improve learning.

 


Lessons learned

  • Structured digital-literacy support and explicit guidance on prompt design worked particularly well (e.g., promptingguide.ai). Students with clearer prompts and basic GenAI knowledge received more accurate responses and used AI more productively. Targeted, step-by-step prompting also increased engagement and strengthened conceptual understanding.
  • Many students copied full questions into ChatGPT, which reduced active learning. Next time, we would provide earlier training on incremental prompting and require students to document their reasoning.
  • ChatGPT also struggled with graphs, symbolic math, and advanced computations; we would integrate domain-specific tools and teach students to verify AI outputs.
  • This project showed that GenAI is effective only when paired with strong human judgment and foundational knowledge. AI can scaffold learning, but without structured guidance and critical engagement, it risks reinforcing errors or encouraging passivity. This reinforced the importance of teaching students how to use AI, not just that they can.

 

Tips

  • Teach students to break tasks into small, purposeful prompts that state their goal, prior knowledge, and the specific step they need help with. Model this during problem-solving by thinking aloud as you refine a prompt, demonstrating how clarity improves AI responses. Show how to iterate: begin with a simple request, assess the output, then adjust the prompt to increase precision. This helps students guide the tool rather than accept its answers at face value. Resources such as promptingguide.ai could be handy for students (and teachers) with limited prompting experience.
  • Discourage copying full questions into the AI; instead, teach step-by-step prompting that promotes active engagement and deeper learning. Rather than pasting an entire problem, students can frame a targeted request such as: “I understand the general concept but struggle with the first step of the calculation. Can you guide me through how to set it up and explain the reasoning behind that step?”
  • Use domain-specific tools or plugins (e.g., the Wolfram plugin) when working with symbolic math or graphs, and clarify where general models fall short.
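As an illustration of the last tip, one lightweight way for students to verify an AI-generated answer to a profit-maximization exercise is to recompute it with a symbolic-math library such as sympy. The demand and cost functions below are illustrative examples, not taken from the course materials:

```python
# Minimal sketch: checking an AI-generated solution to a monopoly
# profit-maximization problem with sympy. The inverse demand P(q) = 100 - 2q
# and marginal cost c = 20 are made-up illustrative numbers.
import sympy as sp

q = sp.symbols("q", positive=True)

P = 100 - 2 * q          # inverse demand
c = 20                   # constant marginal cost
profit = P * q - c * q   # profit = revenue - cost

# First-order condition: d(profit)/dq = 0, solved for q
q_star = sp.solve(sp.diff(profit, q), q)[0]
p_star = P.subs(q, q_star)

print(q_star, p_star)  # q* = 20, p* = 60
```

If ChatGPT's worked answer disagrees with this independent computation, that is a cue to probe the AI's reasoning step by step rather than accept either result at face value.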

 

 

Central AI policy
All AI-related activities on this page must be implemented in line with Utrecht University’s central AI policy and ethical code.
Responsibility for appropriate tool choice, data protection, transparency, and assessment use remains with the instructor.

 
