02 April 2025

Knowledge item

Using Generative AI for Legal Tutorial Assignments

This project explored how students use Generative AI (ChatGPT) to solve legal tutorial assignments in a third-year bachelor’s course. A small intervention with six students assessed AI’s utility, challenges, and impact on learning outcomes. Results revealed advantages in quick response generation but highlighted issues with legal depth, critical thinking, and completeness of answers. Recommendations include educating students and teachers on AI use and promoting critical thinking in assignment design.

The challenge

Generative AI tools like ChatGPT are increasingly accessible, but their role in legal education remains unclear. Teachers often lack insight into whether and how students use these tools for assignments, raising concerns about their influence on learning outcomes. This project aimed to explore the opportunities and risks of AI in education, focusing on its utility in legal tutorial assignments. Key goals included understanding how students engage with AI, identifying challenges they face, and evaluating its impact on learning critical skills like legal reasoning and preparing for tutorial assignments.

The intervention

The intervention involved voluntary participation from six students in the course Vennootschappen en Rechtspersonen (BA3). Participants individually completed a short legal analysis assignment—a tutorial-style exercise designed to develop their reasoning skills. First, they attempted the assignment on their own, and then they used ChatGPT to generate an answer to the same prompt. They compared both responses, identifying similarities, differences, and potential strengths or weaknesses.

Students received no guidance on how to use ChatGPT, beyond the instruction to use it to complete the tutorial assignment, log their queries and the resulting answers, and afterwards reflect on the output. Students documented their reflections in writing and shared insights through a survey. The survey captured broader insights into their experiences, including their prompt strategies, perceptions of AI-generated answers, and the challenges they encountered.

This intervention was based on several studies indicating that AI generates inaccurate or incomplete answers. The intervention was intended to make students aware of these inaccuracies and omissions. This background shaped the design of the assignment and the reflection process, ensuring that students critically engaged with AI rather than relying on it uncritically. By analyzing both their own and AI-generated responses, students gained a deeper understanding of legal reasoning and argumentation, while also reflecting on the limitations and affordances of AI in legal education.

The result

Students appreciated ChatGPT’s quick responses to their legal analysis assignments, particularly in terms of efficiency and ease of use. They noted that AI-generated answers were well-structured and clearly written, which made them accessible. However, many students also highlighted significant drawbacks. Most pointed out a lack of legal depth, such as missing references to statutes and case law, and some expressed concerns that relying on ChatGPT too heavily reduced their critical thinking. This over-reliance may have stemmed from the perceived authority and fluency of ChatGPT’s responses, leading students to trust its output without thoroughly questioning its reasoning.

Another key issue was that AI-generated answers, while logically organized, often lacked the nuanced argumentation and specificity required in legal contexts. Some students may not have given precise prompts, which could have contributed to generic or incomplete responses. This highlights the importance of teaching students how to craft effective prompts and critically evaluate AI-generated content.

Selection bias and low participation limited the generalizability of results, but the findings emphasize the need for structured guidance on AI use in legal education. Based on the (limited) data collected, such guidance should include:

  • Teaching students how to frame prompts that elicit legally precise and well-reasoned responses.
  • Encouraging students to verify AI-generated content by cross-referencing legal sources.
  • Discussing the risks of cognitive complacency and emphasizing the importance of independent reasoning.
  • Defining appropriate contexts for AI use—when it can be a helpful tool versus when it might hinder learning.

These insights offer valuable considerations for educators integrating AI into their courses, ensuring that students engage critically with AI tools rather than relying on them unreflectively.

What’s next?

This project served as a pilot for a broader project involving surveys for both students and (assistant) professors. The follow-up will involve a larger cohort of students completing similar assignments while reflecting on AI’s role in their learning process. Professors will also be surveyed to gather their perspectives on the impact of Generative AI on teaching and assessment practices. Insights from these surveys will inform the development of targeted workshops and resources for integrating AI into education. Additionally, the findings will help shape policy recommendations on ethical AI use in legal education and guide strategies to enhance critical thinking and digital literacy among students.

Lessons learned

  • ChatGPT provides quick responses but lacks the depth required for legal studies.
  • Over-reliance on AI risks diminishing critical thinking and engagement.
  • Students prefer shorter prompts, which may limit AI’s utility for complex tasks.
  • Selection bias and voluntary participation can impact the representativeness of findings.

TIPS

  • Educate students and teachers on the strengths and limitations of AI tools. This includes teaching that AI tools like ChatGPT excel at generating well-structured and accessible legal text but struggle with legal depth, such as accurately citing statutes and case law. Additionally, AI-generated responses may sound authoritative yet contain reasoning errors or omissions. Understanding these strengths and weaknesses can help students use AI more critically.
  • Design assignments that require critical analysis beyond AI capabilities. Assignments should emphasize legal reasoning, case interpretation, and argumentation—areas where AI tends to oversimplify or provide generic responses.
  • Encourage prompt engineering to optimize AI outputs. Training students to refine their prompts can improve the relevance and accuracy of AI-generated content, mitigating issues like vague or surface-level responses.
  • Regularly assess AI use and adapt teaching strategies accordingly. Monitoring how students interact with AI helps identify potential over-reliance, guiding adjustments in teaching approaches to reinforce independent critical thinking.


You are free to share and adapt, if you give appropriate credit and use it non-commercially. More on Creative Commons


Are you looking for funding to innovate your education? Check our funding calendar!