Open Pool Exams
In courses with complex mathematical content, students often struggle with abstract reasoning. Written exams are usually the main form of assessment, but they do not sufficiently motivate or guide weaker students when the subject matter is demanding. There is a need for assessment formats that are more transparent, more motivating, and more supportive of sustained engagement. This project therefore introduces the Open Pool Exam intervention to increase motivation, foster regular engagement, and scaffold deeper learning strategies for students struggling with abstract content. Data include pre- and post-test results, student surveys, and interviews.
Background information
The intended learning outcomes in mathematics and theoretical computer science include accurately recalling and applying formal definitions, constructing and explaining proofs, and transferring knowledge to unfamiliar problems. In the Open Pool Exam format, students receive a pool of exam-style questions at the start of the course, covering key concepts and learning goals at varying levels of complexity. Students are informed that some of these questions will appear on the final exam alongside new ones. This approach, related to public exam formats (Wiggins et al., 2023), creates transparency and increases motivation while still requiring knowledge transfer to novel tasks. Engaging with the pool throughout the course enables retrieval practice and creates opportunities for feedback and clarification. The expected effects of the Open Pool Exam intervention are grounded in educational theories such as expectancy-value motivation (Eccles & Wigfield, 2002), deep versus surface approaches to learning (Marton & Säljö, 1976), test-enhanced learning (Roediger & Karpicke, 2006), and self-regulated learning (Zimmerman, 2002).
Aims
This project aims to answer the following research question:
- How, and to what extent, is the Open Pool Exam intervention effective in supporting struggling students to reach the intended learning outcomes in a course with complex and abstract mathematical content?
Project description
This project will be carried out in the context of the pilot course ‘Logic’, but the results can also inform assessment in other courses with demanding mathematical content. To answer the research question, data will be collected with a pre-test, which measures baseline proficiency in the course's learning objectives and provides a benchmark against which later progress can be compared, and a post-test in the form of the final exam. After the final exam, students will complete a survey with items targeting motivation, self-regulation, study strategies, and perceptions of the intervention. Survey findings will be complemented with student interviews, and a sketch of how pre- and post-test results could be compared is given below.
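As a minimal illustration of how the pre-/post-test comparison might be carried out, the sketch below pairs each student's baseline score with their final-exam score and tests the gain; the scores, scale, and choice of a paired t-test are purely hypothetical assumptions for illustration and not a fixed part of the project's analysis plan.

```python
# Hypothetical sketch: compare pre-test (baseline) and post-test (final exam)
# scores for the same students. Data and scale (0-100) are invented.
from scipy import stats

pre_scores = [42, 55, 38, 61, 47, 50, 33, 58]
post_scores = [58, 70, 52, 75, 60, 66, 49, 71]

# Paired t-test: did scores change from pre- to post-test?
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)

# Mean gain as a simple descriptive indicator of progress.
mean_gain = sum(post - pre for post, pre in zip(post_scores, pre_scores)) / len(pre_scores)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, mean gain = {mean_gain:.1f} points")
```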
References
- Eccles, J. S., & Wigfield, A. (2002). Motivational beliefs, values, and goals. Annual Review of Psychology, 53, 109–132. https://doi.org/10.1146/annurev.psych.53.100901.135153
- Marton, F., & Säljö, R. (1976). On qualitative differences in learning: I — Outcome and process. British Journal of Educational Psychology, 46(1), 4–11.
- Roediger, H. L., & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), 249–255. https://doi.org/10.1111/j.1467-9280.2006.01693.x
- Wiggins, B. L., Lily, L. S., Busch, C. A., Landys, M. M., Shlichta, J. G., & Shi, T. (2023). Public exams may decrease anxiety and facilitate deeper conceptual thinking. Journal of STEM Education: Innovations and Research, 24(2). https://www.jstem.org/jstem/index.php/JSTEM/article/view/2624
- Zimmerman, B. J. (2002). Becoming a self-regulated learner: An overview. Theory Into Practice, 41(2), 64–70. https://doi.org/10.1207/s15430421tip4102_2