Towards unbiased assessment of adaptive expertise
Higher education prepares students to take on grand societal challenges, for which they need adaptive expertise. Because current assessment tools for adaptive expertise are mainly self-assessments, which are prone to bias, this project creates, tests, and validates real-world design scenarios, built with generative AI, as an alternative assessment tool for adaptive expertise.
Background information
To address grand societal challenges, students and professionals need to develop adaptive expertise – the ability to perform at a high level when facing new or challenging problem situations. Adaptive expertise combines deep expertise with innovativeness. However, measuring adaptive expertise is difficult, as most measurement instruments rely on self-assessments, which are susceptible to bias. A potential alternative for assessing adaptive expertise is the use of design scenarios, which present real-world, open-ended problems. Nevertheless, current design-scenario efforts do not take into account the individual’s current expertise or the relatedness of the task to the individual’s knowledge domain.
Project description
For this reason, this project develops and validates design scenarios that vary in task complexity and task relatedness. Using generative AI (GPT-4), we developed a series of 72 design scenarios that varied systematically along two dimensions: task complexity and scientific domain. Next, we asked a panel of 530 students and young professionals to propose solutions to the problems presented in four of these scenarios. We then used Llama 3, a locally run large language model, to assess the extent to which the solutions demonstrated adaptive expertise.
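The systematic variation described above can be sketched as a cross-product over the two dimensions. The following is a minimal illustration only: the dimension values and prompt wording are hypothetical assumptions, not the project's actual materials, and the abstract does not specify how the 72 scenarios factor across the two dimensions.

```python
from itertools import product

# Hypothetical dimension values; the real study used levels and domains
# that together yielded 72 scenarios.
complexity_levels = ["low", "medium", "high"]
domains = ["biomedical engineering", "environmental science", "data science"]

def build_prompt(complexity: str, domain: str) -> str:
    """Compose a generation prompt for one design scenario (illustrative wording)."""
    return (
        f"Write an open-ended, real-world design scenario in {domain} "
        f"with {complexity} task complexity. The scenario should have "
        "no single correct solution."
    )

# One prompt per (complexity, domain) combination.
scenarios = [build_prompt(c, d) for c, d in product(complexity_levels, domains)]
```

Each resulting prompt would then be sent to the generative model, so that every cell of the complexity-by-domain grid is covered by at least one scenario.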
Aims
This project aims to develop a valid assessment tool for adaptive expertise that takes into account the individual’s expertise and the relatedness of the task to the individual’s knowledge domain.
Results and conclusions
After several tests, we concluded that the AI assesses the solutions consistently and reliably. Moreover, the assessed solutions could be compared against three variables that should theoretically relate to adaptive expertise: (1) the relatedness of the problem domain to the content of the education program, (2) the relatedness of the task’s complexity to the level of the education program, and (3) self-assessed adaptive expertise. Statistical analyses showed that all three variables relate to the assessed adaptive expertise in the theoretically expected direction, which supports the validity of the approach.