AI in Higher Education

01 December 2023

Educational project


This USO project seeks to strengthen reflective awareness among staff and students of the risks and possibilities of generative AI. To this end, educational interventions will be put in place to explore those risks and possibilities. The interventions are intended to generate a representative body of findings about the use of generative AI in educational contexts, and to provide enough input to produce informative guidelines, instructions, and strategies for its use.

Background information

Recent developments in the field of generative AI have brought about a range of technologies that promise to transform higher education. This USO project seeks to strengthen “reflective awareness” among staff and students of the risks and possibilities of generative AI (such as chatbots). By “reflective awareness” we mean that staff and students not only possess passive knowledge of these risks and possibilities, but also develop a critical skill set of AI literacy. We seek to achieve this through concrete educational interventions (at course level) designed to discover these risks and possibilities. The interventions are intended to generate a representative body of findings about the risks and possibilities of using generative AI in our educational contexts, and to provide enough input to produce informative guidelines, instructions, and strategies for when and how to use (and not use) generative AI.

Project description

The interventions are organised into four broad categories, each worked out in a separate work package. During the project's two-year lifespan, all packages go through the same four phases: inventory, monitoring, creation, and evaluation. The interventions take place during the four educational blocks of the two academic years.

In this work package, members explore how AI can support teachers in generating educational content and enhancing instruction while reducing administrative burdens. In general, generative AI offers a variety of possibilities to assist teachers, but at the same time poses new risks for the teaching process, requiring concrete risk mitigation strategies for teachers and their students (Kasneci et al. 2023). Members experiment with AI assistance in their courses, present their findings, and discuss known gaps and how to fill them using controlled experiments during the monitoring phase.
Coordinator: Sjoerd Dirksen

In this work package, members explore, examine, and summarise best practices for supporting student learning with generative AI technologies. Generative AI models such as GPT-4 have achieved remarkable quality in producing on-demand structured and unstructured thematic texts and other types of media. Moreover, GPT-4 has been purposely fine-tuned to generate instruction-oriented content. This opens possibilities for students to employ such models as learning support tools that go beyond answering factual questions. Identifying the possibilities and limitations of such models in terms of instructional support, domain coverage, and, most importantly, didactic viability is among the most important objectives, not only for this project but for the modern system of education at large.
Coordinator: Sergey Sosnovsky

Members of this work package investigate the opportunities for keeping assessment of student performance fair and efficient in the presence of AI, paving the way for more personalised learning experiences in both curricular and professional education. This involves issues concerning the validity and reliability of assessment as well as more formal responses to fraud and plagiarism. Simultaneously, we aim to understand how students employ generative AI for content creation, providing insights into emergent academic practices. Members will experiment with such tools in their courses in close cooperation with students, present their findings, discuss the possibilities and limitations, and test them using controlled experiments. In addition, in close contact with exam commissions across the university, guidelines will be developed to assist exam boards (and staff) in recognising and responding to issues of potential fraud and plagiarism.
Coordinator: Laurence Frank 

Members of this work package propose an interdisciplinary exploration of generative AI and data fundamentals, focusing on tool criticism (Van Es et al. 2021). It entails a dual-track approach. The first track, ‘Generative AI Literacy’, seeks to identify the fundamental AI skills and literacies university students need, and to survey students and teachers on how they currently use generative AI. The second, ‘AI Policy and Integration in the Classroom’, seeks to integrate these skills and literacies into teaching, followed by the collaborative development of a discipline-specific code of conduct for AI use. Expected outcomes include a toolbox, ‘Critical and Responsible Generative AI Use’, that helps teachers incorporate tool criticism into their educational practices and provides hands-on exercises prompting students to raise the critical questions about generative AI tools that responsible use requires. It also includes a method for effectively co-creating codes of conduct with students.
Coordinator: Karin van Es

Aims

This USO project is highly explorative in nature and seeks to grasp the consequences of generative AI for higher education. Through educational interventions, we seek to build a shared body of knowledge about the risks and possibilities of generative AI and about its use, or avoidance, in teaching and learning. We hope this leads to stronger reflective AI awareness among staff and students.

Results

The results and findings of the wide variety of interventions will be presented on a common platform for both staff and students. By involving colleagues and students, in addition to the consortium members, in these interventions, we seek to contribute to building reflective awareness of the nature of generative AI and of its possibilities, limitations, and risks. The educational interventions will be presented in a format based on the CIMO method, making them comparable.

References

Kasneci, Enkelejda, et al. “ChatGPT for good? On opportunities and challenges of large language models for education.” Learning and Individual Differences 103 (2023): 102274.

Van Es, Karin, Mirko Tobias Schäfer, and Maranke Wieringa. “Tool Criticism and the Computational Turn: A ‘Methodological Moment’ in Media and Communication Studies.” M&K Medien & Kommunikationswissenschaft 69.1 (2021): 46-
