Generative AI: using GenAI for essay co-writing
In the course Digital Innovation, part of the Bachelor Sciences and Innovation Management, students write an argumentative essay about a controversial topic related to digital technologies. They are asked to provide evidence for both sides of the debate and to make an informed judgement at the end. In the academic years 2022-2023 and 2023-2024, the use of ChatGPT was integrated into this assignment. The goal of the assignment was to help students build critical-reflective skills, verification skills, and co-writing skills by having them assess an AI-generated essay, improve it, and reflect on their results. In this assignment, the students were provided with an AI-generated essay, asked to assess it using a rubric, tasked with improving it following their assessment, and asked to fill in a reflection sheet based on guiding questions.
The Challenge
The project was started shortly after ChatGPT became available, because students were likely to use it to write their essays. The goal was to provide them with the structure and skills to use these tools in a reflective and responsible way, and to demonstrate the shortcomings and biases of tools like ChatGPT in argumentative writing.
In the year 2022-2023, the teachers generated the essays using ChatGPT themselves to ensure a uniform starting point, especially since not all students were proficient users of such tools at that early stage. While this indeed ensured that all AI-generated essays were uniform, the teachers realized that this set-up was artificial and omitted the crucial step of students generating texts with AI tools themselves. They found it important that students experience how prompting influences the outputs of ChatGPT.

As a result, in the year 2023-2024 the assignment was revised. During a workshop, the students were instructed how to generate their essay themselves. They chose an essay topic from a list the teachers prepared. They then used a rubric, developed by the teachers, to assess the quality of the AI-generated essay. This rubric included criteria such as "sufficient evidence from reliable sources", "precise definitions", and "connection of arguments". In this way students learned to spot the weaknesses of AI-generated texts, which are often related to those criteria.

The second part of the assignment asked students to improve, or even deviate from, the AI-generated text based on their assessment in the first part. This aimed to help them explore where human input is most valuable in the co-writing process. To support this, the teachers organized two feedback moments during the assignment period to discuss the students' progress. In the third part of the assignment, students completed a reflection sheet with questions about their learning experience and their perception of the responsible use of these tools. The teachers asked them to provide specific examples from their personal experience rather than generic statements.

Overall, the learning experience of the students included both learning about the technology and its shortcomings in terms of text generation (as this relates to the content of the course) and learning to provide arguments for both sides of a controversial topic and to make an informed judgement at the end.

Lessons learned & Tips
To assess the assignment, the teachers included specific questions in the course evaluation and presented their assignment design to fellow teachers at a number of education-related meetings. They found this assignment design effective for teaching students about:
1) the strengths and weaknesses of AI tools for argumentative writing and
2) the responsible use of such tools.
In particular, the first part of the assignment (i.e., the assessment of the AI-generated essay) was a valuable way to challenge the students to interrogate and critique what seems very comprehensive and well-written at first glance. Providing the rubric with explicit essay criteria helped guide the students' attention to specific quality issues and to the weaknesses and strengths of the AI tools.

The feedback from the students, received through the course evaluations in both years, was positive. They appreciated that AI tools were integrated into a course and that guidance was provided on how to assess AI-generated texts. The students found these tools helpful for structuring the essay and providing initial arguments. Additionally, the teachers observed that students usually find the AI-generated essays already quite good and sometimes struggle to see how they can improve them (beyond verifying information and adding sources). They also very rarely challenge the conclusions drawn by ChatGPT (which are almost always un-nuanced and generic). A few students expressed that they felt constrained by having to start from an AI-generated essay with a pre-determined structure.
The teachers wrote a publication summarizing their conclusions from the first year, 2022-2023. They plan to continue with this assignment in the academic year 2024-2025 with the same set-up. They want to explore further how they can support higher-performing versus lower-performing students, as they observe that students use AI tools differently and benefit from them differently. The teachers are open to sharing any materials they developed: rubrics, the mini-lecture, and the assignment instructions. One point for further development is thinking more about the co-writing aspect: in the current assignment design, they could not implement true co-writing with AI; rather, students gave feedback to the AI once and then took over the writing process.