Generative AI and Basic Psychological Needs of BA students

18 November 2025

Educational project

Drawing on Self-Determination Theory, this pilot study explores how generative AI (GenAI) affects the basic psychological needs of university students. Through a think-aloud task using GenAI, combined with interviews, the pilot investigates how students experience autonomy, competence, and relatedness while using AI for academic purposes. The participants are students enrolled in the History and Philosophy of Science course. The team aims to gain insight into students' thought processes while using AI, insight that policymakers can use to form a nuanced opinion on the use of GenAI in education.

Background information

Self-determination theory (SDT) predicts a strong correlation between academic performance and support of the basic psychological needs (BPNs) of competence, autonomy, and relatedness. The proposed mechanism is internalization of motivation through BPN support, leading to more autonomous forms of motivation, which in turn have been shown to improve performance in a range of activities from sports to academia (Cerasoli et al., 2014; Meulenbroeks et al., 2024; Ryan & Deci, 2017; Vansteenkiste et al., 2009). Note that this study focuses on BPN support and does not measure the eventual (expected) change in motivation itself. Different hypotheses can be put forward as to whether the use of Large Language Models (LLMs) supports or frustrates BPNs. On the basis of the theory, cases can be made both for and against the support of competence, for example: do students really experience competence while outsourcing their work? The influence of LLM use on BPNs has received little attention so far. Since LLMs are becoming widespread, the team wants to see whether their application in an academic writing assignment fosters or frustrates science students' BPNs. Their goal is to use a think-aloud writing task followed by a semi-structured interview to gain initial insights into this question and a first grip on possible mechanisms.

Project description

The team selected two volunteer students (convenience sampling, voluntary basis) from the History and Philosophy of Science bachelor course “Early Modern Knowledges and the Intercultural Encounter”. After providing informed consent, both students were given a one-hour assignment to write a summary of a scientific article in a field familiar to them. Participants were asked to use ChatGPT (without further instructions) during the assignment so that their real-time interaction with generative AI could be explored. They were invited to voice their actions and thoughts while carrying out the task. Afterwards, the students took part in a short semi-structured interview about their experiences. Since the intervention focused on psychological aspects, the final summary was not assessed. The think-aloud task and interview were audio-recorded, transcribed, and coded with an axial coding scheme based on BPN support or frustration.

The students were very eloquent in voicing their thought processes; the think-aloud protocol worked well for them. They used ChatGPT in different ways, prompting it for a summary, a skeleton, the main points, or the structure of the article. They then referred back to the original source to judge the quality of the GenAI output. These particular students acknowledged being biased against the use of ChatGPT: they would normally not use AI to write summaries, as this goes against the purpose of reading literature, namely understanding the research. Both students explicitly expressed their reservations about using GenAI in this type of assignment. Two mechanisms were observed, illustrating different ways in which the students did use ChatGPT during the task and the related support or frustration of basic psychological needs:

 

  1. When GenAI was used as a sparring partner, i.e., by asking questions about the text and then referring back to the text to check, student utterances during the interview reflected support of their basic psychological need for competence. However, GenAI was reported to return outputs that were not what the student had prompted for, thus apparently frustrating autonomy.
  2. In the other mechanism, competence was mainly frustrated. The student considered getting to the core message of the article to be the learning goal of the assignment. The student asked for a summary of the article but stated that they did not trust the GenAI output in view of possible biases. The use of GenAI therefore frustrated rather than supported their need for competence: it did not help them reach the learning goal. In this case, autonomy was also frustrated by the same perceived implicit biases in ChatGPT (e.g., Western, male, and commercial). Because this bias was perceived as implicit and ever-present, the student did not feel in control.

Since relatedness in SDT is linked purely to interpersonal interaction, and the students performed the task alone, we did not take it into account. For future studies, however, we could look into the way LLMs are perceived as having human characteristics.

The team was somewhat surprised by the evident resistance of these students to the use of GenAI. They regarded using LLMs to create a summary as a form of cheating: getting to a result without really understanding what is going on. Note that this feeling of cheating was mentioned even though the task was not graded. One student put it very eloquently: “I know [GenAI] may influence me in ways I don’t understand”. This warrants further in-depth study of hidden system prompts and biases.

Lessons learned

  • When AI is used as a sparring partner, competence is supported (mechanism 1 above).
  • Autonomy is frustrated whenever AI is perceived to be taking over the actual learning task.
  • Autonomy is also frustrated when AI is perceived as not following the prompts.
  • Students volunteered remarks about irritation with the accommodating tone of the AI’s answers.

 

Take home message

  • Basic psychological needs matter. When supported, they lead to more autonomous forms of motivation, better well-being, and better performance. The use of GenAI may either frustrate or support these needs. We recommend closely aligning the learning goals with the learning activities (constructive alignment) and thus with the use (or not) of GenAI.
  • Start with the learning goals of the course. Do they change if GenAI is allowed?
  • What teaching activities are needed to reach these goals?
  • Ask yourself these questions:
    1. Which educational methods and tools are necessary for these teaching activities?
    2. Does GenAI use support reaching the learning goals? If so, provide dedicated and scaffolded activities. If not, use other methods to reach those learning goals.

 

References

Cerasoli, C. P., Nicklin, J. M., & Ford, M. T. (2014). Intrinsic motivation and extrinsic incentives jointly predict performance: A 40-year meta-analysis. Psychological Bulletin, 140(4), 980-1008. https://doi.org/10.1037/a0035661

Meulenbroeks, R., van Rijn, R., & Reijerkerk, M. (2024). Fostering secondary school science students’ intrinsic motivation by inquiry-based learning. Research in Science Education, 54(3), 339-358.

Ryan, R. M., & Deci, E. L. (2017). Self-determination theory: Basic psychological needs in human motivation, social development, and wellness. Guilford.

Vansteenkiste, M., Sierens, E., Soenens, B., Luyckx, K., & Lens, W. (2009). Motivational profiles from a self-determination perspective: The quality of motivation matters. Journal of Educational Psychology, 101(3), 671-688. https://doi.org/10.1037/a0015083

 
