29 April 2025

Knowledge item

Improving Experimental Design for a Cognitive Psychology Assignment

This project aimed to enhance experimental designs in a cognitive neuroscience course using Generative AI tools (in this case Perplexity). Students used Perplexity with pre-defined prompts to identify experimental confounds, improve randomization strategies, and optimize design choices such as the number of trials or the selection of stimuli. By using AI, the students refined experimental procedures, identified possible pitfalls and biases, and explored alternative methods to improve data reliability and validity.

Outcomes suggest improvements in experimental designs and a greater awareness of methodological issues, such as potential confounds and control factors. These improvements provide valuable insights for conducting future experiments and understanding how knowledge of cognitive processes is obtained.

The Challenge

In a level 2 Cognitive Neuroscience Bachelor course in which students have to design an experiment, students often have difficulty identifying experimental confounds, maintaining methodological accuracy, and optimizing their designs. The goal of the assignment is to help students understand how experimental designs contribute to insights into brain and cognitive functions, ensuring their designs are scientifically valid.


Key difficulties shown by students include:

  • Recognizing and controlling for confounding variables.
  • Implementing randomization strategies.
  • Balancing within- vs. between-subjects designs.
  • Managing issues such as the number of participants (sample size) and the number of trials within an experiment.


An AI intervention – i.e. the use of pre-defined prompts in Perplexity – was specifically designed to guide students in addressing these challenges by activating critical thinking about the components of experimentation, beyond simply treating it as a ‘measurement,’ ‘test,’ or ‘question.’


Students worked in groups of 3 or 4 and were allowed to use generative AI (Perplexity) to improve their experimental design, following a document with pre-defined questions (prompts) for the AI tool.

These prompts aimed to:

  • Identify confounds: Perplexity detected issues like participant fatigue, visual stimulus properties, and randomization biases, prompting students to standardize stimuli and randomize trial sequences.
  • Optimize randomization strategies: e.g. block designs and counterbalancing to ensure unbiased designs (see the randomization sketch after this list).
  • Choose a within- or between-subjects design: Most groups adopted within-subjects designs for increased statistical power, supported by literature and AI recommendations.
  • Determine the number of trials and the duration of an experiment: AI feedback helped students trade off statistical power against participant fatigue, advising on a suitable number of trials and intervals (see the power sketch after this list).
  • Address ethics: Students incorporated AI-guided ethical considerations, including informed consent, privacy, and participant comfort.
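
To make these strategies concrete, here is a minimal Python sketch of the first two points: counterbalancing condition order across participants and shuffling trial order within blocks. It is an illustration only, not part of the course materials; the condition names and trial counts are invented.

  import itertools
  import random

  CONDITIONS = ["congruent", "incongruent"]  # hypothetical conditions
  TRIALS_PER_CONDITION = 20                  # hypothetical trial count

  def counterbalanced_order(participant_id):
      # Cycle through all possible condition orders so that each order
      # occurs equally often across the sample (counterbalancing).
      orders = list(itertools.permutations(CONDITIONS))
      return orders[participant_id % len(orders)]

  def block_randomized_trials(condition):
      # Shuffle trial order within a block so the stimulus sequence
      # is unpredictable for the participant.
      trials = [(condition, stimulus) for stimulus in range(TRIALS_PER_CONDITION)]
      random.shuffle(trials)
      return trials

  # Build the full trial list for one participant.
  order = counterbalanced_order(participant_id=3)
  session = [trial for condition in order
             for trial in block_randomized_trials(condition)]

In the same spirit, the power trade-off behind the design-type and number-of-trials prompts can be made explicit with the statsmodels library; the effect size and the alpha and power targets below are conventional assumptions, not values from the course.

  from statsmodels.stats.power import TTestPower, TTestIndPower

  # Participants needed to detect an assumed medium effect (Cohen's d = 0.5)
  # at alpha = 0.05 with 80% power, for paired vs. independent comparisons.
  n_within = TTestPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
  n_between = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)

  print(f"Within-subjects (paired) design: {n_within:.0f} participants")    # ~34
  print(f"Between-subjects design: {n_between:.0f} participants per group")  # ~64

Under these assumptions the paired design needs roughly 34 participants, versus roughly 64 per group for the between-subjects design, which illustrates the statistical-power argument behind most groups' choice of within-subjects designs.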


The prompts were informed by prior teaching experiences and theoretical insights into common pitfalls in experimental design. They were structured to guide students toward operationalizing their research questions effectively, helping them think about how specific design choices directly influence the validity and reliability of their results. Framing the prompts around methodological issues helps make students more aware that clever experiments are needed to answer research questions, rather than experimental design being just a procedural task.


For most groups, the integration of generative AI resulted in notable improvements in experimental designs:

  • Enhanced accuracy: Confounds like stimulus size, environmental variability, and randomization errors were addressed, resulting in more robust and reliable experimental designs.
  • Improved randomization: Block- and trial-level randomization strategies reduced experimental confounds, improving data quality.
  • Increased awareness of ethical aspects such as the importance of informed consent and participant wellbeing.


Challenges included adapting the AI-generated suggestions to specific experimental contexts while balancing design goals with practical constraints (e.g. sample size and online testing environments).


The outcomes were consistent across student groups: refined designs and greater methodological awareness.


Future steps include:

  • Using the same approach with a new group of students (Term 2, 2025) to evaluate whether the current outcomes are replicated.
  • Sharing insights with colleagues involved in similar courses via the AI and Psychology colloquium.
  • Addressing possible issues, such as the balance between the use of AI suggestions and students' own creative decision-making in experimental design.


Insights gained from this project could inspire educators to adopt AI tools in other courses, fostering critical thinking and innovation in experimental design while providing hands-on learning opportunities.


Lessons learned

  • Students were able to use Generative AI to identify possible design confounds and improvements.
  • Using fixed prompts made it possible to create AI-guided feedback that enhances methodological insights.
  • The integration of AI into course assignments like these promotes active learning by inspiring students to critically evaluate AI suggestions and adapt them to experimental contexts.

Tips

  • Start with small, structured AI-assisted tasks using predefined prompts.
  • Emphasize the importance of critical evaluation of the AI feedback.
  • Use AI to complement, not replace, students' own thinking.
  • Discuss the role of AI in enhancing educational processes.


You are free to share and adapt this work if you give appropriate credit and use it non-commercially. More on Creative Commons
