Improving Experimental Design for a Cognitive Psychology Assignment
This project aimed to enhance experimental designs in a cognitive neuroscience course using Generative AI tools (in this case Perplexity). Students used Perplexity with pre-defined prompts to identify experimental confounds, improve randomization strategies, and optimize experimental designs, such as determining the number of trials or selecting stimuli. By using AI, the students refined experimental procedures, identified possible pitfalls or biases, and explored alternative methods to improve data reliability and validity.
Outcomes suggest improvements in experimental designs and a greater awareness of methodological issues, such as potential confounds and control factors. These improvements provide valuable insights for conducting future experiments and understanding how knowledge of cognitive processes is obtained.
The Challenge
In a level 2 Cognitive Neuroscience Bachelor course in which students design an experiment, students often have difficulty identifying experimental confounds, maintaining methodological accuracy, and optimizing their designs. The goal of the assignment is to help students understand how experimental designs contribute to insights into brain and cognitive functions, ensuring their designs are scientifically valid.
Key difficulties shown by students include:
- Recognizing and controlling for confounding variables.
- Implementing randomization strategies.
- Balancing within- vs. between-subjects designs.
- Determining appropriate sample sizes, i.e. the number of participants and trials within an experiment.
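Two of the challenges above, trial randomization and counterbalancing condition order, can be made concrete with a short sketch. This is an illustrative example only, not part of the student assignment; the stimulus names and function names are invented for the demonstration, and it uses only Python's standard library.

```python
import random

def randomize_trials(stimuli, n_repeats=2, seed=None):
    """Build a trial list that repeats each stimulus n_repeats times,
    then shuffle it so stimulus order cannot act as a confound."""
    rng = random.Random(seed)  # seedable for reproducible trial lists
    trials = stimuli * n_repeats
    rng.shuffle(trials)
    return trials

def counterbalance(conditions, participant_id):
    """Rotate the condition order per participant (a simple Latin-square
    rotation) so order effects are balanced across the sample."""
    k = participant_id % len(conditions)
    return conditions[k:] + conditions[:k]

# Hypothetical within-subjects design with two condition blocks
stimuli = ["face_01", "face_02", "house_01", "house_02"]
trials = randomize_trials(stimuli, n_repeats=3, seed=42)
order = counterbalance(["congruent", "incongruent"], participant_id=7)
```

Even a sketch like this makes the design choices explicit: how many repeats per stimulus, whether shuffling is reproducible, and how condition order varies across participants.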
An AI intervention – i.e., the use of pre-defined prompts in Perplexity – was specifically designed to guide students through these challenges by activating critical thinking about the components of experimentation, rather than treating it simply as a ‘measurement,’ ‘test,’ or ‘question.’
Students worked in groups of 3 or 4 and were allowed to use Gen AI (Perplexity) to improve their experimental design, following a document with pre-defined questions (prompts) for the AI tool. The prompts were informed by prior teaching experience and theoretical insights into common pitfalls in experimental design. They were structured to guide students toward operationalizing their research questions effectively, helping them consider how specific design choices directly influence the validity and reliability of their results. Framing the prompts around methodological issues makes students more aware that clever experiments are needed to answer research questions, rather than treating the design as a procedural task.

For most groups, the integration of Gen AI resulted in notable improvements in experimental designs. Challenges included adapting the AI-generated suggestions to specific experimental contexts while balancing their goals with practical constraints (e.g., sample size and online testing environments). The outcomes were consistent across student groups: refined designs and greater methodological awareness.

Insights gained from this project could inspire educators to adopt AI tools in other courses, fostering critical thinking and innovation in experimental design while providing hands-on learning opportunities.
Lessons learned
Tips