21 October 2025

Knowledge item

Assessment cycle

The assessment cycle is a structured, systematic process that supports the design, implementation, evaluation, and improvement of assessments in higher education. It promotes the idea that assessment is not just a tool for grading, but an integral part of learning and curriculum development. The cycle consists of eight interconnected phases, each contributing to the quality and integrity of the assessment process (see Figure 1, based on Berkel et al., 2023; Bijkerk, 2015; Sluijsmans, Joosten-ten Brinke, & van Schilt-Mol, 2015; van de Veen, 2016).

 

 

Figure 1. Assessment cycle


After formulating the learning objectives for your course, you can start addressing the question of how to determine the extent to which students have achieved these objectives—that is, how you will assess them. You decide on the assessment method(s) and consider which learning activities are suitable for helping students attain the learning objectives. To establish the basic design of the assessment within a course, you can use the five guiding questions proposed by Stiggins (1999).

Questions to shape your basic design

  1. Why assess? – Define the purpose: learning (formative), decision-making (summative), or evaluation (van Schilt-Mol, 2022).
  2. What to assess? – Determine the content and mastery level, using constructive alignment (Biggs, 1996) and mastery frameworks (Bloom et al., 1956; Krathwohl, 2002; Miller, 1990).
  3. How to assess? – Choose assessment methods, assign their weight, and justify your choices.
  4. Who assesses, and who is assessed? – Decide on the assessor(s) (teacher, self, peer, or external) and determine whether the assessment is individual or group-based.
  5. When to assess? – Set the frequency and timing: at the start, during, and/or at the end of the course.

 

 

According to Biggs’s model of constructive alignment (Biggs, 1996; Biggs, Tang, & Kennedy, 2022), aligning learning objectives, teaching activities, and assessment is essential for ensuring the quality of education. Completing the assessment matrix helps achieve this alignment and supports the validity of your assessment. It shows how learning objectives are addressed, which assessment methods are used, the corresponding Bloom’s levels (Bloom et al., 1956; Krathwohl, 2002), and how each assessment contributes to the final grade. It is a key tool for making valid, well-founded decisions about student performance.

Example of an assessment matrix

This assessment matrix, from the fictional course Sustainable Urban Development, links each learning goal to its corresponding assessment methods, ensuring all goals are addressed. Each assessment method shows its weight toward the final grade and the mastery level, indicating both the goal’s importance and the level at which it is assessed.

 

Assessment matrix for the course Sustainable Urban Development

 

After completing this course, the student is able to:

| Learning objectives | Individual case report (30%) | Exam with closed-ended questions (20%) | Group-based project (50%) | Formative practice |
|---|---|---|---|---|
| 1) Analyse urban spatial challenges based on social, economic, and ecological dimensions. | Analyse | | | |
| 2) Apply relevant planning theories and models to assess spatial policy measures. | | Remember, Understand, Apply | | |
| 3) Design feasible and sustainable spatial interventions for urban areas, with attention to stakeholder perspectives. | | | Create | |
| 4) Evaluate policy documents and planning visions in terms of coherence, feasibility, and sustainability. | Evaluate | | | |
| 5) Effectively communicate spatial proposals in both written and oral formats to diverse stakeholders. | | | Create | |
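
To see how the weights work out, the sketch below combines three component marks into a final course grade using the 30/20/50 split from the matrix. It is a minimal illustration assuming marks on a 1–10 scale; the pass threshold of 5.5 and the rounding to one decimal are assumptions for demonstration, not part of the matrix.

```python
# A minimal sketch of how the matrix weights combine into a final grade.
# Marks are assumed to be on a 1-10 scale; the pass threshold of 5.5 and
# the rounding to one decimal are illustrative assumptions.

WEIGHTS = {
    "individual_case_report": 0.30,
    "exam_closed_ended": 0.20,
    "group_based_project": 0.50,
}

def final_grade(marks: dict[str, float]) -> float:
    """Weighted average of the component marks."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return round(sum(WEIGHTS[part] * marks[part] for part in WEIGHTS), 1)

marks = {
    "individual_case_report": 7.0,
    "exam_closed_ended": 6.0,
    "group_based_project": 8.0,
}
grade = final_grade(marks)  # 0.3*7.0 + 0.2*6.0 + 0.5*8.0 = 7.3
print(grade, "pass" if grade >= 5.5 else "fail")  # 7.3 pass
```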

 

After designing the basic structure of your course assessment, you develop a detailed design for the chosen assessment type. This can take two forms:

  • Exam specification table – a blueprint for written, digital or oral exams.
  • Assignment specification form – a framework for assignments and performance-based assessments.

 

Examples of both are provided below. Once the exam specification table or assignment specification form is complete, the next step is to develop the specific assessment materials.

Exam specification table

An exam specification table is used for written and digital exams with open and/or closed questions, as well as for oral exams. The table outlines which topics (based on the learning objectives) are being assessed, at what cognitive level, and how many questions (items) are assigned to each. For exams with open questions, the specification table may also indicate the number of points assigned to each question or topic, as well as their relative weight. Based on this table, you then develop the exam questions and a corresponding answer model.

 

Below you can see an example of an exam specification table for an exam with open-ended questions, based on learning objective 2 from the fictional course Sustainable Urban Development: “Apply relevant planning theories and models to assess spatial policy measures.”

 

| Topics | Remember | Understand | Apply | Questions per topic |
|---|---|---|---|---|
| 1. Rational Planning Theory | 3 | 3 | | 6 |
| 2. Communicative Planning Theory | | 3 | 4 | 7 |
| 3. Advocacy and Equity Planning | 4 | | | 4 |
| 4. Strategic Spatial Planning | | | 3 | 3 |
| 5. Cost-Benefit Analysis & Evaluation Models | | 3 | 3 | 6 |
| 6. Systems Thinking and Complexity in Planning | | 4 | 4 | 8 |
| 7. Integration of Models in Policy Assessment | | | 6 | 6 |
| Total questions per level | 7 | 13 | 20 | 40 |
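
Because arithmetic slips in a specification table are easy to miss, it can help to keep the blueprint as data and check the totals automatically. The sketch below encodes the example table above; the data structure and names are illustrative assumptions rather than a prescribed format.

```python
# A minimal sketch: the exam specification table as data, with automatic
# consistency checks on row and column totals. The topic names come from
# the example above; the structure itself is an illustrative assumption.
BLUEPRINT = {  # topic: (Remember, Understand, Apply)
    "Rational Planning Theory": (3, 3, 0),
    "Communicative Planning Theory": (0, 3, 4),
    "Advocacy and Equity Planning": (4, 0, 0),
    "Strategic Spatial Planning": (0, 0, 3),
    "Cost-Benefit Analysis & Evaluation Models": (0, 3, 3),
    "Systems Thinking and Complexity in Planning": (0, 4, 4),
    "Integration of Models in Policy Assessment": (0, 0, 6),
}

questions_per_topic = {topic: sum(cells) for topic, cells in BLUEPRINT.items()}
questions_per_level = [sum(cells[i] for cells in BLUEPRINT.values())
                       for i in range(3)]
total = sum(questions_per_topic.values())

print(questions_per_level, total)  # [7, 13, 20] 40
assert total == sum(questions_per_level), "level totals must match grand total"
```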

Assignment specification form

When designing assignments or performance-based assessments—such as papers, presentations, portfolios, practical assignments, theses, or internships—an assignment specification form can help shape a clear, well-structured task and define transparent evaluation criteria. Below we provide an example of an assignment specification form from van de Veen (2016). A step-by-step guide for constructing an assignment specification form is also available.

 

Instructions
A key part of this course is to develop skills in analysing and further exploring various debates in the academic literature about the history of Europe as a whole, and in particular about the diverse interpretations and images of Europe. During the course meetings, you have explored and debated various theories, issues, and points of view. In this essay, you will have an opportunity to explore one topic in depth and to develop your own argument in detail in writing.

Resources

  • Course readings as provided
  • Course debates
  • Relevant scholarly literature (which you need to find yourself)

Products

  • An essay (length: 2500 words, with a 10% margin either way)
  • The essay constitutes 50% of the final mark for this module.

Assessment criteria
The assessment sheet is provided in the appendix.

Supervision and help
You may submit a first version of the essay by November 20th. You will receive written and oral feedback on this version the week after. If you have questions, please send an email to jim@arts.nl.

Submission and feedback
Submission date: December 6th – no extensions possible.

Both a digital and a printed version of the essay must be submitted.

  • Printed version: hand in during the lecture or at the pigeonhole of Jim Smith in room 6.1 at the Faculty of Arts building.
  • Digital version: send by email to jim@arts.nl.

Standard setting is also important in the construction phase of an assessment. It defines how performance will be judged and what counts as passing. This may involve assigning a mark (e.g., 6.5 out of 10) or a performance qualification such as sufficient/insufficient, possibly with additional categories such as good and excellent. It involves setting a cut-off score, using either an absolute method (comparing results to fixed criteria), a relative method (comparing results to the performance of the group), or a combination of both. In the Netherlands, absolute methods are often used, with a cut-off score of 5.5 out of 10.
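
To illustrate an absolute method, the sketch below converts a raw exam score into a mark on the 1–10 scale, with the cut-off score mapped to 5.5. The piecewise-linear conversion is one common approach, assumed here purely for illustration; institutions may use other conversion rules or apply a guessing correction first.

```python
# A minimal sketch of absolute standard setting: a raw score equal to the
# cut-off maps to the pass mark 5.5, and scores above and below it are
# scaled linearly to the 1-10 range. This piecewise-linear conversion is
# an illustrative assumption, not the only possible method.

def score_to_mark(score: float, max_score: float, cutoff: float) -> float:
    if score >= cutoff:
        # Linear from (cutoff, 5.5) up to (max_score, 10.0).
        mark = 5.5 + 4.5 * (score - cutoff) / (max_score - cutoff)
    else:
        # Linear from (0, 1.0) up to (cutoff, 5.5).
        mark = 1.0 + 4.5 * score / cutoff
    return round(mark, 1)

# Example: a 40-point exam where 24 points are required to pass.
print(score_to_mark(24, 40, 24))  # 5.5 -> exactly at the cut-off
print(score_to_mark(32, 40, 24))  # 7.8
print(score_to_mark(16, 40, 24))  # 4.0
```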

 

In this phase, students take the assessment. The conditions under which they do so can affect the results—especially their reliability—so it is important to plan them carefully. Assessments can be administered in two types of environments. In a controlled environment, usually set at a fixed time, access to tools like the internet or generative AI can be limited. In an uncontrolled environment, students complete assessments over a longer period with little to no teacher oversight. Each setting requires different considerations during the administering phase, as outlined below.

Administering assessments in a controlled environment

When administering an assessment in a controlled environment, it is essential to carefully account for the following conditions:

  • An assessment environment should be quiet, well-lit, and peaceful to allow students to demonstrate their learning calmly.
  • Pre-exam information, in the course manual and on a desk sheet at the exam, should include: exam duration, answering method, allowed materials, and exam conditions.
  • Scoring and grading details must be clear: indicate the maximum points per open-ended question; for closed questions, specify the point system (and whether a guessing correction applies). Also, explain how the final mark is determined.
  • Procedural instructions should cover how to ask questions during the exam, rules for late arrivals, early leaving, toilet visits, and how to submit the exam.
  • In digital exams, provide clear guidance on where to get help for login or connectivity issues.
  • Inform students periodically about the remaining time.

Administering assessments in an uncontrolled environment

For exams or assignments in an uncontrolled environment, it is crucial to clearly specify the following elements of the assignment specification form:

  • What is expected of the student;
  • Which tools and sources are permitted;
  • The assessment criteria or rubrics that will be used;
  • The deadline for submission;
  • The time frame within which the work will be marked.

 

In this phase, student performance is evaluated and converted into a result, expressed as a numerical mark or another form of qualification. Further analysis of student results (phase 6) is needed to determine the final results. The sections below provide further information on this step for various types of assessment.

Exams with closed-ended questions

Student answers to exams with closed-ended questions may be collected on paper or digitally. Digital responses can be compared automatically with an answer key using assessment software. Paper responses are processed with scanning equipment and then marked automatically or entirely by hand. Before determining students’ marks, it is essential to analyse the exams and address any quality issues (see phase 6). After any adjustments have been made to the exam or scoring, students’ total scores determine whether they qualify for selection, pass or fail, and, where applicable, which mark or qualification is awarded.

Exams with open-ended questions

Exams with open-ended questions can be completed in writing or digitally. Students’ responses to the questions are assessed using a marking guide or rubrics to determine the degree of alignment with the expected responses, and points for each question are awarded accordingly. The total score informs the decision on whether the student is selected or has passed/failed, and, if applicable, which qualification or mark is awarded.

Oral exams

Oral exams are conducted in person, preferably with two examiners: one leading the conversation with the student and the other taking notes and scoring using an assessment form or marking guide. If there is only one examiner, the session must be recorded. Recording is also recommended when two examiners are present. A scoring system is used to determine the final score. Immediately after the assessment, a mark or qualification is assigned and the final decision – pass, fail, or selection – is communicated to the student as soon as possible.

Assignments

Students can submit assignments through a learning management system (LMS), a plagiarism checker, or by email. Their quality is preferably assessed using rubrics, or otherwise with clearly defined assessment criteria. For extensive work such as a thesis, it is recommended to involve two assessors to enhance reliability. Using the rubrics or criteria, a decision is made on whether the student has been selected or has passed or failed, and, if applicable, a qualification or mark is awarded.

Portfolio assessment

In portfolio assessments, students compile their work products and reflections over an extended period. Assessment criteria or rubrics may be holistic or analytical, with the quality of the evidence and the accompanying reflection at the core. Students and/or peers contribute by providing feedback and, in some cases, conducting assessments. The assessor—or, for greater reliability, a panel of assessors—then determines whether the portfolio meets the required standards, resulting in a pass/fail decision, mark, or qualification.

Skills exams

In skills exams, students are observed on the spot and scored against standardised criteria. Ideally, two assessors are involved, or the assessment is recorded for reference. The final result, with full substantiation, is communicated as soon as possible after students have completed the skills exam. Based on the final score and the cut-off score, a decision is made on whether the student is competent or not yet competent.

Presentations

Presentations can be delivered individually or in groups, using various formats such as multimedia, posters, no media, or PowerPoint. Clear rubrics or criteria are preferred to ensure transparency, and involving multiple assessors—whether fellow teachers or peers—can further improve reliability. Finally, the score from an individual assessor, or the combined scores from all assessors, is used to determine the final mark or qualification.

Practical assignments and internships

In practical assignments or internships, students are evaluated on both the product they create and the process leading to it. Assessment is usually based indirectly on internship reports and professional products—such as advice, policy documents, or tools—and sometimes directly through observation. At the end of the internship, students often give a presentation to share their findings. Rubrics and assessment criteria should be used to guide the assessment. The assessment of the product, observation, or presentation is often conducted by the university lecturer, with supervisors or mentors also contributing to the process. While these external assessors are not formal examiners and cannot determine the final mark, they can provide an assessment recommendation. The final result—usually expressed as pass/fail or a mark—is determined by combining the external recommendation with the internal assessment.

 

Following the marking and processing phase, the quality of the assessment must be evaluated. If shortcomings are identified, they may result in unjust pass/fail decisions. To address this, adjustments to the exam questions, answer keys, assessment instruments, or cut-off scores may be required. The following section provides guidance on how to do this for different assessment methods. Once any necessary corrections are made, the total scores are used to decide whether students are selected, whether they pass or fail, and, if relevant, which mark or qualification they receive. The examiner determines the final results, records them and is responsible for communicating them to the students.

How to analyse the assessment quality before determining final results

  • For exams with closed-ended questions, psychometric analysis can often be carried out directly within the digital assessment system. This typically includes examining the difficulty level of items (p-value), the overall reliability of the assessment (Cronbach’s alpha), and the correlation of each item with the total score (Rit) and with the total score excluding that item (Rir). Utrecht University provides courses and manuals to support you in carrying out these analyses.
  • For assessments with open-ended questions, a similar (manual) analysis is possible. Treat each question as an item, record the points each student earns per question, and use these data to carry out the psychometric analysis. A minimal sketch of such an analysis is shown below.
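
The sketch below illustrates such an item analysis, assuming item scores are available as a students × items matrix. The function and variable names are illustrative; dedicated assessment software reports the same statistics automatically.

```python
# A minimal sketch of a classical item analysis, assuming item scores are
# stored as a students x items matrix (0/1 for closed questions, points
# for open questions). Names are illustrative assumptions.
import numpy as np

def item_analysis(scores: np.ndarray, max_item_score: float = 1.0):
    n_students, n_items = scores.shape
    totals = scores.sum(axis=1)

    # Difficulty (p-value): mean proportion of the maximum score per item.
    p_values = scores.mean(axis=0) / max_item_score

    # Rit: correlation of each item with the total score.
    # Rir: correlation with the total score excluding the item itself.
    rit = np.array([np.corrcoef(scores[:, i], totals)[0, 1]
                    for i in range(n_items)])
    rir = np.array([np.corrcoef(scores[:, i], totals - scores[:, i])[0, 1]
                    for i in range(n_items)])

    # Cronbach's alpha from the item and total-score variances.
    k = n_items
    alpha = (k / (k - 1)) * (1 - scores.var(axis=0, ddof=1).sum()
                             / totals.var(ddof=1))
    return p_values, rit, rir, alpha

# Fabricated 0/1 scores for five students on four items, for illustration.
scores = np.array([[1, 1, 0, 1],
                   [1, 0, 0, 1],
                   [0, 1, 0, 0],
                   [1, 1, 1, 1],
                   [0, 0, 0, 1]], dtype=float)
p_values, rit, rir, alpha = item_analysis(scores)
```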

For other forms of assessment, it may be useful to review how the assessment is scored overall as well as for individual tasks or components. When multiple assessors are involved in evaluating a piece of work, it is advisable to examine the agreement between assessors.
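
One widely used statistic for quantifying agreement between two assessors on categorical judgments (such as fail/pass/good) is Cohen’s kappa, which corrects the raw percentage of agreement for agreement expected by chance. The sketch below is a minimal illustration with fabricated ratings.

```python
# A minimal sketch of inter-rater agreement for two assessors using
# Cohen's kappa on categorical judgments. The example ratings are
# fabricated for illustration only. Values near 1 indicate strong
# agreement; values near 0 indicate agreement no better than chance.
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

a = ["pass", "pass", "fail", "pass", "fail", "pass"]
b = ["pass", "fail", "fail", "pass", "fail", "pass"]
print(round(cohens_kappa(a, b), 2))  # 0.67
```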

 

In accordance with the Dutch Higher Education and Research Act (WHW), an inspection of the assessment is mandatory so that students can review their results. Reviewing an assessment offers students the opportunity to understand what they did well and what they can improve on. In practice, however, students often use this opportunity to try to negotiate a higher mark. The most effective way to prevent such situations is to apply clear assessment criteria and use transparent marking schemes and model answers. This approach helps students understand how their qualification or mark was determined and reduces the likelihood of post-assessment disputes.

 

 

In the final phase of the assessment cycle, the focus is on reflecting on both the process and the outcomes to identify what worked well and where improvements are needed. When evaluating the assessment, it is important to consider whether the resulting decisions are fair, valid, and reliable. The quality criteria and the steps of the assessment cycle can serve as helpful tools in this process. To support the evaluation, you may also draw on a range of sources, including the assessment matrix, student feedback, pass/fail rates, mark distributions, criterion scores, statistical analyses, and reflections from assessors and yourself. The section below provides further guidance for this. Finally, recording your findings, sharing them with relevant colleagues, and outlining concrete action points are essential to ensure that identified improvements are actually implemented in future assessments.

What to focus on in evaluation

The following three areas are key points for reflection that can help you improve your assessment.

 

  • Overall impression – Review pass rates, mark distributions, and how these compare with expectations, along with feedback from students and staff. This gives a quick health check, showing whether students met the intended standards and highlighting trends or anomalies that may require attention.
  • Quality of the assessment – Explicitly measure your assessment against the five quality criteria for assessments: validity, reliability, effectiveness, feasibility, and transparency. Examining these aspects provides a clear understanding of the strengths of the assessment design and identifies areas where refinements are required to enhance its overall quality.
  • Impact on teaching – Look at which parts of the assessment scored high or low and connect these to teaching activities used in the course. This helps to identify the teaching practices that support student learning and to highlight areas where intended learning objectives or teaching activities may be refined, thereby strengthening constructive alignment.

 

 

References

 

  • Biggs, J. (1996). Enhancing teaching through constructive alignment. Higher Education, 32(3), 347–364. https://doi.org/10.1007/BF00138871
  • Biggs, J., Tang, C., & Kennedy, G. (2022). Teaching for quality learning at university (5th ed.). McGraw-Hill Education (UK).
  • Bijkerk, L. (2015). Basis Kwalificatie Examinering in het hoger beroepsonderwijs. Bohn Stafleu van Loghum.
  • Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain.
  • Krathwohl, D. R. (2002). A revision of Bloom’s taxonomy: An overview. Theory Into Practice, 41(4), 212–218. https://doi.org/10.1207/s15430421tip4104_2
  • Milius, J. (2007). Schriftelijk tentamineren: Een draaiboek voor docenten in het hoger onderwijs. Utrecht: IVLOS.
  • Miller, G. E. (1990). The assessment of clinical skills/competence/performance. Academic Medicine, 65(9), S63–S67. https://doi.org/10.1097/00001888-199009000-00045
  • Sluijsmans, D., Joosten-ten Brinke, D., & van Schilt-Mol, T. (2015). Kwaliteit van toetsing onder de loep: Handvatten om de kwaliteit van toetsing in het hoger onderwijs te analyseren, verbeteren en borgen (1e druk). Maklu.

 

You are free to share and adapt, if you give appropriate credit and use it non-commercially. More on Creative Commons

 
