Mini conference Programmatic Assessments Hogeschool Rotterdam
Tuesday 28 March 2023
Supported by: Laura Lobert (facilitator), Peter Schouten & Dominique Sluismans
The mini conference on programmatic assessment at Hogeschool Rotterdam (HR) was organized in response to growing interest and discussion around the topic. The event was initiated by the chairman of the executive board and aimed to explore various aspects of programmatic assessment, including its ‘gray areas’. A leading coalition of directors, educational managers, and advisors was invited to participate, along with senior lecturer representatives from various study programmes. The conference sought to investigate programmatic assessment as a potential future direction for education. Els de Bock represented a critical voice towards these developments.
At the beginning of the conference, Dominique Sluismans posed the fundamental questions of what assessments are and what their place in the curriculum is. To frame the discussion, she presented Biggs’ model of constructive alignment, which consists of three pillars: the curriculum goals, the assessments, and the education itself. Sluismans emphasized that any form of education starts with three critical questions: what does the programme aim to teach the students, how do we measure whether they have achieved those goals, and what learning activities should teachers design to challenge and help the students accomplish those goals? By highlighting the importance of these questions, she set the stage for a deeper exploration of programmatic assessment and its role in education.
Why consider programmatic assessments?
What problems does programmatic assessment try to solve? The basic idea is that more (smaller and more diverse) assessments throughout the term improve the clarity with which we can determine a student’s level. The ‘yet or not yet’ decision becomes easier for teachers with more, and more diverse, information.
– One type of assessment does not do justice to the diversity of students.
– One type of assessment can lead to strategic learning (learning for the test).
– One assessment at the end of a term can stimulate student procrastination.
– One assessment moment can lead to superficial understanding by students.
– One assessment moment can lead to student anxieties and fear of failing.
– A single assessment at the end of a term does not offer students the opportunity to learn from it.
– Students often don’t read feedback after they pass their assessment (feedback graveyard).
What questions does programmatic assessment raise?
– Is it feasible, doable, affordable, and organizable (H.U.B.O)?
– Are the intended learning outcomes clear enough?
– Can learning and performance activities blend into one? (learning zone vs. performance zone)
– Can datapoints become the new tutors’ checklists?
– How will feedback be archived with regard to privacy and systems? (disappearing feedback)
– Can student development be ‘manufactured’ in this type of monitored process?
– Can all students reflect, and do they have enough self regulation?
– Can teachers take a more coaching role and provide feedup?
– Who decides on such an educational change?
Peter Schouten continued with the six principles of programmatic assessment:
1. There is a mix of datapoints.
2. Each datapoint is feedback-driven.
3. The curriculum is the backbone.
4. There is a constant dialogue about the use of feedback.
5. Datapoints are related to the ‘weight’ of the verdict.
6. The weight of the decision is leading.
Programmatic assessment is ultimately a term for a set of underlying principles, many of which are likely already in use in educational institutes. However, how these principles are implemented is crucial in determining their effectiveness. I believe the language that is used will play an important role in its success. Will teachers and students feel their programme is represented by buzzwords such as datapoints, high/low-stakes assessments, learning statistics, and performance zone? The success of programmatic assessment also depends on whether the approach aligns with the experiences and needs of the institution and its students. While feedback has always been essential in education, the concept of feedup (what is needed to succeed) becomes more important in programmatic assessment. The most significant challenge is getting buy-in from the teaching community, with at least 70% of staff needing to support the transition. Additionally, time and resources are required to clarify learning outcomes, design a datapoint-driven curriculum, train teachers, and implement tools such as e-portfolios to facilitate and communicate the assessment process.
Rough summary of thoughts by Ron Bormans: “The margin for error in education is small. We are responsible for our students and cannot follow and experiment with every new educational trend. Education has a graveyard of many failed experiments. We owe it to our students to critically reflect on how we collectively take each new educational step.”
Questions for the WdKA
The following are questions that need to be addressed by the WdKA:
- Are the PA principles in line with our vision?
- Is there enough clarity on our intended outcomes and skills?
- Does PA provide sufficient flexibility for unforeseen events during the (creative) process?
- Does this way of monitoring student development suit our students?
- Does this type of educational design suit our teachers?
In addition, there are questions related to professionalisation:
- Can teachers provide transparency about how their education is structured?
- Can teachers provide specific feedback during the process?
Furthermore, infrastructure-related questions should be considered, including whether teachers have enough time to redesign the assessment process and be prepared, and whether the necessary tools are available to support this development at this moment.
I believe PA has potential and that many of its principles are aligned with our institute. By introducing competency assessments with portfolios, we have already taken steps in this direction. Further implementation can support our ambition to focus assessment on the development of our students rather than on products. The next step, and the biggest challenge I foresee, is to identify desired outcomes per project and to organize individual student feedup on development. The introduction of our new Learning Management System (Brightspace) could support the next small step in providing more transparency towards students and influence our assessments in a positive way.