By Associate Professor Tim Fawns
Posted Thursday 22 August, 2025
At Monash, we’ve recently introduced challenge space conversations, a new kind of dialogue inspired by the Programmatic Assessment and AI Review (PAAIR) project and our shared need to navigate educational problems or areas of inquiry where clarity is lacking and definitive answers are hard to come by.
Hosted as online webinars, challenge space conversations bring together a panel of people with specialist knowledge who are tasked not with providing answers, but with thinking aloud and grappling with the complexity of the issues.
In this third challenge space conversation, I chatted with Professor Claire Palermo, Professor Ari Seligmann and Professor Liesbeth Baartman (from Utrecht University of Applied Sciences and Maastricht University, School of Health Professions Education) about what it means to design and evaluate curricula for coherent, cumulative learning, and how to align outcomes across units and courses, track development over time, and capture important learning beyond formal outcomes.
The challenge
How do learning outcomes work in programmatic assessment?
This conversation was held on 17 July 2025. The recording and key takeaways are below. I hope you enjoy the conversation and find it useful.
A full transcript of this conversation is available. You can also explore our playlist of previous Challenge Space recordings.
Key takeaways from panellists
Liesbeth Baartman
One of the key messages for me would be to focus on student learning. We started our discussion about decision making, but student learning is maybe even more important. So for me, programmatic assessment or course-wide approaches to assessment could provide students with the opportunity to really use feedback; to use feedback in next assignments, in next projects, whatever you might call them. I think that’s ultimately the most important thing that we should try to achieve in a curriculum, maybe even that’s something that we should start with. So how can we help students to meaningfully use feedback throughout the curriculum?
Tim Fawns
One of the things that the conversation stimulated for me was thinking about progress and broader outcomes and progression over time. Often, assessments are snapshots, but what we need them to be is samples of evidence that build on other samples… extending out, thinking about evaluating students as a much longer process. I liked the talk about the role of feedback in that process. Feedback is part of the glue that joins those different moments of evaluation together for students and for assessors. Feedback can help with the vertical integration of learning outcomes that Liesbeth was talking about.
Another takeaway was about the messaging that comes through in the types of assessment we do. What messages are students taking from the ways we do assessment?
And the final takeaway for me is also really a question: what kind of container or vessel is a unit? How are we conceiving of it? Is it a set of content? Is it a theme that is actually relevant to the wider program? Is it a project that students do? And how do those things combine into a curriculum or a program?
Claire Palermo
I’ve been reminded of the importance of seeing how assessment aligns to course learning outcomes over time, and of being able to see this as a whole picture or journey for a course.
Another important point for me has been that reaching a saturation point of evidence of learning is really important. We have some existing structures and systems through which to look at that saturation, without feeling like we need to create new structures to consider decision conversations.
Ari Seligmann
Programmatic approaches to assessment help us design armatures and learning journeys. They help us coordinate different scales and speeds of engagement and different scaffolding of activities along the journey, with feedback cycles that facilitate accumulation of knowledge and capacities along the way. We can plan and articulate intended journeys but also have to be flexible enough to negotiate all the non-standard paths diverse learners will navigate to similar destinations and the different speeds at which people might progress along those pathways. There are a lot of different things that we have to keep track of at the same time, but the considered coordination of different types of learning, scales and speeds is one of the promises of thinking programmatically.
Embrace the challenge
This session continues the series of PAAIR Challenge Conversations aimed at deepening understanding and practice around programmatic assessment and AI integration at Monash.
We’d love to hear your thoughts – what resonated with you, what questions do you have, and how might this apply to your own context? Feel free to share your reflections and join the conversation.

Associate Professor Tim Fawns
Tim Fawns is Associate Professor (Education Focused) at the Monash Education Academy. His role involves contributing to the development of initiatives and resources that help educators across Monash to improve their knowledge and practice, and to be recognised for that improvement and effort. Tim’s research interests are at the intersection between digital, professional and higher education, with a particular focus on the relationship between technology and educational practice.
