Making evidence of learning visible at Monash: Beyond detection and invigilation

By A/Professor Tim Fawns, Professor Ari Seligmann, and Professor Claire Palermo
Posted Tuesday 14 October, 2025

ABC News recently ran the headline “University wrongly accuses students of using artificial intelligence to cheat.” Underneath, the article stated that “a major Australian university used artificial intelligence technology to accuse about 6,000 students of academic misconduct last year.” The article outlines serious issues with using AI detection tools such as Turnitin’s AI detector in decision-making about academic integrity. For a detailed discussion of such issues, see Mark Bassett and colleagues’ excellent analysis: Heads we win, tails you lose: AI detectors in education.

Many universities, including Monash, resisted pressure from both commentators and vendors to use AI detection technologies. Turnitin initially pushed to switch its AI detector on by default, only reversing course after significant, coordinated advocacy from Monash and other leading institutions. That collective action shows that, when universities work together, we can influence the direction of the sector.

But if not detection, then what? Where do we go from here? As Danny Liu noted in an interview for the ABC article, we need to shift focus from detecting AI use to detecting learning. Similar arguments have been made by Cath Ellis and Jason Lodge (among others). The goal is not to police technology use, but to generate and evaluate evidence of students’ learning processes and achievements.

Designing robust assessment in a time of AI

Learning in our complex times does not accommodate simple solutions. Invigilated exams may have a place in certain contexts, but they remain a narrow form of assessment, raise equity concerns and, arguably, do little to prepare students for the kinds of complex tasks they will commonly face beyond university. Oral exams can be powerful when combined with other assessment approaches, but they must be carefully designed and integrated into broader assessment regimes if they are to be secure, valid, equitable, and feasible. Authentic assessment is often proposed as the answer, but as Fawns and colleagues (2025) argue, it is not a panacea. There are always trade-offs, no assessment is inherently AI-proof or secure, and everything depends on the design, the kind of learning we’re targeting, the purposes for which we are assessing, the forms of learning evidence our tasks generate, and how we evaluate that evidence.

At Monash, programmatic assessment, highlighted in TEQSA’s Assessment Reform for the Age of Artificial Intelligence, is a central part of our response to the complexities of robust assessment in the evolving age of AI. The Programmatic Assessment and AI Review (PAAIR) project is a key vehicle for reforming practices. Complexity does not accommodate quick fixes, and we recognise that this is a long-term, evolving project. Implementation will vary across courses and disciplines, and progress will be uneven, as each course interprets and adapts our modified principles of programmatic assessment, making them locally relevant across diverse disciplines and levels.

Alongside PAAIR, we are developing approaches that make evidence of learning visible and assessable in more holistic ways. A crucial part of this is breaking assessment into component tasks (see Alex Steel’s useful post on ‘assessment chords’) and structuring teaching so that educators and peers regularly encounter, discuss, and provide feedback on that work in development. For example, studio and lab models offer clues for engaging with students’ work in progress. Even in disciplines that don’t traditionally use these approaches, such as history, we can imagine a ‘studio’ format where students develop their analyses while tutors move between groups, discussing reasoning, argumentation, and methods, with peers observing and contributing. Imagine ‘learning studios’ in which academic knowledge (content, practices, etc.) and life skills (collaboration, feedback literacy, communication, etc.) combine and enrich each other.

TEQSA’s guidance also identifies observing process as an important element of assuring learning when we all have ready access to GenAI. However, assessing process is difficult and can itself be problematic (see Fawns, 2024). The challenges for valid assessment, and the ideas for increasing the visibility of learning, are not entirely new, but AI’s expanding capabilities are exacerbating the problems and increasing the urgency of reform.

Addressing contemporary conditions is challenging work. Educators are already under significant workload pressure, and some have limited assessment design repertoires and expertise. We have been exploring these complex challenges through the AI in Education Learning Circles – a cross-Faculty group of education experts with specialist AI knowledge who collaborate on guidance, resources, and practical design challenges. We’ve run Monash Education Academy workshops on making evidence of learning visible, and showcased examples at the Monash Learning and Teaching Conference. Notably, Andrew Cain’s work in the Faculty of IT demonstrates how large-scale, ongoing educator and peer engagement with portfolios of task-based evidence can support holistic assessment, even in large classes.

A collective call to action

So while it might seem easier to try to find and install appropriate AI detectors, tighten invigilated activities, or saturate the curriculum with oral exam checkpoints, these are simplistic responses to a complex (or wicked) problem (see Corbin et al., 2025). Genuine assessment reform requires deeper rethinking: reshaping curricula so that assessment emerges from, and feeds back into, active engagement with learning. This might mean using existing class time to observe students working, discussing their reasoning, and collecting these encounters as further evidence that contributes to holistic judgements of achievement.

At Monash, we are doing this harder, slower, more thoughtful work. This choice has required imagination and courage, but we believe it will prepare our students and our educators for the future far better than quick fixes, which so often turn out to be band-aids, or detours that lead us backward. Join our conversations, along with our colleagues, as part of a shared effort to chart paths for higher education that are programmatic, responsive to complexity, and grounded in a thoughtful, cautious approach to positioning AI in our work, lives, and studies.

References

Bassett, M. A., Bradshaw, W., Bornsztejn, H., Hogg, A., Murdoch, K., Pearce, B., & Webber, C. (2025, October 10). Heads we win, tails you lose: AI detectors in education. https://doi.org/10.35542/osf.io/93w6j_v1

Bergin, J. (2025, October 8). University wrongly accuses students of using artificial intelligence to cheat. ABC News. https://www.abc.net.au/news/2025-10-09/artificial-intelligence-cheating-australian-catholic-university/105863524

Corbin, T., Bearman, M., Boud, D., & Dawson, P. (2025). The wicked problem of AI and assessment. Assessment & Evaluation in Higher Education. Advance online publication. https://doi.org/10.1080/02602938.2025.2553340

Ellis, C., & Lodge, J. M. (2024, July 9). Stop looking for evidence of cheating with AI and start looking for evidence of learning. LinkedIn. https://www.linkedin.com/pulse/stop-looking-evidence-cheating-ai-start-learning-cath-ellis-h0zzc/

Fawns, T. (2024, October 16). Process over product – What is the utility of process-based assessments? HERDSA Connect. https://herdsa.org.au/herdsa-connect/process-over-product-what-utility-process-based-assessments.

Fawns, T., Bearman, M., Dawson, P., Nieminen, J. H., Ashford-Rowe, K., Willey, K., Jensen, L. X., Damşa, C., & Press, N. (2025). Authentic assessment: From panacea to criticality. Assessment & Evaluation in Higher Education, 50(3), 396–408. https://doi.org/10.1080/02602938.2024.2404634

Monash University. (2025). Programmatic Assessment and AI Review (PAAIR). Teach HQ. Retrieved October 13, 2025, from https://www.monash.edu/learning-teaching/teachhq/Assessment/PAAIR

Monash University. (2025). Programmatic approaches to assessment. Teach HQ. Retrieved October 13, 2025, from https://www.monash.edu/learning-teaching/teachhq/Assessment/PAAIR/programmatic-approaches-to-assessment

Monash University. (2025). AI and assessment. Teach HQ. Retrieved October 13, 2025, from https://www.monash.edu/learning-teaching/TeachHQ/Teaching-practices/artificial-intelligence/ai-and-assessment

Steel, A. (2024, December 11). The assessment integrity chord: Three notes for assessment harmony and the three ways to achieve each. Education & Student Experience, UNSW Sydney. https://www.education.unsw.edu.au/news-events/news/assessment-integrity-chord

Tertiary Education Quality and Standards Agency. (2023, November 23). Assessment reform for the age of artificial intelligence. https://www.teqsa.gov.au/guides-resources/resources/corporate-publications/assessment-reform-age-artificial-intelligence.

Associate Professor Tim Fawns

Academic Lead, Programmatic Assessment and AI Review (PAAIR) project
Associate Professor, Monash Education Academy

Tim Fawns is Associate Professor (Education Focused) at the Monash Education Academy. His role involves contributing to the development of initiatives and resources that help educators across Monash to improve their knowledge and practice, and to be recognised for that improvement and effort. Tim’s research interests sit at the intersection of digital, professional, and higher education, with a particular focus on the relationship between technology and educational practice.

Professor Ari Seligmann

Academic Lead, AI in Education, Portfolio of the Deputy Vice-Chancellor Education (DVCE)
Academic Lead, Programmatic Assessment and AI Review (PAAIR) project
Associate Dean (Education), Faculty of Art, Design and Architecture

Ari is an educator and administrator with numerous roles within Monash. After helping to establish, teach, and lead the Architecture program for many years, he became Associate Dean (Education) of the Faculty of Art, Design and Architecture in 2022. He was a member of the University GenAI in Education Working Group and a co-author of the report that set out the current directions for the University. Since late 2023, he has served as Academic Lead, AI in Education within the Deputy Vice-Chancellor Education (DVCE) portfolio, and helped establish Monash’s inaugural Learning Circle on AI in Education.

Professor Claire Palermo

Academic Lead, Programmatic Assessment and AI Review (PAAIR) project
Deputy Dean (Education), Faculty of Medicine, Nursing and Health Sciences

Claire is the Deputy Dean (Education) in the Faculty of Medicine, Nursing and Health Sciences. She is an Accredited Practising Dietitian and a Fellow of the Dietitians Association of Australia. Claire is an accomplished teacher, having received local, national, and international awards and recognition for her teaching excellence. Her research focuses on competency-based assessment and on preparing the health workforce to improve the health of the population.