Traditional law assessments, accreditation requirements, and authentic assessment: a solvable conundrum?
Whilst this is a staff-authored education blog, I’d like to take a step back and introduce my topic via a story starting with my own undergraduate law student experience. ‘Back in those days’, with few exceptions, my law units had 3.5 hour, 100% weighted exams. There was usually an optional 30% research paper. But I quickly learned that it wasn’t worth working hard for a good assignment mark if I still had to work just as hard on the exam to end up with the same overall score I could have achieved on the exam alone. I became great at doing exams. I’m not sure, though, that I became great at being a lawyer.
There were staff who defended this model on what would now be called authentic assessment grounds. It was said that lawyers need to give legal advice under significant time pressure, and that 100% exams reflect that reality. My belief that there was at least some wisdom in this view is probably the reason why I found myself adopting traditional test/exam paradigms when I first entered academia some 15 years ago.
More recently, I’ve come to understand that accreditation requirements play a role in perpetuating exam use in legal education. If a certain percentage of student assessment must be invigilated, supervised exams may be a pragmatic assessment solution, especially where cohorts are large. At the same time, we increasingly appreciate the benefits of authentic assessment (i.e. assessment with real-world relevance that requires students to undertake tasks they may actually face in their future careers). Such authenticity prepares students for their future professional careers (by way of simulation, for example), promotes students’ engagement with their own learning, and can mitigate academic integrity risks. Are traditional law exams incompatible with authentic assessment? It’s a conundrum, for sure, but one that I think can be solved (at least in part) with a little bit of creativity.
Accreditation requirements constrain the assessment I set for my undergraduate business law and commercial law students. I’ve been encouraged in my own authentic assessment mission, however, by understanding that authenticity comes in degrees. I figure that even small movements towards greater authenticity are still moves in the right direction. With this in mind, I was recently inspired to set a series of exam questions based upon a real-world toy store receipt obtained in my own private capacity.
Traditional law exams provide students with a hypothetical but realistic scenario, and ask them to respond by applying the law to those given facts. My exam’s initial two questions generally stuck with this model, but I based my scenario around this receipt. This allowed me to incorporate business realia into an otherwise-traditional exam format, helping to bring my scenarios to life and highlight their real-world relevance, a practice that is already common elsewhere, for example in the teaching of languages.
For my third question, however, I departed further from the norm and set students a practical business task, in an attempt to integrate a realistic, simulated situation into my exam.
Setting this question felt risky for three reasons. First, we had not covered a similar question in tutorials, so students didn’t have an example to refer to heading into their exam. Secondly, there were many possible correct answers: our marking guide needed to foresee as many as possible whilst remaining open-ended, in turn requiring us to rely heavily on our markers’ professional judgment (as any authentic assessment task does). And thirdly, there was not complete agreement across the unit team that the clause depicted on the receipt actually breached the law; this ambiguity also reflected the task’s authentic nature. These differences of opinion required resolution across the teaching team before the exam was finalised.
Still, given the question’s low weighting (3 marks out of 50), I felt we had some scope to experiment. To our surprise, students actually did very well on this question. Even where students performed relatively poorly elsewhere on the exam, it was not uncommon for a full three out of three marks to be earned here. Unlike traditional law exam questions, this one didn’t require students to write at length to achieve the required level of detail, nor to refer to particular statutes or cases. They just had to come up with a proposal – even if it was just a single line – that fixed this practical legal problem. The strong results on this question might, then, reflect these differences from traditional law exam questions.
On the other hand, these results lead into another question debated across our unit team regarding this and subsequent authentic assessment attempts: do strong overall results for these tasks actually reflect student learning that justifies those marks, or have our assessments failed to properly differentiate student performance, uniformly awarding high marks that might not have been appropriate? Recognising that (in my own professional experience) many law academics are naturally reluctant to award full marks, I’d like to think it’s the former. At the same time, I recognise that the question remains open. In any event, I’d love to hear about readers’ own experiences with assessment authenticity in accredited units, and their thoughts on how to effectively differentiate student performance according to achievement in that context.
Associate Professor Benjamin Hayward
Ben is an Associate Professor in the Department of Business Law and Taxation at the Monash Business School. Over the course of his academic career, Ben has taught in both Law School and Business School contexts, with a particular focus on first year teaching. His research analyses the laws supporting international trade, and his first year teaching at the Monash Business School helps his students understand how legal knowledge can support their future business careers. In that context, Ben has a particular interest in understanding how traditional law teaching practices (including the use of lectures and exams) can be adapted and improved to better fit the modern business education context.
3 responses to “Traditional law assessments, accreditation requirements, and authentic assessment: a solvable conundrum?”
Wendy Taleo
Thanks for your post, Ben. It has me thinking that this move into authentic assessment also led towards authentic academic discourse about assessing. I imagine it was an interesting academic conversation about how those 3 points were awarded and what the range of possible answers might be.
‘Realia’ is a fancy term, but the concept is such a great tool for designing activities that offer real-time opportunities for active learning.
Dear Wendy – thanks so much for reaching out, and I’m glad to see you’ve picked up on one of the practical difficulties here! You’re very right – we had to consider, for example, what separated a 3 mark response from say a 1 mark response; and in an exam context, whether a student’s expression (and, by extension, language skills) mattered if the core idea was valid. It’s been our experience that as the assessment gets more authentic, it gets more ambiguous too, and that extends to the marking.
One of my fantastic colleagues, Andrew Moshirnia, introduced me to the ‘realia’ term and I really love how artefacts of all kinds can help students understand how the law applies to the real world of business. Two of the first contract law cases students ever approach – the Carlill and Boots cases – are many decades old (Carlill is well over a century old), though we can show students a copy of the real-world advertisement that was the basis of the former (and we can use an inflation calculator to show why a dispute over 100 pounds back then ended up in court), and we can also show students that Boots Pharmacy still exists in the UK today! In the consumer law arena, where this question sits, I actually accumulate so much great realia in terms of receipts, signs, and advertising campaigns – just from my everyday life – that I’m never able to use it all!
Great post! It’s encouraging to see how incorporating real-world elements into exams can bridge the gap between traditional assessments and authentic learning experiences. The use of a real store receipt for exam questions is a clever way to make the scenarios more relevant. It’s fascinating to hear that students excelled in this part of the exam, suggesting that practical tasks can indeed enhance learning outcomes. I’m curious to see how these insights might shape future assessments and whether they can be scaled or adapted to other legal topics.