Unlocking education opportunities: authentic assessments in online programs

By Zahra Aziz and Danijela Gasevic
Posted Tue 6 August, 2024

Bridging education and real-world skills through authentic assessments: The rapid development of generative artificial intelligence (AI) technologies has disrupted traditional tertiary education, impacting how we teach, learn and assess learning. This transformation is also driven by the increasing demand for knowledgeable graduates capable of applying their learning in “real-world” contexts (scenarios and situations our graduates are likely to encounter in their future work environments). In the context of these disruptions and demands, the importance of authentic assessments has never been greater.

Assessment is considered authentic when it requires students to use the same knowledge, skills and attitudes that they will need to apply in their future careers (Gulikers et al., 2004). It measures students’ success in a way directly relevant to the skills they require once they have completed the course. Wiggins (1990) refers to assessment as “authentic” if it is realistic, requires judgement, simulates real-life contexts and assesses students’ judgement in negotiating a complex task whilst allowing appropriate opportunities to practise and receive feedback on performance. In other words, the assessment tasks should mirror the tasks professionals encounter in the field, making them highly practical and valuable. This is particularly relevant in fields like public health, where practitioners must navigate multifaceted problems and apply interdisciplinary knowledge.

By focusing on the application of knowledge and skills in realistic scenarios, authentic assessments can help to ensure that students are prepared for the demands of their future careers. This relevance is critical in an era where AI is compelling us to closely consider the types of knowledge and judgement that will be essential in practical settings.

As the demands of future careers have evolved, so has educators’ understanding of effective assessment. Educators and institutions are increasingly embracing authentic assessment approaches, driving a significant shift towards assessment practices that are more practical and better aligned with “real-world” applications.

Authentic assessments, when effectively designed and implemented, can offer several advantages: they help develop skills that traditional assessment would not typically cover (Arnold, 2019), close competency gaps between education and professional life (Ashford-Rowe et al., 2014), focus on higher-order thinking skills as students are required to apply knowledge creatively to solve problems (Koh, 2017), and motivate students by providing tasks relevant to professional and everyday life (Coon & Walker, 2013).



However, it is important to address the misconception that authenticity in assessment is binary, i.e., that an assessment is either authentic or not authentic (Cronin, 1993). Instead, we should consider authenticity as a continuum and encourage educators to progress towards higher levels of authenticity along it (Cronin, 1993). The degree of authenticity of an assessment can be discussed in relation to how closely it mirrors the “real world” tasks and challenges our graduates are likely to encounter in their future careers (Gulikers et al., 2004).

A review of assessments in online postgraduate programs: Given the crucial role assessments play in evaluating student learning and preparing graduates for future work, we recently reviewed assessments in two of our fully online master’s courses: the Master of Public Health (MPH) and the Master of Health Administration (MHA). In the context of the MPH and MHA, it is essential to use assessment methods that appropriately measure the acquisition of competencies required of public health and health administration professionals. Our review covered the methods, variety and scope of assessments, focusing on the alignment of assessment objectives, purpose, methodology and authenticity.

We reviewed over sixty assignments (and fifty sub-assignments embedded within main assignments) across nineteen units from two postgraduate courses (MPH and MHA). The review process was facilitated through interactive discussions with educators and unit coordinators with extensive experience designing and delivering these assessments. The primary objective was to evaluate the authenticity of the assessments and identify strategies for enhancing their effectiveness. In doing so, we ensured that these interactive consultations also served as educational opportunities for academic staff to deepen their understanding of authentic assessments and enhance their skills in designing and implementing them effectively.

Six primary factors guided the review and analysis:

  1. The relevance and alignment of the learning outcomes with each assessment, ensuring they reflect the core competencies and skills required of public health professionals.
  2. The skills and knowledge standards we want our students to achieve through each assessment.
  3. Assessment rationale, i.e., understanding the reasons for assessing: gathering evidence to inform teaching, learning or evaluation.
  4. Effectiveness of methods for assessing the specific learning outcomes, the identified learning areas and alignment with the assessment rationale (for example, does the task resemble activities expected within professional settings?).
  5. Authenticity of the assessment, i.e., whether the assessment measures students’ success in a way relevant to the knowledge, skills and attitudes required once they have completed the unit and course.
  6. Effectiveness of the assessment in appropriately measuring student learning and providing meaningful feedback, including an evaluation of the criteria and rubric used to judge performance (and what good performance looks like).

Authenticity of each assessment (very authentic, somewhat authentic, not authentic, not sure) was evaluated against three dimensions that we identified, in conversation with our colleagues teaching in the MPH and MHA courses, as specific to these courses: 1) “real world” relevance (the extent to which the assessment mirrors “real world” tasks and challenges that professionals in the fields of public health and health administration would encounter); 2) skills development (the use of higher-order thinking processes, competencies and practical skills in the assessment); and 3) context (whether the assessment is situated within physical and/or social context(s) that reflect how knowledge, skills and attitudes will be used in public health/health administration professional practice). These dimensions were informed by the five-dimensional framework for authentic assessment developed by Gulikers et al. (2004).
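For colleagues who might want to run a similar review, the sketch below illustrates one way such ratings could be captured and aggregated. It is a minimal, hypothetical illustration in Python: the record structure, field names, example records and aggregation rule are our own simplification for this post, not a tool used in the review itself.

```python
from collections import Counter
from dataclasses import dataclass

# Rating scale from the review, ordered from strongest to weakest.
RATINGS = ("very authentic", "somewhat authentic", "not authentic", "not sure")

@dataclass
class AssessmentReview:
    """One reviewed assessment, rated on the three course-specific dimensions."""
    title: str
    real_world_relevance: str  # mirrors tasks professionals would encounter
    skills_development: str    # higher-order thinking and practical skills
    context: str               # situated in realistic professional contexts

    def overall(self) -> str:
        # Illustrative aggregation rule (our assumption, not the review's):
        # the overall label is the weakest rating across the three
        # dimensions, so a single weak dimension flags the assessment.
        rank = {r: i for i, r in enumerate(RATINGS)}
        return max(
            (self.real_world_relevance, self.skills_development, self.context),
            key=lambda r: rank[r],
        )

# Hypothetical example records, not drawn from our actual review.
reviews = [
    AssessmentReview("Policy analysis brief", "very authentic",
                     "very authentic", "very authentic"),
    AssessmentReview("Recall-only quiz", "not authentic",
                     "somewhat authentic", "not authentic"),
]

# Distribution of overall ratings across the reviewed assessments.
print(Counter(r.overall() for r in reviews))
```

One could equally average the dimensions or weight them differently; the point is simply that recording each dimension separately makes disagreements between reviewers visible and auditable.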

We acknowledge that authenticity lies “in the eye of the beholder”, meaning that authenticity is subjective and dependent on individual perceptions (Ajjawi et al., 2024). The dimensions of authentic assessment identified as relevant to the MPH and MHA courses may not be as relevant for evaluating the authenticity of assessments in other courses and programs. In our project, these dimensions were considered based on educators’ perceptions; however, learners’ perceptions of assessment authenticity may differ from those of educators (Gulikers et al., 2004). If learners do not perceive assessments as relevant (linked with future career tasks and situations) and meaningful (linked with their personal interests), they are less likely to perceive them as authentic, which may affect their learning (Sambell et al., 1997). Therefore, learners’ experiences of authentic assessments should also be considered when designing such assessments (Nicaise et al., 2000).

Assessing success, gaps and opportunities: Insights from our assessment review

Our findings show that the most commonly used assessment types were analytical (51%), reflective (17%), descriptive (14%), persuasive (11%) and other (7%). The main assessment formats included critical reflections, policy analysis, program reports, case studies, position statements, short answers, quizzes, infographics and presentations. Based on the authentic assessment measures discussed above, about 70% of the assessments were deemed ‘very authentic’, followed by ‘somewhat authentic’ (22%). Only 8% of assessments were deemed either ‘not authentic’ or ‘not sure’, requiring further examination.
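As a small worked illustration of how such a distribution can be produced, the Python snippet below tallies per-assessment overall ratings into percentages. The ratings list is invented for illustration (chosen so the proportions match those reported above); it is not our review data.

```python
from collections import Counter

# Invented ratings for 50 hypothetical assessments, chosen so the
# proportions match those reported above; not our actual review data.
ratings = (
    ["very authentic"] * 35
    + ["somewhat authentic"] * 11
    + ["not authentic"] * 2
    + ["not sure"] * 2
)

# Tally the ratings and report each category as a share of the total.
for rating, count in Counter(ratings).most_common():
    print(f"{rating}: {count / len(ratings):.0%}")
# very authentic: 70%
# somewhat authentic: 22%
# not authentic: 4%
# not sure: 4%
```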

In addition to examining whether our assessments align closely with the desired learning outcomes and authentic public health practice, our exploration of assessment authenticity was guided by the specific skills and knowledge areas each assessment category targets, which are essential for effective practice in the field.

For example, critical reflection encourages critical thinking and self-assessment. Similarly, policy analysis develops critical appraisal and data interpretation skills vital for evidence-based public health decision-making. Case studies allow students to apply theoretical knowledge to real-world scenarios, promoting problem-solving and practical application skills, whilst reports and essays enhance students’ ability to articulate complex ideas, conduct thorough research and communicate findings effectively, all of which are important for public health reporting and policy formulation. Formulating position statements helps students develop persuasive writing and advocacy skills, essential for influencing public health policy and practice. Project planning prepares students for the practical aspects of program development and implementation, including strategic planning and resource management. Infographic and presentation assessments foster the ability to present data and information visually, making it accessible to diverse audiences, and help develop the public speaking and presentation skills needed to communicate public health messages effectively.

For quantitative subjects like epidemiology and biostatistics, which involve using mathematical, statistical and computational methods to understand and address public health issues, the most relevant assessment format was short answer questions based on case studies. This format reinforces understanding through repeated engagement with key concepts and theories and develops the practical skills needed to apply this knowledge in real-world scenarios.

Overall, our assessments ensure that students are well-equipped with the competencies required to meet the demands of the public health sector, enabling them to apply their learning in real-world contexts and contribute meaningfully to the field.

On assessment criteria, grading and rubrics, there was consensus among the team that it is good practice to continuously review rubrics and grading criteria to enhance the clarity of what is expected of students and to ensure transparency. This helps students understand how their work will be evaluated, which can reduce disputes and misunderstandings about grades. Furthermore, as curricula evolve to incorporate new knowledge and skills, rubrics need to be updated to reflect these changes and ensure they assess relevant competencies.

Our analysis also identified a few gaps in the current assessment approaches, and recommended alternative methods or modifications to enhance the authenticity and effectiveness of the assessment process within our programs.

Educational opportunities for educators: The entire review exercise served as a significant educational opportunity for our staff. Through this process, we engaged deeply with the parameters of authentic assessment, examining the intricacies and nuances that define it. By collaboratively reviewing and discussing various assessments, we were able to evaluate our current practices and identify areas for improvement. This engagement reinforced our understanding of authentic assessment principles and highlighted the importance of aligning our assessment methods with real-world applications and professional competencies.

Moreover, this exercise enhanced our collective knowledge and skills in designing and implementing authentic assessments, allowing for an in-depth exploration of assessment methods and ensuring they are both rigorous and relevant. The insights and feedback gathered during these consultations were invaluable, offering actionable recommendations for future curriculum development and assessment design. Ultimately, this exercise not only improved our assessment strategies but also contributed to our professional growth, empowering us to better prepare our students for the challenges they will face in their public health careers.

Authentic assessment in the age of AI: This methodological review sheds light on the importance of embedding authentic assessments within MPH and MHA programs and provides practical guidelines for educators aiming to implement authentic assessment practices in their courses.

The shift towards authentic assessments in tertiary education represents a significant advancement in preparing students for the complexities of professional practice. Generative AI’s ability to quickly provide accurate information diminishes the relevance of assessments focused solely on knowledge recall. If students can easily access information using AI tools, traditional exams that test memorisation become less meaningful as indicators of student competency.

It has been argued that the creation of authentic assessments may help address the problem of generative AI in assessment (Overono & Ditta, 2023). Authentic assessments that leverage “real world” tasks involving complex problem-solving, higher-order thinking, practical and interpersonal skills, and reflective practice may be less susceptible to AI manipulation in public health, health administration and other health fields. However, not all authentic tasks are immune to the problem of generative AI in assessment, as observed in engineering education (Nikolic et al., 2023).

While generative AI presents challenges, it can also be leveraged to enhance the authenticity of assessments. AI tools can be used to simulate real-world scenarios, provide instant feedback, and support personalised learning pathways. For example, some of our colleagues at Monash are working on AI-driven simulations that can create realistic public health crises that students must navigate, offering a safe environment for them to apply their knowledge and skills.

Question for readers: Have you conducted a similar exercise for your discipline? What methods did you use, and what was your experience?

References


Ajjawi, R., Tai, J., Dollinger, M., Dawson, P., Boud, D., Bearman, M. (2024). From authentic assessment to authenticity in assessment: broadening perspectives. Assessment & Evaluation in Higher Education, 49(4), 499-510.

Arnold, L. (2019). Authentic Assessment. Retrieved 6 August, 2024 from https://lydiaarnold.net/wp-content/uploads/2019/02/authentic-assessment-1.pdf

Ashford-Rowe, K., Herrington, J., Brown, C. (2014). Establishing the Critical Elements That Determine Authentic Assessment. Assessment & Evaluation in Higher Education, 39(2), 205–222.

Coon, D.R., Walker, I. (2013). From Consumers to Citizens: Student-Directed Goal Setting and Assessment. New Directions for Teaching and Learning, 2013(135), 81–87.

Cronin, J.F. (1993). Four misconceptions about authentic learning. Educational Leadership, 50(7), 78-80.

Koh, K.H. (2017). Authentic Assessment. Oxford University Press.

Nicaise, M., Gibney, T., Crane, M. (2000). Toward an understanding of authentic learning: student perceptions of an authentic classroom. Journal of Science Education and Technology, 9, 79-94.

Nikolic, S., Daniel, S., Haque, R., Belkina, M., Hassan, G.M., Grundy, S., Lyden, S., Neal, P., Sandison, C. (2023). ChatGPT versus engineering education assessment: a multidisciplinary and multi-institutional benchmarking and analysis of this generative artificial intelligence tool to investigate assessment integrity. European Journal of Engineering Education, 48(4), 559-614.

Overono, A.L., Ditta, A.S. (2023). The rise of artificial intelligence: a clarion call for higher education to redefine learning and reimagine assessment. College Teaching, 1-4.

Sambell, K., McDowell, L., Brown, S. (1997). But is it fair? An exploratory study of student perceptions of the consequential validity of assessments. Studies in Educational Evaluation, 23(4), 349-371.

Wiggins, G. (1990). The Case for Authentic Assessment. Practical Assessment, Research & Evaluation, 2(2).

Zahra Aziz

Zahra Aziz is a Senior Lecturer and Head of Online Education within the School of Public Health & Preventive Medicine. Her research focuses on the innovative design and effective delivery of educational programs. She has received the Dean’s Award for Excellence in Education and is a Senior Fellow of the Higher Education Academy (HEA).

Danijela Gasevic

Danijela is an Associate Professor and Head, Postgraduate and Professional Education Programs in the School of Public Health and Preventive Medicine. Her research interests include chronic disease prevention and assessment in health professions education. She currently leads the project on rethinking assessment practices in higher education. She has received the School’s Teaching Excellence Award.