We’re at a critical juncture in higher education. Recent breakthroughs in Large Language Models (LLMs) have brought opportunities for a new kind of scalable, personalised learning – just as they have threatened the viability of conventional assessment methods.
We can now take empirically grounded models of a wide variety of professional, discipline-relevant environments and scenarios and transform them into scalable, immersive learning experiences. Learners can practise core disciplinary skills by adopting the role of, for example:
- a public health official combatting online misinformation about an outbreak,
- an activist managing a social media campaign,
- a medical clinician developing rapport with a patient,
- a secondary school teacher managing an unruly class,
- a negotiator working for the best possible deal for their company.
So long as we have a well-modelled theory of a professional phenomenon, we can simulate it and embed it in our wider learning design, using the LLM to facilitate the learner's engagement through interactive natural language. I have two web applications at the prototype stage that can help achieve this.
Tool 1: A simulated network of interactive agents
The first tool gives learners the chance to interact with a large number of agents, complete with detailed individual personas and a complex network of relationships. A learner (or a team of learners) can, to expand on the first example, design and implement a health communications campaign around a bird flu outbreak, using a series of carefully crafted posts on the simulated network.
The adaptive agents’ responses to the posts are determined by a model grounded in that field’s literature, taking into account the quality of the post’s content, each agent’s persona, and each agent’s place in the network.
Students must combat the dis- and misinformation spread by some of the simulated agents. The LLM’s role is to transform the students’ posts into a form the model can handle, and to transform the modelled reactions into meaningful posts from the agents at the other end.
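This division of labour, where the behavioural model decides and the LLM only translates between free text and model inputs, can be sketched roughly as below. Everything here is a hypothetical illustration, not the tool's actual model: the persona traits, weights, and logistic link are invented, and the two LLM steps are stubbed out as placeholder functions.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Agent:
    """One simulated account, with a persona and a place in the network."""
    name: str
    credulity: float           # persona: susceptibility to misinformation, 0-1
    trust_in_authority: float  # persona: trust in official sources, 0-1
    follows: list = field(default_factory=list)  # accounts this agent follows

def extract_features(post_text: str) -> dict:
    """In the real tool an LLM would map the learner's free-text post to
    structured features the behavioural model can consume. Stubbed here."""
    return {"quality": 0.8}

def share_probability(agent: Agent, features: dict, exposure: float) -> float:
    """Hypothetical model: how likely this agent is to amplify a corrective
    health post, given content quality, persona, and network exposure
    (the fraction of accounts it follows that have already shared)."""
    score = (2.0 * features["quality"]
             + 1.5 * agent.trust_in_authority
             - 2.0 * agent.credulity
             + 1.0 * exposure
             - 1.5)                          # baseline reluctance to share
    return 1.0 / (1.0 + math.exp(-score))    # logistic link

def render_reaction(agent: Agent, shares: bool) -> str:
    """In the real tool an LLM would turn the modelled reaction back into a
    post in the agent's voice. Stubbed with a template here."""
    verb = "shares" if shares else "ignores"
    return f"{agent.name} {verb} the campaign post."

# One simulated round: a learner's post propagates through a tiny network.
agents = [
    Agent("nurse_amy", credulity=0.2, trust_in_authority=0.9),
    Agent("sceptic_sam", credulity=0.9, trust_in_authority=0.1,
          follows=["nurse_amy"]),
]
features = extract_features("Bird flu update: vaccines are available at clinics.")
shared = set()
for agent in agents:
    exposure = (sum(f in shared for f in agent.follows) / len(agent.follows)
                if agent.follows else 0.0)
    if share_probability(agent, features, exposure) > 0.5:
        shared.add(agent.name)
    print(render_reaction(agent, agent.name in shared))
```

Even in this toy version, the point of the architecture is visible: the LLM never decides who shares what, so an agent's behaviour stays consistent with its persona and network position rather than drifting with the LLM's improvisation.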
I will deploy this tool for the first time this year, in a class centred on an extended interprofessional simulation. The social media tool is only one component of a larger, complex simulation. It will be supported by two assessment items: a media campaign pitch, where teams provide detailed plans for how they will use social media to achieve their campaign goals, and weekly in-person briefings on their progress. For students, the experience will be a dramatic, intrinsically motivating, and authentic opportunity to practise critical skills in a way that would otherwise be difficult to replicate.
Tool 2: Simulated interactions with patient agents
The second tool, better suited to the latter three use cases, gives learners the chance to immerse themselves fully in an applied professional role in which they interact with a simulated agent and receive instant feedback on the quality of the interaction. Taking the clinician example, I am working with Lisa Barker and Helmy Cook to develop realistic patient agents whom learners can interview in an authentic professional manner.
These agents have detailed medical histories, personas, and varying willingness to share embarrassing medical details with learners, who must put in the work to develop rapport with their patients.
Again, the team models who will share what, under what circumstances, and what makes an interaction likely to build rapport; the LLM only performs the final transformation of the learner’s communication into the model’s inputs and of the model’s reply back into natural language.
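A rapport-and-disclosure model of this kind might look something like the sketch below. All of it is illustrative rather than the team's actual model: the sensitivity thresholds, rapport increments, and keyword-based classifier are invented, and the classifier stands in for the LLM step that would interpret the learner's utterance in the real tool.

```python
from dataclasses import dataclass, field

@dataclass
class PatientAgent:
    """A simulated patient: each history item has a sensitivity threshold
    that the learner's accumulated rapport must exceed before it is shared."""
    name: str
    rapport: float = 0.0
    history: dict = field(default_factory=dict)  # detail -> sensitivity (0-1)

def classify_utterance(text: str) -> dict:
    """In the real tool an LLM would classify the learner's utterance
    (open vs closed question, empathic acknowledgement, and so on).
    Stubbed with simple keyword checks for illustration."""
    lowered = text.lower()
    return {
        "open_question": lowered.startswith(("how", "what", "tell me")),
        "empathic": any(w in lowered for w in ("understand", "sounds", "sorry")),
    }

def update_rapport(patient: PatientAgent, features: dict) -> None:
    """Hypothetical rapport dynamics: empathy and open questions build
    rapport; other moves leave it unchanged."""
    patient.rapport += 0.3 * features["empathic"] + 0.2 * features["open_question"]

def disclosures(patient: PatientAgent) -> list:
    """Details the patient is now willing to share at the current rapport
    level. An LLM would phrase these in the patient's own voice."""
    return [d for d, sens in patient.history.items() if patient.rapport >= sens]

patient = PatientAgent("Mr Tan", history={
    "persistent cough": 0.1,       # shared freely
    "smokes a pack a day": 0.4,    # needs some rapport
    "recent incontinence": 0.8,    # embarrassing; needs strong rapport
})

for utterance in ["Do you smoke?",
                  "That sounds difficult. Tell me how it started."]:
    update_rapport(patient, classify_utterance(utterance))
```

The key design choice mirrors the first tool: whether a patient discloses a detail is a deterministic function of modelled rapport, so feedback to the learner can point at the specific interviewing moves that did or did not build it.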
Here too, the student experience will be lifelike and an effective way to translate theories learned in class into professional application. We will pilot this tool later this year; the learning activity will build on other readings, activities, and assessments in the class.
Of course, these LLM-augmented simulations should be just one of many assessment-supported learning activities used to develop these skills, but they can be a powerful addition to our educative toolbox, at a time when Generative AI’s proficiency in mimicking class essays and other mass assessments poses a threat to how we assess at scale. The careful modelling of behaviour we use the LLMs for will minimise the risks of bias and hallucination common to these models.
If it’s of interest to you, read the feedback ChatGPT provided on this blog post.
Dr Joel Moore
Joel is a Senior Lecturer and Director of Postgraduate Education in the Faculty of Arts at Monash University. Joel has a PhD in Political Science and is the recipient of a number of Monash University awards for teaching, including the Malaysia Outstanding Educator Award, the Faculty of Arts Dean’s Award for Teaching Excellence, and the Pro-Vice Chancellor’s Award for Excellence in Teaching. As part of his focus on facilitating better engagement with students, Joel builds web applications using R, Shiny and ChatGPT4, and collaborates with multidisciplinary groups of educators to design learning activities around them.