By Dr Paula de Barba
Posted Tuesday 22 July, 2025
Many optimistic discussions about the use of generative Artificial Intelligence (AI) conclude with the argument that, in a world where machines can perform most jobs, human value will be found in our uniqueness. I’d consider myself on the optimistic side, too. But what does that mean, exactly? To bring our uniqueness to the table?
In their 1986 book, Mind Over Machine: The Power of Human Intuition and Expertise in the Era of the Computer, Hubert and Stuart Dreyfus argue that human uniqueness lies in our reliance on intuition and experiential knowledge to make meaningful contributions to the world. This knowledge is not just theoretical; it is formed through our direct, physical interactions within specific contexts and circumstances. Our contributions are shaped by who we are, our sense of self, personality, and preferences, as well as the unique opportunities we encounter. These factors, combined with the unpredictable variability of our experiences, make our knowledge both embodied and highly adaptive. Our unpredictable interactions with the world and the evolving contexts we encounter shape intuition in ways that are difficult to replicate systematically. Human behaviour is constrained by biases, social contexts, and material circumstances, yet it is precisely this complex interplay of influences that creates the contextual richness and adaptability allowing humans to approach complex decisions with perspectives that computers alone cannot replicate.
Our contributions are shaped by who we are, our sense of self, personality, and preferences, as well as the unique opportunities we encounter
But how does this argument hold up nearly 40 years later, amidst the remarkable advancements in generative AI? Generative AI can create diverse outputs and multiple solutions based on specific constraints, and this can sometimes mimic aspects of human creativity and problem-solving. However, generative AI still lacks the embodied, contextually situated knowledge that humans accumulate across varied, lived experiences over time. Despite this, generative AI can be a powerful ally, helping to enhance our knowledge and support our intuition by offering new perspectives and insights, ultimately reinforcing our unique contributions.
To harness embodied knowledge and intuition, we must first become aware of them. Unique contribution begins with how we perceive ourselves and develop our sense of self. So, how do we cultivate a strong sense of self-awareness to recognise and express our unique contributions? This question is central to AI literacy.
Each time we learn something new, we’re not only acquiring knowledge – we’re also learning how to be, and learning about how we learn, uncovering more about ourselves. When we approach learning with intention and reflection, we become active agents in our learning journey, intentionally shaping our growth. This self-regulated approach – a cyclical process where learners plan, monitor, and evaluate their own learning, enabling them to take charge of their learning journey – helps us gain insight into our values, strengths, and potential contributions (Lodge et al., 2023).
For instance, if I’m training to become an engineer or improve as an educator, how do my personal experiences, embodied knowledge, and intuition influence my practice? I may master the AI tools that support my work, but what, ultimately, is distinctly mine in all of this?
In AI literacy, strengthening this sense of self is crucial. It empowers us to clarify our values, ethical principles, and strengths, bringing a grounded and intentional approach to how we use and interpret AI tools. Self-awareness becomes especially valuable when ethical dilemmas arise; it equips us to recognise when our human judgment should override or steer AI outputs, enabling us to act with integrity and responsibility.
In AI literacy, strengthening this sense of self is crucial. It empowers us to clarify our values, ethical principles, and strengths, bringing a grounded and intentional approach to how we use and interpret AI tools.
Ultimately, when we have a strong sense of self, we’re better equipped to engage in self-regulated learning. By developing these self-regulated learning capabilities, students can become agile, ethical, and reflective users of AI, prepared to thoughtfully integrate it into their professional roles and to enhance both their unique contributions and the quality of their work. By cultivating these capabilities, we can become professionals who not only use AI effectively but do so in ways that honour our unique, human insights and values.
Behaviours – the visible actions we observe – are only the tip of the iceberg. Beneath the surface lies the foundation that informs those actions, which can be profound and purposeful or, at times, shallow and reactive. If we focus only on what students do with AI tools rather than how and why they engage with them, we risk missing the deeper layers of understanding and growth.
In teaching AI literacy, there are many approaches to explore the technical “what”. As Monash University staff or students, we can access more information about AI literacy here, including a self-paced online module. However, a self-regulated learning approach encourages us to see ourselves as the central agents of our learning journey. This perspective keeps the focus on us as the masters of our experiences and choices, helping us build a foundation of self-awareness and intentionality. From this grounded space, we can direct our actions to channel our unique contributions into our work with AI, whether that’s using today’s tools or those yet to be developed.
Acknowledgement statement
This post emerged from a collision of artificial and human intelligence. ChatGPT-4 and Claude 4 Sonnet helped organise my scattered thoughts into coherent arguments over multiple iterations, and improved the post’s readability. Colleagues Jason Lodge, Jaclyn Broadbent, and Tim Fawns provided wonderfully human conversations no AI can replicate.
Then there’s my partner, who, when I was passionately arguing that the world needs everyone’s unique perspective – including theirs – responded with characteristic wit: “I think they’re overestimating Ben (partner’s name)”. Their perfectly timed self-deprecation had me laughing, as they proved my point by being so quintessentially themselves.
References
Dreyfus, H. L., & Dreyfus, S. E. (1986). Mind over machine: The power of human intuition and expertise in the era of the computer (1st ed.). Free Press.
Lodge, J. M., de Barba, P., & Broadbent, J. (2023). Learning with generative artificial intelligence within a network of co-regulation. Journal of University Teaching and Learning Practice, 20(7), 1-10. https://doi.org/10.53761/1.20.7.02

Dr Paula de Barba
Dr Paula de Barba has over 12 years of experience in educational psychology research. Having earned her PhD from the University of Melbourne School of Psychological Sciences and completed postdoctoral research at the Melbourne Graduate School of Education, her expertise lies in investigating student motivation and learning strategies within online settings. She is currently a Senior Lecturer at Monash Education Academy and Monash Online. Dr de Barba’s research focuses on autonomous learning, motivation in online courses, self-regulated learning, and learning analytics, all aimed at empowering students and enriching their educational journeys.
