Part of series: Artificial Intelligence in educational research and practice
Surface certainty, knowledge substitution and the rise of the ‘shallow learner’
As generative artificial intelligence (AI) tools such as ChatGPT become embedded within higher education (HE), we must ask not only how they shape student practices but also how they shift our very understanding of what it means to know. One emerging risk, largely unaddressed in current discourse, is the development I call the ‘shallow learner’: a student who outsources critical engagement, over-relies on AI outputs, and enters the workforce with a fragile epistemic foundation. In a sector premised on cultivating deep, rigorous understanding, this should give us pause.
This blog post focuses on how generative AI risks encouraging surface-level learning habits, eroding students’ epistemic literacy, and leading to the emergence of ‘shallow learners’ who appear knowledgeable but lack deep, critical understanding.
Knowledge without epistemology
AI tools generate fluent, plausible text that mimics scholarly tone and structure but lacks lived experience, critical interrogation or an epistemic trail: the ability to trace how knowledge is constructed through sources, theory and evidence. When students use generative AI in place of academic reading and dialogue, they risk knowledge substitution: accepting AI output as fact without questioning its validity. A student might, for example, ask ChatGPT to explain safeguarding in education and reproduce its response without checking whether its definitions align with current legislation or ethical frameworks. This leads to the shallow learner, who repeats terminology without understanding its application.
‘When AI offers frictionless answers, students may confuse fluency with understanding and coherence with credibility.’
This becomes especially problematic when students lack epistemic literacy: the capacity to ask who said this, how they know, and what is missing. Brookfield (2017) reminds us that critical thinkers engage with multiple perspectives and assumptions, yet AI output typically avoids contradiction, nuance or doubt. Sousa and Cardoso (2025) found that only 2.3 per cent of undergraduates abstained from AI use, while Stone (2024) noted that 41 per cent used it in ways that had been banned. When AI offers frictionless answers, students may confuse fluency with understanding and coherence with credibility.
From cognitive off-loading to professional risk
This erosion of critical engagement has serious implications for professional preparation. HE is not just about acquiring information; it is about cultivating the judgment, resilience and contextual understanding required for professional life. Disciplines such as education, healthcare, law and engineering depend on navigating ambiguity, interpreting evidence and applying theory to practice.
Students increasingly use AI for tasks such as summarising complex texts or generating responses to case studies. While such use can be helpful as a scaffold, uncritical dependence risks producing professionals with shallow conceptual depth and limited evaluative skill. The Organisation for Economic Co-operation and Development (OECD) warns of ‘cognitive off-loading’, where reliance on technology weakens independent thinking and confidence in problem-solving (OECD, 2023). The result is shallow learners who graduate appearing competent on paper but struggling with real-world decision-making.
Reframing pedagogy around critical AI use
Rather than resisting AI’s presence, educators must shift focus to how students use it. This starts by embedding critical AI literacy – helping students interrogate outputs, verify claims, and engage in epistemic critique. It also means rethinking assessment: prioritising annotated processes (students submitting AI-generated drafts alongside notes highlighting what they kept, edited or rejected) or dialogic exploration (students critically discussing contrasting AI-generated responses to the same question) over polished but decontextualised outputs. This aligns with Biesta’s (2009) call for education to move beyond knowledge transmission towards subjectification: the development of autonomous, responsible thinkers. To achieve this, learners must engage with uncertainty and contradiction, not outsource them to machines.
‘If we allow AI to flatten learning into frictionless output, we risk producing graduates who are well-written but lack depth, nuance or ethical grounding.’
Conclusion
If we allow AI to flatten learning into frictionless output, we risk producing graduates who are well-written but lack depth, nuance or ethical grounding. The role of higher education must be to cultivate thinking that AI cannot simulate: dialogic, situated and critically aware. This is not a crisis of technology but a crisis of pedagogy. AI can support learning, but only if students are equipped to question, critique and go beyond it. Our challenge is to ensure that in embracing these tools, we do not lose the very capacities that define education at its best.
References
Biesta, G. (2009). Good education in an age of measurement: On the need to reconnect with the question of purpose in education. Educational Assessment, Evaluation and Accountability, 21(1), 33–46.
Brookfield, S. D. (2017). Becoming a critically reflective teacher. Jossey-Bass.
Organisation for Economic Co-operation and Development [OECD]. (2023). Opportunities, guidelines and guardrails on effective and equitable use of AI in education.
Sousa, A. E., & Cardoso, P. (2025). Use of generative AI by higher education students. Electronics, 14(7), 1258.
Stone, B. W. (2024). Generative AI in higher education: Uncertain students, ambiguous use cases, and mercenary perspectives. Teaching of Psychology, 52(3), 347–356.