Blog post | Part of series: Artificial Intelligence in educational research and practice
A question of selfhood construction: Artificial intelligence and social injustice
Placing selfhood construction at the centre of social justice provides a useful lens to view emerging ethical debates around Artificial Intelligence in Education (AIED) and beyond. Selfhood construction – the making of one’s self – is underexplored in educational research and practice, yet it could help address current and emerging educational crises. Selfhood forms through a constant tension between an individual’s social context and their autonomy (Kincheloe, 2005). Norms, values and knowledge shape who we become, but individuals also act and make choices that respond to those outside influences. When the policies and practices of schools serve to construct certain selfhoods over others, they place an unequal value on those selfhoods and enact a fundamental form of social injustice. This is where the danger of AIED lies, in its reduction of both the diversity of knowledge and the freedom of students.
‘This is where the danger of Artificial Intelligence in Education lies, in its reduction of both the diversity of knowledge and the freedom of students.’
This blog post focuses on the examination of student-centred AIED tools in formal education – such as intelligent tutoring systems and digital learning environments – and critiques their potential social justice implications. To do so, I locate AIED within the educational status quo and use the concept of selfhood construction to interrogate the social injustice potential of these tools.
Viewing AIED through selfhood construction
When I first read about tools that could provide personalised learning for individual students through adaptation, I was filled with optimism about the potential of AIED (Holmes & Tuomi, 2022). I soon realised, however, that even the most promising AIED tools cannot be separated from the context in which they are deployed. While much of this technology is still developing, ethical scrutiny and regulation are urgently needed (Holmes & Porayska-Pomsta, 2023).
AIED risks being a reductive and universalising force on knowledge across curricula worldwide. Although AIED tools are developed globally, most investment and development occurs in the Global North (Holmes & Tuomi, 2022). This influence matters, particularly when considering that many AIED systems are deployed wholesale, without differentiation, to other countries (Zembylas, 2023). As a result, non-dominant perspectives are excluded and dominant knowledge systems from emerging AI global leaders are further amplified. When institutions treat knowledge as singular rather than diverse, they silence ways of knowing outside of the legitimised canon and devalue the selves connected to them (Paraskeva, 2016).
Furthermore, AIED-enabled surveillance threatens student autonomy beyond what is needed for a productive schooling environment. In one example, biometric data collection that tracks students’ eye movements can be used to gauge their levels of task engagement and feed that information back to teachers in real time, allowing for instant corrective action when students fall off task (Conati et al., 2013). While this may keep students working productively, it comes at a severe cost to their personal autonomy. School should help young people explore who they are and who they wish to become, not function as a human capital factory. This exploration involves both resistance and compliance, and I fear that these surveillance practices will all but stifle that resistance. By going further down the path of surveillance, schools implicitly place even greater value on students who engage in a focused and compliant manner. Yet as educators, we know that expecting every student to learn and engage in the same way is both unrealistic and harmful. Classrooms are composed of diverse personalities, and educational policy must make room for that diversity. Privileging a single mode of engagement amounts to a socially unjust vision of education.
Future lessons
AIED could certainly transform education for the better; however, without deliberate direction, these technologies could reinforce the most problematic aspects of our education systems. The choices we make now will shape not only how students learn but who they become. A framework grounded in selfhood construction, such as the one presented in my research, would therefore offer a valuable tool, not only for guiding the development of AIED towards autonomy and diversity, but also for addressing broader questions about what kind of learners, citizens and societies schools are shaping.
References
Conati, C., Aleven, V., & Mitrovic, A. (2013). Eye-tracking for student modelling in intelligent tutoring systems. In R. A. Sottilare, A. Graesser, X. Hu, & H. Holden (Eds.), Design recommendations for intelligent tutoring systems (Vol. 1, pp. 227–236). US Army Research Laboratory.
Holmes, W., & Porayska-Pomsta, K. (2023). The ethics of artificial intelligence in education. Routledge.
Holmes, W., & Tuomi, I. (2022). State of the art and practice in AI in education. European Journal of Education, 57(4), 542–570.
Kincheloe, J. L. (2005). Critical constructivism: A primer. Peter Lang.
Paraskeva, J. M. (2016). Curriculum epistemicide: Towards an itinerant curriculum theory. Routledge.
Zembylas, M. (2023). A decolonial approach to AI in higher education teaching and learning: Strategies for undoing the ethics of digital neocolonialism. Learning, Media and Technology, 48(1), 25–37.