Blog post | Part of series: Artificial Intelligence in educational research and practice
AI and education for Right Action: Exploring directions through possibilitarianism
In this blog post, we think about how artificial intelligence (AI) might open space(s) for Right Action by expanding individualistic human discernment, and by moving towards collective/communal dialogue about possibilities for Right Action. We suggest taking a nuanced approach that opens us to AI’s potential for educational futures.
What is Right Action?
Within utilitarianism, an action is understood in terms of its consequences: Right Action is that which does the ‘maximum good’ or the ‘least harm’ (see Driver, 2014). For virtue ethicists, Right Action means activities that fortify us and improve the ‘happiness of others’ (see Seneca, 2004). Right Action in education broadly indicates activities that not only do no harm but also align with the expectation of assisting students in becoming ‘quality people’ (see Hall & Ames, 1998): individuals who, through the educational process, become more capable of contributing to social harmony. In the classroom this might involve creating habits of considering collective wellbeing over individual gain, or teaching emotional literacy and perspective-taking (see Rogers & Kelly, 2025). Right Action also involves communities and societies, which raises the question: What possibilities exist for AI to expand our individual and collective considerations through living the questions of possibilities for Right Action?
‘What possibilities exist for AI to expand our individual and collective considerations through living the questions of possibilities for Right Action?’
AI and the quest for knowledge
Humans have long sought to build repositories of information, and AI represents the latest human invention for producing and storing knowledge. Ethan Mollick claims, however, that current AI models have a major issue: their pre-training and fine-tuning (the latter through a process called Reinforcement Learning from Human Feedback, or RLHF) are necessarily limited by the knowledge upon which the systems are trained (see Mollick, 2024). The large language models that underpin generative AI are intrinsically biased by the very nature of who has developed and trained them. Mollick illustrates this by noting that US-based ChatGPT is liberal and pro-capitalism: when asked about Communism, ChatGPT uses phrases such as ‘restrictions on freedom’ and ‘low on civil liberties’. Our own search into AI bias led us to DeepSeek, a Chinese-developed generative AI, where the answers to our prompt prove the point in reverse: when asked, it describes Communism in China as a source of ‘world peace and development’ and a ‘people-centered development philosophy’. In other words, current AI models have ‘a skewed picture of the world’ (Mollick, 2024), which suggests that, beyond RLHF strategies, we might also begin considering other training methods for AI tools.
In his Letters to a Young Poet, Rainer Maria Rilke (1929) writes:
try to love the questions themselves as if they were locked rooms or books penned in a language most foreign to you. Don’t search for answers now that cannot be given because you could not live them. And it is about living it all. Live the questions now.
In line with this, we ask: What if all AI systems were weighted towards providing perspectives, and then prompting questions for consideration from the users? What possibilities would this open for education?
The question of Right Action in education
We suggest that AI systems should allow us to live the questions of Right Action and return education to a more dialogic space of inquiry. Within education, we ask generative AIs for the answers, and students using AI seek a platform that either provides those answers or does the thinking work for them. Educational systems, from K-12 curricula to standardised testing regimes such as the SAT, A-levels and Gaokao, are built around the notion of right answers, so the question we pose here is: How can we challenge this right-answer-needed approach so that Right Action has many possibilities?
In this blog post, we advocate for shifting educational spaces beyond right answers to considering possibilities for Right Action through discernment. For instance, when analysing case studies or ethical scenarios, students could engage AI as a collaborative thinking partner, not to provide answers, but to surface diverse perspectives and probe assumptions. Education can shift from an individual to a collective and dialogic space, in which worldviews and perspectives can be discussed, considered and discerned. Advocating for educational spaces that open dialogue, foster greater care and understanding of multiple perspectives, and commit to living the question of greater human flourishing is one possibility we believe worthy of consideration, advocacy and construction. We call for positioning AI as a perspective-multiplier rather than an answer-provider, and for educational frameworks that explicitly promote dialogical exploration of, and engagement with, what Right Action might be.
References
Driver, J. (2014). The history of utilitarianism. In E. N. Zalta & U. Nodelman (Eds.), The Stanford encyclopedia of philosophy (Winter 2022 ed.). Metaphysics Research Lab, Stanford University.
Hall, D., & Ames, R. (1998). Thinking from the Han: Self, truth, and transcendence in Chinese and Western culture. SUNY Press.
Mollick, E. (2024). Co-intelligence: Living and working with AI. Portfolio/Penguin Random House.
Rilke, R. M. (1929). Letters to a young poet. Insel Verlag.
Rogers, H., & Kelly, C. (2025). Exploring the relationship between the Emotional Literacy Support Assistant (ELSA) intervention and whole-school approaches to wellbeing: A case study. Pastoral Care in Education, 43(4), 584–606.
Seneca. (2004). Letters from a stoic (R. Campbell, Trans.). Penguin Books.