
Blog post
Part of series: Artificial Intelligence in educational research and practice

Rethinking artificial intelligence through a relational lens

Margaret Bearman, Research professor at Deakin University

Artificial intelligence (AI) prompts an existential crisis for universities (Bearman et al., 2022), stoking concerns that we may lose the human touch or that teachers and institutions may be replaced by AI. Many educational commentators suggest we should instead embrace these new technologies, casting AI as another fabulous tool for educators and students. While human versus machine is an old debate – one that could usefully be reset (Bayne, 2015) – these binary perspectives have resurged with the advent of generative AI technologies such as ChatGPT.

The arrival of ChatGPT and similar large language models feels like a game-changer in how society thinks about AI. These generative AIs have an extraordinary ability to statistically synthesise large amounts of text and present the results in a coherent and often dialogic way. For many educators and students, ChatGPT makes tangible the opportunities and challenges presented by AI.

So how can educators, institutions and students move beyond fear and hype? In light of the concerns and excitement, it is important to look beyond the binaries. In a recent article, we note that AI is often compared to a 'black box' because its outputs are unpredictable, even to its developers (Bearman & Ajjawi, 2023). This leads to frequent calls for 'explainable' AI, or for improving what students (and we ourselves) know about a particular technology. But we think that is not the whole story. We write: 'AI resembles many other aspects of our complex, socially mediated world in that it can never be fully explainable or transparent.' Thus, we argue that we need pedagogic strategies that can help our students learn to work with AI.

We conceptualise an 'AI interaction' as a useful starting point (Bearman & Ajjawi, 2023). An AI interaction occurs when a person works with a technology whose outputs cannot be traced, in a particular time and place. This definition allows a shift away from considering AI as a neutral tool or a deterministic technology, towards a contextualised relationship. This kind of thinking shifts the emphasis from 'what AI can do for us' and 'what AI is doing to us' to 'what we are doing together'. It suggests, for example, that ChatGPT is always situated in the circumstances of its use: whether with an expert, a young child or a university student.

Working with AI can be thought of as a dynamic, in-the-moment experience, rather than a singular, static position. Thus, our students can learn to assess the trustworthiness of AI interactions rather than take a fixed global view of AI. Helping students understand what 'good' looks like or developing their 'evaluative judgement' (Tai et al., 2018) becomes an increasingly important pedagogical approach for working with AI.

Our proposal also exposes trust as a key emotional dimension of working with AI. We write: 'Both trust and distrust are powerful affective prompts – but nor are they sufficient in themselves.' We suggest a person should pay attention to what they are feeling – examining their own doubts and certainties when working with AI (Bearman & Ajjawi, 2023) – to help guide their judgements about seeking evidence to confirm AI outputs. We contend that emotions are often overlooked with respect to technology, yet they play a significant role in how technologies are incorporated into our day-to-day lives.

Our insights help frame how universities – and other educational institutions – can respond to AI. An AI interaction emphasises context: it allows us to note that, in one moment, a person and AI working together might lead to generative learning but that, at another time and place, an AI interaction might be more instrumental or even harmful. We should therefore employ pedagogic strategies that help students distinguish between the two.

This blog post is based on the article by Margaret Bearman and Rola Ajjawi published in the British Journal of Educational Technology.


References

Bayne, S. (2015). Teacherbot: Interventions in automated teaching. Teaching in Higher Education, 20(4), 455–467.

Bearman, M., & Ajjawi, R. (2023). Learning to work with the black box: Pedagogy for a world with artificial intelligence. British Journal of Educational Technology, 54(5), 1160–1173.

Bearman, M., Ryan, J., & Ajjawi, R. (2022). Discourses of artificial intelligence in higher education: A critical literature review. Higher Education, 86, 369–385.

Tai, J., Ajjawi, R., Boud, D., Dawson, P., & Panadero, E. (2018). Developing evaluative judgement: Enabling students to make decisions about the quality of work. Higher Education, 76, 467–481.