Blog post | Part of series: Artificial Intelligence in educational research and practice

Generative AI for educators: Opening the ‘black box’ behind its use

Nashwa Ismail, Lecturer at University of Liverpool

Generative AI (GAI) is a form of AI that utilises machine learning and deep learning techniques to generate new data, such as text and images (Yu & Guo, 2023). However, how teachers apply GAI, and how they integrate it pedagogically into their teaching strategies, remains ambiguous. This blog post explores that ambiguity surrounding teachers’ knowledge of GAI and its integration into teaching practices.

Chiu et al. (2023) observed that many educators perceive GAI as a ‘black box’: the limited transparency in how it processes data, generates outputs and makes decisions leaves its inner workings difficult for users to fully understand. Carrigan (2024) confirmed that while large language models (LLMs) are widely used by academics, academics are often not transparent about how they use them. Illingworth (2024) argues that the use of GAI tends to focus on its outcomes (for instance, risk mitigation) rather than on improving teaching and learning (T&L) methods.

This lack of transparency has significant drawbacks. Suharyat (2023) highlights overdependence on GAI, along with concerns over its reliability, bias and ethics. Privacy risks emerge when GAI handles emails or sensitive organisational matters, and these risks are amplified by cyber threats targeting AI platforms such as ChatGPT (Lakshmanan, 2023). Carrigan (2024) highlights the urgent need for open dialogue among educators to establish professional norms before entrenched patterns of GAI use take hold. Wegerif and Major (2024) similarly recommend encouraging dialogue among GAI users to question GAI’s roles in pedagogy, its ethical implications, and its impact on student learning and assessment. Without such discussions, the integration of GAI risks becoming inconsistent, unreliable and potentially harmful. An example of GAI’s environmental harm is the excessive water consumed in cooling and maintaining its data centres (UNEP, 2024).

‘The integration of generative artificial intelligence risks becoming inconsistent, unreliable and potentially harmful.’

Addressing the challenge

To address this vagueness in teachers’ knowledge and use of GAI, university academics across three countries (including the UK) organised full-day workshops. These collaborative events aimed to support educators in discussing their understanding of GAI and its application in teaching and learning. The face-to-face workshops engaged more than 200 academics from a range of higher education roles in collaborative discussions to identify the knowledge and skills educators need to use GAI effectively in T&L. The discussions identified three enablers of the knowledge that educators lack and need support with (see Figure 1):

  1. Pedagogical Enablers: Emphasising the ‘why’ behind GAI integration.
  2. Technological Enablers: Addressing the ‘how’, that is, using GAI tools and staying updated on emerging technology trends.
  3. Institutional Enablers: Exploring the ‘what’, that is, the issues (such as ethics and policy) that guide responsible GAI use.

Figure 1: Enablers for educators to understand and use GAI

In addition, two critical issues emerged from these discussions:

  1. The rapid and diverse evolution of AI, described by participants as ‘an octopus’ because of its widespread impact on education.
  2. The uncertainty of navigating an ever-changing landscape, summarised as ‘We do not know what we do not know.’

In summary, a significant knowledge gap regarding educators’ implementation of GAI in teaching and learning prevents teachers from sharing their use of GAI effectively. This may be due to ethical concerns (‘Does my use of GAI conflict with university policy?’), a lack of technological competence (‘How do I implement GAI tools effectively?’), or a narrow view of GAI as a time-saver rather than a tool for meaningful learning (‘Why should I rethink my teaching approach when GAI can automate tasks?’). By prioritising collaboration, professional development and transparency, educators can move beyond the ambiguity of GAI to critically examine its purpose and pedagogical value, ensuring that its use aligns with educational goals rather than merely focusing on its outcomes.


References

Carrigan, M. (2024). Generative AI for academics. SAGE Publications Ltd.

Chiu, T. K., Xia, Q., Zhou, X., Chai, C. S., & Cheng, M. (2023). Systematic literature review on opportunities, challenges, and future research recommendations of artificial intelligence in education. Computers and Education: Artificial Intelligence, 4, 100118.

Illingworth, S. (2024). Generative AI in academia: Friend or foe? BERA Blog. /blog/generative-ai-in-academia-friend-or-foe

Lakshmanan, R. (2023). Over 100,000 stolen ChatGPT account credentials sold on dark web marketplaces. The Hacker News.

Suharyat, Y. (2023). Artificial intelligence: Positive and negative role in education management. In Proceedings of the International Conference on Education (pp. 349–357).

United Nations Environment Programme [UNEP]. (2024). AI has an environmental problem. Here’s what the world can do about it.

Wegerif, R., & Major, L. (2024). Generative artificial intelligence and a return to dialogue in education. BERA Blog. /blog/generative-artificial-intelligence-and-a-return-to-dialogue-in-education

Yu, H., & Guo, Y. (2023). Generative artificial intelligence empowers educational reform: Current status, issues, and prospects. Frontiers in Education, 8, 1183162.