Blog post Part of series: Artificial Intelligence in educational research and practice

Rethinking feedback literacy: Teaching engagement with AI-generated feedback

Aakash Kumar, Doctoral student at Texas A&M University

Despite decades of research, helping students use feedback effectively remains one of the most persistent challenges in writing instruction. Teachers devote extensive time to commenting on student work, yet the benefits often fall short because students struggle to interpret and act on those comments. The arrival of generative artificial intelligence (GenAI) has intensified this challenge by introducing a new, non-human source of feedback. Understanding how teachers and students engage with it requires rethinking what it means to be feedback literate.

What it means to be feedback literate

Carless and Boud (2018) define feedback literacy as a set of understandings that enable learners to interpret feedback, make evaluative judgments and act for improvement. In writing instruction, feedback literacy translates into several behaviours, such as identifying what problem a comment addresses, evaluating the evidence behind a suggestion, and planning specific revisions. Without these processes, feedback provides information, but no improvement follows. Feedback literacy, therefore, is less about receiving comments and more about reasoning with them.

Rethinking feedback in the age of GenAI

Generative AI has transformed how feedback is produced and used. Tools such as ChatGPT, Gemini and Claude can generate immediate, fluent responses to student writing. While these tools offer accessibility and speed, they operate through pattern recognition, not understanding. Banihashem et al. (2024) found that AI feedback effectively addressed grammar and organisation but struggled with meaning, argumentation and disciplinary reasoning. When students accept AI suggestions without critical evaluation, revision becomes mechanical, focused on correction rather than understanding.

‘When students accept AI suggestions without critical evaluation, revision becomes mechanical, focused on correction rather than understanding.’

Rethinking feedback literacy for teachers

The emergence of AI has reshaped teachers’ role from providing feedback to mediating the feedback generated by GenAI tools and guiding students to interpret it critically. This requires explicit preparation in teacher education and professional development, emphasising three outcomes. The first is interpretive competence: the ability to evaluate whether AI feedback aligns with curricular objectives and writing outcomes. The second is instructional integration: the ability to design activities in which AI feedback serves as material for analysis rather than as an endpoint for revision. The third is ethical awareness: the ability to guide students through issues of originality and the appropriate use of AI tools. Zhan et al. (2025) propose a framework in which generative AI can enable student feedback engagement if teachers scaffold its use deliberately.

Rethinking feedback literacy for students

Students need structured strategies for working with AI feedback. Feedback literacy for students involves analysis and decision-making. Students should learn to ask: (a) What aspect of writing does this feedback address? (b) Is the suggestion consistent with my communicative goal or disciplinary expectation? (c) How will implementing this change affect meaning or coherence?

For example, when an AI tool recommends simplifying a sentence, a feedback-literate student must judge whether simplicity enhances clarity or removes necessary precision. Banihashem et al. (2024) found that students benefit most when they engage critically with feedback and compare it with peer or teacher feedback. Such triangulation of sources helps students recognise differences in depth, tone and purpose.

Keeping human judgment central

Technology can accelerate feedback but cannot replace pedagogical reasoning. Teachers’ feedback carries relational and moral dimensions, such as empathy and encouragement, that GenAI tools lack; because AI tools analyse text statistically rather than contextually, they cannot account for these dimensions. When feedback becomes entirely automated, students risk treating revision as compliance rather than reflection. Human judgment ensures that feedback continues to develop writers rather than merely to perfect text.

Conclusion

Feedback literacy remains the foundation of effective writing instruction. What has changed is the ecology in which feedback is produced and interpreted. AI has made feedback more abundant but not inherently more informative. Understanding still depends on discernment, the ability to evaluate information and to preserve independent judgment. The goal of feedback has never been to perfect text. Its purpose has always been to develop thoughtful writers, and that remains a distinctly human responsibility.


References

Banihashem, S. K., Kerman, N. T., Noroozi, O., Moon, J., & Drachsler, H. (2024). Feedback sources in essay writing: Peer-generated or AI-generated feedback? International Journal of Educational Technology in Higher Education, 21(1), 23.

Carless, D., & Boud, D. (2018). The development of student feedback literacy: Enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315–1325.

Zhan, Y., Boud, D., Dawson, P., & Yan, Z. (2025). Generative artificial intelligence as an enabler of student feedback engagement: A framework. Higher Education Research & Development, 44(5), 1289–1304.