Rethinking learner uptake, agency, and epistemic responsibility in the age of generative feedback

Why GenAI feedback is not “just feedback”

The rapid adoption of generative AI tools in language education has produced an understandable pedagogical preoccupation: what does it mean when feedback can be generated instantly, fluently, and at scale? In L2 writing contexts, large language models (LLMs) increasingly function as ubiquitous feedback providers, offering error correction, stylistic refinement, coherence suggestions, and genre-specific rewriting within seconds.

Yet much of the current discourse continues to treat AI feedback as a technologically enhanced version of what teachers have always provided: correction, advice, and improvement. This assumption is pedagogically insufficient. GenAI feedback is not merely a new channel for old practices; it introduces a fundamentally different epistemic and developmental condition in which feedback becomes abundant, decontextualised, and algorithmically authored.

The central question is therefore not whether GenAI feedback is accurate or efficient, but whether it is formative in any meaningful sense. As UNESCO (2023) cautions, generative systems must be integrated in ways that preserve human agency, educational integrity, and the developmental purposes of learning. In L2 writing pedagogy, this requires moving beyond tool adoption toward conceptual reframing.

This article proposes such a reframing: from feedback as correction to feedback as ecology.

Feedback is not a message: it is a system

In second language writing research, feedback has long been conceptualised not as a simple transmission of corrective information, but as a socially mediated developmental process embedded within broader practices of interpretation, negotiation, and revision. Learners do not automatically improve because comments are provided; rather, development depends on how feedback is understood, taken up, and transformed into meaningful writing decisions over time. As Hyland and Hyland (2006) argue, feedback functions less as a discrete instructional message and more as an interactional mechanism through which learners engage with linguistic form, rhetorical intent, and evaluative expectations.

From this perspective, feedback cannot be treated as a static product or a one-directional input. It constitutes an evolving cognitive and pedagogical system shaped by learner agency, teacher mediation, and the contextual conditions under which revision occurs. The emergence of generative AI fundamentally disrupts this system by introducing a feedback agent that is linguistically fluent yet epistemically non-human, capable of producing immediate and scalable suggestions but largely insensitive to local pedagogical goals unless carefully constrained. Moreover, LLM-generated feedback often operates as norm-producing rather than dialogically responsive, offering authoritative-seeming revisions without the interactional negotiation that characterises formative teacher feedback.

Consequently, the pedagogical unit of analysis must shift. The critical question is no longer whether AI feedback is “good” in isolation, but what kind of revision environment it generates, what forms of learner engagement it enables or suppresses, and how responsibility for textual decisions is redistributed. This reframing leads to the central conceptual lens of this article: the need to understand GenAI-supported writing not through feedback as correction, but through AI-mediated revision ecologies—dynamic systems of interaction in which feedback, agency, uptake, and learning are continuously co-constructed.

AI-mediated revision ecologies: a conceptual reframing

An AI-mediated revision ecology can be understood as the dynamic system of relations through which writing development is negotiated when generative feedback tools become embedded in classroom practice. Rather than treating feedback as a discrete corrective input, the ecological perspective conceptualises revision as an emergent process shaped by multiple interacting forces, including learner intentions and agency, teacher mediation and assessment culture, AI-generated suggestions and the linguistic norms they reproduce, as well as broader institutional expectations surrounding accountability and performance. Revision, from this standpoint, is not an individual act of textual adjustment but a socially and epistemically situated practice co-constructed across human and algorithmic actors.

This reframing is particularly important because LLM-generated feedback does not enter pedagogical contexts as a neutral resource. It is absorbed into classrooms that already contain established feedback cultures, power relations, assessment regimes, and learner beliefs about authority and correctness. As Selwyn et al. (2025) observe, teachers are increasingly required to manage not only the outputs of generative systems but also the pedagogical “frailties” they introduce, including inaccuracies, normative pressures toward standardised language, and patterns of learner over-reliance. The ecological lens therefore foregrounds questions that are not merely technical but fundamentally educational: who holds epistemic authority in revision, what counts as legitimate improvement, where responsibility for the final text is located, and what forms of learner engagement and authorship are ultimately being cultivated within AI-mediated writing environments.

Why GenAI feedback is not inherently formative

A persistent misconception in current discussions of generative AI is that feedback becomes formative simply by virtue of being immediate, extensive, or linguistically sophisticated. Yet formative feedback is not defined by volume or speed, but by its capacity to support learner development through processes of reflection, evaluative judgement, and purposeful uptake. In other words, feedback is formative only insofar as it contributes to the learner’s evolving competence, rather than merely improving the surface quality of a single text.

As Kasneci et al. (2023) caution, LLM-generated outputs, while often educationally useful, remain fundamentally probabilistic and cannot be assumed to align reliably with pedagogical intent, curricular priorities, or developmental appropriateness. GenAI feedback is therefore not inherently formative because it frequently lacks the diagnostic grounding that teacher feedback draws from—such as knowledge of learner history, awareness of instructional objectives, sensitivity to developmental readiness, and the dialogic negotiation of meaning that characterises effective formative interaction. Moreover, without accountability structures that require learners to interpret, justify, and act upon suggestions, AI-generated feedback risks encouraging passive acceptance rather than reflective revision.

In many cases, then, generative feedback produces a form of surface optimisation rather than durable learning. This underscores a critical distinction: feedback that improves the text is not necessarily feedback that develops the writer.

Uptake, compliance, and learning: a necessary differentiation

One of the most urgent conceptual tasks in GenAI-mediated writing pedagogy is to distinguish between outcomes that are frequently conflated but pedagogically distinct: uptake, compliance, and learning. Uptake refers to instances in which learners notice feedback, engage with it critically, and incorporate it into revision decisions in ways that reflect understanding and intentionality. Compliance, by contrast, involves the passive acceptance of suggested changes primarily because the feedback source appears authoritative, rather than because the learner has evaluated its appropriateness or rationale. Learning represents a deeper developmental outcome, in which learners internalise linguistic or rhetorical understanding in ways that extend beyond the immediate task and support future performance.

Generative AI environments, however, risk producing high levels of compliance with minimal learning. Learners may accept revisions because they are fluent and polished, not because they are cognitively or rhetorically understood. This dynamic is particularly pronounced in EFL contexts, where linguistic insecurity and deficit orientations can amplify algorithmic authority and encourage uncritical dependence on automated suggestions.

As feedback literacy research has consistently demonstrated, learners must develop the capacity to interpret, evaluate, and act upon feedback as part of an agentive learning process, rather than merely receive it as instruction (Carless & Boud, 2018). In the age of LLM-mediated revision, therefore, feedback literacy becomes increasingly inseparable from AI literacy, requiring learners to exercise judgement, discernment, and epistemic responsibility in their engagement with automated feedback.

Feedback literacy in the age of LLMs

Feedback literacy has traditionally been understood as learners’ capacity to engage productively with evaluative information, including the ability to appreciate feedback, make informed judgements about quality, manage affective responses, and take purposeful action for improvement (Carless & Boud, 2018). Within this framework, feedback is not simply received but interpreted, negotiated, and transformed into learner-driven revision practices. Under GenAI-mediated conditions, however, feedback literacy necessarily expands into a new domain of discernment: the ability to evaluate algorithmically generated suggestions that may appear authoritative yet remain epistemically ungrounded.

In the context of LLM feedback, learners must increasingly ask whether a suggestion is rhetorically appropriate for their communicative purpose, whether it subtly distorts voice or stance, whether it aligns with the epistemic demands of the task, and what may be gained—or lost—through uncritical acceptance. GenAI feedback therefore cannot be treated as an endpoint or a substitute for pedagogical interaction, but must become an explicit object of inquiry through which learners develop judgement, agency, and responsibility in revision. UNESCO (2023) similarly emphasises that AI integration in education must strengthen, rather than erode, learner autonomy and critical capacity, underscoring the need for human-centred mediation in AI-supported feedback practices.

Teacher-designed revision protocols: preventing passive acceptance

One of the most pedagogically innovative responses to AI-mediated revision is the development of structured revision protocols: deliberate processes designed to ensure that generative feedback becomes developmental rather than merely consumable. In the absence of such scaffolding, learners may engage with AI suggestions as ready-made corrections, accepting revisions passively without reflection or ownership. Revision protocols therefore function as pedagogical mechanisms that restore accountability, epistemic agency, and diagnostic visibility within writing development.

A central strategy involves requiring learners to justify AI-suggested changes through brief rationales, explaining why a suggestion was accepted, what communicative principle it improves (e.g., coherence, clarity, register), and whether the change could have been generated independently. Such justification practices shift revision from compliance toward metalinguistic decision-making. Similarly, feedback triage activities invite learners to classify AI feedback as essential, optional, inappropriate, or unclear, thereby transforming revision into an act of evaluative judgement rather than automated uptake.

Further protocols foreground reflective engagement through revision memos, in which students articulate their revision priorities, document which feedback was rejected and why, and identify what was learned through the process. Accountability can also be strengthened through oral defence tasks, where learners explain selected rhetorical or grammatical choices aloud, reinforcing authorship responsibility and making reasoning visible. Finally, verification practices—such as checking AI suggestions against corpora or trusted reference resources—position generative output as hypothesis rather than authority, encouraging learners to treat AI feedback critically rather than deferentially.

Collectively, these protocols enact what Luckin et al. (2022) describe as AI readiness: the development of learner and teacher competence grounded not in automation or tool dependence, but in judgement, discernment, and pedagogically mediated engagement with AI-supported writing environments.

From correction to feedforward: designing for transfer

The ultimate purpose of revision in L2 writing pedagogy is not the attainment of textual perfection, but the cultivation of developmental transfer: the capacity for learners to carry emerging linguistic and rhetorical competence into future writing contexts. From a feedforward perspective, the central concern is therefore not simply whether a text has been improved, but what the learner is able to do subsequently without reliance on external assistance. Feedforward-oriented revision foregrounds questions of durability and agency, including what strategies are being strengthened, what forms of awareness are being cultivated, and how revision contributes to longer-term writing development rather than task-specific optimisation. This emphasis aligns with feedback literacy scholarship, which stresses that feedback becomes educationally meaningful only when learners can interpret it, act upon it, and translate it into sustained improvement beyond the immediate task (Carless & Boud, 2018).

Generative AI can support such feedforward processes only when embedded within pedagogical structures that explicitly prioritise metalinguistic reflection, rhetorical intention, iterative drafting, teacher mediation, and learner ownership of textual decision-making. Without these conditions, AI feedback risks functioning less as a developmental resource and more as a mechanism of outsourced authorship, in which textual refinement is achieved at the expense of competence formation and transferable learning.

Conclusion: Toward responsible revision ecologies

Generative AI does not merely accelerate the provision of feedback; it fundamentally reshapes the ecological conditions under which revision, authorship, and learning are negotiated in L2 writing contexts. The central pedagogical challenge is therefore not simply whether AI-generated feedback should be permitted, but how writing environments can be designed so that feedback remains formative, agentive, and epistemically accountable rather than reductively corrective or automation-driven. Reframing GenAI feedback as situated within an AI-mediated revision ecology enables educators to move beyond surface-level textual optimisation toward feedforward-oriented practices that cultivate learner judgement, rhetorical voice, and durable writing competence.

In an era characterised by abundant algorithmic discourse, the role of the teacher is not diminished but intensified. Teachers remain essential as curators of revision processes, mediators of epistemic responsibility, and architects of pedagogical conditions that sustain meaningful language development, learner agency, and the integrity of writing as a humanly grounded educational practice.

REFERENCES

Carless, D., & Boud, D. (2018). The development of student feedback literacy: Enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315–1325.

Hyland, K., & Hyland, F. (2006). Feedback on second language students’ writing. Language Teaching, 39(2), 83–101.

Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., … Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274.

Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2022). Empowering educators to be AI-ready. Computers and Education: Artificial Intelligence, 3, 100080.

Selwyn, N., Ljungqvist, M., & Sonesson, A. (2025). When the prompting stops: Exploring teachers’ work around the educational frailties of generative AI tools. Learning, Media and Technology.

UNESCO. (2023). Guidance for generative AI in education and research. United Nations Educational, Scientific and Cultural Organization.

The Meta Linguist

The Meta Linguist is a professional space for English language educators, teacher trainers, and researchers engaging with language teaching at its deepest level. The blog explores the intersections of applied linguistics, corpus-informed pedagogy, and emerging technologies shaping contemporary ELT. With a particular focus on teacher education, writing development, and GenAI-mediated feedback practices, it foregrounds principled, research-informed approaches beyond prescriptive methodology. Designed for CELTA tutors, MA students, and doctoral scholars, The Meta Linguist invites critical reflection on how language, learning, and pedagogy are being reconfigured in an evolving educational landscape.
