From content delivery to epistemic stewardship, design, and ethical mediation

Why “the teacher’s role” is the wrong starting point

The rapid mainstreaming of generative artificial intelligence (GenAI), particularly large language models (LLMs), has intensified a long-standing question in language education: what is the teacher's role when explanations, examples, feedback, and even complete texts can be generated instantly? At stake is not merely access to information but the outsourcing of linguistic work itself, including drafting, paraphrasing, ideation, translation, and revision.

In language education, where language is both the medium and the object of learning, this shift introduces a critical applied-linguistic risk: performance without competence. Learners may submit fluent texts that conceal shallow understanding or limited control over linguistic and rhetorical resources. Recent international policy discussions have framed this phenomenon as “false mastery,” where outputs mask underlying gaps in learning.

Against this backdrop, the teacher’s role is not diminished but reconfigured. Rather than acting primarily as a transmitter of content, the teacher increasingly becomes a designer of learning conditions, an epistemic steward, and an ethical mediator in environments where plausible language is no longer evidence of understanding.

Reframing the teacher’s role in GenAI-mediated language education

Teacher as learning designer (not materials producer)

One of the most immediate impacts of GenAI in English language teaching (ELT) is efficiency. Lesson plans, worksheets, texts, and exercises can be generated in seconds. Yet efficiency is not pedagogy. What distinguishes professional teaching is not the ability to produce materials, but the capacity to design learning sequences in which development is visible, assessable, and theoretically grounded.

In this sense, the teacher’s expertise lies in identifying the learning mechanism at work—such as noticing, retrieval, output, interactional feedback, or metalinguistic reflection—and deciding whether GenAI supports or replaces that mechanism. When a tool replaces the mechanism itself, learning risks becoming performative rather than developmental.

For teacher educators, this distinction is crucial. GenAI should function as a resource for pedagogical reasoning, not as a shortcut that bypasses it.

Teacher as epistemic steward in an age of fluent error

Unlike earlier educational technologies, LLMs generate text that is not only fluent but often convincingly wrong. Fabricated references, distorted explanations, and genre-appropriate but pragmatically inappropriate language are well-documented features of current systems.

This shifts the teacher’s role toward epistemic stewardship: safeguarding the quality, validity, and contextual appropriateness of knowledge circulating in the classroom. In language education, this involves particular attention to:

  • the distinction between fluency and accuracy
  • pragmatic and cultural appropriateness across genres
  • evidence-based argumentation and source integrity

In other words, the teacher becomes responsible not just for what students produce, but for the epistemic warrants underpinning that production.

Teacher as assessment and authenticity engineer

Generative AI fundamentally disrupts traditional assessment proxies. A fluent written product can no longer be assumed to reflect independent learner competence. This creates not only concerns about academic integrity, but deeper construct validity problems in writing assessment.

In response, teachers are increasingly required to design assessments that foreground process, decision-making, and justification, rather than surface textual quality alone. Common strategies include:

  • collecting planning notes, drafts, and revision rationales
  • requiring oral justification of linguistic or rhetorical choices
  • designing in-class composing tasks with controlled tool use
  • using source-based and context-embedded writing tasks

The assessment focus shifts from the text as artefact to the thinking that produced it.

Teacher as workload realist and editorial mediator

While GenAI is often framed as a labour-saving tool, research suggests that it frequently reconfigures rather than reduces teacher workload. Teachers must review outputs, verify accuracy, detect bias, and align generated materials with curricular and ethical constraints.

This introduces a largely invisible form of labour: editorial mediation. Teachers are required to exercise professional judgement in deciding when to accept, adapt, or reject GenAI-generated content. This role is neither trivial nor automatable; it depends on disciplinary expertise, contextual knowledge, and pedagogical responsibility.

Teacher as agency protector and ethical mediator

Perhaps the most under-theorised aspect of GenAI integration is its impact on teacher and learner agency. Over-delegation of linguistic work risks long-term deskilling and dependency, while unequal access to tools raises concerns about equity and fairness.

Ethical mediation therefore becomes central to the teacher’s role. This includes:

  • setting transparent boundaries around acceptable use
  • protecting learner data and privacy
  • ensuring informed consent in tool adoption
  • resisting norm-imposing or culturally narrow feedback

Rather than positioning AI as neutral assistance, teachers must actively mediate its pedagogical and ethical implications.

What teachers now need to know: toward applied-linguistic AI literacy

The evolving role of the language teacher in GenAI-mediated environments implies the emergence of a new competence profile. Teachers do not need to become software engineers, yet they increasingly require a form of AI readiness that is grounded not in technical mastery, but in applied linguistics, pedagogy, and assessment theory. This readiness involves developing an informed understanding of how LLMs generate language, where their affordances lie, and where their outputs may introduce epistemic or pedagogical risk (Luckin et al., 2022).

Such competence includes recognising common failure modes of generative systems—such as hallucination, overgeneralisation, genre simulation without contextual appropriateness, and the production of fluent but conceptually shallow discourse. As Kasneci et al. (2023) emphasise, the educational promise of LLMs is inseparable from their unpredictability, requiring teachers and learners alike to develop new literacies for interpreting and evaluating machine-generated language. It also requires the ability to design prompts and classroom tasks that promote reasoning, reflection, and learner agency, rather than mere answer extraction or passive reliance on automated text generation (UNESCO, 2023).

Moreover, teachers must be able to align AI-generated feedback with established principles of feedback literacy, ensuring that learners engage critically with suggestions, make purposeful revisions, and retain ownership of their writing development. Equally important is an awareness of bias, normativity, and linguistic standardisation embedded in automated models, particularly in multilingual and EFL contexts where Anglocentric assumptions may distort learner voice or rhetorical diversity (UNESCO, 2023).

Finally, teachers must develop the capacity to redesign assessment practices in ways that protect construct validity, foreground process and decision-making, and ensure that textual performance remains meaningfully linked to underlying competence. In ELT, therefore, this literacy is best conceptualised as applied-linguistic AI competence: a pedagogically anchored form of professional judgement that exceeds generic technological skill and is central to sustaining responsible, human-centred language education.

An innovative lens: the teacher as curator of productive constraints

A particularly productive way to conceptualise pedagogical innovation in GenAI-mediated language education rests on a paradox: the proliferation of generative capacity increases, rather than decreases, the need for constraint. In contexts where high-quality linguistic output can be produced instantly, the central challenge is no longer access to language models or textual fluency, but the preservation of the cognitive, interactional, and developmental mechanisms through which language competence is formed. GenAI, by design, reduces friction in writing processes; yet language learning often depends precisely on that friction—on effortful formulation, noticing, revision, and metalinguistic decision-making. Without pedagogical structuring, the availability of automated discourse risks transforming learning into mere consumption or surface-level performance.

Within this landscape, the teacher’s role can be reconceptualised as that of a curator of productive constraints: an expert designer who introduces boundaries not to restrict learners, but to protect the epistemic integrity of tasks and ensure that learner agency remains central. Such constraints function as pedagogical scaffolds that make thinking visible, sustain accountability, and preserve assessment validity under conditions of abundant algorithmic assistance.

For example, limiting permissible sources or requiring engagement with a defined corpus of readings shifts writing away from generic content generation and toward synthesis, evaluation, and evidence-based argumentation. Similarly, requiring students to submit prompt histories, revision memos, or reflective commentaries transforms GenAI from an invisible author into an explicit object of pedagogical interrogation, allowing teachers to assess not only the final product but the learner’s interpretive choices, uptake, and rhetorical reasoning.

Other constraints embed contextual specificity into tasks in ways that resist generic automation. Assignments that require local institutional knowledge, situated classroom experience, or interactional negotiation cannot be easily outsourced to decontextualised text generation, thereby reinforcing the learner’s role as an epistemic agent rather than a passive recipient. Pairing oral negotiation with written production further strengthens this principle, as spoken interaction creates opportunities for meaning-making, clarification, and stance-taking that must then be translated into accountable written decisions.

Crucially, such constraints also enable teachers to restore what GenAI environments often erode: diagnostic visibility. When textual outputs become less reliable indicators of competence, teachers must design tasks that foreground process, justification, and decision-making. Accountability mechanisms—such as requiring learners to defend rhetorical choices, explain revisions, or articulate why certain feedback was accepted or rejected—ensure that language development remains anchored in cognition, intention, and communicative responsibility.

In this sense, pedagogical innovation in the GenAI era does not lie in removing boundaries, but in designing them intelligently. Productive constraints function not as limitations, but as enabling conditions through which language learning remains developmental rather than performative, and through which the teacher’s expertise is reaffirmed as central to sustaining meaningful, human-centred education.

Implications for teacher education and research

The implications of generative AI extend across the full continuum of teacher education and research. For CELTA tutors, GenAI offers a timely opportunity to foreground pedagogical reasoning and professional judgement, encouraging trainees not merely to adopt tools instrumentally but to justify instructional decisions through established principles of language learning, interaction, and feedback. At the MA level, the shift becomes more conceptual and analytical, as GenAI must be examined as an emerging literacy environment that reshapes learner agency, revision practices, and the epistemic foundations of assessment, rather than as a neutral or purely supportive instructional resource. For PhD researchers, the frontier is increasingly epistemic and sociotechnical, raising fundamental questions concerning authorship, authority, construct validity, and the redistribution of expertise and responsibility within AI-mediated writing and feedback ecologies.

Conclusion

Generative AI does not diminish the significance of the language teacher; rather, it intensifies the demand for pedagogical expertise that cannot be automated. What becomes obsolete is not teaching itself, but unreflective instructional practice that equates fluent textual production with learning. In GenAI-mediated environments, the teacher’s role is increasingly reconstituted around the design of epistemically robust learning conditions, the safeguarding of assessment validity, and the ethical mediation of technological participation in classroom life. Teachers emerge not as peripheral facilitators of automated outputs, but as central agents responsible for sustaining the interpretive, developmental, and sociocultural integrity of language education.

In an era characterised by abundant and algorithmically generated discourse, the core challenge of ELT is no longer the mere production of language, but the cultivation of accountable language use, critical communicative competence, and durable learner agency. The pedagogical task ahead lies in ensuring that language learning remains not only efficient or technologically enhanced, but intellectually meaningful, ethically grounded, and fundamentally human.

References

European Commission. (2022). Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators. Publications Office of the European Union.

Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., … Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274.

Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2022). Empowering educators to be AI-ready. Computers and Education: Artificial Intelligence, 3, 100080.

OECD. (2026). OECD Digital Education Outlook 2026: Exploring effective uses of generative AI in education. OECD Publishing.

Selwyn, N., Ljungqvist, M., & Sonesson, A. (2025). When the prompting stops: Exploring teachers’ work around the educational frailties of generative AI tools. Learning, Media and Technology.

UNESCO. (2023). Guidance for generative AI in education and research. United Nations Educational, Scientific and Cultural Organization.

UNESCO. (2025). Promoting and protecting teacher agency in the age of AI. UNESCO.

The Meta Linguist

The Meta Linguist is a professional space for English language educators, teacher trainers, and researchers engaging with language teaching at its deepest level. The blog explores the intersections of applied linguistics, corpus-informed pedagogy, and emerging technologies shaping contemporary ELT. With a particular focus on teacher education, writing development, and GenAI-mediated feedback practices, it foregrounds principled, research-informed approaches beyond prescriptive methodology. Designed for CELTA tutors, MA students, and doctoral scholars, The Meta Linguist invites critical reflection on how language, learning, and pedagogy are being reconfigured in an evolving educational landscape.
