TRANSFERRING UNIVERSITY ENGLISH ASSESSMENT THROUGH AI: FROM AUTOMATED GRADING TO REAL-TIME FORMATIVE FEEDBACK

Authors

  • Dr. Mujahid Shah, Associate Professor, Department of English, Abdul Wali Khan University Mardan
  • Hamid Ghaffari, Visiting Lecturer in English, Abdul Wali Khan University Mardan
  • Memoona Khan (Corresponding Author), PhD Scholar, Department of English, Abdul Wali Khan University Mardan

DOI:

https://doi.org/10.63878/qrjs407

Abstract

This case study examines the impact of implementing automated scoring and real-time formative feedback on English assessment and instruction at the university level in the Khyber Pakhtunkhwa (KPK) province of Pakistan. Using a mixed-methods design, we conducted a cluster randomized controlled trial across three public universities in Khyber Pakhtunkhwa during the spring semester, in first- and second-year English as a Foreign Language courses. Approximately 240 students were distributed evenly between experimental and control groups. Control students received conventional teacher feedback, including audio comments, within a teacher-as-facilitator model; experimental students received AI-generated scores and formative feedback on drafts and short oral responses embedded in the learning management system (LMS). Essays, a timed final, and two drafts were rated independently by two assessors on each construct of an analytic rubric aligned with the assigned CEFR domains, and oral recall tasks of 90 to 120 seconds were scored for fluency. In addition to the AI scores, each final score included a pass/fail judgement. We computed inter-rater agreement with Cohen's κ as a measure of score reliability and ran differential item functioning (DIF) analyses to compare AI and human scores for equity across gender and first-language background (Pashto, Hindko, and Urdu). Gains in writing and speaking scores were analysed with ANCOVA, and teacher workload was estimated from time-to-feedback logs in the LMS. Thematic analysis of validated questionnaires (α ≥ .80) and focus groups (n ≈ 36) addressed students' participation in class activities, perceived usefulness of the course, and concerns about academic integrity. The study thus examines the contribution of AI feedback to learning progress and the reliability of automated scoring in the KPK context, and develops an assessment workflow that improves efficiency while maintaining fairness and integrity. Ethical safeguards included informed consent, straightforward opt-out procedures, and Pashto/Urdu information sheets. The findings speak to the adoption of scalable AI-assessment technologies, and resource-constrained universities are likely to draw on the study's conclusions in policy. Results showed that students in the AI group outperformed control students in writing accuracy and speaking fluency; AI scoring was highly accurate but showed minor bias related to L1 background. The findings therefore indicate that using AI in English assessment at KPK universities can improve learning while reducing teacher workload.
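For readers less familiar with the reliability and gain-score analyses named in the abstract, the sketch below shows, in Python, how Cohen's κ (quadratic-weighted, which suits ordered rubric bands) and an ANCOVA on post-test scores could be computed. The column names, toy values, and library choices (scikit-learn, statsmodels) are illustrative assumptions, not the authors' actual analysis pipeline.

```python
# Illustrative sketch only: a minimal reliability and gain-score analysis
# of the kind described in the abstract, using assumed column names.
import pandas as pd
from sklearn.metrics import cohen_kappa_score
import statsmodels.formula.api as smf

# Inter-rater agreement: two assessors' analytic-rubric bands for the same scripts.
ratings = pd.DataFrame({
    "rater1": [3, 4, 2, 5, 4, 3],
    "rater2": [3, 4, 3, 5, 4, 3],
})
kappa = cohen_kappa_score(ratings["rater1"], ratings["rater2"], weights="quadratic")
print(f"Quadratic-weighted Cohen's kappa: {kappa:.2f}")

# ANCOVA on writing gains: post-test score adjusted for pre-test,
# with group (AI feedback vs. control) as the factor of interest.
trial = pd.DataFrame({
    "pre":   [55, 60, 58, 62, 57, 61],
    "post":  [63, 70, 60, 66, 59, 64],
    "group": ["AI", "AI", "AI", "control", "control", "control"],
})
model = smf.ols("post ~ pre + C(group)", data=trial).fit()
print(model.summary())
```

In a full analysis, the same ANCOVA model would be fitted separately for writing and speaking outcomes, and DIF checks would compare AI and human scores across gender and L1 subgroups.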


Published

2024-12-24

How to Cite

TRANSFERRING UNIVERSITY ENGLISH ASSESSMENT THROUGH AI: FROM AUTOMATED GRADING TO REAL-TIME FORMATIVE FEEDBACK. (2024). Qualitative Research Journal for Social Studies, 1(4), 65-75. https://doi.org/10.63878/qrjs407