Classroom Applications of Automated Writing Evaluation: A Qualitative Examination of Automated Feedback

Corey Palermo, Margareta Maria Thomson
Copyright © 2019 | Pages: 31
DOI: 10.4018/978-1-5225-6361-7.ch008

Abstract

The majority of United States students demonstrate only partial mastery of the knowledge and skills necessary for proficient writing. Researchers have called for increased classroom-based formative writing assessment to provide students with regular feedback about their writing performance and to support the development of writing skills. Automated writing evaluation (AWE) is a type of assessment for learning (AfL) that combines automated essay scoring (AES) and automated feedback with the goal of supporting improvements in students' writing performance. The current chapter first describes AES, AWE, and automated feedback. Next, results of an original study that examined students' and teachers' perceptions of automated feedback are presented and discussed. The chapter concludes with recommendations and directions for future research.

Purpose Of The Present Chapter

More than 50 years of applying artificial intelligence to the task of writing evaluation has led to the development of mature automated essay scoring (AES) systems capable of evaluating writing with a high degree of accuracy (Shermis, Garvan, & Diao, 2008; Shermis, 2014). AWE pairs AES with automated feedback in the form of customized suggestions for improving writing quality. Grading essays and providing high-quality feedback on student writing are time consuming for teachers (Dikli, 2010), and AWE removes some of the burden associated with evaluation and the provision of feedback in an AfL environment. However, students’ and teachers’ perceptions of automated feedback constrain the utility of AWE (Roscoe & McNamara, 2013), highlighting a need for additional research investigating how students and teachers experience and understand automated feedback.

The remainder of this chapter will:

  1. Discuss AES, AWE, and automated feedback.

  2. Review previous quantitative and mixed-methods studies that have examined automated feedback in K–12 classrooms.

  3. Describe an original study that examined students’ and teachers’ perceptions of automated feedback provided by the AWE system PEG Writing.

  4. Present study results, recommendations, and directions for future research.

Key Terms in this Chapter

Automated Writing Evaluation: A process in which educational technologies provide formative writing assessment and automated feedback with the goal of supporting improvements in students’ writing performance.

Assessment for Learning: An assessment environment that provides authentic assessment tasks that allow for extensive practice, includes both formal and informal feedback, fosters student autonomy, and balances summative with formative assessment.

Summative Assessment: Various types of assessments used to evaluate student learning at the end of an instructional unit by comparing it against some standard or benchmark.

Writing Traits: Components common to effective writing, including development of ideas, organization, style, language, sentence structure, and conventions.

PEG Writing: An AWE system that includes interactive student lessons, writing prompts, electronic graphic organizers, automated scores and feedback, and writing portfolios. PEG Writing utilizes the Project Essay Grade (PEG) AES engine to provide students with automated scores and feedback for their essays.

Self-Regulation: Learners’ monitoring, regulation, and control of their cognition, motivation, behavior, and some aspects of the learning environment in pursuit of learning goals.

Automated Feedback: Customized suggestions for improving writing quality provided by a computer to a learner.

Formative Assessment: A series of instructional procedures or assessments designed to monitor student learning and to provide ongoing feedback that can be used by instructors to improve their teaching and by students to improve their performance.

Automated Essay Scoring: Methods that utilize statistical models based on human evaluations of writing to score essays in a manner that mimics the scoring of professionals.
