Assessing Mathematical Writing: Comparative Judgment and Professional Learning

Ian Jones, Jodie Hunter
DOI: 10.4018/978-1-6684-6538-7.ch007

The chapter discusses the potential professional learning benefits for educators who engage in assessing students' mathematical writing. It draws on interview data from twelve mathematics educators who were experienced in assessing primary students' written responses to free-response prompts covering a range of topics. The first stage of the interviews used a stimulated recall protocol following a comparative judgment procedure in which each participant was presented with pairs of students' written responses and asked to decide which was 'better'. The second stage was semi-structured, with questions about how participants made their comparative judgment decisions and whether doing so improved their understanding of students' thinking. The findings are that assessing mathematical writing can provide educators with insights into students' representations, underlying ideas, and learning trajectories, and can also provide a stimulus for changing classroom practice.
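Comparative judgment procedures of the kind described above are commonly aggregated into a quality scale by fitting a Bradley-Terry model to the pairwise decisions, where the probability that script i is judged better than script j is p_i / (p_i + p_j). The sketch below is illustrative only (it is not the chapter's own tooling); it fits the model with a standard minorization-maximization update, and the function name and data format are assumptions for the example.

```python
from collections import defaultdict

def bradley_terry(judgments, iters=100):
    """Estimate a quality score per script from pairwise 'better' judgments.

    judgments: list of (winner, loser) pairs, one per comparative decision.
    Returns a dict mapping script id -> strength (higher = judged better).
    Illustrative sketch of the standard MM fitting procedure.
    """
    items = set()
    wins = defaultdict(int)        # total wins per script
    pairs = defaultdict(int)       # comparison count per unordered pair
    for winner, loser in judgments:
        items.update((winner, loser))
        wins[winner] += 1
        pairs[frozenset((winner, loser))] += 1

    p = {i: 1.0 for i in items}    # start with equal strengths
    for _ in range(iters):
        new_p = {}
        for i in items:
            # MM update: p_i = wins_i / sum over opponents j of n_ij / (p_i + p_j)
            denom = sum(n / (p[i] + p[j])
                        for pair, n in pairs.items() if i in pair
                        for j in pair if j != i)
            new_p[i] = wins[i] / denom if denom else p[i]
        total = sum(new_p.values())
        p = {i: v * len(items) / total for i, v in new_p.items()}  # normalize
    return p
```

For example, if script A is judged better than B in most of their comparisons, B better than C, and A better than C, the fitted strengths recover the ordering A > B > C.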
Chapter Preview

Peer Assessment

Peer assessment involves students making judgments about the quality of their peers’ work (Topping, 2009). Learning benefits have been cited both for the activity of judging quality and for receiving feedback from peers (Falchikov & Goldfinch, 2000), and it is the first of these that we are interested in here. The learning benefits of judging samples of others’ work in mathematics have been reported across topics and age ranges, from school students learning algebra (Rittle-Johnson & Star, 2007), fractions (Jones & Wheadon, 2015) and mathematical problem solving (Evans & Swan, 2014), to undergraduates learning calculus (I. Jones & Alcock, 2014) and proof (Davies et al., 2020).

Reviews of the evidence on the learning benefits of peer assessment highlight the importance of using detailed assessment criteria (e.g., Orsmond et al., 1996; van den Berg et al., 2006). This might involve providing criteria that students are expected to internalize, or having students generate criteria that are then used for making judgments of peers’ work. Our approach challenges the assumption that detailed assessment criteria are indispensable, and argues that there are peer assessment contexts in which avoiding detailed criteria benefits learning.
