Analytic Rubrics for Decision Making

Alan F. Chow (University of South Alabama, USA) and Kelly C. Woodford (University of South Alabama, USA)
Copyright: © 2014 |Pages: 11
DOI: 10.4018/978-1-4666-5202-6.ch011

Chapter Preview

Rubrics have been used in education for many years as tools for assessing student performance. Beyreli & Ari (2009) cited Weigle (2002), who noted that analytic rubrics score a performance as the sum of itemized tasks, while holistic scoring assigns a single score to the entire work. Weigle’s research focused on the development and validation of rubrics designed to assess student performance on problem-solving assignments. In the current study, rubrics were developed for both quantitative and qualitative problems. Rubrics for assessing student performance on conditional probability (quantitative) problems in an introductory statistics course were validated in Study I. A rubric for assessing student performance in discussing and analyzing specific situations related to legal issues (qualitative) presented in an undergraduate business law course was validated in Study II.

Shipman, Roa, Hooten, & Wang (2012) describe the positive and negative qualities of rubrics, focusing on the analytic rubric for problem solving. Like the rubrics in the Shipman et al. study, the rubrics developed and validated in both of the current studies were analytic in nature. Each rubric assigned scores based on several items required in the analysis and completion of the assigned problem, with the total score being the sum of the points earned on each item or task within the problem solution.
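The analytic scoring described above, where the total is the sum of itemized points, can be sketched in a few lines of Python. The item names and point values below are purely illustrative and do not come from the chapter's actual rubrics:

```python
# Hypothetical analytic rubric: each itemized task carries its own
# maximum point value; the total score is the sum across items.
rubric_items = {
    "identify_given_information": 2,
    "set_up_conditional_probability": 3,
    "apply_correct_formula": 3,
    "state_final_answer": 2,
}

def analytic_score(points_earned):
    """Sum the per-item points, capping each item at its maximum."""
    return sum(
        min(points_earned.get(item, 0), max_pts)
        for item, max_pts in rubric_items.items()
    )

# One student's scored solution (illustrative values)
student = {
    "identify_given_information": 2,
    "set_up_conditional_probability": 3,
    "apply_correct_formula": 1,
    "state_final_answer": 2,
}
total = analytic_score(student)  # 8 out of a possible 10
```

A holistic rubric, by contrast, would replace the item-by-item sum with a single overall judgment of the work.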

To validate that the rubrics are adequate for their intended use, the correlation between the scores of multiple raters applying the rubrics to the same student work is calculated. Strong inter-rater correlation indicates reliability of the assessment tool (Saxton, Belanger, & Becker, 2012), and reliability of the assessment tool is considered validation of the tool.



There have been numerous ways presented to improve the learning process in decision sciences, including the use of games and classroom activities (Chow, Woodford, & Maes, 2012), and applications of continuous improvement concepts in the development of online courses (Aggarwal, & Lynn, 2012). Rubrics are another tool used for improving the learning process. Cohen, Mason, Singh, and Yerushalmi (2008) had physics students use rubrics as self-diagnostic tools to determine their prior knowledge of the material introduced, and their post learning level of understanding. By completing the information requested in the rubric, students provided information based on their own perception of their understanding, while also providing the researchers with a consistent vehicle for collecting the information.

Yerushalmi, Mason, Cohen, & Singh (2009) used rubrics to increase student learning in physics. They concluded that students found the rubric a useful learning tool, supporting the findings of their earlier research. Allen and Knight (2009) presented a method of collaboratively developing and validating a rubric suitable for assessing performance in a way that is both “academically-sound and employer-relevant.” Because the rubric was designed collaboratively with input from academics and professionals, it assesses the learning and performance of students against outcome criteria from both.

Key Terms in this Chapter

Problem Solving: The process of coming to some formal or structured conclusion as a result of determining the resolution of a particular task or challenge.

Analytic Rubric: A heuristic device created intentionally to assess the performance or learning of a subject. The analytic rubric provides a specific process for determining the level of performance completion of the given task or assignment.

Reliability and Reproducibility of a Measurement System: A measurement system is reliable when multiple raters produce scores with a consistent ordering. It is reproducible when the average measurements from different raters are consistent and show minimal variation.

Assessment of Learning: The process of measuring and determining the level of learning that comes as the result of some form of instructional intervention.

Decision Making: The act of drawing a conscious conclusion and acting upon that conclusion.

Validity of Rubrics: The measure of applicability of a rubric for its intended purpose by measuring the correlation between multiple raters or scorers.

Performance Evaluation: The process of determining the level of completion of a particular task or assignment. Performance Evaluation assesses the completeness of the solution provided by the subject against some known appropriate solution.
