Outcomes-based assessment provides data with which programs can demonstrate student learning and evaluate their courses and curricula. This chapter reports on the process used to develop an outcomes assessment initiative for the Multimedia Writing and Technical Communication Program at Arizona State University. The authors discuss the development of outcomes, the mapping of outcomes to the curriculum, the use of electronic portfolios to assess student writing using Phase 2 scoring procedures, and how results from the first three semesters of implementation are being used to evaluate and improve the program’s curriculum.
Technical communication programs are no more immune to the need to demonstrate accountability than other educational programs. As an applied rhetoric, technical communication shares many disciplinary similarities with other writing programs. At the same time, technical communication has a distinct history, its own goals and objectives, and closer ties to industry and the workplace. Further, undergraduate degree-granting technical communication programs (such as the one discussed in this chapter) must address two missions. On the one hand, the program must address learning for program majors. On the other hand, it must address learning for students who enroll in “service courses”: upper-division applied writing courses in which students from other majors build on what they learned in first-year composition.
Research from the broader discipline of rhetoric and composition can inform technical communication degree programs so that outcomes and assessment strategies are established within a disciplinary context. When divorced from teaching and learning, program assessment can be viewed as an administrative mandate that imposes on teachers values and definitions of writing potentially at odds with their pedagogy and teaching philosophies. Assessment then becomes equated with accountability to external parties rather than a way to connect and foster teaching and learning. Edward M. White (1995) has argued that no assessment device is inherently good or bad. Further, he lists three qualities of assessment at its best (White, 1994):
it clearly defines what we do and what we expect our students to do and learn;
it helps us discover whether students have learned; and
it changes our teaching so that we prepare better assignments, give more constructive responses, and grade less.
White (1994) also claims that a primary mistake in program assessment is choosing a measure before developing the goals, specifications, and uses of the instrument.
Key Terms in this Chapter
Outcomes Assessment: The evaluation of student work based on articulated outcomes for a course of study
Holistic Scoring: A scoring method to evaluate writing in which raters award a score for the overall quality instead of for individual traits
Curriculum Matrix: A document visualizing the relationship between outcomes and courses
Authentic Assessment: A form of assessment that requires students to demonstrate performance of skills and abilities
Capstone: The culminating experience of an academic program designed to bring together a student’s educational experiences
Program Review: An expansive process of evaluating academic programs that incorporates data from multiple sources
Rhetorical Argument: A form of academic writing in which the student makes a claim and supports it with evidence