Capturing Student Affect: Designing and Deploying Effective Microsurveys

Jeff Bergin, Kara McWilliams
DOI: 10.4018/978-1-7998-5074-8.ch014

Abstract

Early warning systems rely on behavioral and cognitive data drawn from student information systems, learning management systems, and courseware platforms; however, they often lack sufficient data on student attitudes, perceptions, and affective responses to effectively prevent student withdrawals, failures, and dropouts or to intervene early enough to improve institutional and student outcomes. To complement these behavioral and cognitive data streams, researchers and designers are increasingly turning toward microsurveys (short questions or question sets that help researchers gather data at strategic points during a course) to enable earlier intervention and, therefore, improve outcomes. However, for microsurveys to be effective, researchers and designers may need to refactor their research, design, and evaluation processes to address considerations unique to microsurveys. This chapter considers how researchers may go about developing microsurveys by formulating a foundational research base, developing initial designs, and then refining those designs through formative evaluation.

Introduction

Despite decades of research on student outcomes, students continue to withdraw from, fail, and drop out of higher education (particularly at two-year and other broad-access institutions) at an alarming rate. According to the National Student Clearinghouse Research Center, of the students who started college in the fall of 2016, 26% did not persist at any institution the following year, and 38% were not retained by their initial institution. At two-year institutions, only 48% were retained (NSC Research Center, 2018). Research has shown that when students feel less motivated to attend a class or do not see its relevance to their degree goals, they visit the learning environment less frequently (Simpson, 2013). Courses taken online have a 10-20% higher drop rate than face-to-face courses, with 40-80% of online students dropping out depending upon the institution and course (Bawa, 2016). Therefore, online courses and degree programs are in particular need of systems that identify students at risk of dropping out in time for instructors, advisors, and administrators to attempt to intervene. Traditionally, the main source of this data was end-of-course surveys, which by their very nature arrive too late (Hu, Lo, & Shih, 2014). Today, early warning systems provide the opportunity to identify students early enough to provide an intervention.

Early warning systems leverage student data to develop algorithms that warn instructors and administrators about student behavior, in particular the risk of failing to meet specific outcome metrics such as passing a course or persisting through a term, in time for intervention and remediation (Howard, Meehan, & Parnell, 2018; National Forum on Education Statistics, 2018). Early warning systems can be used in any educational context, including traditional face-to-face classes; hybrid, blended, or computer-enabled courses; or fully online courses. However, much interest has been directed toward incorporating them into online courses, where student/instructor interaction may be more limited and where failure, drop, and withdrawal rates may be higher (Bawa, 2016). Furthermore, online contexts typically provide more data than other contexts, such as data on student logins, time-on-task, behaviors, and performance. This data can be used to develop early warning systems by mining learner data, discovering trends that reveal at-risk students, and devising algorithms to prompt alerts. Many studies indicate that early warning systems, if implemented with fidelity, can in fact enable successful interventions, which, in turn, can improve learner outcomes in both blended and online courses (Jokhan, Sharma, & Singh, 2018; National Forum on Education Statistics, 2018; Sneyers & De Witte, 2018).
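
To make this concrete, the sketch below shows what a minimal, rule-based early warning check over behavioral data might look like. It is illustrative only: the `StudentRecord` fields, the thresholds, and the alert labels are assumptions for this example rather than details drawn from the systems cited above; a deployed system would derive its rules by mining historical learner data.

```python
from dataclasses import dataclass

@dataclass
class StudentRecord:
    """A weekly behavioral snapshot drawn from an LMS (illustrative fields)."""
    student_id: str
    logins_last_week: int   # LMS login count for the week
    minutes_on_task: float  # total time-on-task for the week
    avg_quiz_score: float   # running average, 0.0-1.0

def early_warning_alerts(record: StudentRecord) -> list[str]:
    """Return the alert reasons triggered for one student.

    The thresholds below are placeholders; a real system would derive
    them from mined historical data rather than hard-coding them.
    """
    alerts = []
    if record.logins_last_week < 2:
        alerts.append("low login frequency")
    if record.minutes_on_task < 30:
        alerts.append("low time-on-task")
    if record.avg_quiz_score < 0.6:
        alerts.append("low quiz performance")
    return alerts

# Example: this student triggers two of the three rules.
student = StudentRecord("s-101", logins_last_week=1,
                        minutes_on_task=25.0, avg_quiz_score=0.72)
print(early_warning_alerts(student))
# -> ['low login frequency', 'low time-on-task']
```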

While early warning systems are an important mechanism for improving outcomes, many are based primarily on behavioral data (National Forum on Education Statistics, 2018). Surveys, however, can provide a meaningful complement to the other data streams typically used in early warning systems. Specifically, microsurveys (short surveys that are limited in both the number of survey items and the length of those items) create a unique opportunity to gather real-time data on students' affective states. This data can complement the behavioral and cognitive data gleaned from traditional coursework or courseware platforms and, therefore, serve as a valuable data input into early warning systems.
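
Continuing the hypothetical example above, the sketch below shows how responses to short ordinal microsurvey items could feed an affective signal into the same alerting logic. The 1-5 scale, the item wording, and the 2.5 cutoff are all assumptions for illustration; as the chapter discusses, a real instrument would be validated psychometrically before its output informs an early warning system.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class MicrosurveyResponse:
    """One answer to a single short, ordinal microsurvey item."""
    student_id: str
    item: str    # the question text shown to the student
    rating: int  # ordinal scale, 1 (negative) to 5 (positive)

def affect_flag(responses: list[MicrosurveyResponse],
                threshold: float = 2.5) -> bool:
    """Flag a student whose recent affect ratings average below threshold.

    The cutoff is an illustrative assumption; in practice it would be
    calibrated against historical outcome data.
    """
    if not responses:
        return False
    return mean(r.rating for r in responses) < threshold

# Example: two low ratings from the same student raise the flag.
responses = [
    MicrosurveyResponse("s-101", "I feel confident about this week's material.", 2),
    MicrosurveyResponse("s-101", "This course feels relevant to my degree goals.", 2),
]
print(affect_flag(responses))  # True: mean rating 2.0 falls below 2.5
```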

Creating, deploying, interpreting, and improving microsurveys involves a series of considerations distinct from those of their longer-form counterparts. These considerations need to be addressed when conducting foundational research to guide initial conceptualizations; when developing and refining initial and iterative designs; when measuring the psychometric properties of the survey; and when conducting formative evaluation efforts for both outcome and instrument improvement.

Key Terms in this Chapter

Outcome: A broadly agreed, measurable change used to monitor the impact of a particular approach or intervention.

Microsurvey: A short question or question set that helps researchers gather data in open-ended, closed-ended, or ordinal formats at strategic points in time.

Engagement: The degree of attention and interest students demonstrate (through their emotional responses, behaviors, and cognition) while learning.

Logic Model: A visualization of the relationships between outcomes, interactions, and interventions that shows a self-evident and defensible logic.

Alerts: Signals, nudges, or warnings sent to students, instructors, and/or administrators that are triggered when data-driven student performance or behaviors indicate a drop below a specified threshold or match a specified pattern.

Usability: The degree to which something can readily be used by a person.

Domain: A specific sphere of activity or knowledge, pertaining to learning; domains include the affective, behavioral, cognitive, and metacognitive.

Theory of Change: An illustration of how and why a change should happen in a specific context.

Archetype: A typology representing a group of people with shared or clustered attributes.
