Alternative Review Screen Design for Electronic Voting Systems

Danae V. Holmes and Philip Kortum (Rice University, Houston, TX, USA)
Copyright © 2017 | Pages: 18
DOI: 10.4018/IJTHI.2017010105


Verifying a ballot for correctness is a critical task for the voter. Previous work has shown that up to 30% of a ballot can be changed without more than half of voters noticing. In response to this vulnerability, this study evaluated the usability and viability of alternative ballot verification methods in an electronic voting medium. Three verification methods were tested: end-of-ballot, in-line, and dual confirmation. In-line and dual confirmation performed similarly to end-of-ballot confirmation in terms of effectiveness. End-of-ballot review was the most efficient method, while dual confirmation produced the longest time spent on the review screen. End-of-ballot confirmation also produced the highest satisfaction ratings, though survey results indicated that dual confirmation may be the most appropriate method for voting. Additional field research is the next step in exploring these confirmation methods.
Article Preview
Verifying information is a common task performed in a variety of situations such as checking test answers, reviewing receipts, and even deleting files on a computer. Verification is vital for tasks like transferring funds online or trading stock, where errors could lead to very serious consequences. Voting is an area where verification of accuracy is critical. This is especially true with electronic voting systems, where there is a potential security threat to ballot integrity (Bannet, Price, Rudys, Singer, & Wallach, 2004).

Electronic voting systems (referred to in the voting literature as DREs, which stands for Direct Recording Electronic voting systems) typically offer a first line of defense against ballot errors and malicious ballot tampering in the form of a review screen. These screens allow users to review and confirm or change their choices before submitting the ballot.

Research has been conducted on the usability of electronic voting systems and other traditional voting methods (Greene, Byrne, & Everett, 2006), but little research has examined the actual effectiveness and design of review screens (Everett, 2007). According to Fischer and Coleman (2005), DREs have not undergone the extensive scientific analysis and review that would be expected for voting systems. Alvarez (2002) states that a common criticism of electronic voting systems is that their review screens are poorly designed, and he comments on the many considerations involved in review screen design, such as highlighting undervotes (races in which voters intend to vote but do not) and listing candidates' political party affiliations. These design decisions should be grounded in scientific research on proper review screen design, but cannot be, given the lack of empirical literature. Bell et al. (2013) note that the current Voluntary Voting System Guidelines version 1.1 (VVSG) need additional guidance on the design of review screens. Collectively, this evidence shows a lack of literature on how to properly design electronic voting system review screens.

Although there is a lack of research on review screen design, research evaluating the effectiveness of existing DRE review screens does exist. These studies revealed a serious weakness in current DRE review screen design and underscored the consequences of poor design.

Everett (2007) conducted a study showing exactly how important the DRE review screen is to accurately recording voter intent, by examining how voters used DRE review screens and how they performed when their votes were changed. The study asked whether the added security of the review screen is effective, since its purpose is to give voters the opportunity to check their selections for accuracy before submitting their ballots. Two experiments tested the effectiveness of the review screen by examining whether or not voters noticed changes made to their selections on it. Everett (2007) used a prototype DRE voting system called VoteBox, which was developed by Sandler, Derr, and Wallach (2008) and tested for usability against other voting methods by Everett et al. (2008). VoteBox is typical of commercial DREs in that it displays one race at a time and presents a review screen at the end of the ballot.

One experiment in Everett's (2007) study examined whether or not subjects would notice the addition or removal of contests on the review screen. About 68% of subjects did not notice changes to their ballot on the review screen. The number of races added or removed did not affect the likelihood of subjects noticing the change; the data showed that up to 30% of the ballot could be altered without being noticed by the majority of voters, and Everett (2007) suggested that this effect would likely hold at even higher numbers of changed contests. Interestingly, the amount of time a subject spent on the review screen was a good predictor of whether they would detect changes: the longer a subject looked at the review screen, the more likely they were to detect errors.
