Exploiting Collaborative Tagging Systems to Unveil the User-Experience of Web Contents: An Operative Proposal
A. Malizia (Universidad Carlos III de Madrid, Spain), A. De Angeli (University of Manchester, UK), S. Levialdi (Sapienza University of Rome, Italy) and I. Aedo Cuevas (Universidad Carlos III de Madrid, Spain)
Copyright: © 2009
The User Experience (UX) is a crucial factor in designing for, and enhancing, user satisfaction when interacting with a computational tool or system. Measuring UX can therefore be very effective when designing or updating a Web site. Many current Web sites rely on collaborative tagging: such systems allow users to add labels (tags) to categorize content. In this chapter the authors present a set of techniques for detecting the user experience through collaborative tagging systems, together with an example of how to apply the approach to a Web site evaluation. The chapter highlights the potential of collaborative tagging systems for measuring user satisfaction and discusses the future implications of this approach compared to traditional evaluation tools, such as questionnaires or interviews.
Collaborative tagging is the process by which users add metadata to community-shared content in order to organize documents for future navigation, inspection, filtering, or search. The content is organised by descriptive terms (tags), which are chosen informally and personally by the user. The freedom to choose unstructured tags is the main distinctive feature of collaborative tagging systems, as compared to traditional digital libraries or other systems of content organization, where the creation of metadata is the task of dedicated professionals (such as librarians) or derives from additional material supplied by the authors (Bennis et al. 1998, Csikszentmihalyi, 1997). Like all socially-generated structures, tagging is an adaptable process: it takes the form best supported by the content, letting users decide the categorization of that content rather than imposing a rigid structure on it. Collaborative tagging is most useful in an environment like the World Wide Web, where a single "content classification authority" cannot exist and a large amount of content is continually being produced by users.
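The tagging process described above can be sketched as a minimal data structure: each user may attach free-form tags to any shared item, and the aggregate counts form the emergent folksonomy. This is an illustrative sketch with hypothetical names (`TagStore`, `top_tags`), not a system described in the chapter.

```python
from collections import Counter, defaultdict

class TagStore:
    """Minimal collaborative tagging store (illustrative sketch)."""

    def __init__(self):
        # item -> set of (user, tag) assignments; each user counts
        # at most once per tag on a given item
        self._assignments = defaultdict(set)

    def tag(self, user, item, tag):
        """Record that `user` applied free-form label `tag` to `item`."""
        self._assignments[item].add((user, tag.lower()))

    def top_tags(self, item, n=3):
        """Most frequently applied tags for an item: the community's
        emergent categorization of that content."""
        counts = Counter(t for _, t in self._assignments[item])
        return [t for t, _ in counts.most_common(n)]

store = TagStore()
store.tag("alice", "example.org", "useful")
store.tag("bob", "example.org", "useful")
store.tag("carol", "example.org", "ugly")
```

Because no vocabulary is imposed, the "schema" is simply whatever tags users converge on; here `top_tags("example.org")` would rank "useful" above "ugly".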
The widespread success of collaborative tagging systems over the last few years has generated a large collection of data reflecting opinions on, and evaluation of, web contents. In this chapter, we look into the possibility of exploiting this large database to evaluate the user experience (UX) of web sites. UX is a multi-faceted construct recently introduced into the HCI agenda to describe the quality of an interactive system (Garrett 2003; McCarthy and Wright 2005). This construct is used to indicate how people feel about a product and their pleasure and satisfaction when using it (Hassenzahl and Tractinsky, 2006). Responses such as aesthetic judgments, satisfaction or frustration, feelings of ownership and identity are the most prominent aspects of user experience investigated in this new, comprehensive, HCI research area (De Angeli, Sutcliffe and Hartman, 2005; Hartman, Sutcliffe and De Angeli, 2007; Norman, 2004). Normally, these responses are collected in formal evaluation settings via questionnaires and/or interviews. Collaborative tagging may offer an interesting alternative, one which is cheaper and less prone to experimental bias. In this chapter, we present a technique to extract semantics from tagging systems, and interpret them to describe the user experience when interacting with on-line content.
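One naive way to interpret tags as UX responses is to project a page's tag frequencies onto a single evaluative dimension, in the spirit of a semantic differential scale. The sketch below is purely illustrative: the word lists and the scoring function are assumptions for demonstration, not the chapter's actual method.

```python
# Hypothetical word lists for one evaluative dimension
# (e.g. pleasant vs. unpleasant); not taken from the chapter.
POSITIVE = {"useful", "pleasant", "cool", "inspiring"}
NEGATIVE = {"ugly", "boring", "confusing", "annoying"}

def differential_score(tag_counts):
    """Score a page in [-1, 1] from its tag frequencies:
    +1 if all matched tags are positive, -1 if all are negative."""
    pos = sum(c for t, c in tag_counts.items() if t in POSITIVE)
    neg = sum(c for t, c in tag_counts.items() if t in NEGATIVE)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total
```

For example, a page tagged "useful" four times and "ugly" once would score (4 - 1) / 5 = 0.6 on this dimension, suggesting a mostly positive experience; unlike a questionnaire, the data comes from spontaneous user behaviour rather than an elicited rating.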
This chapter is organised as follows. Section 2 reviews related work on collaborative tagging systems. Section 3 describes three different techniques that can be used to extract semantics from tagging systems. Section 4 reports a method to derive semantic differential attributes from collaborative tagging systems, and its evaluation. Section 5 summarizes the chapter, delineates future trends in the use of collaborative tagging systems for automating evaluation techniques, and draws conclusions.
Key Terms in this Chapter
Semantic Differential: A type of rating scale designed to measure the connotative meaning of objects, events, and concepts.
User Experience: User experience, often abbreviated UX, is a term used to describe the overall experience and satisfaction a user has when using a product or system.
Information Retrieval: Information retrieval (IR) is the science of searching for information in documents, searching for documents themselves, searching for metadata which describe documents, or searching within databases, whether relational stand-alone databases or hypertextually-networked databases such as the World Wide Web.
Semantic Clustering: Identifying and disambiguating between the senses of a semantically ambiguous word, without being given any prior information about these senses.
Distributed Intelligence: In many traditional approaches, human cognition has been seen as existing solely “inside” a person’s head, and studies on cognition have often disregarded the physical and social surroundings in which cognition takes place. Distributed intelligence provides an effective theoretical framework for understanding what humans can achieve and how artifacts, tools, and socio-technical environments can be designed and evaluated to empower human beings and to change tasks.
Collaborative Tagging Systems: Collaborative tagging (also known as folksonomy, social classification, social indexing and other names) is the practice and method of collaboratively creating and managing tags to annotate and categorize content.
Usability Evaluation: Usability usually refers to the elegance and clarity with which the interaction with a computer program or a web site is designed.
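The Semantic Clustering entry can be made concrete with a common baseline technique: measuring how similar two tags are by the overlap of the items they label (cosine similarity over tag-item co-occurrence). This is a generic sketch of that technique, with hypothetical names, not the specific algorithm evaluated in the chapter.

```python
import math
from collections import defaultdict

def tag_similarity(assignments):
    """Cosine similarity between every pair of tags, based on which
    items they co-label. `assignments` maps item -> set of tags."""
    items_per_tag = defaultdict(set)
    for item, tags in assignments.items():
        for t in tags:
            items_per_tag[t].add(item)
    sims = {}
    tags = sorted(items_per_tag)
    for i, a in enumerate(tags):
        for b in tags[i + 1:]:
            shared = len(items_per_tag[a] & items_per_tag[b])
            denom = math.sqrt(len(items_per_tag[a]) * len(items_per_tag[b]))
            sims[(a, b)] = shared / denom
    return sims

sims = tag_similarity({
    "page1": {"cool", "useful"},
    "page2": {"cool", "useful"},
    "page3": {"boring"},
})
```

Here "cool" and "useful" always co-occur, so their similarity is 1.0, while "boring" shares no items with either and scores 0.0; a clustering step would then group high-similarity tags into candidate senses or attributes.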