ACRONYM: Context Metrics for Linking People to User-Generated Media Content

Fergal Monaghan, Siegfried Handschuh, David O’Sullivan
Copyright: © 2011 | Pages: 35
DOI: 10.4018/jswis.2011100101


With the advent of online social networks and User-Generated Content (UGC), the social Web is experiencing an explosion of audio-visual data. However, the usefulness of the collected data is in doubt, given that the means of retrieval are limited by the semantic gap between them and people’s perceived understanding of the memories they represent. Whereas machines interpret UGC media as series of binary audio-visual data, humans perceive the context under which the content is captured and the people, places, and events represented. The Annotation CReatiON for Your Media (ACRONYM) framework addresses the semantic gap by supporting the creation of a layer of explicit machine-interpretable meaning describing UGC context. This paper presents an overview of a use case of ACRONYM for semantic annotation of personal photographs. The authors define a set of recommendation algorithms employed by ACRONYM to support the annotation of generic UGC multimedia. This paper introduces the context metrics and combination methods that form the recommendation algorithms used by ACRONYM to determine the people represented in multimedia resources. For the photograph annotation use case, these metrics increase recommendation accuracy. Context-based algorithms provide a cheap and robust means of UGC media annotation that is compatible with and complementary to content-recognition techniques.
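To make the idea of combining context metrics concrete, the sketch below models each metric as a score per candidate person and merges them with a weighted sum. This is an illustrative assumption, not ACRONYM's actual formulation: the two metrics (co-occurrence with already-confirmed people, and overall appearance frequency) and the weights are invented for the example.

```python
# Illustrative sketch only: the metric definitions and weights below are
# assumptions for the sake of example, not ACRONYM's actual algorithms.

from collections import Counter
from typing import List, Set


def cooccurrence_metric(candidate: str,
                        confirmed: Set[str],
                        past_photos: List[Set[str]]) -> float:
    """Fraction of past photos containing the candidate that also
    contain at least one already-confirmed person."""
    with_candidate = [p for p in past_photos if candidate in p]
    if not with_candidate:
        return 0.0
    hits = sum(1 for p in with_candidate if p & confirmed)
    return hits / len(with_candidate)


def frequency_metric(candidate: str, past_photos: List[Set[str]]) -> float:
    """How often the candidate appears at all, normalised to [0, 1]."""
    counts = Counter(name for photo in past_photos for name in photo)
    if not counts:
        return 0.0
    return counts[candidate] / max(counts.values())


def combined_score(candidate: str,
                   confirmed: Set[str],
                   past_photos: List[Set[str]],
                   weights=(0.7, 0.3)) -> float:
    """Weighted linear combination of the individual context metrics."""
    w_co, w_freq = weights
    return (w_co * cooccurrence_metric(candidate, confirmed, past_photos)
            + w_freq * frequency_metric(candidate, past_photos))


# Toy photo history: each photo is the set of people depicted.
history = [{"alice", "bob"}, {"alice", "bob"},
           {"bob", "carol"}, {"alice", "carol"}]
confirmed = {"alice"}  # people already annotated in the new photo
ranking = sorted(("bob", "carol"),
                 key=lambda c: combined_score(c, confirmed, history),
                 reverse=True)
print(ranking)
```

Ranking candidates by such a combined score is what turns raw capture context into annotation recommendations; richer metrics (location, time, social-graph distance) would slot in as additional weighted terms.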
Article Preview

1. Introduction

“User-Created Content” (UCC) or “User-Generated Content” (UGC) (Wunsch-Vincent & Vickery, 2006) is paving the way towards a Web of “object-centred sociality” (Cetina, 1997; Bojārs, Heitmann, & Oren, 2007): a collaborative knowledge management platform built around documents or other objects of interest that goes beyond unidirectional publication and consumption. As well as user profiles, blogs, and other information manually input as text media by users, a significant proportion of UGC now consists of multimedia such as photographs, video and music. With the proliferation of cheap storage and affordable recording devices, interaction with digital multimedia has become a major activity for computer users. Furthermore, fast broadband network connections and ubiquitous, network-ready, sensor-laden mobile devices have facilitated the shift of this interaction to the global stage of the increasingly available Internet.

Users create, store, upload, download, tag, rate, review, browse, search and share text, photograph, video and audio resources using a myriad of hardware and software tools on their personal computers, local networks, mobile devices and the Internet. This UGC multimedia is the currency of popular object-centred social network services like Flickr (photographs), YouTube (video) and music-sharing services.

The spread of peer-to-peer distribution on the Internet and between mobile devices via Personal Area Networking (PAN) allows users to share raw multimedia directly with each other, increasing throughput over the network by cutting out centralised “middle-men” such as Web servers. However, users of social networking services often find their profiles, preferences, tags and other meta-content locked into the proprietary repository of the service that they used to create the content. Every time a user wants to join another online social network hosted by a different social network service, they must leave their existing content behind and recreate or re-upload their user profile and all other UGC. This repetitive process can be extremely tedious and leads to the current situation where social networking services hoard UGC in isolated and disjointed islands of data.

Standards can bridge the gaps between these islands by providing best-practice means of storing and sharing UGC as portable, reusable data. Advocates of data portability, such as the Data Portability project, believe that users should be able to move, share, and control their identity, photographs, videos and all other forms of personal data independently. This can be done by separating the user’s content from the social network service’s functionality; in this way social network services can still compete for membership based on the value added to the user’s content by their functionality. Data portability can be enabled by the widescale adoption of reusable and extensible standards that allow users to control, share, and move their data from one system to another. Such standards are in fact at the heart of the Semantic Web vision, which has data portability as one of its key features.

Due to the dependence of the Semantic Web on ontology-based metadata, an important question is how to support the creation of this semantic metadata. As online social networking and the Semantic Web converge, a social Semantic Web is emerging which may help kickstart this process: a Web of collaborative knowledge management which is able to provide useful information based on human contributions and which becomes more useful as more people participate.

Instead of relying entirely on automated semantics with formal ontology processing and inferencing, the idea behind the social Semantic Web is to complement the formal Semantic Web vision by adding a pragmatic approach relying on heuristic classification and tagging to create semantic metadata in standard description languages. This requires a continuous process of eliciting crucial knowledge of a domain through semi-formal ontologies and emphasises the importance of manually-created, loose semantics as a means to initialise the vision of the Semantic Web. While the Semantic Web enables integration of domain-specific processing with precise automatic logic inference computing across domains, the social Semantic Web offers a more social interface to semantics, allowing interoperability between objects of interest, actions and users.
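The semantic metadata discussed above is typically serialised in standard description languages such as RDF. As a minimal sketch, the snippet below hand-builds a few N-Triples statements describing a photograph that depicts a person; the example URIs are hypothetical, and the use of FOAF vocabulary terms here is an illustrative assumption rather than the exact schema used by ACRONYM.

```python
# Illustrative sketch: emits a photo annotation as N-Triples-style RDF.
# The example.org URIs are hypothetical; FOAF terms are used for illustration.

FOAF = "http://xmlns.com/foaf/0.1/"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"


def triple(s: str, p: str, o: str, literal: bool = False) -> str:
    """Serialise one statement in N-Triples syntax."""
    obj = f'"{o}"' if literal else f"<{o}>"
    return f"<{s}> <{p}> {obj} ."


photo = "http://example.org/photos/1234"    # hypothetical photo URI
person = "http://example.org/people/alice"  # hypothetical person URI

annotation = "\n".join([
    triple(photo, RDF_TYPE, FOAF + "Image"),
    triple(photo, FOAF + "depicts", person),   # links the person to the photo
    triple(person, RDF_TYPE, FOAF + "Person"),
    triple(person, FOAF + "name", "Alice", literal=True),
])
print(annotation)
```

Because such statements use shared vocabularies and globally-scoped URIs, the resulting annotation layer is exactly the kind of portable, machine-interpretable metadata that can move between services rather than being locked into one repository.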
