Big Data at Scale for Digital Humanities: An Architecture for the HathiTrust Research Center

Stacy T. Kowalczyk (Dominican University, USA), Yiming Sun (Indiana University, USA), Zong Peng (Indiana University, USA), Beth Plale (Indiana University, USA), Aaron Todd (Indiana University, USA), Loretta Auvil (University of Illinois, USA), Craig Willis (University of Illinois, USA), Jiaan Zeng (Indiana University, USA), Milinda Pathirage (Indiana University, USA), Samitha Liyanage (Indiana University, USA), Guangchen Ruan (Indiana University, USA) and J. Stephen Downie (University of Illinois, USA)
Copyright: © 2014 |Pages: 25
DOI: 10.4018/978-1-4666-4699-5.ch011

Abstract

Big Data in the humanities is a new phenomenon that is expected to revolutionize the process of humanities research. The HathiTrust Research Center (HTRC) is a cyberinfrastructure designed to support humanities research on big humanities data. The HTRC has been designed so that the technology serves the researcher: content is easy to find, the research tools are efficient and effective, and researchers can customize their environments, combine their own data with that of the HTRC, and contribute their own tools. The architecture has multiple layers of abstraction, providing a secure, scalable, extendable, and generalizable interface for both human and computational users.

Introduction

Big Data is big news. The data deluge of scientific research data, social media data, and financial/commercial data is now mainstream: it is discussed in the popular press, studied in academia, and exploited by entrepreneurs (Press, 2013). Until very recently, however, the humanities were not part of this digital data wave (Anderson, 2007). That is changing: humanities research is undergoing a major transformation (Drucker, 2009; Dunn & Blanke, 2009), digitally based research methodologies are becoming more widely used and accepted (Borgman, 2009), and big data is just becoming available to humanists (Manovich, 2012).

The initial efforts at doing in silico humanities research were personal projects of forward-looking researchers (Dunn & Blanke, 2009). These efforts included digital scholarly editions (Newman, 2013; Walsh, 2013), specialized applications for specific research projects (Walsh & Hooper, 2012), and specialized applications for specific research domains (Almas et al., 2011). Some of these initiatives evolved into community-based efforts to develop text-encoding standards, which became the focus of much digital humanities work over the past ten years (Drucker, 2009). Institutional support for digital humanities includes the trend toward digital humanities centers at colleges and universities; many of these centers have not been as successful as initially expected because digital humanities projects, such as text digitization and encoding or specialty application development, are resource intensive (Borgman, 2009). Cross-institutional activities, including funding-agency initiatives to develop shared technical infrastructures, have had mixed results (Friedlander, 2009). Despite all of these efforts, infrastructure for digital humanities remains unavailable to many researchers (Borgman, 2009).

This chapter describes the research efforts to create a cyberinfrastructure to support Big Data for digital humanities. In the sections that follow, the need for this research is placed in the context of digital humanities and cyberinfrastructure research, the nature of data in the humanities is discussed, the HathiTrust Research Center (HTRC) is introduced, the cyberinfrastructure research is described, and the architecture is explained.
