Database Sharding: To Provide Fault Tolerance and Scalability of Big Data on the Cloud

Sikha Bagui, Loi Tang Nguyen
Copyright: © 2015 |Pages: 17
DOI: 10.4018/IJCAC.2015040103

Abstract

In this paper, the authors present an architecture and implementation of a distributed database system using sharding to provide high availability, fault-tolerance, and scalability of large databases in the cloud. Sharding, or horizontal partitioning, is used to disperse the data among the data nodes located on commodity servers for effective management of big data on the cloud.

1. Introduction

As the digital revolution ushers in a new era of Big Data on the cloud, the idea of centralized data storage has to be rethought to accommodate availability, scalability, reliability, and manageability. Achieving higher performance at lower cost becomes a challenge, and one that can be addressed by database sharding.

The architecture of a sharded database comprises multiple compute nodes deployed across commodity servers to provide continuous uptime in the event of hardware or network failure. By leveraging database sharding, or horizontal partitioning, the database is divided into smaller chunks, or shards, spread across multiple data nodes in the cluster. Each node contains and is responsible for its own subset of the data, creating a shared-nothing environment.

Unlike a shared-disk clustered database, where data can be accessed from all cluster nodes and contention can arise during simultaneous reads and writes, the nodes of a shared-nothing clustered database each operate on their own subset of the data. Data nodes are replicated to provide redundancy, and thus high availability and scalability. In the worst case, if a node containing a subset of a table's data and its replica both become unavailable, the other nodes, holding different subsets of the data, remain online and available. The business application continues to perform transactions and access data from the remaining data nodes; these operations execute successfully and transparently to the application as long as it does not touch the unavailable node. The failed node can be examined, recovered, and rejoined to the cluster when it is ready, without affecting any other nodes.
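The failover behavior described above can be sketched in a few lines. This is a hypothetical in-memory model, not MySQL Cluster code: each shard has a primary and a replica (node names and the `Shard` class are assumptions for illustration), reads fall back to the replica when the primary is down, and other shards are unaffected.

```python
# Minimal sketch of transparent failover in a shared-nothing cluster:
# a shard stays readable as long as at least one of its replicas is up.

class Shard:
    def __init__(self, primary, replica):
        self.nodes = [primary, replica]          # illustrative node names
        self.up = {primary: True, replica: True}  # liveness, as seen by the router

    def read(self, key):
        # Try the primary first, then fall back to the replica.
        for node in self.nodes:
            if self.up[node]:
                return f"{key} served by {node}"
        # Only when every replica of THIS shard is down does the read fail;
        # other shards in the cluster continue to serve their own subsets.
        raise RuntimeError("shard unavailable: all replicas down")

shard_a = Shard("nodeA1", "nodeA2")
shard_a.up["nodeA1"] = False  # simulate primary failure
```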

Because of economies of scale, the cost of operating the database must be substantially reduced. Database sharding lowers cost in two ways: (1) the cost of operation and maintenance, i.e., Total Cost of Ownership (TCO), and (2) the cost of accessing and processing data. Instead of a centralized storage-area network (SAN), which introduces a single point of failure, database clustering addresses (1) by providing a shared-nothing environment that lets the database scale horizontally on low-cost commodity hardware. By wisely partitioning data rows horizontally across a cluster using range or hash partitioning, subsets of data can be accessed separately on each individual data node, which significantly improves performance capacity while reducing the cost of accessing and processing the data.
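The two partitioning schemes mentioned above can be illustrated with a short sketch. This is an assumed routing function, not the paper's implementation: shard counts and year boundaries are made up, and `crc32` stands in for whatever hash the cluster actually uses.

```python
# Illustrative routing of rows to shards by hash or by range.
import zlib

def hash_shard(key, num_shards):
    """Hash partitioning: spread keys evenly across num_shards
    using a stable checksum (CRC32 here, for determinism)."""
    return zlib.crc32(str(key).encode()) % num_shards

def range_shard(year, boundaries):
    """Range partitioning: boundaries = sorted upper bounds, e.g.
    [2010, 2015, 2020] -> 4 shards; rows land in the first range
    whose upper bound exceeds their year."""
    for shard_id, upper in enumerate(boundaries):
        if year < upper:
            return shard_id
    return len(boundaries)  # last shard holds everything beyond the boundaries

shard_for_user = hash_shard("user42", 4)
shard_for_2012 = range_shard(2012, [2010, 2015, 2020])
```

Range partitioning keeps related rows (e.g., a year of data) on one node, which suits archiving; hash partitioning spreads load more evenly but scatters ranges.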

Database sharding also solves the issue of manageability by partitioning tables and indexes into more manageable units. Database administrators can apply sharding's separation-of-concerns design to the management of data: since each node represents a portion of the data, maintenance of the entire table can be carried out node by node without affecting any other data nodes. A typical task, for example, is archiving or backing up outdated data to save on storage. Rather than archiving the entire table, a database administrator can archive a single data node containing the data for a particular partition, e.g., a year. The archived data can be compressed and transferred to a less expensive storage tier at a lower TCO.
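The per-partition archiving workflow above can be sketched as follows. The in-memory `shards` dict is an assumed stand-in for per-year data nodes; the point is that one cold partition is compressed and removed while the live partitions are untouched.

```python
# Hypothetical sketch: archive one partition (a year's shard) to a
# compressed blob destined for a cheaper storage tier.
import gzip
import json

def archive_partition(shards, year):
    """Remove the given year's rows from the live shard map and
    return them as a compressed blob."""
    rows = shards.pop(year)                         # take only this partition offline
    return gzip.compress(json.dumps(rows).encode())  # compress for cold storage

# Illustrative per-year shards; other years keep serving traffic.
shards = {2013: [{"id": 1}], 2014: [{"id": 2}]}
blob = archive_partition(shards, 2013)
```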

For many years, the accepted way to gain database throughput was to build bigger and better database servers with more CPU cores, faster disk drives, and higher-bandwidth, lower-latency memory. However, by the law of diminishing returns, all three factors, CPU, disk, and memory, must be scaled proportionally to yield maximum performance; increasing only one while the others remain constant yields lower marginal returns. As business applications grow and are stored on the cloud, databases must deliver high performance to handle increasing workloads. Database sharding not only lowers TCO, but is also a factor in gaining high performance. Since sharding divides a table into smaller partitions distributed across multiple data nodes, the number of records in each node is smaller than in the whole table. This improves search performance by reducing the index size and limiting the number of records to be traversed.
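A back-of-the-envelope calculation makes the index-size claim concrete. The row counts below are illustrative assumptions: for an index searched in roughly logarithmic time, splitting the table across k nodes cuts a few comparisons off every lookup on each node.

```python
# Illustrative: comparisons needed for a log-time index lookup
# on the whole table vs. on one of 10 equal shards.
import math

total_rows = 100_000_000   # assumed table size
num_shards = 10            # assumed shard count

depth_whole = math.ceil(math.log2(total_rows))               # lookup on the full index
depth_shard = math.ceil(math.log2(total_rows // num_shards))  # lookup on one shard's index
```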

In this paper, we use MySQL Cluster to demonstrate and explore the capabilities of database sharding. We provide the implementation details for building a sharded database system, and after the implementation section we present examples of databases that we sharded using our implementation.
