Parallel Data Transfer Protocol

Yushi Shen (Microsoft Corporation, USA), Yale Li (Microsoft Corporation, USA), Ling Wu (EMC2 Corporation, USA), Shaofeng Liu (Microsoft Corporation, USA) and Qian Wen (Endronic Corp, USA)
DOI: 10.4018/978-1-4666-4801-2.ch012

Abstract

Transferring very high quality digital objects over optical networks is critical in many scientific applications, including video streaming/conferencing, remote rendering on tiled display walls, 3D virtual reality, and so on. Current data transfer protocols rely on the User Datagram Protocol (UDP) together with a variety of compression techniques, but none of them scales well to the parallel model of transferring large-scale graphical data. Existing parallel streaming protocols have only limited mechanisms for synchronizing their streams efficiently, and are therefore prone to slowdowns caused by significant packet loss on just one stream. In this chapter, the authors propose a new parallel streaming protocol that streams multiple synchronized flows of media content over optical networks through Cross-Stream packet coding, which not only tolerates random UDP packet losses but also aims to tolerate unevenly distributed packet-loss patterns across multiple streams, achieving a synchronized throughput with reasonable coding overhead. Simulation results show that the approach generates steady throughput from fluctuating data streams with different loss patterns, and transfers data in parallel at a higher speed than multiple independent UDP streams.
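The abstract does not specify the exact Cross-Stream code used; as a minimal illustrative sketch, assume a simple XOR parity scheme in which every group of k packets (one per parallel stream) is protected by one parity packet, so the receiver can rebuild any single lost packet in the group without retransmission. All names here are hypothetical.

```python
def xor_bytes(blocks):
    """XOR a list of equal-length byte strings together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def encode_group(packets):
    """One packet per stream in; append a cross-stream parity packet."""
    return list(packets) + [xor_bytes(packets)]

def decode_group(received):
    """received: k data slots plus one parity slot, with at most one
    slot set to None (the lost packet). XOR-ing all survivors yields
    the missing packet, because every byte appears an even number of
    times except the lost one's."""
    if None not in received:
        return received[:-1]
    lost = received.index(None)
    recovered = xor_bytes([p for p in received if p is not None])
    packets = list(received)
    packets[lost] = recovered
    return packets[:-1]
```

With this scheme, a loss burst concentrated on one stream still costs only one recoverable packet per coding group, which is the intuition behind tolerating unevenly distributed loss patterns; a production code would use a stronger erasure code to survive multiple losses per group.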

Introduction

In recent years, CineGrid (CineGrid) has led the new trend of using high-bandwidth networks to transfer very high quality digital content, both for real-time presentation and for digital preservation. Although CineGrid has performed many successful demonstrations of 4K video (Shirai et al., 2009; Herr, 2005), we also foresee the challenge of designing new protocols that can scale the current solution to applications demanding higher bandwidth and higher-resolution digital content. Nowadays, ultra-high-resolution displays have become standard infrastructure in scientific research. These displays are typically built by tiling an array of standard LCD panels into a display wall driven by a PC cluster (DeFanti et al., 2009). Figures 1a and 1b show different display-wall settings: the HIPerSpace tiles fifty-five 30-inch LCD panels into a display wall with more than 280 million effective pixels, and the StarCAVE (DeFanti et al., 2009) uses 16 high-definition projectors to construct a 3D virtual room in which people can navigate 3D virtual-reality objects.

Figure 1.

a) HIPerSpace, one of the world’s largest display walls, at Calit2, UC San Diego, has 286,720,000 effective pixels. b) The StarCAVE, a third-generation CAVE and virtual-reality OptIPortal at Calit2, UC San Diego, ~68,000,000 pixels.

Meanwhile, research on high-speed optical networks (GLIF) makes it possible for scientists to use these ultra-high-resolution displays over long distances, in scientific applications such as very-high-definition video streaming/conferencing (OPIPUTER) and real-time visualization of data generated by remote scientific instruments. As a prime example of combining display walls with high-speed networks, the OptIPuter (OPIPUTER) (Smarr, 2009) research project, funded by the US National Science Foundation, constructed a 1-10 Gbps optical network infrastructure and middleware, aiming to provide interactive access to remote gigabyte- to terabyte-scale visualization data objects and bring them to its visual interface, the OptIPortals (DeFanti et al., 2009).

However, scaling up display devices, from a single PC with a single display to a cluster of PCs driving a cluster of displays, has raised the challenge of how to feed data into these devices. This makes the traditional single-data-source communication model obsolete and calls for a new multiple-to-multiple communication model: one that provides very-high-bandwidth parallel data streaming between terminal display devices while still appearing to them as point-to-point communication. This challenges traditional transport protocols. The widely adopted Transmission Control Protocol (TCP) is slow because of its window-based congestion-control mechanism, particularly over long-distance links. Alternatives such as RBUDP (He et al., 2002), UDT (Gu & Grossman, 2007), and LambdaStream (Vishwanath et al., 2006) are recently developed UDP-based protocols focused on high-speed file transfer or real-time data streaming between two end nodes. These protocols are point-to-point and rate-based: they support one sender and one receiver, recover lost UDP packets by resending them, and have the sender control the sending rate to minimize packet loss. RBUDP, for instance, uses a bitmap to track lost UDP packets and performs multi-round communication to recover them, which usually takes 2-5 round-trip times (RTTs) to transfer a gigabyte-scale file correctly; LambdaStream detects packet loss from the gap between the receiving times of two consecutive UDP packets, and if that gap is larger than expected, the sender reduces the sending rate accordingly. None of these protocols, however, can be extended to the multiple-to-multiple communication model in a straightforward way.
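The RBUDP-style recovery loop described above can be sketched as follows. This is a simplified simulation, not RBUDP's actual implementation: the receiver records arrivals in a bitmap, returns it to the sender after each blast, and the sender retransmits only the missing packets until the bitmap is full. Function names and the loss model are hypothetical.

```python
def blast(packet_ids, loss):
    """Simulate one UDP blast: every packet not in `loss` arrives."""
    return [pid for pid in packet_ids if pid not in loss]

def transfer(total, loss_rounds):
    """Multi-round recovery of `total` packets. loss_rounds[i] is the
    set of packet ids dropped during round i (an empty set models a
    round with no loss). Returns (rounds used, packets still missing)."""
    bitmap = [False] * total          # receiver-side arrival bitmap
    pending = list(range(total))      # sender starts with everything
    rounds = 0
    for loss in loss_rounds:
        for pid in blast(pending, loss):
            bitmap[pid] = True        # receiver marks arrivals
        rounds += 1
        # receiver reports the bitmap; sender keeps only the holes
        pending = [pid for pid in range(total) if not bitmap[pid]]
        if not pending:
            break
    return rounds, pending
```

Each extra round costs roughly one RTT (bitmap back, retransmission out), which matches the 2-5 RTT figure cited for RBUDP: one blast plus a few shrinking recovery rounds. The chapter's parallel protocol avoids this per-stream feedback loop by coding across streams instead.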
