Parallelization and Scaling of Deep Learning and HPC: Libraries, Tools, and Future Directions

Aswathy Ravikumar (Vellore Institute of Technology, India) and Harini Sriraman (Vellore Institute of Technology, India)
DOI: 10.4018/978-1-6684-3795-7.ch008

Abstract

Making optimal use of high-performance computing (HPC) for deep learning is a difficult task that requires progress across several research domains. Complex systems and events are represented using a combination of science-based and data-driven models. In addition, the growing demand for real-time data analytics necessitates moving large-scale computations closer to the data and data infrastructures, and adapting HPC-like modes of operation. Parallel deep learning aims to maximize the performance of complex neural network models by executing them concurrently on modern hardware platforms. Considerable effort has been invested in integrating HPC technology into deep learning frameworks that are both reliable and highly functional. This chapter discusses the design of distributed deep neural network models with the Parsl, TensorFlow, Keras, and Horovod libraries, and their implementation on Hadoop and Spark clusters over local-area networks (LANs) as well as on cloud services such as Amazon Web Services (AWS) and Google Cloud Platform (GCP).
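
As a concrete illustration of the data-parallel approach the chapter covers, the following is a minimal sketch of distributed training using Horovod's Keras API on TensorFlow. It assumes Horovod is installed with TensorFlow support and that the script is launched with a tool such as horovodrun (for example, horovodrun -np 4 python train.py); the model, dataset, and hyperparameters are placeholders chosen only for illustration, not the chapter's own configuration.

import tensorflow as tf
import horovod.tensorflow.keras as hvd

# Initialize Horovod: one process per GPU (or CPU worker), coordinated via MPI or Gloo.
hvd.init()

# Pin each worker to its own GPU, if GPUs are available.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

# Illustrative dataset and model (placeholders).
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Scale the learning rate by the number of workers and wrap the optimizer
# so gradients are averaged across workers with an allreduce at each step.
opt = tf.keras.optimizers.SGD(learning_rate=0.01 * hvd.size())
opt = hvd.DistributedOptimizer(opt)

model.compile(optimizer=opt,
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

callbacks = [
    # Broadcast initial weights from rank 0 so all workers start from identical parameters.
    hvd.callbacks.BroadcastGlobalVariablesCallback(0),
]

# Print progress only on rank 0 to avoid duplicated logs.
model.fit(x_train, y_train,
          batch_size=64,
          epochs=1,
          callbacks=callbacks,
          verbose=1 if hvd.rank() == 0 else 0)

The same script runs unchanged on a single machine with multiple GPUs or across LAN or cloud nodes; only the launcher arguments (process count and host list) change, which is what makes this style of data parallelism convenient on clusters and on services such as AWS or GCP.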