DeepSlicing: Collaborative and Adaptive CNN Inference With Low Latency

Shuai Zhang, Yu Chen, Sheng Zhang, Zhiqi Chen
Copyright: © 2024 | Pages: 28
DOI: 10.4018/979-8-3693-0230-9.ch006

Abstract

Convolutional neural networks (CNNs) have revolutionized computer vision applications with recent advancements. Extensive research has focused on optimizing CNNs for efficient deployment on resource-limited devices. However, previous studies have several weaknesses, including limited support for diverse CNN structures, fixed scheduling strategies, overlapped computations, and high synchronization overheads. In this chapter, the authors introduce DeepSlicing, an adaptive inference system that addresses these challenges. It supports various CNNs, including GoogLeNet and ResNet, and offers flexible, fine-grained scheduling. DeepSlicing incorporates a proportional synchronized scheduler (PSS) that balances computation and synchronization. The authors implement DeepSlicing using PyTorch and evaluate it on an edge testbed of 8 heterogeneous Raspberry Pis. Results show remarkable reductions in inference latency (up to 5.79 times) and memory footprint (up to 14.72 times), demonstrating the efficacy of the proposed approach.
Chapter Preview

1. Introduction

The past decade has witnessed the rise of deep learning. As representative models, convolutional neural networks (CNNs) are widely used in various applications, such as image classification (He et al., 2016; Simonyan and Zisserman, 2014; Szegedy et al., 2015), object detection (Liu et al., 2016; Redmon and Farhadi, 2017; Ren et al., 2015), and video analytics (Hsieh et al., 2018; Jiang et al., 2018; Zhang et al., 2017). These applications rely on dedicated CNNs to accurately detect and classify objects in images and videos.

Despite their advantages, it is important to note that CNN inference requires significant computing resources. For example, VGG-16 demands 15.5G multiply-add computations (MACs) to classify an image with 224×224 resolution (Zhou et al., 2019b). Consequently, conventional solutions perform inference on powerful cloud servers to reduce latency. However, the data being processed originates from the network edge, and long-distance transmission suffers from delay and jitter, making it challenging to meet real-time requirements.
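As a rough sanity check of that figure, the per-layer MACs of the standard VGG-16 configuration can be summed directly. The Python sketch below is illustrative only; the layer shapes follow the original VGG-16 design (Simonyan and Zisserman, 2014) rather than any code from this chapter.

```python
# Rough sanity check of the 15.5G MAC figure for a 224x224 input (illustrative sketch).
# MACs of a 3x3 conv layer = H_out * W_out * C_out * (C_in * 3 * 3).
conv_layers = [
    # (output spatial size, in_channels, out_channels), standard VGG-16 configuration
    (224, 3, 64), (224, 64, 64),
    (112, 64, 128), (112, 128, 128),
    (56, 128, 256), (56, 256, 256), (56, 256, 256),
    (28, 256, 512), (28, 512, 512), (28, 512, 512),
    (14, 512, 512), (14, 512, 512), (14, 512, 512),
]
conv_macs = sum(s * s * c_out * c_in * 3 * 3 for s, c_in, c_out in conv_layers)
fc_macs = 7 * 7 * 512 * 4096 + 4096 * 4096 + 4096 * 1000   # the three fully connected layers
print(f"{(conv_macs + fc_macs) / 1e9:.2f} GMACs")          # ~15.47, i.e. roughly the quoted 15.5G
```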

The proliferation of the Internet of Things (IoT) in recent years has led to the growth of computing capabilities at the network edge, giving rise to edge computing (Satyanarayanan, 2017). User data can be processed by CNNs locally, eliminating the need for remote transmissions. To address the discrepancy between the limited computation capabilities of edge devices and the resource demands of CNNs, various approaches have been explored, including model compression (Crowley et al., 2018; Han et al., 2015; Zhang et al., 2018), model early-exit (Li et al., 2018; Scardapane et al., 2020; Teerapittayanon et al., 2016), model partitioning (Dey et al., 2019; He et al., 2020; Hu et al., 2019; Hu and Krishnamachari, 2020; Jeong et al., 2018; Kang et al., 2017; Ko et al., 2018; Xu et al., 2017), data partitioning (Hadidi et al., 2019; Mao et al., 2017; Stahl et al., 2019; Zhao et al., 2018; Zhou et al., 2019a), and domain-specific hardware/tools (ope; tpu).

Model compression aims to create a more compact CNN model by revising the model itself. However, even with compression, large input data can overwhelm an IoT device with limited RAM. Model early-exit skips later layers to accelerate inference, but it incurs additional training cost. Model partitioning splits a CNN model between the edge and the cloud, leveraging the network bandwidth of the edge and the computing power of the cloud. However, wide area network (WAN) transmission still poses challenges, and the sequential execution of partitions fails to fully exploit the parallelism available among edge devices.

In contrast to these methods, data partitioning enables parallel inference by splitting the input data among edge devices, fully utilizing the computing resources of each device (see the sketch below). Furthermore, data partitioning is edge-native, taking advantage of the faster and more stable connections between edge devices compared to the WAN. This enables more efficient communication and thus lower inference latency. While numerous studies (Hadidi et al., 2019; Mao et al., 2017; Stahl et al., 2019; Zhao et al., 2018; Zhou et al., 2019a) have focused on data partitioning, they suffer from certain weaknesses that serve as the motivation for our work:
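As a concrete illustration of the data-partitioning idea, the following PyTorch sketch splits a single convolution's input along its height, runs each slice (plus a small halo of boundary rows) independently, and stitches the partial outputs back together. This is a minimal, illustrative example, not the DeepSlicing implementation; the split points, halo size, and slicing logic are assumptions made for the sketch.

```python
import torch
import torch.nn as nn

# Minimal sketch of spatial data partitioning for one conv layer (illustrative only,
# not the DeepSlicing implementation). Each "device" receives its height slice plus a
# 1-row halo so the 3x3 kernel sees the same neighborhood as in the full input.
torch.manual_seed(0)
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
x = torch.randn(1, 3, 224, 224)

halo = 1                                   # a 3x3 kernel needs one extra row per side
splits = [(0, 112), (112, 224)]            # output row ranges, one per device

parts = []
with torch.no_grad():
    full = conv(x)                         # reference: inference on the whole input
    for start, end in splits:
        lo, hi = max(start - halo, 0), min(end + halo, x.shape[2])
        out = conv(x[:, :, lo:hi, :])      # each device runs the same layer on its slice
        parts.append(out[:, :, start - lo : start - lo + (end - start), :])  # drop halo rows

stitched = torch.cat(parts, dim=2)
print(torch.allclose(full, stitched, atol=1e-6))   # True: partitions reproduce the full output
```

Note that the one-row halo is what keeps the slice outputs consistent at the boundary for a 3×3 kernel; stacking many layers enlarges the required halo, which is the source of the overlapped computation and synchronization overheads mentioned above.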
