In previous studies, traditional machine learning algorithms have relied heavily on manual feature engineering and selection. As network data continues to grow, more and more scholars and researchers are integrating new IoT frameworks that balance service and data-processing requirements by offloading work to edge computing nodes or devices in the IoT.
Oma et al. (2018) introduced edge nodes as a middle layer: sensor data is pushed directly to nearby edge nodes for processing, and the edge nodes handle feedback data to reduce latency. Edge nodes, however, are limited in computing capacity.
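The middle-layer idea can be illustrated with a minimal sketch (not the cited paper's implementation): an edge node processes as many readings per cycle as its limited compute allows, and the overflow must go upstream. The `EdgeNode` class, its `capacity` parameter, and the placeholder transform are all assumptions for illustration.

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass
class EdgeNode:
    """Illustrative middle-layer node: processes sensor data locally,
    forwards whatever exceeds its compute budget to the cloud."""
    capacity: int                       # max readings processed per cycle (limited compute)
    queue: deque = field(default_factory=deque)

    def ingest(self, readings):
        # sensors push raw data directly to the edge node
        self.queue.extend(readings)

    def process_cycle(self):
        """Handle up to `capacity` readings locally; the rest overflows upstream."""
        handled = []
        for _ in range(min(self.capacity, len(self.queue))):
            handled.append(self.queue.popleft() * 2)  # placeholder local processing
        overflow = list(self.queue)
        self.queue.clear()
        return handled, overflow

edge = EdgeNode(capacity=3)
edge.ingest([1, 2, 3, 4, 5])
local, to_cloud = edge.process_cycle()
# local == [2, 4, 6]; to_cloud == [4, 5] — the overflow reflects the capacity limit
```

The split between `local` and `to_cloud` makes the paper's trade-off concrete: latency drops for data the edge can absorb, but capacity caps how much that is.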
Datta et al. (2019) proposed an IoT edge computing architecture that adds a relay computing layer to process IoT data. The method uses virtual IoT devices to process data locally, improving real-time response. It does not, however, account for growing data demand, and the surge in sensing equipment remains constrained by communication, power, and computing capabilities.
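As a rough sketch of the virtual-device idea (the class and its methods are assumptions, not the paper's API): a virtual counterpart at the relay layer mirrors a physical sensor's latest state, so queries are answered locally instead of requiring a round trip to the device or cloud.

```python
class VirtualDevice:
    """Illustrative virtual IoT device held at the relay layer: mirrors a
    physical sensor's latest state so queries avoid an upstream round trip."""

    def __init__(self, device_id):
        self.device_id = device_id
        self.state = None

    def sync(self, reading):
        # the physical device pushes readings to its virtual counterpart
        self.state = reading

    def query(self):
        # answered locally at the relay layer, no request to the device
        return self.state

twin = VirtualDevice("sensor-42")
twin.sync({"temp": 21.5})
latest = twin.query()  # served locally: {"temp": 21.5}
```

The design choice is that reads hit the relay-layer replica, which is what improves real-time response; the cost, as the paragraph notes, is that every new sensor still consumes communication and power budget to keep its replica synchronized.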
Li et al. (2020) modeled the computation offloading process at the level of the minimum allocatable wireless resource block and proposed a computation offloading method. The method measures the cost-effectiveness of resource allocation and energy conservation but cannot meet real-time and accuracy requirements.
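The resource-block trade-off can be sketched as a toy cost model (the cost function, all parameter values, and the search are illustrative assumptions, not the paper's formulation): allocating more blocks shortens transmission time and thus energy, but each block has a price, so a minimum-cost allocation exists in between.

```python
def offload_cost(task_bits, blocks, rate_per_block=1e5, tx_power=0.5, block_cost=1.0):
    """Illustrative offloading cost: transmission energy plus a price per
    allocated wireless resource block. All parameters are assumed values."""
    tx_time = task_bits / (blocks * rate_per_block)  # seconds to upload the task
    energy = tx_power * tx_time                      # joules spent transmitting
    return energy + block_cost * blocks              # energy/allocation trade-off

def best_allocation(task_bits, max_blocks=10):
    """Exhaustively pick the block count that minimizes total cost."""
    return min(range(1, max_blocks + 1),
               key=lambda b: offload_cost(task_bits, b))

n = best_allocation(1e6)   # -> 2 under these toy parameters
```

With these numbers the cost is 5/b + b, minimized at two blocks: fewer blocks waste energy on slow transmission, more blocks overpay for spectrum. A real formulation would add the latency and accuracy constraints the paragraph says this method fails to meet.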