Deep Reinforcement Learning and In-Network Caching-Based Martial Arts Physical Training

Qi Zhang
Copyright: © 2022 | Pages: 8
DOI: 10.4018/IJDST.291079

Abstract

Martial arts has been adopted as a competitive event at international competitions, and the corresponding physical training has attracted widespread attention. However, traditional physical training evaluation methods are usually performed offline, and they can hardly achieve large-scale data evaluation with high efficiency. Therefore, this paper leverages Deep Reinforcement Learning (DRL) and in-network caching to realize high-precision and high-efficiency data evaluation in a large-scale martial arts physical training environment while guaranteeing online performance evaluation. Specifically, Q-learning based DRL is used to perform the large-scale data evaluation, and a communication protocol based on in-network caching is proposed to support the online function. Comparison experiments demonstrate that the proposed method for martial arts physical training is more efficient than the benchmark.

1. Introduction

In recent years, more and more people, not only in China, have paid attention to martial arts (also called Kung Fu or Wushu). Similar to boxing, taekwondo, and kick-boxing, martial arts consumes a huge amount of human energy; in other words, it also carries a certain degree of risk. Therefore, the physical training of martial arts is very important and should be conducted regularly, and a high standard is needed to evaluate the physical training condition. However, traditional physical training evaluation methods are usually performed offline, i.e., the coaches arrange the corresponding training courses according to their experience-based judgments. Furthermore, a coach cannot supervise multiple training sessions at once due to limited personal capacity; that is, the previous methods can hardly achieve large-scale data evaluation. These two limitations prevent high-precision and high-efficiency data evaluation in a large-scale martial arts physical training environment. In conclusion, a modern martial arts physical training evaluation method should satisfy the following requirements: large-scale, high-precision, high-efficiency, and online operation.

In terms of machine learning, Reinforcement Learning (RL) (Alcaraz et al., 2020) and Deep Learning (DL) (Amanullah et al., 2020) are two independent learning paradigms. DL has a strong perception ability but lacks an appropriate decision-making ability; conversely, RL has a strong decision-making ability but cannot address the perception issue well. Mapping DL and RL onto the evaluation of martial arts physical training, DL can satisfy the large-scale and high-efficiency standards with its strong perception ability, while RL can satisfy the high-precision standard with its strong decision-making ability. Covering all three standards therefore requires a combination of DL and RL. Deep RL (DRL) (Luong et al., 2019) is just such an integration of DL and RL, in which their respective advantages complement each other. DRL can learn control decisions directly from raw high-dimensional data, which naturally satisfies the large-scale, high-precision, and high-efficiency standards.
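To make the Q-learning component concrete, the following is a minimal tabular sketch of the standard Q-learning update loop. The state and action encodings, the reward signal, and the hyper-parameters are illustrative assumptions for a discretized evaluation setting, not the concrete design used in the paper.

```python
import numpy as np

# Minimal tabular Q-learning sketch for a data-evaluation loop.
# State/action spaces, reward, and hyper-parameters are assumed, not taken
# from the paper's actual design.

N_STATES = 50      # e.g. discretized physical-training performance levels (assumed)
N_ACTIONS = 5      # e.g. candidate evaluation/adjustment decisions (assumed)
ALPHA = 0.1        # learning rate
GAMMA = 0.9        # discount factor
EPSILON = 0.1      # exploration probability

Q = np.zeros((N_STATES, N_ACTIONS))

def select_action(state: int) -> int:
    """Epsilon-greedy action selection over the current Q estimates."""
    if np.random.rand() < EPSILON:
        return np.random.randint(N_ACTIONS)
    return int(np.argmax(Q[state]))

def q_update(state: int, action: int, reward: float, next_state: int) -> None:
    """Standard Q-learning temporal-difference update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = reward + GAMMA * np.max(Q[next_state])
    Q[state, action] += ALPHA * (td_target - Q[state, action])

def env_step(state: int, action: int) -> tuple:
    """Hypothetical placeholder environment: returns (next_state, reward)."""
    return np.random.randint(N_STATES), float(np.random.rand())

state = 0
for _ in range(1000):
    action = select_action(state)
    next_state, reward = env_step(state, action)
    q_update(state, action, reward, next_state)
    state = next_state
```

In a DRL variant, the Q table would be replaced by a neural network that maps high-dimensional training data to action values, but the temporal-difference target shown above stays the same.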

Furthermore, the evaluation of martial arts physical training relies on data collection and communication. For data collection, a container is needed to store the physical training data; mobile devices enabled with in-network caching are proper candidates. For data communication, as mentioned above, online communication must be guaranteed. The server that performs the data analysis of physical training usually relies on TCP/IP connections. However, in a large-scale physical training environment, it is impossible to maintain connections between the server and all mobile devices. Therefore, to guarantee the online standard, a dedicated communication protocol between the mobile devices and the server is necessary. For the in-network caching ability mentioned above, there are a few candidates, for example, Information-Centric Networking (ICN) (Djamaa & Senouci, 2020) and IPv6-Content Networking (6CN) (6CN Technique Report, n.d.). Although ICN has been widely accepted due to its innovative networking paradigm with inherent in-network caching, it is difficult to deploy on mobile devices for two reasons. On the one hand, ICN caches are very expensive; on the other hand, ICN insists on abandoning IP addresses, which is unrealistic in the current Internet era. Different from ICN, 6CN is an updated version of the Content Distribution Network (CDN) (Jia et al., 2017), which can push content to mobile devices. In other words, mobile devices can adopt the 6CN paradigm and be equipped with inherent cache space to store physical training data, as sketched below.
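The following is a minimal sketch of how such a name-based pull exchange over a device-side cache might look. The content-naming scheme, the cache API, and the handler names are illustrative assumptions; the paper's concrete protocol is not reproduced here.

```python
# Minimal sketch of a name-based pull exchange over a 6CN-style device cache.
# The naming scheme and API below are illustrative assumptions, not the
# paper's actual protocol.

from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class DeviceCache:
    """In-network cache on a mobile device, keyed by content name."""
    store: Dict[str, bytes] = field(default_factory=dict)

    def put(self, name: str, data: bytes) -> None:
        self.store[name] = data          # cache freshly collected training data

    def get(self, name: str) -> Optional[bytes]:
        return self.store.get(name)      # serve a named request if cached

def handle_interest(cache: DeviceCache, content_name: str) -> Optional[bytes]:
    """Device-side handler: answer a server request for a named data item."""
    return cache.get(content_name)

# --- usage sketch ---
cache = DeviceCache()
# Device stores a sample keyed by (athlete, session, metric) -- assumed naming.
cache.put("/training/athlete-07/session-03/heart-rate", b"142,145,150,148")

# Server pulls the sample by content name; no per-device persistent TCP
# connection has to be maintained, only one request/response per name.
sample = handle_interest(cache, "/training/athlete-07/session-03/heart-rate")
print(sample)
```

The point of the design is that the server addresses named content rather than individual device connections, so the number of simultaneously maintained TCP sessions no longer limits how many devices can report training data online.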

Summarizing the above points, the contributions of this paper are twofold: (1) Q-learning based DRL is used to perform large-scale data evaluation with high precision and high efficiency; (2) the 6CN networking paradigm is deployed and, based on its in-network caching, a communication protocol is proposed to realize the online performance.
