Kinect-Based Limb Rehabilitation Methods

Yongji Yang (Changchun University, Changchun City, China), Zhiguo Xiao (Changchun University, Changchun City, China) and Furen Jiang (Changchun University, Changchun City, China)
DOI: 10.4018/IJHISI.2018070104


Within the context of health informatics, this article discusses how real-time information on human skeleton movements can be conveniently captured with a Kinect depth sensor. It highlights an effective action recognition method that uses Unity3D to build virtual models of characters and scenes. The Kinect somatosensory camera captures the user's motion data and feeds it back to the virtual character model to drive the avatar, achieving real-time simulation as well as the verification of recognized operation instructions in support of limb rehabilitation. The hardware of such a health informatics-related system is simple, inexpensive, and highly meaningful in terms of augmenting the user experience. In conclusion, the suggested approach can contribute significantly towards aiding physical rehabilitation therapy.
Article Preview

1. Introduction

Traditionally, rehabilitation medicine entails the deployment of therapist-assisted and/or robot-aided rehabilitation methods to enhance the physical functioning of individuals affected by illnesses or injuries, with the goal of minimizing any preventable activity-related impairment. Given the growing shortage of medical resources and the high purchase and maintenance costs of specialty medical equipment, new inexpensive means of delivering rehabilitation medicine are highly sought after. One such innovation is the design of intelligent human-computer interaction (HCI) within the emerging health informatics context.

HCI refers to the transmission of one's intentions to the computer through kinesics (gestures, postures, facial expressions, and so on), together with the computer's recognition of human actions and its behavioral feedback. Using HCI in place of traditional rehabilitation methods can effectively address the cost and medical resource shortage challenges, as it provides patients with a convenient, inexpensive, and effective rehabilitation system. HCI is therefore a growing trend in current rehabilitation research. Yet the temporal variability and complexity of human action remain key challenges for action recognition. At present, methods that recognize human body movement from motion parameters collected by wearable sensors, gyroscopes, and accelerometers are accurate and highly effective (Maekawa, Yanagisawa, & Kishino, 2010; Cao, Cai, & Cheng, 2010), but wearable sensors reduce the wearer's comfort. As for visual recognition methods, most current work relies on 2D vision, whose performance is often strongly affected by environmental background, illumination, and occlusion.

With the emergence of inexpensive Kinect somatosensory cameras, new opportunities have arisen for action recognition researchers. Aided by skeletal tracking technology that gathers data in real time, researchers can now collect non-contact 3D human skeleton information that is hardly influenced by background illumination. One line of work uses the Kinect key-point data directly: rules over joint positions, angles, heights, velocities, accelerations, and the like are defined with thresholds to identify each action. Xie Liang et al. (2013), for example, obtained good recognition results by determining specific postures from the Euclidean distances and angles between joints. Such methods are widely applied due to their ease of use, but they display low extensibility, since every newly added action requires its own manually defined rules. Another approach is to extract features and then select an appropriate classifier for recognition. Raptis et al. (2011), for example, used Kinect to obtain bone information and transformed it into angle features derived from the human skeletal structure, achieving dance gesture classification with accuracy upward of 91.9%. Lai et al. (2012) transformed the joint features into 2D features and used a nearest-neighbor classifier to recognize hand gestures, reaching a recognition rate of 92.25% under the condition of limited movement speed. In this work, we propose using Kinect to capture key skeleton information and, through feature selection, focus on the temporal and spatial problems of motion recognition to achieve flexibility, robustness, and high real-time performance.
Such an approach can then be used for limb rehabilitation action identification; moreover, this same approach, by binding human skeleton information identified by Unity to the 3D animated character, allows users to practice limb rehabilitation training in a virtual 3D scene as schematically depicted in Figure 1.
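To illustrate the rule-based style of recognition discussed above (determining a posture from Euclidean distances and angles between joints, as in Xie Liang et al., 2013), the following Python sketch computes the angle at a joint from three 3D key points and applies a simple threshold rule. This is not the authors' implementation: the joint names, coordinate convention (y axis pointing up, as in Kinect camera space), and the 150° threshold are illustrative assumptions only.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by 3D points a-b-c."""
    ab = [a[i] - b[i] for i in range(3)]
    cb = [c[i] - b[i] for i in range(3)]
    dot = sum(ab[i] * cb[i] for i in range(3))
    na = math.sqrt(sum(v * v for v in ab))
    nc = math.sqrt(sum(v * v for v in cb))
    # Clamp to [-1, 1] to guard against floating-point drift.
    cos_angle = max(-1.0, min(1.0, dot / (na * nc)))
    return math.degrees(math.acos(cos_angle))

def is_arm_raised(skeleton, angle_min=150.0):
    """Example rule: the arm counts as 'raised' when the elbow is
    nearly straight and the wrist is above the shoulder."""
    angle = joint_angle(skeleton["shoulder"], skeleton["elbow"], skeleton["wrist"])
    return angle >= angle_min and skeleton["wrist"][1] > skeleton["shoulder"][1]

# One hypothetical skeleton frame (metres, camera space):
frame = {
    "shoulder": (0.20, 0.50, 2.00),
    "elbow":    (0.25, 0.80, 2.00),
    "wrist":    (0.30, 1.10, 2.00),
}
print(is_arm_raised(frame))  # prints True: straight arm, wrist above shoulder
```

As the surrounding text notes, each new action in such a scheme requires its own hand-crafted rule and thresholds, which is precisely the extensibility limitation that motivates the feature-based classification approach proposed in this work.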

Figure 1.

Proposed Kinect-based Limb Rehabilitation Method

