A Kinect-less Augmented Reality Approach to Real-time Tag-less Virtual Trial Room Simulation

Abhinav Biswas (IT Services Division, ECIL, Hyderabad, India), Soumalya Dutta (JIS College of Engineering, Kalyani, India), Nilanjan Dey (Jadavpur University, Kolkata, India) and Ahmad Taher Azar (Benha University, Qalyubiyah, Egypt)
DOI: 10.4018/ijssmet.2014100102

Abstract

The Virtual Trial Room (VTR) application simulates an apparel dressing room through a virtual mirror, portraying an augmented view of the user with virtual clothes superimposed. Traditional approaches to the design and implementation of virtual dressing rooms have widely used either ordinary webcams with tag/marker-based tracking or expensive 3D depth- and motion-sensing cameras such as Microsoft Kinect. The main idea of this paper is to methodically devise a novel VTR solution that deploys ubiquitous 2D webcams with tag-less tracking in a real-time live video mode, using open source tools and technologies. The solution model implements a tag-less (marker-less) Augmented Reality (AR) technique based on face detection and provides an intuitive motion-augmented User Interface (UI) to the VTR application, in the form of an interactive, human-friendly Virtual Mirror driven by simple hand gestures. A qualitative performance analysis concludes the paper, evaluating the susceptibility of the VTR system to varied illumination conditions.

1. Introduction

The concept of the Virtual Trial Room has created a revolutionary paradigm shift in apparel shopping, toward an intuitive motion-augmented interface in which people interact with the system through seamless hand gestures. A typical Virtual Trial Room (VTR) setup consists of one (monocular approach) or more 2D/3D cameras and a large screen or projection surface, which acts as the Virtual Mirror displaying the output of the camera(s). The VTR screen is overlaid with interactive digital options or buttons that let the user select the desired garment from a browsable gallery using simple hand motion. After garment selection, the VTR system portrays a real-time augmented view of the user on the screen with the virtual cloth superimposed, as shown in Figure 1b. This real-time rendering lets the user immediately visualize how the clothing's size and color suit his or her body. In general, VTR systems combine Augmented Reality (AR) techniques with participatory design practices for realistic simulation of the fit and physics of the virtual apparel. Instead of physically trying on various garments one after another to find the best fit and style, consumers can use a VTR system to quickly visualize everything in the Virtual Mirror, saving significant time. The virtual apparel shopping platform thus provides a powerful, high-fidelity decision tool for the end-user, in addition to the fun factor of gesture-based interaction. This innovative concept is gaining ground in medium and large fashion stores by transforming consumers' basic apprehensions about fit and look into enhanced customer satisfaction and confidence before the actual physical trial of the apparel.

Figure 1.

Illustration of VTR System (a) Normal camera view; (b) Augmented view with superimposed items
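The superimposition step behind Figure 1b can be sketched as a simple alpha composite: given a face bounding box from any detector (for instance OpenCV's Haar cascades), the garment image is scaled and blended over the camera frame. The `overlay_garment` helper, the three-face-widths torso heuristic, and the assumption that `face_box` comes from an external detector are all illustrative, not the authors' implementation:

```python
import numpy as np

def overlay_garment(frame, garment_rgba, face_box):
    """Alpha-blend a garment image onto a video frame.

    frame        -- HxWx3 uint8 camera frame (modified in place)
    garment_rgba -- hxwx4 uint8 garment image with alpha channel
    face_box     -- (x, y, w, h) from any face detector
    The garment is anchored just below the face and scaled so its
    width is roughly three face-widths (a crude torso heuristic).
    """
    x, y, w, h = face_box
    scale = (3 * w) / garment_rgba.shape[1]
    gh = int(garment_rgba.shape[0] * scale)
    gw = int(garment_rgba.shape[1] * scale)
    # Nearest-neighbour resize (keeps this sketch NumPy-only).
    rows = (np.arange(gh) / scale).astype(int)
    cols = (np.arange(gw) / scale).astype(int)
    garment = garment_rgba[rows][:, cols]

    top, left = y + h, x + w // 2 - gw // 2
    # Clip the overlay region to the frame boundaries.
    t0, l0 = max(top, 0), max(left, 0)
    t1 = min(top + gh, frame.shape[0])
    l1 = min(left + gw, frame.shape[1])
    if t1 <= t0 or l1 <= l0:
        return frame
    g = garment[t0 - top:t1 - top, l0 - left:l1 - left]
    alpha = g[..., 3:4].astype(float) / 255.0
    region = frame[t0:t1, l0:l1].astype(float)
    frame[t0:t1, l0:l1] = (alpha * g[..., :3]
                           + (1 - alpha) * region).astype(np.uint8)
    return frame
```

In a live pipeline, each webcam frame would pass through the face detector and this blend before being displayed on the Virtual Mirror screen.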

2. Background Analysis

The key challenge in designing a VTR system is to accurately track the behavior and motion of the end-user in real time, in order to determine where to place the virtual overlay cloth during the augmentation process. For acquiring the user's body reference points, multiple methods have been attempted for body-shape detection and subject tracking, described as follows:

2.1. Fiducial Marker-based Tracking

Fiala (2004) proposed a fiducial marker system called ARTag for automatic detection of patterns in digital images. Since then, several other robust tracking libraries have emerged, such as ARToolKitPlus (Wagner & Schmalstieg, 2007), for accurate and computationally efficient marker tracking. Krista and Karen (2005) demonstrated the use of marker-based tracking for implementing the virtual mirror in VTR systems. In this approach, one or more markers or tags with specific printed patterns are placed on the body so that each marker can be recognized and localized during video processing, before the augmentation step. The video frames received from the calibrated camera are analyzed in real time using image processing techniques to determine the 3D position and orientation of the marker within permissible error limits (Freeman et al., 2007). Based on the acquired reference point of the fiducial marker, the VTR system can then precisely position and augment the virtual cloth on the user's body. Martin and Erdal (2012) exploited this technique to develop a virtual fitting room application for Android-based systems.
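To illustrate how a detected marker drives garment placement (a sketch of the general technique, not the ARTag pipeline itself): once a tracking library returns the four corner points of the printed marker, the overlay anchor, scale, and in-plane rotation follow from simple geometry. The `marker_anchor` helper and its assumed corner ordering are hypothetical conveniences for this example:

```python
import numpy as np

def marker_anchor(corners, marker_size_cm=10.0):
    """Derive an overlay anchor from a detected fiducial marker.

    corners        -- 4x2 array of marker corner pixels, assumed in
                      order top-left, top-right, bottom-right,
                      bottom-left (as a marker tracker such as ARTag
                      or OpenCV's ArUco module would report them)
    marker_size_cm -- printed side length, giving a pixels-per-cm scale
    Returns (center_xy, px_per_cm, angle_deg): the quantities the
    augmentation step needs to position, scale, and rotate the cloth.
    """
    c = np.asarray(corners, dtype=float)
    center = c.mean(axis=0)
    # Average the four side lengths for a robust pixel-size estimate.
    sides = np.linalg.norm(c - np.roll(c, -1, axis=0), axis=1)
    px_per_cm = sides.mean() / marker_size_cm
    # In-plane rotation from the top edge (top-left -> top-right).
    dx, dy = c[1] - c[0]
    angle = np.degrees(np.arctan2(dy, dx))
    return center, px_per_cm, angle
```

The pixels-per-cm ratio lets the system scale a garment of known physical dimensions to the user's apparent size, while the angle compensates for body lean or camera roll.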

One drawback of this tag-based or marker-based tracking is that the printed marker pattern has to be placed on the user's body, which can be time-consuming and cumbersome from a consumer's point of view. Empirical observations show that manually labeling body parts with tags can also introduce an additional source of error.
