Eye Gaze Capture for Preference Tracking in a Scalable Environment

DOI: 10.4018/978-1-6684-9804-0.ch013

Abstract

Eye gaze tracking of real-world users, to determine what they are looking at on a particular page, is a growing trend; however, its complexity is also growing fast, and the accuracy of existing methods remains insufficient. In the proposed system, an image patch of the eye region is extracted from the input image using the Viola-Jones algorithm for facial feature detection. SqueezeNet and U-Net are then combined to train a model for pixel-wise classification of the iris and pupil within the eye patch, using a training dataset of manually labelled iris and pupil regions. After the iris and pupil features are extracted, gaze is estimated from the 2D pupil center, located by applying the Mean-Shift algorithm, together with the 3D eyeball center. The system achieved an accuracy of 99.93%, which compares favourably with state-of-the-art methods.

Introduction

Eye gaze tracking of real-world users, to determine what they are looking at on a particular page, is a rapidly emerging trend; however, its complexity is also growing fast, and the accuracy of existing methods remains insufficient. Earlier, researchers used IR-illumination glasses to track which content on a page a user was viewing, but such glasses were expensive. More recently, Wang, Shi, Xia, and Chai (2016) introduced a method for tracking eye movement with an ordinary monocular RGB camera (webcams, mobile phone cameras, etc.) by extracting iris and pupil features through random-forest classification. Because random forests consume a great deal of memory, Wang later proposed extracting iris and pupil features with a deep convolutional neural network (DCNN) instead. The DCNN reduced the memory footprint and achieved semantic segmentation of the iris and pupil with fewer parameters, making eye gaze tracking feasible on memory-constrained devices such as mobile phones.

Our aim is to identify which products attract a user's attention the most. This paper introduces eye gaze tracking for preference tracking: we extract iris and pupil features from the input image using a deep convolutional neural network, produce a heatmap over the page, and calculate which product the user has viewed the most.

To achieve this goal, an image patch of the eye region is extracted from the input image using the Viola-Jones algorithm for facial feature detection. SqueezeNet and U-Net are then combined to train a model for pixel-wise classification of the iris and pupil within the eye patch, using a training dataset of manually labelled iris and pupil regions.
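As a minimal sketch of how the manually labelled training data described above might be prepared for a per-pixel classifier, the following converts an integer label mask into a one-hot target. The class encoding (0 = background, 1 = iris, 2 = pupil) and the function name are assumptions for illustration, not the chapter's actual data format.

```python
import numpy as np

# Hypothetical label encoding for the manually annotated masks:
# 0 = background/sclera, 1 = iris, 2 = pupil.
BACKGROUND, IRIS, PUPIL = 0, 1, 2

def mask_to_onehot(label_mask: np.ndarray, num_classes: int = 3) -> np.ndarray:
    """Convert an (H, W) integer label mask into an (H, W, C) one-hot
    target suitable for training a per-pixel segmentation network such
    as the SqueezeNet/U-Net hybrid described in the text."""
    h, w = label_mask.shape
    onehot = np.zeros((h, w, num_classes), dtype=np.float32)
    for c in range(num_classes):
        onehot[..., c] = (label_mask == c)
    return onehot
```

Each pixel then contributes one cross-entropy term against its one-hot row during training.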

After the iris and pupil features are extracted, gaze tracking is formulated using the 2D pupil center, located by applying the Mean-Shift algorithm, together with the 3D eyeball center.
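The Mean-Shift step above can be sketched as follows: starting from an initial guess, the window repeatedly moves to the probability-weighted centroid of its neighbourhood until it converges on the pupil center. The per-pixel probability map is assumed to come from the segmentation network; the window radius and tolerance are illustrative choices, not values from the chapter.

```python
import numpy as np

def mean_shift_center(prob: np.ndarray, start, radius: int = 6,
                      iters: int = 20, tol: float = 0.5):
    """Locate the 2D pupil center on a per-pixel pupil probability map
    `prob` (H, W) via Mean-Shift: iteratively move a square window to
    the probability-weighted centroid of its contents."""
    y, x = float(start[0]), float(start[1])
    h, w = prob.shape
    for _ in range(iters):
        y0, y1 = max(0, int(y) - radius), min(h, int(y) + radius + 1)
        x0, x1 = max(0, int(x) - radius), min(w, int(x) + radius + 1)
        win = prob[y0:y1, x0:x1]
        total = win.sum()
        if total == 0:            # no pupil mass in the window
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]
        ny = (ys * win).sum() / total   # weighted centroid (row)
        nx = (xs * win).sum() / total   # weighted centroid (col)
        converged = abs(ny - y) < tol and abs(nx - x) < tol
        y, x = ny, nx
        if converged:
            break
    return y, x
```

In practice the previous frame's pupil center is a natural starting point, so the window usually converges in a few iterations.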

Lastly, after the gaze is captured, the camera image plane is mapped to the screen image plane through a linear transformation, calibrated by instructing the user to focus on pre-defined points on the screen. The gaze points are then visualized on the screen as heatmaps produced with a Gaussian kernel, and a timer gradually cools down earlier focus points. The viewing duration of each product is computed, and the product viewed for the longest time is declared the most viewed.
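The calibration and heatmap steps above can be sketched as follows: the linear camera-to-screen transformation is fitted by least squares from the calibration fixations, and each new gaze point splats a Gaussian kernel onto a heatmap whose earlier values are cooled down. Modelling the timer-based cooldown as a per-frame multiplicative decay, and the specific function names, are assumptions for illustration.

```python
import numpy as np

def fit_affine(cam_pts: np.ndarray, scr_pts: np.ndarray) -> np.ndarray:
    """Least-squares affine map from camera-plane gaze points to screen
    coordinates, estimated from the pre-defined calibration points the
    user fixates. cam_pts, scr_pts: (N, 2). Returns a (3, 2) matrix A
    such that [x, y, 1] @ A ~= [sx, sy]."""
    n = cam_pts.shape[0]
    X = np.hstack([cam_pts, np.ones((n, 1))])
    A, *_ = np.linalg.lstsq(X, scr_pts, rcond=None)
    return A

def splat_gaze(heat: np.ndarray, pt, sigma: float = 10.0,
               decay: float = 0.95) -> np.ndarray:
    """Cool down the existing heatmap (the chapter's timer is modelled
    here as multiplicative decay per frame), then add a Gaussian kernel
    centred on the new gaze point `pt` = (row, col)."""
    heat *= decay
    ys, xs = np.mgrid[0:heat.shape[0], 0:heat.shape[1]]
    heat += np.exp(-((ys - pt[0]) ** 2 + (xs - pt[1]) ** 2)
                   / (2 * sigma ** 2))
    return heat
```

Summing the decayed heatmap over each product's screen region per frame then yields a per-product viewing score, from which the most-viewed product is the one with the largest accumulated total.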
