Real Time License Plate Recognition from Video Streams using Deep Learning

Saquib Nadeem Hashmi, Kaushtubh Kumar, Siddhant Khandelwal, Dravit Lochan, Sangeeta Mittal
Copyright: © 2019 | Pages: 23
DOI: 10.4018/IJIRR.2019010105

Abstract

With the ever-increasing number of vehicles, vehicular management is one of the major challenges faced by urban areas. Automated detection of vehicle license plates using a real-time automatic license plate recognition (RT-ALPR) approach has many use cases in automated defaulter detection, car parking and toll management. It is a computationally complex task that has been addressed in this work using a deep learning approach. In contrast to previous approaches, license plates have been recognized from full camera stills as well as noisy parking videos. On a dataset of 4,800 car images, the accuracy obtained is 91% for number plate extraction from images and 93% for character recognition. The proposed ALPR system has also been applied to vehicle videos shot at parking exits, where an overall accuracy of 85% was obtained in real-time license number recognition.

1. Introduction

Automatic License Plate Recognition (ALPR) is a technique to extract the license plate number from a still image or a video of a moving or stationary vehicle. It is a useful approach for vehicle surveillance. Robust ALPR has many use cases, including combating theft, classifying illegal vehicles, customized electronic toll collection, cataloguing the movement of traffic on a premises, catching speed limit violators, determining which cars belong in a parking garage, and expediting parking by eliminating the need for human confirmation of parking passes (Chang, Chen, Chung & Chen, 2004; Du, Ibrahim, Shehata, & Badawy, 2013).

License plate detection and recognition is a challenging problem due to issues such as noisy image inputs, occlusion, varying vehicle orientations, different license plate types, extra graphics on number plates, non-standard sizes, and poor camera quality, among others (Du et al., 2013). Most existing solutions make simplistic assumptions compared to real scenarios, i.e. they work only for stationary cameras, at a specific viewing angle, at a specific resolution, or for a specific license plate template (Chang et al., 2004; Du et al., 2013; Łubkowski & Laskowski, 2017; Björklund, Fiandrotti, Annarumma, Francini, & Magli, 2017; Khan, Shah, Wahid, Khan & Shahid, 2017). The proposed ALPR model addresses these challenges to a large extent. It works for slightly blurred images, images taken at a distance, license plates oriented at angles of up to approximately 15 degrees, multi-coloured license plates, and plates carrying text other than the registration number.

A typical ALPR system consists of four steps, namely vehicle image capture, number plate extraction, character segmentation and character recognition (Chang et al., 2004; Du et al., 2013; Björklund et al., 2017; Panchal, Hetal, & Panchal, 2016; Azam & Islam, 2016; Yuan et al., 2017). Since our ALPR model is designed to work on videos, one more step is added at the beginning: splitting the video into frames. In general, the first step of capturing an image of the vehicle is quite an exigent task, as it is very difficult to capture a moving vehicle in real time in such a manner that no component of the vehicle, especially the number plate, is missed. Our implementation therefore does not rely on a single image; instead, the authors generated multiple frames (the still images of which the video is composed) from the video and found a possible license plate number for each frame. The registration number with the maximum number of occurrences is then chosen as the final output, as sketched below. The success of the fourth step, character recognition, depends on how well the second and third steps locate the vehicle number plate and separate each character.
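
The following is a minimal sketch of this frame-splitting and majority-voting step using OpenCV. The per-frame recognise_plate function is a hypothetical placeholder for the plate detection and OCR pipeline described in the next paragraph, and sampling every fifth frame is an illustrative assumption rather than the paper's setting.

```python
from collections import Counter

import cv2


def recognise_from_video(video_path, frame_step=5):
    """Run plate recognition on sampled frames of a parking video and
    return the registration number that occurs most often."""
    capture = cv2.VideoCapture(video_path)
    readings = []
    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        # Sample every `frame_step`-th frame to keep processing real time.
        if frame_index % frame_step == 0:
            plate = recognise_plate(frame)  # hypothetical per-frame recogniser
            if plate:
                readings.append(plate)
        frame_index += 1
    capture.release()
    if not readings:
        return None
    # The registration number with the maximum number of occurrences wins.
    return Counter(readings).most_common(1)[0][0]
```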

Template matching has been used here to locate the license plate in the vehicle's image. The values of parameters such as minimum and maximum pixel width, aspect ratio, change in width and height, and minimum and maximum contour area have been hardcoded, and a list of possible license plates is formed according to these parameters (Chang et al., 2004; Du et al., 2013; Łubkowski & Laskowski, 2017). Finally, a CNN classifier is applied to every candidate plate for Optical Character Recognition (OCR) (Chang et al., 2004; Björklund et al., 2017; Azam & Islam, 2016; Yuan et al., 2017; Khan et al., 2017) to obtain a reading for each candidate. The plate with the maximum number of characters recognized is then selected as the final license plate. The two steps of license plate detection and OCR individually gave good accuracy (93% for the CNN classifier and 91% for license plate detection), while the final combined model achieved 85% accuracy.
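
The candidate-filtering and selection logic can be sketched as follows. The geometry thresholds and the ocr_characters helper are illustrative assumptions, not the paper's actual hardcoded values or CNN classifier.

```python
import cv2

# Illustrative geometry thresholds for plate-like regions (assumed values).
MIN_WIDTH, MAX_WIDTH = 60, 400        # candidate width in pixels
MIN_ASPECT, MAX_ASPECT = 2.0, 6.0     # width / height of a typical plate
MIN_AREA, MAX_AREA = 1500, 30000      # contour area in pixels


def candidate_plates(image):
    """Return cropped regions whose contours match plate-like geometry."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    crops = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        area = cv2.contourArea(contour)
        aspect = w / float(h) if h else 0
        if (MIN_WIDTH <= w <= MAX_WIDTH
                and MIN_ASPECT <= aspect <= MAX_ASPECT
                and MIN_AREA <= area <= MAX_AREA):
            crops.append(image[y:y + h, x:x + w])
    return crops


def recognise_plate(image):
    """Run OCR on every candidate and keep the reading with most characters."""
    best = ""
    for crop in candidate_plates(image):
        text = ocr_characters(crop)  # hypothetical CNN-based OCR step
        if len(text) > len(best):
            best = text
    return best or None
```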
