1. Introduction
Automatic License Plate Recognition (ALPR) is a technique for extracting the license plate number from a still image or a video of a moving or stationary vehicle, and is a useful approach for vehicle surveillance. Robust ALPR has many use cases: combating theft, classifying illegal vehicles, customized electronic toll collection, cataloguing traffic movements on a premises, catching speed-limit violators, determining which cars belong in a parking garage, expediting parking by eliminating the need for human confirmation of parking passes, and many more (Chang, Chen, Chung, & Chen, 2004; Du, Ibrahim, Shehata, & Badawy, 2013).
License plate detection and recognition is a challenging problem owing to issues such as noisy image inputs, occlusion, varying vehicle orientations, different license plate types, extra markings on number plates, non-standard sizes, and poor camera quality (Du et al., 2013). Most existing solutions make simplistic assumptions compared to real scenarios, i.e. they work only for stationary cameras, a specific viewing angle, a specific resolution, or a specific license plate template (Chang et al., 2004; Du et al., 2013; Łubkowski & Laskowski, 2017; Björklund, Fiandrotti, Annarumma, Francini, & Magli, 2017; Khan, Shah, Wahid, Khan, & Shahid, 2017). The proposed ALPR model addresses these challenges to a large extent: it works for slightly blurred images, images taken at a distance, license plates oriented at angles up to approximately 15 degrees, multi-coloured license plates, and plates bearing text other than the registration number.
A typical ALPR system consists of four steps: vehicle image capture, number plate extraction, character segmentation, and character recognition (Chang et al., 2004; Du et al., 2013; Björklund et al., 2017; Panchal, Hetal, & Panchal, 2016; Azam & Islam, 2016; Yuan et al., 2017). Since our ALPR model is designed to work on videos, one more step is added at the beginning: splitting the video into frames. In general, the first step, capturing an image of the vehicle, is quite exigent, as it is very difficult to capture an image of a moving vehicle in real time such that no component of the vehicle, especially the number plate, is missed. Our implementation does not rely on a single image; rather, the authors generate multiple frames (the still images of which the video is composed) from the video and find a possible license plate number for each frame. The registration number with the maximum number of occurrences is then chosen as the final output. The success of the fourth step, character recognition, depends on how well the second and third steps locate the vehicle number plate and separate each character.
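The per-frame voting step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each frame's recognition result arrives as a string (or `None` when no plate was read) and simply picks the most frequent reading.

```python
from collections import Counter

def vote_on_plates(frame_readings):
    """Choose the registration number that occurs most often
    across the per-frame recognition results."""
    # Discard frames where no plate could be read (None or empty string).
    readings = [r for r in frame_readings if r]
    if not readings:
        return None
    # Counter.most_common(1) returns [(value, count)] for the top entry.
    plate, _count = Counter(readings).most_common(1)[0]
    return plate
```

For example, given readings from four frames where one frame misread a character, `vote_on_plates(["KA01AB1234", None, "KA01AB1234", "KA01AB1Z34"])` returns `"KA01AB1234"` (the registration number shown is hypothetical).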
Template matching is used here to find the license plate in a vehicle's image. The values of parameters such as minimum and maximum pixel width, aspect ratio, change in width and height, and minimum and maximum contour area are hard-coded, and a list of possible license plates is formed according to these parameters (Chang et al., 2004; Du et al., 2013; Łubkowski & Laskowski, 2017). Finally, a CNN classifier is applied to every candidate plate for Optical Character Recognition (OCR) (Chang et al., 2004; Björklund et al., 2017; Azam & Islam, 2016; Yuan et al., 2017; Khan et al., 2017), producing a result for each candidate. The plate with the maximum number of recognized characters is then selected as the final license plate. The two steps of license plate detection and OCR individually gave good accuracy (93% for the CNN classifier and 91% for license plate detection); the accuracy of the final combined model is 85%.