Introduction
Cervical cancer is the second leading cause of cancer death in women aged 20 to 39 years, and cancer overall was responsible for an estimated 8.8 million deaths worldwide in 2015 (Siegel, Miller, & Jemal, 2017). Screening and diagnosis of cervical cancer and its precursor lesions are carried out using a Papanicolaou (Pap) test, and biopsied cervical tissue histology slides are interpreted visually by an expert pathologist to provide a definitive evaluation (Jeronimo, Schiffman, Long, Neve, & Antani, 2004). In slide analysis, pathologists visually assess cervical intraepithelial neoplasia (CIN), a pre-malignant condition for cervical cancer, by identifying atypical cells in the epithelium. There are conventionally four CIN grades: normal, CIN1 (mild dysplasia), CIN2 (moderate dysplasia), and CIN3 (severe dysplasia). Figure 1 shows an example of each CIN grade. As CIN increases in severity, delayed maturation is observed, with an increase in immature atypical cells from the bottom to the top of the epithelium (He, Long, Antani, & Thoma, 2010; Egner, 2010). Computer-assisted CIN diagnosis has been studied and developed previously (Guillaud et al., 2005; Guo et al., 2015, 2016; Keenan et al., 2000; Van Der Marel et al., 2012; Wang et al., 2009). In earlier work, our research group used a localized fusion-based approach for CIN grade classification (Guo et al., 2015). This localized approach partitioned an epithelium image into ten vertical segments (partitions), extracted 27 handcrafted features from each vertical segment (Guo et al., 2015, 2016), and used those features to classify each segment into one of the CIN grades.
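The vertical partitioning step can be illustrated with a minimal sketch. Note this is not the published implementation: the cited work orients segments relative to the epithelium, whereas this sketch assumes a simple equal-width split along the image width, and the function name `vertical_segments` is hypothetical.

```python
import numpy as np

def vertical_segments(image, n_segments=10):
    """Split an image into n_segments equal-width vertical partitions.

    Illustrative only; the published method partitions along the
    epithelium's orientation rather than the raw image axis.
    """
    height, width = image.shape[:2]
    # Segment boundaries spaced evenly across the image width
    bounds = np.linspace(0, width, n_segments + 1).astype(int)
    return [image[:, bounds[i]:bounds[i + 1]] for i in range(n_segments)]

# A dummy 64x100 grayscale image yields 10 segments, each 10 pixels wide
segments = vertical_segments(np.zeros((64, 100)), n_segments=10)
```

Each segment would then be passed to the feature-extraction and classification stages, and the per-segment labels fused into a whole-image CIN grade.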
The handcrafted feature extraction utilizes image analysis techniques such as texture feature extraction (Guillaud et al., 2005), nuclei analysis (Keenan et al., 2000), and other techniques; detailed literature reviews of these techniques can be found in Guo et al. (2015) and De et al. (2013). In these studies, handcrafted features are extracted using time-consuming image processing and machine learning algorithms.
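As a rough illustration of what a handcrafted descriptor looks like, the sketch below computes a few simple intensity and gradient statistics for one segment. These are stand-in features chosen for brevity; the cited studies use a much richer 27-feature set, and the function name `simple_texture_features` is hypothetical.

```python
import numpy as np

def simple_texture_features(segment):
    """Compute basic intensity/texture statistics for one image segment.

    Illustrative placeholders only, not the 27 features from the
    cited studies.
    """
    seg = segment.astype(float)
    grad_y, grad_x = np.gradient(seg)  # per-pixel intensity gradients
    return {
        "mean_intensity": float(seg.mean()),
        "intensity_std": float(seg.std()),
        "gradient_energy": float((grad_x**2 + grad_y**2).mean()),
    }

# A constant segment has zero spread and zero gradient energy
feats = simple_texture_features(np.full((64, 10), 0.5))
```

In the localized approach, a feature vector like this is computed per segment and fed to a classifier, and the segment-level predictions are fused into an image-level grade.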
Figure 1. CIN grade examples (a) Normal, (b) CIN1, (c) CIN2, (d) CIN3
Convolutional neural networks (ConvNets) have rapidly advanced to become a powerful technique in many image analysis and classification problem domains, including very large-scale datasets such as ImageNet (Krizhevsky, Sutskever, & Hinton, 2012), face recognition (Parkhi, Vedaldi, & Zisserman, 2015), and breast cancer mitosis detection (Ciresan, Giusti, Gambardella, & Schmidhuber, 2013). In contrast to handcrafted approaches, ConvNets learn features from the data itself: filters are convolved with the input images to extract features, and the filter weights are updated and tuned during the training process. Deep learning techniques have found their way into many medical image applications, such as melanoma recognition in dermoscopy images (Codella et al., 2016), breast mitosis detection (Ciresan et al., 2013), and nuclei detection in digitized histology images (Sornapudi et al., 2018).
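The core convolution operation underlying a ConvNet layer can be sketched as follows. This is a minimal, loop-based illustration (real frameworks use highly optimized, batched implementations with many learned filters); the filter here is fixed by hand, whereas in a ConvNet its weights would be learned during training.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution of a grayscale image with one filter."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output value is the sum of the elementwise product
            # of the flipped filter with the image patch under it
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return out

# A 3x3 vertical-edge filter responds at the intensity boundary of a
# half-dark, half-bright image
edge_filter = np.array([[1., 0., -1.]] * 3)
image = np.zeros((5, 6))
image[:, 3:] = 1.0
response = conv2d(image, edge_filter)
```

During training, backpropagation adjusts the filter weights so that the learned filters respond to whatever patterns are most useful for the classification task, rather than being designed by hand as above.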