This chapter introduces the basics of feature level fusion and presents two feature level fusion examples. To begin, Section 13.1 provides an introduction to feature level fusion. Section 13.2 describes two classes of feature level fusion schemes. Section 13.3 gives a feature level fusion example that fuses face and palm print. Section 13.4 presents a feature level fusion example that fuses multiple feature representations of a single palm print trait. Finally, Section 13.5 offers brief comments.
Fusion at the feature level means that the combination of different biometric traits occurs at an early stage of the multi-biometric system (Chang, Bowyer, & Sarkar, 2003; Gunatilaka & Baertlein, 2001; Gunes & Piccardi, 2005; Jain, Ross, & Prabhakar, 2004; Kober, Harz, & Schiffers, 1997; Kong, Zhang, & Kamel, 2006; Ross & Govindarajan, 2005; Ross & Jain, 2003). The traits fused at the feature level are then used in the matching and decision-making modules of the multi-biometric system to obtain authentication results. As pointed out by Jain, Ross and Prabhakar (2004) and other researchers (Choi, Choi, & Kim, 2005; Ratha, Connell, & Bolle, 1998; Singh, Vatsa, Ross, & Noore, 2005), it is likely that integration at the feature level can produce higher accuracy than fusion at the matching score level or fusion at the decision level. This is because the feature representation conveys richer information than the matching score or the verification decision (i.e. accept or reject) of a biometric trait. In contrast, the decision level is so coarse that much information is lost (Sim, Zhang, Janakiraman, & Kumar, 2007). The following are some examples of fusion at the feature level in the field of biometrics. Chang, Bowyer and Sarkar (2003) fused appearance traits of the face and ear at the feature level. They concatenated the face and ear feature vectors and used the resulting one-dimensional data to perform personal authentication. To reduce the dimensionality of this new data, they then extracted features from it. Ross and Govindarajan (2005) discussed fusion at the feature level in several scenarios, such as the fusion of PCA and LDA coefficients of face images and the fusion of face and hand modalities. Kong, Zhang and Kamel (2006) fused the phase information of different Gabor filtering results at the feature level to perform palm print identification.
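The concatenation-based fusion described above can be outlined in a few lines of code. The sketch below is illustrative only, not the implementation used by the cited authors: it assumes each modality is already reduced to a fixed-length feature vector, z-score normalizes each modality so neither dominates, concatenates them, and then applies PCA (via SVD) as a stand-in feature extraction step to reduce the dimensionality of the fused vector.

```python
import numpy as np

def fuse_features(feat_a, feat_b, n_components=4):
    """Feature-level fusion by concatenation (illustrative sketch).

    feat_a, feat_b: arrays of shape (n_samples, d1) and (n_samples, d2),
    e.g. face and ear appearance features. Returns fused vectors reduced
    to n_components dimensions by PCA.
    """
    def zscore(x):
        # Normalize each modality so differing scales do not bias the fusion
        return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

    # Concatenate the normalized feature vectors sample by sample
    fused = np.hstack([zscore(feat_a), zscore(feat_b)])

    # PCA via SVD on the mean-centered fused matrix
    centered = fused - fused.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

# Random stand-ins for two modalities' feature vectors
rng = np.random.default_rng(0)
face = rng.normal(size=(20, 10))
ear = rng.normal(size=(20, 6))
reduced = fuse_features(face, ear, n_components=4)
print(reduced.shape)  # (20, 4)
```

The reduced fused vectors would then be passed to the matcher, exactly as the chapter describes for the matching and decision-making modules.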
Gunatilaka and Baertlein (2001) proposed a feature-level fusion approach for fusing data generated from non-coincidentally sampled sensors. Other feature level fusion examples include the fusion of visual and acoustic signals (Kober, Harz, & Schiffers, 1997), the fusion of face and body information (Gunes & Piccardi, 2005), the fusion of face and fingerprint (Rattani, Kisku, Bicego, & Tistarelli, 2006, 2007), the fusion of side face and gait (Zhou & Bhanu, 2008), the fusion of iris and face (Son & Lee, 2005), the fusion of palm print and palm vein (Wang, Yau, Suwandy, & Sung, 2008), the fusion of lip and audio (Chetty & Wagner, 2008), etc. Feature level fusion has also been applied in other fields such as medical image fusion (Kor & Tiwary, 2004; Patnaik, 2006), object classification (Wender & Dietmayer, 2007), machinery fault diagnosis (Liu, Ma, & Mathew, 2006) and content-based image retrieval (Rahman, Desai, & Bhattacharya, 2006).