A Survey Paper on Biometric Pattern Recognition

 

Anita Bara and Anurag Shrivastava

Shree Shankaracharya College of Engineering and Technology, Bhilai (CG) India

*Corresponding Author E-mail: monacyril2000@gmail.com

 

ABSTRACT:

This paper surveys biometric face recognition, presenting different methodologies and a comparison between them. Face recognition now plays a crucial role in modern technology; for example, many companies are adopting biometric identification for login. When images are degraded, however, system performance drops. This paper outlines, step by step, how a face recognition algorithm identifies a face, and notes that when image quality is degraded by noise or other external factors, the matching process does not give accurate results. For this reason, restoration and enhancement techniques such as Retinex theory can be applied to degraded images to improve quality and performance; this will be the focus of the next part of our work.

 

KEYWORDS: Face detection, classification, pattern recognition, Biometric, PCA, LDA.

 

 


1. INTRODUCTION:

Pattern recognition is the scientific discipline whose goal is the classification of objects into a number of categories or classes [5]. Depending on the application, these objects can be images, signal waveforms, or any type of measurements that need to be classified; we will refer to these objects using the generic term patterns. Pattern recognition has a long history, but before the 1960s it was mostly the output of theoretical research in statistics [6]. As with everything else, the advent of computers increased the demand for practical applications of pattern recognition, which in turn set new demands for further theoretical developments. As our society evolves from its industrial to its postindustrial phase, automation in industrial production and the need for information handling and retrieval are becoming increasingly important [8]. This trend has pushed pattern recognition to the leading edge of today's engineering applications and research. Pattern recognition is an integral part of most machine intelligence systems built for decision making, and machine vision is one area in which it is of particular importance.

 

A machine vision system captures images via a camera and analyzes them to produce descriptions of what is imaged [2]. A typical application of a machine vision system is in the manufacturing industry, either for automated visual inspection or for automation in the assembly line. For example, in inspection, manufactured objects on a moving conveyor may pass the inspection station, where the camera stands, and it has to be ascertained whether there is a defect. Thus, images have to be analyzed online, and a pattern recognition system has to classify the objects into the "defect" or "non-defect" class [7]. After that, an action has to be taken, such as rejecting the offending parts. In an assembly line, different objects must be located and "recognized," that is, classified into one of a number of classes known a priori. Examples are the "screwdriver class," the "German key class," and so forth in a tool manufacturing unit. A robot arm can then move the objects to the right place. Character (letter or number) recognition is another important area of pattern recognition, with major implications in automation and information handling. Optical character recognition (OCR) systems are already commercially available and more or less familiar to all of us. In an OCR system, light falling on a light-sensitive detector produces intensity variations that are translated into numbers, forming an image array; a series of image processing techniques is then applied, leading to line and character segmentation. Computer-aided diagnosis is another important application of pattern recognition, aiming at assisting doctors in making diagnostic decisions. The final diagnosis is, of course, made by the doctor. Computer-assisted diagnosis has been applied to, and is of interest for, a variety of medical data, such as X-rays, computed tomographic images, ultrasound images, electrocardiograms (ECGs), and electroencephalograms (EEGs).
The need for computer-aided diagnosis stems from the fact that medical data are often not easily interpretable, and the interpretation can depend very much on the skill of the doctor. Take, for example, X-ray mammography for the detection of breast cancer [9]. Although mammography is currently the best method for detecting breast cancer, 10 to 30% of women who have the disease and undergo mammography have negative mammograms. In approximately two thirds of these false-negative cases the radiologist failed to detect a cancer that was evident retrospectively. This may be due to poor image quality, eye fatigue of the radiologist, or the subtle nature of the findings. The percentage of correct classifications improves at a second reading by another radiologist. Thus, one can aim to develop a pattern recognition system to assist radiologists with a "second" opinion. Increasing confidence in the diagnosis based on mammograms would, in turn, decrease the number of patients with suspected breast cancer who have to undergo surgical breast biopsy, with its associated complications [1]. Data mining and knowledge discovery in databases is another key application area of pattern recognition. Data mining is of intense interest in a wide range of applications such as medicine and biology, market and financial analysis, business management, science exploration, and image and music retrieval [3]. Its popularity stems from the fact that in the age of the information and knowledge society there is an ever-increasing demand for retrieving information and turning it into knowledge. Moreover, this information exists in huge amounts of data in various forms, including text, images, audio, and video, stored in different places distributed all over the world [7]. The traditional way of searching information in databases was the description-based model, where object retrieval was based on keyword description and subsequent word matching.
However, this type of searching presupposes that a manual annotation of the stored information has previously been performed by a human [6]. This is a very time-consuming job and, although feasible when the size of the stored information is limited, it is not possible when the amount of available information becomes large [5]. Moreover, the task of manual annotation becomes problematic when the stored information is widely distributed and shared by a heterogeneous "mixture" of sites and users. Content-based retrieval systems are therefore becoming more and more popular, where information is sought based on "similarity" between an object presented to the system and objects stored in sites all over the world [6]. In a content-based image retrieval (CBIR) system, an image is presented to an input device (e.g., a scanner), and the system returns "similar" images based on a measured "signature," which can encode, for example, information related to color, texture, and shape [4]. In a music content-based retrieval system, an example (i.e., an extract from a music piece) is presented to a microphone input device and the system returns "similar" music pieces. In this case, similarity is based on certain (automatically) measured cues that characterize a music piece, such as the music meter, the tempo, and the location of certain repeated patterns. There are many other applications of pattern recognition, such as fingerprint recognition, iris recognition, etc. [2]. The figures below show the percentage distribution of pattern recognition applications.

 

2: Biometric recognition: Personal recognition based on "who you are or what you do," as opposed to "what you know" (a password) or "what you have" (an ID card)

 

Face recognition is a subject in pattern recognition studied for machine learning applications. Although other biometric systems, such as fingerprint and iris, are reported to be more accurate, research in face recognition has increased significantly over the past 20 years because of its non-intrusive character (Angle et al., 2005) [3]. Furthermore, face recognition systems require minimal participation from users in order to perform identification tasks. Despite these advantages, building a face recognition system is not an easy task. Constraints that usually confront researchers are varying poses, lighting conditions, and facial expressions.

 

There are several techniques used in face recognition research (Zhao et al., 2000). One of them is the statistical pattern method, which has been extensively adopted in many commercial face recognition products (Jain et al., 2000) [5]. A statistical method is defined as a method that analyzes pattern data given as a D-dimensional input vector. The input data are usually pre-processed by reducing their dimensionality, extracting the relevant information, and removing noise before the recognition task is performed [7]. One of the earliest statistical methods is that of Turk and Pentland, who proposed the eigenfaces method, also known as Principal Component Analysis (PCA). Classification is done using the simple Euclidean distance. PCA has become a benchmark against which other methods are compared, and it is extensively used in many research papers (Duan et al., 2008; Mong et al., 2009).
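The Euclidean-distance classification mentioned above can be sketched in a few lines. This is a minimal illustration, not the implementation used in the cited works; the weight vectors are assumed to come from a prior dimensionality-reduction step such as PCA:

```python
import numpy as np

def classify_nearest(test_weights, train_weights, train_labels):
    """Assign the label of the training face whose weight vector
    lies closest to the test face in Euclidean distance."""
    dists = np.linalg.norm(train_weights - test_weights, axis=1)
    return train_labels[int(np.argmin(dists))]

# Toy example: three training faces represented by 4-D weight vectors
train_w = np.array([[0.0, 1.0, 0.0, 2.0],
                    [5.0, 4.0, 1.0, 0.0],
                    [9.0, 9.0, 8.0, 7.0]])
labels = ["alice", "bob", "carol"]
print(classify_nearest(np.array([5.1, 3.9, 1.2, 0.1]), train_w, labels))
# prints "bob" (the second weight vector is nearest)
```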

 

3: Review of biometric face recognition algorithms

 

PCA: The Eigenface algorithm uses Principal Component Analysis (PCA) for dimensionality reduction, finding the vectors that best account for the distribution of face images within the entire image space. These vectors define a subspace of face images called the face space. All faces in the training set are projected onto the face space to find a set of weights that describes the contribution of each vector. To identify a test image, the test image is projected onto the face space to obtain its corresponding set of weights [8]. By comparing the weights of the test image with those of the faces in the training set, the face in the test image can be identified.
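The projection steps above can be sketched as follows. This is a hedged sketch, not the implementation of the cited work; face images are assumed to be pre-flattened into rows of a matrix, and the principal directions are obtained via SVD rather than an explicit covariance matrix:

```python
import numpy as np

def train_eigenfaces(faces, k):
    """faces: (n_images, n_pixels) matrix, one flattened face per row.
    Returns the mean face, the top-k eigenfaces (face space basis),
    and the weight vector of every training face in that basis."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data gives the principal directions directly
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:k]                  # (k, n_pixels) basis of face space
    weights = centered @ eigenfaces.T    # (n_images, k) training weights
    return mean, eigenfaces, weights

def project(face, mean, eigenfaces):
    """Project a new flattened face onto the face space."""
    return (face - mean) @ eigenfaces.T
```

A test image's `project(...)` output would then be compared against the training `weights` (e.g., by nearest Euclidean distance) to identify the face.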

 

ICA: Independent Component Analysis (ICA) is similar to PCA except that the distribution of the components is designed to be non-Gaussian; maximizing non-Gaussianity promotes statistical independence. Figure 11 presents the different feature extraction properties of PCA and ICA. Bartlett et al. [8] provided two ICA-based architectures for the face recognition task: statistically independent basis images and a factorial code representation. ICA separates the high-order moments of the input in addition to the second-order moments utilized in PCA. Both architectures lead to similar performance.
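To illustrate how non-Gaussianity can be maximized in practice, the following is a minimal FastICA-style sketch (deflation with a tanh nonlinearity). FastICA is not described in the text above, so this is an assumption about one common way ICA is computed, not the method of the cited work:

```python
import numpy as np

def fast_ica(X, n_components, n_iter=200, seed=0):
    """Minimal FastICA-style estimator (deflation, tanh nonlinearity).
    X: (n_samples, n_features) data matrix."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=0)                        # center
    d, E = np.linalg.eigh(np.cov(X, rowvar=False))
    Z = X @ E @ np.diag(1.0 / np.sqrt(d)) @ E.T   # whiten (unit covariance)
    W = np.zeros((n_components, Z.shape[1]))
    for i in range(n_components):
        w = rng.normal(size=Z.shape[1])
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            s = Z @ w
            g, g_prime = np.tanh(s), 1.0 - np.tanh(s) ** 2
            # Fixed-point update maximizing non-Gaussianity of w'Z
            w_new = (Z * g[:, None]).mean(axis=0) - g_prime.mean() * w
            w_new -= W[:i].T @ (W[:i] @ w_new)    # deflation: orthogonalize
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1.0) < 1e-9
            w = w_new
            if converged:
                break
        W[i] = w
    return Z @ W.T                                # estimated components
```

On a classic two-source demo (a sine wave mixed with a square wave), the recovered components correlate strongly with the original sources, up to sign and permutation, which PCA alone cannot achieve.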

 

LDA: Both PCA and ICA construct the face space without using face class (category) information; the whole face training set is treated as one undifferentiated collection. In LDA the goal is likewise to find an efficient way to represent the face vector space, but exploiting the class information can help the identification task. The Fisherface algorithm [10] is derived from the Fisher Linear Discriminant (FLD), which uses class-specific information. Different classes are defined with different statistics, and the images in the learning set are divided into the corresponding classes. Then techniques similar to those used in the Eigenface algorithm are applied. The Fisherface algorithm typically results in a higher recognition accuracy.

[Figure: PCA vs. ICA feature axes. Each axis is a direction found by PCA or ICA; the PC axes are orthogonal while the IC axes are not. If only two components are allowed, ICA chooses a different subspace than PCA. Bottom left: distribution of the first PCA coordinate of the data. Bottom right: distribution of the first ICA coordinate of the data [7]. For this example, ICA extracts more of the intrinsic structure of the original data clusters.]
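The Fisher criterion underlying the Fisherface approach — maximizing between-class scatter relative to within-class scatter — can be sketched as follows. This is an illustrative sketch, assuming the within-class scatter matrix is invertible (in face recognition, PCA is typically applied first to guarantee this):

```python
import numpy as np

def fisher_lda(X, y, n_components):
    """Fisher Linear Discriminant: directions w maximizing
    (w' Sb w) / (w' Sw w), where Sb/Sw are the between- and
    within-class scatter matrices."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    n_features = X.shape[1]
    Sw = np.zeros((n_features, n_features))
    Sb = np.zeros((n_features, n_features))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)            # within-class scatter
        d = (mc - mean_all)[:, None]
        Sb += len(Xc) * (d @ d.T)                # between-class scatter
    # Solve the generalized eigenproblem Sb w = lambda * Sw w
    evals, evecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:n_components]]   # (n_features, n_components)
```

Projecting two well-separated classes onto the leading discriminant direction keeps their means far apart relative to the within-class spread, which is exactly what class-blind PCA does not guarantee.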

 

Model-based face recognition: The model-based face recognition scheme aims to construct a model of the human face that can capture facial variations; prior knowledge of the human face is heavily used in designing the model. For example, feature-based matching derives distance and relative-position features from the placement of internal facial elements (e.g., the eyes). By localizing the corners of the eyes, the nostrils, and so on in frontal views, such a system computes parameters for each face, which are compared (using a Euclidean metric) against the parameters of known faces. A more recent feature-based system, based on elastic bunch graph matching, was developed by Wiskott et al. as an extension of their original graph matching system, integrating both shape …
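The feature-based matching idea can be illustrated as follows. This is a hypothetical sketch: the landmark layout (first two rows taken as eye corners) and the inter-ocular normalization are our own assumptions for illustration, not the design of the systems described above:

```python
import numpy as np

def face_features(landmarks):
    """Geometric features from facial landmark points, given as (x, y)
    rows (hypothetical layout: rows 0-1 are the eye corners).
    All pairwise distances are normalized by the inter-ocular distance
    to make the features scale-invariant."""
    iod = np.linalg.norm(landmarks[0] - landmarks[1])
    i, j = np.triu_indices(len(landmarks), k=1)
    return np.linalg.norm(landmarks[i] - landmarks[j], axis=1) / iod

def match(probe, gallery):
    """Return the index of the gallery face whose feature vector is
    closest to the probe under the Euclidean metric."""
    dists = [np.linalg.norm(probe - g) for g in gallery]
    return int(np.argmin(dists))
```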

 

4: Comparison between appearance-based and model-based algorithms

 

5: Performance analysis of different algorithms

 

6: CONCLUSION:

Image-based face recognition remains a very challenging topic after decades of exploration. A number of typical algorithms have been presented, categorized into appearance-based and model-based schemes. Sensitivity to variations in pose and lighting conditions is still a challenging problem. Georghiades et al. extensively explored illumination change and synthesis for facial analysis using appearance-based approaches, aiming at an illumination-invariant face recognition system.

 

7. REFERENCES:

1.       R. Chellappa, C.L. Wilson, and S. Sirohey, “Human and machine recognition of faces: A survey,” Proc. IEEE, vol. 83, pp. 705–740, 1995.

2.       H. Wechsler, P. Phillips, V. Bruce, F. Soulie, and T. Huang, Face Recognition: From Theory to Applications, Springer-Verlag, 1996.

3.       W. Zhao, R. Chellappa, A. Rosenfeld, and P.J. Phillips, “Face recognition: A literature survey,” CVL Technical Report, University of Maryland, 2000.

4.       S. Gong, S.J. McKenna, and A. Psarrou, Dynamic Vision: from Images to Face Recognition, Imperial College Press and World Scientific Publishing, 2000.

5.       Terence Sim and Takeo Kanade. Illuminating the face. Technical report, Robotics Institute, Carnegie Mellon University, September 2001.

6.       E. Land and J. McCann, “Lightness and retinex theory,” Journal of the Optical Society of America, 61(1):1–11, 1971.

7.       P. Perona and J. Malik. Scale-space and edge detection using anisotropic diffusion. IEEE Transactions on PAMI, 12(7):629–639, 1990.

8.       T. Ojala, M. Pietikainen, and T. Maenpaa, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns,” IEEE Transactions on PAMI, 24(7):971–987, 2002.

9.       G. Heusch, Y. Rodriguez, and S. Marcel. Local binary patterns as image preprocessing for face authentication. In IEEE International Conference on Automatic Face and Gesture Recognition, 2006.

10.     Qian Tao and Raymond N. J. Veldhuis, “Biometric authentication for a mobile personal device”, Proceedings of the 1st International Workshop on Personalized Networks (Pernets2006), San Jose, July 2006.

 

 

Received on 11.03.2011       Accepted on 12.04.2011     

© EnggResearch.net All Right Reserved

Int. J. Tech. 1(1): Jan.-June. 2011; Page 49-52