Page: 3-22 (20)
Author: Wenyun Sun and Zhong Jin
PDF Price: $15
Research trends in Convolutional Neural Networks and facial expression analysis are introduced first. A training algorithm, stochastic gradient descent with L2 regularization, is employed for the facial expression classification problem, in which facial expression images are classified into the six basic emotional categories of anger, disgust, fear, happiness, sadness and surprise without any complex pre-processing. Moreover, three types of feature generalization, for problems with different classifiers, different datasets and different categories, are discussed. With these techniques, pre-trained Convolutional Neural Networks are used as feature extractors that work quite well with Support Vector Machine classifiers. The experimental results show that Convolutional Neural Networks are capable not only of classifying facial expression images with translational distortions, but also of fulfilling certain feature generalization tasks.
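The update rule named in this abstract, stochastic gradient descent with L2 regularization, can be sketched in a few lines of NumPy (a minimal illustration, not the authors' implementation; the toy logistic-loss gradient is only for demonstration):

```python
import numpy as np

def sgd_l2_step(w, grad, lr, l2):
    """One SGD update with L2 regularization: the penalty
    0.5 * l2 * ||w||^2 adds l2 * w to the loss gradient (weight decay)."""
    return w - lr * (grad + l2 * w)

# toy usage: one log-loss gradient step for a linear classifier
rng = np.random.default_rng(0)
w = rng.normal(size=5)
x, y = rng.normal(size=5), 1.0                # one training sample
p = 1.0 / (1.0 + np.exp(-(w @ x)))            # sigmoid prediction
w = sgd_l2_step(w, (p - y) * x, lr=0.1, l2=1e-3)
```

In the full system the same rule updates the convolutional filters; here a single linear layer keeps the example self-contained.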
Sparsity Preserving Projection Based Constrained Graph Embedding and Its Application to Face Recognition
Page: 23-28 (6)
Author: Libo Weng, Zhong Jin and Fadi Dornaika
In this chapter, a novel semi-supervised dimensionality reduction algorithm is proposed, namely Sparsity Preserving Projection based Constrained Graph Embedding (SPP-CGE). Sparsity Preserving Projection (SPP) is an unsupervised dimensionality reduction method that aims to preserve the sparse reconstructive relationships of the data, obtained by solving an L1 objective function. In SPP-CGE, label information is used as an additional constraint for graph embedding, so that both the intrinsic structure and the label information of the data are exploited. In addition, an out-of-sample extension of SPP-CGE is proposed to deal with new incoming samples. Promising experimental results on several popular face databases illustrate the effectiveness of the proposed method.
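The L1 objective at the heart of SPP can be illustrated with a small self-contained solver (a sketch using the ISTA iteration; the chapter's actual optimizer and parameter values are not specified here):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_reconstruction(D, x, lam=0.1, n_iter=200):
    """Sparse code s minimizing 0.5*||x - D s||^2 + lam*||s||_1 via ISTA.
    In SPP, x is one sample and the columns of D are the other samples;
    the resulting sparse coefficients define the graph whose structure
    the projection then tries to preserve."""
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the smooth part
    s = np.zeros(D.shape[1])
    for _ in range(n_iter):
        s = soft_threshold(s + D.T @ (x - D @ s) / L, lam / L)
    return s
```

With an orthonormal dictionary the solution reduces to soft-thresholding the correlations, which makes the behavior easy to check by hand.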
Page: 39-65 (27)
Author: Alireza Bosaghzadeh and Fadi Dornaika
Local Discriminant Embedding (LDE) was recently proposed to overcome some limitations of the global Linear Discriminant Analysis (LDA) method. When only a small training set is available, LDE cannot be applied directly to high-dimensional data; this is the so-called small-sample-size (SSS) problem. The classic solution is to first reduce the dimensionality of the raw data, e.g., with Principal Component Analysis (PCA). This chapter introduces a novel discriminant technique called Exponential Local Discriminant Embedding (ELDE). The proposed ELDE can be seen as an extension of the LDE framework in two directions. First, it overcomes the SSS problem without discarding the discriminant information contained in the null space of the locality-preserving scatter matrices associated with LDE. Second, ELDE is equivalent to transforming the original data into a new space by a distance diffusion mapping (similar to kernel-based non-linear mapping) and then applying LDE in that space. As a result of the diffusion mapping, the margin between samples belonging to different classes is enlarged, which helps improve classification accuracy. Experiments are conducted on four public face databases: Extended Yale, PF01, PIE and FERET. The results show that the proposed ELDE outperforms LDE and many state-of-the-art discriminant analysis techniques.
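The key algebraic idea, replacing the scatter matrices by their matrix exponentials so the within-class matrix is always invertible, can be sketched as follows (a toy illustration with random stand-in "scatter" matrices, not the authors' implementation):

```python
import numpy as np

def sym_expm(S):
    """Matrix exponential of a symmetric matrix via eigendecomposition:
    exp(S) = V diag(exp(lambda)) V^T."""
    lam, V = np.linalg.eigh(S)
    return (V * np.exp(lam)) @ V.T

# hypothetical locality scatter matrices (Sb: between-class, Sw: within-class)
rng = np.random.default_rng(1)
A, B = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
Sb, Sw = A @ A.T, B @ B.T

# ELDE-style step: solve exp(Sb) v = lambda exp(Sw) v instead of
# Sb v = lambda Sw v. exp(Sw) is positive definite even when Sw is
# singular, which is how the small-sample-size problem is avoided.
vals, vecs = np.linalg.eig(np.linalg.solve(sym_expm(Sw), sym_expm(Sb)))
```

The projection directions would then be the eigenvectors with the largest eigenvalues, exactly as in standard LDE.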
Page: 66-85 (20)
Author: Fadi Dornaika and Ammar Assoum
This chapter addresses graph-based linear manifold learning for object recognition. In particular, it introduces an adaptive Locality Preserving Projections (LPP) method with two interesting properties: (i) it does not depend on any parameter, and (ii) the mapped data are uncorrelated. The main contribution is a parameterless computation of the affinity matrix, built on the principle of meaningful, adaptive neighbors. Beyond LPP, these two properties have been integrated into two other graph-based embedding techniques: Orthogonal Locality Preserving Projections (OLPP) and Supervised LPP (SLPP). After introducing the adaptive affinity matrices and the uncorrelated-mapped-data constraint, recognition tasks are performed on six public face databases. The results show improvement over classic methods such as LPP, OLPP and SLPP. The proposed method could also be applied to other kinds of objects.
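One simple way to build a parameterless, adaptive affinity matrix of the kind described above is sketched below (an assumed neighbor criterion chosen for illustration; the chapter's exact rule may differ):

```python
import numpy as np

def adaptive_affinity(X):
    """Parameter-free affinity matrix: j counts as a 'meaningful' neighbor
    of i when d(i, j) falls below i's mean distance to all other samples,
    so the neighborhood size adapts to the local data density.
    The result is symmetrized so the graph is undirected."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    n = len(X)
    mean_d = D.sum(axis=1) / (n - 1)       # exclude the zero self-distance
    W = (D < mean_d[:, None]).astype(float)
    np.fill_diagonal(W, 0.0)
    return np.maximum(W, W.T)
```

No neighborhood size k and no kernel width have to be chosen, which is the point of the parameterless construction.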
Page: 86-108 (23)
Author: Alireza Bosaghzadeh, Mohammadali Doostari and Alireza Behrad
While face recognition algorithms have shown promising results on gray-level face images, their accuracy deteriorates when the face images are not frontal. Since the head can move freely, pose variation is a key challenge in face recognition: how to recognize non-frontal face images automatically, without manual intervention, against a gallery of frontal face images. Rotation is a linear problem in 3D space and can be solved easily with 3D face data; however, recognition algorithms based on 3D face data achieve lower recognition rates than methods based on 2D gray-level images. In this chapter, a sequential algorithm is proposed that combines the benefits of 2D and 3D face data to obtain a pose-invariant face recognition system. In the first phase, facial features are detected and the face pose is estimated. Then, the 3D data (face depth data) and, correspondingly, the 2D image (gray-level face data) are rotated to obtain a frontal face image. Finally, features are extracted from the frontal gray-level images and used for classification. Experimental results on the FRAV3D face database show that the proposed method drastically improves the recognition accuracy on non-frontal face images.
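The frontalization step, rotating the 3D data by the inverse of the estimated pose, can be sketched as follows (a minimal yaw-only illustration; the full method also estimates pitch and roll and re-renders the gray-level image):

```python
import numpy as np

def rotation_y(yaw):
    """Rotation about the vertical axis (head turning left/right)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def frontalize(points_3d, yaw):
    """Undo an estimated yaw by applying the inverse (transpose) rotation
    to the 3D face points (one point per row); the frontalized cloud can
    then be re-rendered as a frontal depth / gray-level image."""
    return points_3d @ rotation_y(yaw)   # right-multiplying row vectors by R
                                         # equals applying R^T, the inverse
```

Because rotation matrices are orthogonal, the inverse is just the transpose, which is why pose correction is a cheap linear operation once the pose angles are known.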
Page: 109-131 (23)
Author: Alireza Behrad
3D face recognition algorithms are methods that utilize the 3D geometry of the face and facial features for recognition. Compared with 2D face recognition algorithms that employ intensity- or color-based features, they are generally robust against lighting conditions, head orientation, facial expression and make-up. 3D face recognition has several advantages. First, the shape of the 3D face and the related features can be acquired independently of lighting conditions. Second, the pose of the 3D face data can easily be corrected and used for subsequent pose-invariant feature extraction. Third, 3D face data are less affected by skin color, cosmetics and similar face reflectance factors. A 3D face recognition system may include several stages, such as 3D image acquisition, face localization, feature extraction and face recognition. In this chapter, different algorithms and the pipeline for 3D face recognition are discussed.
Page: 132-153 (22)
Author: Fawzi Khattar, Fadi Dornaika and Ammar Assoum
Automatic head pose estimation consists of using a computer to predict the pose of a person from a given facial image. Fast and reliable algorithms for estimating the head pose are essential for many applications and higher-level face analysis tasks. Many machine-learning techniques used for face detection and recognition can also be used for pose estimation. In this chapter, we present a new dimensionality reduction algorithm based on a sparse representation that takes pose similarities into account. Experimental results conducted on three benchmark face databases are presented.
Page: 154-180 (27)
Author: Luis Unzueta, Waldir Pimenta, Jon Goenetxea, Luís Paulo Santos and Fadi Dornaika
In this work, we present a robust and lightweight approach for the automatic fitting of deformable 3D face models to facial images. Well-known fitting methods, such as those based on statistical models of shape and appearance, need a training stage on a set of facial landmarks manually tagged in facial images. As a consequence, new images to which the model is fitted cannot differ too much in shape and appearance (including illumination changes, facial hair, wrinkles, and so on) from those used for training. By contrast, our methodology fits a generic face model in two steps: (1) the localization of facial features based on local image gradient analysis; and (2) the backprojection of a deformable 3D face model through the optimization of its deformation parameters. The proposed methodology preserves the advantages of both learning-free and learning-based approaches. Consequently, we can estimate the position, orientation, shape and actions of faces, and initialize user-specific face tracking approaches, such as Online Appearance Models (OAMs), which have been shown to be more robust than generic tracking approaches. Experimental results demonstrate that our strategy outperforms other fitting methods under challenging illumination conditions, with a computational footprint that permits its execution on devices with limited computational power, such as smartphones and tablets. The proposed methodology fits well with numerous systems addressing semantic inference in face images and videos.
Page: 181-216 (36)
Author: Franck Luthon
Face detection and tracking by computer vision is widely used for multimedia applications, video surveillance and human-computer interaction. Unlike current techniques that rely on huge training datasets and complex algorithms to obtain generic face models (e.g. active appearance models), the proposed approach using evidence theory handles simple contextual knowledge representative of the application background, via a quick semi-supervised initialization. The transferable belief model is used to counteract the incompleteness of the prior model due to the lack of exhaustiveness in the learning stage.
The method consists of two main successive steps in a loop: detection, then tracking. In the detection phase, an evidential face model is built by merging basic beliefs carried by a Viola-Jones face detector and a skin color detector. The mass functions are assigned to information sources computed from a specific nonlinear color space. In order to deal with color information dependence in the fusion process, a cautious combination rule is used. The pignistic probabilities of the face model guarantee the compatibility between the belief framework and the probabilistic framework; they are the inputs of a bootstrap particle filter which yields face tracking at video rate. Proper tuning of the few evidential model parameters leads to real-time tracking performance. Quantitative evaluation of the proposed method gives a detection rate reaching 80%, comparable to what can be found in the literature. Nevertheless, the proposed method requires only a minimal initialization (brief training) and allows fast processing.
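The pignistic transform mentioned above, which converts belief masses into probabilities usable by the particle filter, can be sketched as follows (a minimal illustration with a hypothetical two-hypothesis frame; these are not the chapter's actual mass functions):

```python
def pignistic(masses):
    """Pignistic transform BetP: the mass of each focal set A is split
    uniformly among its singleton hypotheses (assuming no mass is assigned
    to the empty set). Keys are frozensets of hypotheses, values are
    basic belief masses summing to one."""
    bet = {}
    for A, m in masses.items():
        for hypothesis in A:
            bet[hypothesis] = bet.get(hypothesis, 0.0) + m / len(A)
    return bet

# toy example: some belief committed to 'face', the rest left ambiguous
m = {frozenset({'face'}): 0.6, frozenset({'face', 'background'}): 0.4}
bet = pignistic(m)   # {'face': 0.8, 'background': 0.2}
```

The ambiguous mass on {face, background} is shared equally, which is exactly what makes the output a valid probability distribution for the probabilistic tracking stage.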
Page: 217-233 (17)
Author: Shenglan Ben
In traditional age estimation methods that use discriminative feature extraction, biological age labels are adopted as the ground truth for supervision. However, the appearance age, indicated by the facial appearance, is intrinsically a fuzzy attribute of human faces and cannot adequately be labeled with a crisp value. To address this issue, this chapter first introduces a fuzzy representation of age labels and then extends LDA into a fuzzy variant. The definition of the fuzzy labels takes into account both the ongoing nature of facial aging and the ambiguity between facial appearance and biological age. By using fuzzy labels for supervision, the proposed method outperforms its crisp counterparts both in preserving the ordinal information of aging faces and in accommodating the inconsistency between biological age and appearance. Experiments on the FG-NET and MORPH databases confirm the effectiveness of the proposed method.
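A fuzzy age label of the kind described can be sketched as a normalized membership function over candidate ages (a hypothetical triangular fuzzification chosen for illustration; the chapter's exact membership function may differ):

```python
import numpy as np

def fuzzy_age_label(biological_age, ages, width=5.0):
    """Triangular fuzzy label: membership peaks at the biological age and
    decays linearly to zero 'width' years away, then is normalized to sum
    to one. Neighboring ages thus share label mass, encoding both the
    gradual nature of aging and the ambiguity of appearance age."""
    mu = np.maximum(1.0 - np.abs(ages - biological_age) / width, 0.0)
    return mu / mu.sum()
```

A crisp label is the special case where all mass sits on a single age; the fuzzy version spreads supervision over an age neighborhood instead.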
Page: 234-250 (17)
Author: Ammar Assoum and Jouhayna Harmouch
Automatic age estimation consists of using a computer to predict the age of a person from a given facial image. The prediction is built on distinctive patterns in the facial appearance. Interest in such systems has grown steadily due to the wide range of potential applications in law enforcement, security control and human-computer interaction. However, the estimation problem remains challenging, since it is influenced by many factors, including lifestyle, gender, environment and genetics. Many recent algorithms for automatic age estimation are based on machine learning methods and have proven their efficiency and accuracy in this domain. In this chapter, we present an empirical study of a complete age estimation system built around label-sensitive learning. Experimental results conducted on the FG-NET and MORPH Album II face databases are presented.
Advances in Face Image Analysis: Theory and Applications describes several approaches to facial image analysis and recognition. Eleven chapters cover advances in computer vision and pattern recognition methods used to analyze facial data. The topics addressed in this book include automatic face detection, 3D face model fitting, robust face recognition, facial expression recognition, face image data embedding, model-less 3D face pose estimation and image-based age estimation. The chapters are written by experts from different research groups, so readers have access to contemporary knowledge on facial recognition together with diverse perspectives on individual techniques. The book is a useful resource for a wide audience, including i) researchers and professionals working in the field of face image analysis, ii) the wider pattern recognition community interested in processing and extracting features from raw face images, and iii) technical experts as well as postgraduate computer science students interested in cutting-edge concepts of facial image recognition.