Detection of facial expressions based on time dependent morphological features
Abstract
Facial expression detection by a machine is a valuable topic for Human Computer Interaction and has been studied in the behavioural sciences for some time. Recently, significant progress has been achieved in machine analysis of facial expressions, but there is still interest in studying the area in order to extend its applications. This work investigates the theoretical concepts behind facial expressions and leads to the proposal of new algorithms for face detection and facial feature localisation, together with the design and construction of a prototype system to test these algorithms. The overall goal and motivation of this work is to introduce vision-based techniques able to detect and recognise facial expressions. In this context, a facial expression prototype system is developed that accomplishes facial segmentation (i.e. face detection and facial feature localisation), facial feature extraction and feature classification.

To detect a face, a new simplified algorithm is developed to detect and locate its presence against the background by exploiting skin colour properties, which are used to distinguish between face and non-face regions. This allows facial parts to be extracted from a face using elliptical and box regions whose geometrical relationships are then utilised to determine the positions of the eyes and mouth through morphological operations. The mean and standard deviation of each segmented facial part are then computed and used as features for the face. For images belonging to the same class, these features are applied to the K-means algorithm to compute the centroid point of the expression class; this is repeated for every expression class. The Euclidean distance is then computed between each feature point and the cluster centre of its expression class. This determines how close a facial expression is to a particular class, and the distances are used as observation vectors for a Hidden Markov Model (HMM) classifier. Thus, an HMM is built to evaluate a subject's expression, using the distance features, as belonging to one of six expression classes: Joy, Anger, Surprise, Sadness, Fear and Disgust.

To evaluate the proposed classifier, experiments are conducted on new subjects using 100 video clips that contain a mixture of expressions. An average successful detection rate of 95.6% is measured over the total of 9142 frames contained in the video clips. The proposed prototype system processes facial feature parts and yields improved facial expression detection results compared with using whole-face features as proposed by previous authors. This work has resulted in four contributions: the Ellipse Box Face Detection Algorithm (EBFDA), the Facial Features Distance Algorithm (FFDA), the facial feature extraction process, and the facial feature classification. These were tested and verified using the prototype system.
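As a concrete illustration of the feature and distance pipeline summarised above, the following is a minimal sketch assuming NumPy and scikit-learn; the function names (`part_features`, `frame_feature_vector`, `class_centroid`, `distance_observations`) and the toy data are illustrative assumptions, not code from the thesis. It computes (mean, standard deviation) features for segmented facial parts, a K-means centroid for one expression class, and the Euclidean distances that would serve as HMM observation values.

```python
import numpy as np
from sklearn.cluster import KMeans

def part_features(part_pixels):
    """Mean and standard deviation of one segmented facial part
    (e.g. an eye or mouth region), used as its feature pair."""
    part_pixels = np.asarray(part_pixels, dtype=float)
    return np.array([part_pixels.mean(), part_pixels.std()])

def frame_feature_vector(parts):
    """Concatenate the (mean, std) pairs of all facial parts in one frame."""
    return np.concatenate([part_features(p) for p in parts])

def class_centroid(feature_vectors, n_clusters=1):
    """Cluster the feature vectors of one expression class with K-means
    and return the cluster centre(s)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    km.fit(np.vstack(feature_vectors))
    return km.cluster_centers_

def distance_observations(feature_vectors, centroid):
    """Euclidean distance of each frame's feature vector to its class
    centroid; these distances act as HMM observation values."""
    X = np.vstack(feature_vectors)
    return np.linalg.norm(X - centroid, axis=1)

# Toy usage: three synthetic frames of one expression class,
# each with two segmented facial parts (eye and mouth pixel patches).
rng = np.random.default_rng(0)
frames = [[rng.integers(0, 256, (12, 20)), rng.integers(0, 256, (10, 30))]
          for _ in range(3)]
feats = [frame_feature_vector(parts) for parts in frames]
centre = class_centroid(feats)[0]
obs = distance_observations(feats, centre)
print(obs)  # distances fed to the HMM as observations
```

With a single cluster per class, the K-means centre reduces to the mean of that class's feature vectors; more clusters per class could be used if an expression class turns out to be multimodal.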
Publisher: University of Bedfordshire
Type: Thesis or dissertation
Language: en
Description: A thesis submitted to the University of Bedfordshire, in partial fulfilment of the requirements for the degree of Doctor of Philosophy.