Abstract (English):
Visual speech information, such as the appearance and movement of the lips during speech, can characterize a person's identity and can therefore be used in personal authentication systems.
In this study, we propose a novel approach that uses lipreading data, i.e., the sequence of the entire mouth region produced during speech, together with the Unconstrained Minimum Average Correlation Energy (UMACE) filter as a classifier for biometric authentication. System performance is further enhanced by a multi-sample fusion scheme using the average operator. Results obtained on a Digit Database show that lipreading information combined with the UMACE filter has good potential and is highly effective in reducing the false acceptance and false rejection rates of a speaker verification system.
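The abstract does not include an implementation, but the UMACE classifier it names follows a well-known closed-form synthesis: in the frequency domain, H = D⁻¹m, where D is the average power spectrum of the training images and m is their mean Fourier spectrum; verification then scores the sharpness of the correlation peak (e.g. via the peak-to-sidelobe ratio). The sketch below illustrates that formula and the average-operator fusion idea on synthetic toy data; all function names, parameters, and data here are illustrative assumptions, not the authors' code.

```python
import numpy as np

def umace_filter(train_images, eps=1e-12):
    """Synthesize a UMACE filter H = D^-1 m in the frequency domain.

    train_images: (N, H, W) array of aligned training images.
    D is the elementwise average power spectrum of the training set,
    m the elementwise mean Fourier spectrum.
    """
    X = np.fft.fft2(train_images, axes=(1, 2))
    D = np.mean(np.abs(X) ** 2, axis=0)   # average power spectrum
    m = np.mean(X, axis=0)                # mean Fourier spectrum
    return m / (D + eps)                  # eps guards against division by zero

def correlation_plane(image, H):
    """Correlate a test image with the filter; a genuine input
    should produce a single sharp correlation peak."""
    G = np.fft.fft2(image) * np.conj(H)
    return np.abs(np.fft.ifft2(G))

def psr(plane, exclude=5):
    """Peak-to-sidelobe ratio: the similarity score used to
    accept or reject a claimed identity."""
    peak = plane.max()
    r, c = np.unravel_index(plane.argmax(), plane.shape)
    mask = np.ones_like(plane, dtype=bool)   # True = sidelobe region
    mask[max(r - exclude, 0):r + exclude + 1,
         max(c - exclude, 0):c + exclude + 1] = False
    side = plane[mask]
    return (peak - side.mean()) / side.std()

# Toy demonstration with synthetic stand-ins for mouth-region images.
rng = np.random.default_rng(0)
client = rng.standard_normal((32, 32))
train = np.stack([client + 0.05 * rng.standard_normal((32, 32))
                  for _ in range(5)])
H = umace_filter(train)

# Multi-sample fusion with the average operator: average the
# correlation planes of several test samples before scoring.
genuine = [client + 0.05 * rng.standard_normal((32, 32)) for _ in range(3)]
fused = np.mean([correlation_plane(s, H) for s in genuine], axis=0)
impostor = correlation_plane(rng.standard_normal((32, 32)), H)

genuine_score, impostor_score = psr(fused), psr(impostor)
```

Averaging the correlation planes before computing the PSR suppresses sample-specific noise, which is the intuition behind the fusion scheme's reduction of false acceptance and rejection rates.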