Robust Speaker Verification by Combining MFCC and Entrocy in Noisy Conditions

Duraid Y. Mohammed, Khamis AlKarawi, Ahmed Aljuboori

Abstract


Automatic speaker recognition can achieve remarkable performance when training and test conditions match, but accuracy drops significantly under mismatched, noisy conditions. Feature extraction also strongly affects performance. Mel-frequency cepstral coefficients (MFCCs) are the most commonly used features in this field, and the literature reports that recognition accuracy depends heavily on how well training and testing conditions are matched. Taken together, these facts support a strong recommendation to use MFCC features for speaker recognition when the train and test environments are similar. In the presence of noise and reverberation, however, MFCC performance is not reliable. To address this, we propose a new feature, 'Entrocy', for accurate and robust speaker recognition, which we mainly employ to support MFCC coefficients in noisy environments. Entrocy is the Fourier transform of the entropy, a measure of the fluctuation of the information in sound segments over time. Entrocy features are combined with MFCCs to generate a composite feature set, which is tested using the Gaussian mixture model (GMM) speaker recognition method. The proposed method shows improved recognition accuracy over a range of signal-to-noise ratios.
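The abstract defines Entrocy as the Fourier transform of a frame-wise entropy trajectory. A minimal sketch of that idea is shown below; the frame length, hop size, use of *spectral* entropy per frame, and the number of retained coefficients (`n_coeffs`) are all illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def entrocy(signal, frame_len=400, hop=160, n_coeffs=12):
    """Sketch of an Entrocy-style feature: frame-wise entropy of the
    signal, followed by a Fourier transform of the entropy sequence
    over time. Parameter values are illustrative assumptions."""
    # Split the signal into overlapping frames
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    entropies = np.empty(n_frames)
    for i in range(n_frames):
        frame = signal[i * hop: i * hop + frame_len]
        # Normalise the magnitude spectrum into a probability distribution
        spec = np.abs(np.fft.rfft(frame))
        p = spec / (spec.sum() + 1e-12)
        # Shannon entropy of the spectral distribution for this frame
        entropies[i] = -np.sum(p * np.log2(p + 1e-12))
    # Fourier transform of the entropy trajectory: keep the magnitudes
    # of the low-order coefficients as the Entrocy feature vector
    return np.abs(np.fft.rfft(entropies))[:n_coeffs]
```

In a system like the one described, such a vector would be concatenated with the MFCCs of the same utterance before GMM modelling; the exact fusion scheme used by the authors is not specified in the abstract.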

Keywords


Speaker recognition, Noisy environment, Speaker verification, MFCC



DOI: https://doi.org/10.11591/eei.v10i4.2957



This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
