On the use of voice activity detection in speech emotion recognition

Muhammad Fahreza Alghifari, Teddy Surya Gunawan, Mimi Aminah binti Wan Nordin, Syed Asif Ahmad Qadri, Mira Kartiwi, Zuriati Janin

Abstract


Emotion recognition through speech has many potential applications; the challenge, however, is achieving a high recognition rate with limited resources or in the presence of interference such as noise. In this paper we explore whether speech emotion recognition can be improved by applying voice activity detection (VAD). Emotional voice data from the Berlin Emotion Database (EMO-DB) and a custom-made database, the LQ Audio Dataset, are first preprocessed with VAD before feature extraction. The extracted features are then passed to a deep neural network for classification. We chose the Mel-frequency cepstral coefficients (MFCC) as the sole feature. Comparing results obtained with and without VAD, we found that VAD improved the recognition rate for five emotions (happy, angry, sad, fear, and neutral) by 3.7% on clean signals, while using VAD when training the network on both clean and noisy signals improved our previous results by 50%.
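The abstract describes a three-stage pipeline (VAD preprocessing, MFCC extraction, deep neural network classification) without implementation details. The sketch below is one plausible rendering of that pipeline, assuming an energy-based VAD, MFCC extraction via librosa, and a small dense network via scikit-learn's MLPClassifier; the thresholds, frame sizes, and layer sizes are illustrative assumptions, not the authors' settings.

```python
"""Illustrative sketch: energy-based VAD -> MFCC -> dense network classifier.
All parameter values are assumptions for illustration, not the authors' settings."""
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier


def energy_vad(y, sr, frame_length=2048, hop_length=512, threshold_db=-40.0):
    """Keep samples from frames whose RMS energy exceeds a threshold.

    A simple energy-based VAD; the paper does not specify which VAD method was used.
    """
    rms = librosa.feature.rms(y=y, frame_length=frame_length, hop_length=hop_length)[0]
    rms_db = librosa.amplitude_to_db(rms, ref=np.max(rms))
    voiced_frames = rms_db > threshold_db          # boolean mask, one value per frame
    mask = np.repeat(voiced_frames, hop_length)    # expand frame mask to sample resolution
    if len(mask) < len(y):
        mask = np.pad(mask, (0, len(y) - len(mask)))
    return y[mask[: len(y)]]


def mfcc_vector(y, sr, n_mfcc=13):
    """Average MFCCs over time to get one fixed-length feature vector per utterance."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)


def build_classifier():
    """Small fully connected network; layer sizes are illustrative only."""
    return MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)


# Usage (paths and label arrays are placeholders):
#   y, sr = librosa.load("utterance.wav", sr=None)
#   features = mfcc_vector(energy_vad(y, sr), sr)
#   clf = build_classifier()
#   clf.fit(X_train, y_train)          # X_train: stacked feature vectors, y_train: emotion labels
#   accuracy = clf.score(X_test, y_test)
```

To compare the two conditions reported in the abstract, one would train the same classifier on features extracted with and without the VAD step and compare test accuracies.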


Keywords


Deep neural network; Speech emotion recognition; Voice activity detection



DOI: https://doi.org/10.11591/eei.v8i4.1646


Bulletin of Electrical Engineering and Informatics (BEEI)
ISSN: 2089-3191, e-ISSN: 2302-9285
This journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).