
An Analysis of Emotional Speech Recognition for Tamil Language Using Deep Learning Gate Recurrent Unit

Bennilo Fernandes and Kasiprasad Mannepalli

Pertanika Journal of Science and Technology, Volume 29, Issue 3, July 2021

DOI: https://doi.org/10.47836/pjst.29.3.37

Keywords: Bi-Directional Long Short-Term Memory, emotional recognition, Gated Recurrent Unit, Long Short-Term Memory

Published on: 31 July 2021

Designing the interaction between human language and a registered emotional database lets us explore how a system performs, and it admits multiple approaches to emotion detection in patient services. Until now, clustering techniques have been the primary tools in many prominent areas, including emotional speech recognition. Although they yield good results, this paper presents a new design based on Long Short-Term Memory (LSTM), Bi-Directional LSTM (BiLSTM) and the Gated Recurrent Unit (GRU) as estimation methods for an emotional Tamil dataset. A new deep hierarchical LSTM/BiLSTM/GRU architecture is designed to obtain the best results for long-term learning on the voice dataset. Different combinations of deep hierarchical layers, namely LSTM & GRU (DHLG), BiLSTM & GRU (DHBG), GRU & LSTM (DHGL), GRU & BiLSTM (DHGB) and dual GRU (DHGG), are designed with a dropout layer to overcome the learning and vanishing-gradient problems in emotional speech recognition. Moreover, to improve the outcome for each emotional speech signal, various feature-extraction combinations are utilized. In the analysis, the proposed DHGB model gives an average classification accuracy of 82.86%, slightly higher than the other models: DHGL (82.58%), DHBG (82%), DHLG (81.14%) and DHGG (80%). Thus, comparing all the models, DHGB gives the most prominent outcome, up to 5% better than the other four models, with minimum training time and a small dataset.
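The GRU layer shared by all five hierarchical variants follows the standard gated-update equations (update gate, reset gate, candidate state). The sketch below is a minimal NumPy illustration of a single GRU time step, not the authors' implementation; the weight shapes, random initialization, and demo dimensions (4-dimensional input frames, 3-dimensional hidden state) are arbitrary assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One GRU time step in the standard formulation."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(Wz @ x + Uz @ h + bz)               # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)   # candidate state
    return (1.0 - z) * h + z * h_tilde              # interpolate old/new

# Tiny demo with random weights: 4-dim input frames, 3-dim hidden state.
rng = np.random.default_rng(0)
n_in, n_h = 4, 3
shapes = [(n_h, n_in), (n_h, n_h), (n_h,)] * 3      # (W, U, b) for z, r, h
params = tuple(rng.standard_normal(s) * 0.1 for s in shapes)

h = np.zeros(n_h)
for t in range(5):                                   # unroll over 5 frames
    h = gru_step(rng.standard_normal(n_in), h, params)
print(h.shape)  # (3,)
```

Because the new state is a convex combination of the previous state and a tanh-bounded candidate, the hidden activations stay in [-1, 1], which is part of why gated units mitigate the vanishing-gradient issue the paper's dropout-augmented hierarchy targets.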


ISSN 0128-7702

e-ISSN 2231-8534

Article ID

JST-2473-2021
