Deep convolutional neural network for hand sign language recognition using model E

Yohanssen Pratama, Ester Marbun, Yonatan Parapat, Anastasya Manullang


Image processing systems based on computer vision have received much attention from science and technology experts. Research on image processing is needed for the development of human-computer interaction, such as hand recognition or gesture recognition for people who are deaf or hard of hearing. In this research we collect hand gesture data and use a simple deep neural network architecture, which we call model E, to recognize the actual hand gesture. The dataset we used is in the form of ASL (American Sign Language) datasets. We compare accuracy with another existing model, AlexNet, to see how robust our model is. We find that adjusting the kernel size and the number of epochs for each model also gives different results. After comparing with the AlexNet model, we find that our model E performs better, with 96.82% accuracy.
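The abstract notes that kernel size affects each model's results. The paper does not detail model E's layers here, so as a hypothetical illustration only, the sketch below shows one way a convolutional layer's kernel size changes the output feature-map size (a plain "valid" convolution in NumPy; the image is a random stand-in for an ASL sample, not data from the paper):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Single-channel 2-D 'valid' convolution: no padding, stride 1."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1  # output height shrinks with kernel height
    ow = image.shape[1] - kw + 1  # output width shrinks with kernel width
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Element-wise multiply the kernel with the image patch and sum
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# 64x64 grayscale image as a stand-in for a hand-gesture sample
img = np.random.rand(64, 64)

# Larger kernels yield smaller feature maps: 64-3+1 = 62, 64-5+1 = 60
print(conv2d_valid(img, np.ones((3, 3))).shape)  # (62, 62)
print(conv2d_valid(img, np.ones((5, 5))).shape)  # (60, 60)
```

Because each kernel size produces feature maps of a different spatial extent, changing it (as the authors did when tuning model E and AlexNet) alters the number of parameters downstream and, with it, the achievable accuracy.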


Convolutional neural network; Gesture; Hand recognition; Hand sign; Image processing

