Fast American Sign Language Image Recognition Using CNNs with Fine-tuning

Qi Cui,
Zhili Zhou,
Chengsheng Yuan,
Xingming Sun,
Q. M. Jonathan Wu

Abstract


Sign language images, such as American Sign Language (ASL) images, carry effective information for gesture-based communication. Sign language recognition is significant because it can help hearing-impaired people communicate successfully with others. To facilitate ASL image recognition for hearing-impaired people, an appropriate image classification method is urgently needed. Because traditional image classification methods are usually computationally expensive, they are poorly suited to real-time tasks. Motivated by the extraordinary performance of Convolutional Neural Networks (CNNs) on various image classification tasks, a method is proposed that uses CNNs with a fine-tuning strategy to train classification models for ASL image recognition. First, a CNN structure is optimized to make it more suitable for the ASL classification task. Second, classification models are trained on this structure and evaluated on test images. Finally, the test results of the proposed approach are compared with those of state-of-the-art methods to illustrate the effectiveness of the trained CNN models. The experimental results demonstrate that the proposed method achieves superior recognition results for ASL images.
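
The paper itself does not include code; the following is only a minimal sketch of the fine-tuning strategy described above, assuming a PyTorch/torchvision setup, an ImageNet-pretrained ResNet-18 backbone, and 24 static ASL letter classes. The backbone choice, class count, hyperparameters, and the placeholder data batch are illustrative assumptions, not details taken from the paper.

# Minimal fine-tuning sketch (not the authors' code).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 24  # assumption: 24 static ASL letters (J and Z require motion)

# Load a CNN pretrained on ImageNet; the paper's exact backbone is not given
# here, so ResNet-18 is used purely as an illustration.
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the convolutional feature extractor so only the new classifier
# head is updated during fine-tuning.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with one sized for the ASL classes.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)

# Placeholder batch standing in for preprocessed 224x224 ASL images;
# in practice this would come from a DataLoader over an ASL dataset.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))

model.train()
for epoch in range(2):                # a couple of steps just to show the loop
    optimizer.zero_grad()
    outputs = model(images)           # forward pass through the adapted CNN
    loss = criterion(outputs, labels)
    loss.backward()                   # gradients flow only into model.fc
    optimizer.step()
    print(f"epoch {epoch}: loss = {loss.item():.4f}")

Because only the replaced classification layer is trained while the pretrained feature extractor stays frozen, this kind of fine-tuning converges quickly, which is consistent with the fast recognition goal stated in the title.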


Citation Format:
Qi Cui, Zhili Zhou, Chengsheng Yuan, Xingming Sun, Q. M. Jonathan Wu, "Fast American Sign Language Image Recognition Using CNNs with Fine-tuning," Journal of Internet Technology, vol. 19, no. 7, pp. 2207-2214, Dec. 2018.





