For the deaf and hard-of-hearing communities, sign language is an essential form of communication. In this study, we present a deep learning-based method for real-time sign language recognition using skeleton-based hand gesture images. A customized dataset of alphabetic signs (A-Z) was created from camera input: MediaPipe was used to capture the skeletal structure of each hand gesture, rendered as white lines on a black background. Each class was represented by up to 200 images resized to 224x224 pixels. Using this dataset, MobileNetV2, a lightweight convolutional neural network designed for mobile and embedded vision applications, was fine-tuned to perform multi-class classification. The trained model classifies isolated static signs with high accuracy and supports interactive prediction through a webcam interface.
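The paper itself does not include source code; the following is a minimal sketch of how the pipeline described in the abstract could be assembled in Python, assuming MediaPipe Hands for landmark extraction, OpenCV for webcam capture, and TensorFlow/Keras for fine-tuning MobileNetV2. The class count (26 letters, A-Z) and image size (224x224) follow the abstract; all other names, parameters, and training choices are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of the skeleton-image pipeline described in the abstract (assumptions noted inline).
import cv2
import numpy as np
import mediapipe as mp
import tensorflow as tf

IMG_SIZE = 224          # image size stated in the abstract
NUM_CLASSES = 26        # alphabetic signs A-Z

mp_hands = mp.solutions.hands
mp_drawing = mp.solutions.drawing_utils
white = mp_drawing.DrawingSpec(color=(255, 255, 255), thickness=2, circle_radius=2)

def skeleton_image(frame_bgr, hands):
    """Render detected hand landmarks as white lines on a black 224x224 canvas."""
    results = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    canvas = np.zeros((IMG_SIZE, IMG_SIZE, 3), dtype=np.uint8)
    for landmarks in results.multi_hand_landmarks:
        mp_drawing.draw_landmarks(canvas, landmarks, mp_hands.HAND_CONNECTIONS, white, white)
    return canvas

def build_model():
    """MobileNetV2 backbone (ImageNet weights) with a new 26-way softmax head."""
    base = tf.keras.applications.MobileNetV2(
        input_shape=(IMG_SIZE, IMG_SIZE, 3), include_top=False, weights="imagenet")
    base.trainable = False  # assumption: freeze the backbone and train only the new head
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Interactive webcam prediction, as described in the abstract.
    labels = [chr(ord("A") + i) for i in range(NUM_CLASSES)]
    model = build_model()   # in practice, load the fine-tuned weights here
    cap = cv2.VideoCapture(0)
    with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.5) as hands:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            skel = skeleton_image(frame, hands)
            if skel is not None:
                # Standard MobileNetV2 preprocessing (scales pixels to [-1, 1]).
                x = tf.keras.applications.mobilenet_v2.preprocess_input(
                    skel[np.newaxis].astype(np.float32))
                pred = labels[int(np.argmax(model.predict(x, verbose=0)))]
                cv2.putText(frame, pred, (10, 40),
                            cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
            cv2.imshow("Sign Language Recognition", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    cap.release()
    cv2.destroyAllWindows()
```

Because MediaPipe landmarks are normalized, drawing them directly onto a 224x224 black canvas yields skeleton images at the network's input resolution without a separate resize step; the paper's actual preprocessing and training schedule may differ.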
Keywords:
Deep Learning, Sign Language Recognition, MobileNetV2, Skeleton-based Images, MediaPipe, Human-Computer Interaction.
Cite Article:
"SIGN LANGUAGE RECOGNITION USING DL", International Journal of Science & Engineering Development Research (www.ijrti.org), ISSN:2455-2631, Vol.10, Issue 4, page no.b12-b17, April-2025, Available :http://www.ijrti.org/papers/IJRTI2504103.pdf
Downloads:
000301
ISSN:
2456-3315 | IMPACT FACTOR: 8.14 (Calculated by Google Scholar) | ESTD YEAR: 2016