Multimodal Sign Language Recognition System: Integrating Image Processing and Deep Learning for Enhanced Communication Accessibility
Mukta Jagdish and Valliappan Raju
2024, 20(5): 271-281. doi:10.23940/ijpe.24.05.p2.271281
Abstract
Communication for individuals who are hearing- and speech-impaired, commonly referred to as the deaf and mute community, relies heavily on sign language as the primary mode of expression. This study presents a novel framework that leverages image processing techniques to detect and recognize sign language gestures. The developed software offers promising avenues for improving comprehension of sign language, with potential applications in educational settings, public spaces, and interpersonal interactions. The proposed method streamlines sign language recognition, employing deep learning algorithms to predict signs accurately. The system processes input images containing signs through a convolutional neural network, covering the stages of pre-processing, feature extraction, model training, testing, and sign-to-text conversion. The system's output provides a text-based description of the sign in the input image and, crucially, integrates voice output for enhanced accessibility and communication. This multifaceted approach helps bridge communication barriers between individuals with and without disabilities, promoting inclusivity and understanding in diverse social contexts.
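As a concrete illustration of the pipeline the abstract outlines, the sketch below builds a small convolutional classifier with pre-processing (pixel rescaling), feature extraction (convolution and pooling layers), a softmax sign-to-text prediction step, and a voice-output step. It is a minimal sketch, not the authors' implementation: the network layout, the image size and class count, the A-Z label set, and the use of TensorFlow/Keras with the pyttsx3 text-to-speech engine are all assumptions made here for illustration.

import numpy as np
from tensorflow.keras import layers, models

IMG_SIZE = 64      # input resolution; an assumption for illustration
NUM_CLASSES = 26   # e.g. one static sign per letter A-Z; an assumption
LABELS = [chr(ord("A") + i) for i in range(NUM_CLASSES)]

def build_model():
    # Convolutional network: rescaling handles pre-processing, the
    # Conv/Pool stack performs feature extraction, and the final
    # softmax layer maps the extracted features to sign classes.
    model = models.Sequential([
        layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
        layers.Rescaling(1.0 / 255),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def sign_to_text(model, image):
    # Sign-to-text conversion: predict the class of one grayscale
    # image (shape IMG_SIZE x IMG_SIZE) and return its text label.
    probs = model.predict(image[np.newaxis, ..., np.newaxis], verbose=0)
    return LABELS[int(np.argmax(probs))]

def speak(text):
    # Voice output via the offline pyttsx3 engine (an assumed choice;
    # the paper does not name its text-to-speech component).
    import pyttsx3
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

if __name__ == "__main__":
    model = build_model()
    # model.fit(train_images, train_labels, epochs=10)  # training step
    dummy = np.zeros((IMG_SIZE, IMG_SIZE), dtype=np.float32)
    speak(sign_to_text(model, dummy))

In practice the network would be trained on a labeled gesture dataset before prediction, as the commented fit call indicates; the dummy image here only exercises the end-to-end path from input image to spoken label.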