
Sign-Language-Recognition-using-CNN-and-GCN

Sign language is a vital medium of communication for individuals who are deaf or hard of hearing. It plays a pivotal role in interaction within the deaf community, allowing people to express their thoughts, emotions, and ideas through visual gestures and movements. However, most hearing individuals are unfamiliar with sign language, which creates barriers to comprehension and limits effective communication between deaf and hearing people.

Sign language recognition technology has emerged as a promising way to bridge this gap and promote inclusivity. The primary objective is to develop systems that automatically interpret and translate sign language gestures into written or spoken language, enabling seamless communication between people with hearing impairments and those without.

Implementing such a system relies on machine learning and computer vision. With advances in deep learning and image processing, models can be trained to accurately recognize and interpret a diverse range of sign language gestures.
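To make the frame-based approach concrete, below is a minimal sketch of a CNN classifier of the kind such a system might use. It assumes PyTorch; the layer sizes and the 2,000-gloss output (the largest WLASL vocabulary) are illustrative and not taken from this repository's actual model.

```python
# Minimal sketch (assumption: PyTorch; sizes are illustrative, not the repo's model).
import torch
import torch.nn as nn

class FrameCNN(nn.Module):
    """Classifies a single video frame into a sign gloss."""
    def __init__(self, num_classes: int = 2000):  # WLASL2000 covers 2,000 glosses
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # global pooling to a 64-dim descriptor
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, H, W) RGB frames
        return self.classifier(self.features(x).flatten(1))

# Frame-level logits can be averaged over a clip to score a whole video.
logits = FrameCNN()(torch.randn(8, 3, 224, 224))
print(logits.shape)  # torch.Size([8, 2000])
```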

About

This project focuses on sign language recognition, using the WLASL dataset to train two models: one based on a CNN and the other on a TGCN (temporal graph convolutional network). The goal is to improve communication between the deaf and hearing communities, with potential applications in assistive technologies, education, and human-computer interaction.
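For the graph-based approach, the sketch below illustrates one plausible building block of a TGCN: a spatial graph convolution that mixes features between skeleton keypoints, followed by a temporal convolution across frames. This is a minimal illustration assuming PyTorch; the joint count, adjacency matrix, and layer sizes are placeholders, not the repository's implementation.

```python
# Minimal sketch of a temporal graph convolution over pose keypoints
# (assumption: PyTorch; joint count and adjacency are placeholders).
import torch
import torch.nn as nn

class TemporalGraphConv(nn.Module):
    """Spatial graph conv over skeleton joints, then a temporal conv over frames."""
    def __init__(self, in_ch: int, out_ch: int, adjacency: torch.Tensor):
        super().__init__()
        # Row-normalized adjacency mixes features between connected joints.
        self.register_buffer("A", adjacency / adjacency.sum(dim=1, keepdim=True))
        self.spatial = nn.Linear(in_ch, out_ch)
        self.temporal = nn.Conv2d(out_ch, out_ch, kernel_size=(9, 1), padding=(4, 0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, joints, channels)
        x = self.spatial(torch.einsum("jk,btkc->btjc", self.A, x))
        # Conv2d expects (batch, channels, time, joints)
        x = self.temporal(x.permute(0, 3, 1, 2)).permute(0, 2, 3, 1)
        return torch.relu(x)

# Example: 27 keypoints with (x, y) coordinates over 32 frames.
A = torch.eye(27)  # placeholder skeleton graph; a real model uses bone edges
layer = TemporalGraphConv(2, 64, A)
out = layer(torch.randn(4, 32, 27, 2))
print(out.shape)  # torch.Size([4, 32, 27, 64])
```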
