The study proposes a framework that combines various selected words into a single vision-based platform. Sign languages are used not only to communicate with people who are deaf, but also to share thoughts with people who can hear but cannot speak. Most researchers work on sign language by processing video frames at regular or comparable intervals. Color, viewing angle, the location of the hand, dialectal variations of a language, and other variables all influence this phenomenon, and each of these variables is a roadblock to building a perfect sign language recognition system. Region-based analysis exploits both the interior pixels and the boundary of the object; such shape descriptors are, however, more sensitive to noise and distortion. Sign language learning falls into two types, unsupervised and supervised, and research in both areas is challenging and ongoing. In the future, feature extraction methods such as the wavelet transform can be applied to achieve better performance. Other classifiers that can be used to conduct experiments and improve recognition rates include Principal Component Analysis, Support Vector Machines, and Linear Discriminant Analysis.
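As a minimal sketch of the Component Analysis step mentioned above, the fragment below projects hand-shape feature vectors onto their leading principal components before classification. The feature matrix, its dimensionality, and the number of retained components are all assumptions for illustration; random data stands in for real extracted descriptors.

```python
import numpy as np

# Hypothetical example: PCA on hand-shape feature vectors via SVD.
# A real pipeline would obtain X from a feature extractor (e.g.,
# region-based shape descriptors); random values stand in here.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))        # 100 samples, 16-dim descriptors (assumed)

Xc = X - X.mean(axis=0)               # center each feature column
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 4                                 # keep the top-4 principal components (assumed)
Z = Xc @ Vt[:k].T                     # reduced features, shape (100, 4)

# fraction of total variance captured by the kept components
explained = float((S[:k] ** 2).sum() / (S ** 2).sum())
print(Z.shape, round(explained, 3))
```

The reduced matrix `Z` would then be fed to a downstream classifier such as an SVM or Linear Discriminant Analysis.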
Post Graduate and Research Department of English, Presidency College (Autonomous), Chennai, India.
R. Harihara Krishnan
Department of Computer Science, Presidency College (Autonomous), Chennai, India.
A. Maria Vinitha
Department of Computer Science, Loyola College, Chennai, India.