This chapter discusses various technologies developed for deaf people using Indian Sign Language (ISL) synthetic animations. An automatic system for translating English text into ISL synthetic animations in the real domain has been developed. A parsing module parses the input English sentence into a phrase-structure grammar representation, to which ISL grammar rules are applied to reorder the words of the sentence. An elimination module then removes unwanted words from the reordered sentence, and lemmatization converts the remaining words into their root forms. Each word (or a synonym, if the word itself is not available in the database) is replaced by its HamNoSys code; if neither the word nor a synonym is present in the lexicon, a HamNoSys code is taken for each letter of the word. The HamNoSys codes are converted into SiGML tags, which are sent to the animation module, where an avatar renders them as synthetic animation.
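The pipeline above can be sketched in a few lines of Python. Everything here is illustrative: the tiny lexicon, the stop-word and synonym tables, the suffix-stripping lemmatizer, and the verb-final reordering rule are stand-ins for the system's actual parser, ISL grammar rules, and HamNoSys database, not the authors' implementation.

```python
# Minimal sketch of the English-text-to-HamNoSys pipeline.
# All tables and rules below are assumed placeholders, not real data.

# Toy HamNoSys lexicon: root word -> HamNoSys code (placeholder strings).
HAMNOSYS_LEXICON = {
    "train": "<hamnosys:train>",
    "go": "<hamnosys:go>",
    "delhi": "<hamnosys:delhi>",
}

SYNONYMS = {"depart": "go"}               # synonym fallback table (assumed)
STOP_WORDS = {"is", "the", "to", "a"}     # words the elimination module drops (assumed)
FINGERSPELL = {c: f"<hamnosys:{c}>" for c in "abcdefghijklmnopqrstuvwxyz"}

def lemmatize(word):
    # Crude stand-in for a real lemmatizer: strip a few common suffixes.
    for suffix in ("ing", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 1:
            return word[: -len(suffix)]
    return word

def reorder_isl(words):
    # ISL tends toward verb-final (SOV) order; a real system derives the
    # reordering from the phrase-structure parse. Here we simply move verbs
    # to the end of the sentence.
    verbs = {"go", "depart"}
    rest = [w for w in words if lemmatize(w) not in verbs]
    verb = [w for w in words if lemmatize(w) in verbs]
    return rest + verb

def to_hamnosys(sentence):
    words = sentence.lower().split()
    words = reorder_isl(words)                          # ISL grammar rules
    words = [w for w in words if w not in STOP_WORDS]   # elimination module
    codes = []
    for w in map(lemmatize, words):                     # lemmatization
        if w in HAMNOSYS_LEXICON:
            codes.append(HAMNOSYS_LEXICON[w])
        elif SYNONYMS.get(w) in HAMNOSYS_LEXICON:       # synonym fallback
            codes.append(HAMNOSYS_LEXICON[SYNONYMS[w]])
        else:
            # Final fallback: fingerspell the word letter by letter.
            codes.extend(FINGERSPELL[c] for c in w if c in FINGERSPELL)
    return codes
```

In the full system the resulting HamNoSys codes would then be wrapped into SiGML tags and passed to the avatar-based animation module; that conversion is omitted here.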
Prototype announcement systems for deaf people at railway stations, airports, and bus stands have also been developed. Announcements are categorized and sent to the system in written form, dynamically converted into ISL sentences, and then animated using HamNoSys and SiGML tags.
These translation and announcement systems are the only systems in the country that use continuous synthetic animation of the words in a sentence. Existing systems are limited to converting individual words and predefined sentences into Indian Sign Language, whereas our system translates English sentences into Indian Sign Language in the real domain.
Keywords: Cued Speech, HamNoSys, Hearing impaired people, Indian Sign Language, Lemmatization, Parsing, SiGML, Stanford Parser, Stemming, Synthetic Animation, Translation System, Visual Spatial Language.