Page: 1 2 3 4 5 6 7... 282
41 | Mono vs Multilingual BERT: A Case Study in Hindi and Marathi Named Entity Recognition ... | BASE
42 | Informative Causality Extraction from Medical Literature via Dependency-tree based Patterns ... | BASE
43 | AI for mapping multi-lingual academic papers to the United Nations' Sustainable Development Goals (SDGs) ... | BASE
44 | AI for mapping multi-lingual academic papers to the United Nations' Sustainable Development Goals (SDGs) ... | BASE
45 | AI for mapping multi-lingual academic papers to the United Nations' Sustainable Development Goals (SDGs) ... | BASE
46 | AI for mapping multi-lingual academic papers to the United Nations' Sustainable Development Goals (SDGs) ... | BASE
49 | Learning and controlling the source-filter representation of speech with a variational autoencoder ... | BASE
50 | Correcting Misproducted Speech using Spectrogram Inpainting ... | BASE
51 | WLASL-LEX: a Dataset for Recognising Phonological Properties in American Sign Language ... | BASE
52 | A Transformer-Based Contrastive Learning Approach for Few-Shot Sign Language Recognition ... | BASE
53 | Including Facial Expressions in Contextual Embeddings for Sign Language Generation ... | BASE
54 | Statistical and Spatio-temporal Hand Gesture Features for Sign Language Recognition using the Leap Motion Sensor ... | BASE
56 | Vision-Based American Sign Language Classification Approach via Deep Learning ... | BASE
58 | Exploring Sub-skeleton Trajectories for Interpretable Recognition of Sign Language ... | BASE
59 | Sign Language Recognition System using TensorFlow Object Detection API ... | BASE

Abstract:
Communication is the act of sharing or exchanging information, ideas, or feelings. For two people to communicate, both must know and understand a common language. Deaf and mute people, however, communicate differently: deafness is the inability to hear and muteness the inability to speak, and they use sign language among themselves and with hearing people. Yet hearing people rarely take sign language seriously, and few know it, which makes communication between a hearing person and a deaf or mute person difficult. To overcome this barrier, one can build a machine-learning model trained to recognize different sign-language gestures and translate them into English, helping many people converse with deaf and mute people. The existing Indian Sign Language Recognition systems are designed ... : 14 pages, 5 figures, ANTIC 2021 ...

Keyword:
Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); FOS: Computer and information sciences; Machine Learning (cs.LG); Multimedia (cs.MM)

URL: https://dx.doi.org/10.48550/arxiv.2201.01486 https://arxiv.org/abs/2201.01486
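The pipeline that record 59's abstract describes (detect a gesture per video frame with an object detector, map each detection to a sign, emit English text) can be sketched without the TensorFlow specifics. This is a minimal sketch of the post-processing step only; the label map, the 0.7 confidence threshold, and the `frames_to_text` helper are illustrative assumptions, not taken from the paper:

```python
# Hypothetical post-processing for a sign-language detector: turn per-frame
# detections (class id + confidence score) into English words.
# LABEL_MAP and THRESHOLD are illustrative assumptions, not from the paper.

LABEL_MAP = {1: "hello", 2: "thanks", 3: "yes", 4: "no"}  # class id -> English gloss
THRESHOLD = 0.7  # keep only confident detections

def frames_to_text(detections):
    """detections: list of (class_id, score) pairs, one best box per frame."""
    words = []
    for class_id, score in detections:
        if score < THRESHOLD:
            continue  # skip low-confidence frames
        word = LABEL_MAP.get(class_id)
        if word and (not words or words[-1] != word):  # collapse repeats across frames
            words.append(word)
    return " ".join(words)

print(frames_to_text([(1, 0.92), (1, 0.88), (3, 0.40), (2, 0.81)]))  # → hello thanks
```

Collapsing consecutive identical labels is one simple way to handle the same sign spanning many frames; a real system would likely add temporal smoothing on top.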
60 | Giant Pigeon and Small Person: Prompting Visually Grounded Models about the Size of Objects ... | Zhang, Yi. - Purdue University Graduate School, 2022 | BASE