International Papers
On the Combination of Multi-Input and Self-Attention for Sign Language Recognition
Vũ Hoài Nam
Sign language recognition can be considered a branch of human action recognition. The deaf-mute community uses upper-body gestures to convey sign language words. With the rapid development of intelligent systems based on deep learning models, video-based sign language recognition models can be integrated into services and products to improve the quality of life for the deaf-mute community. However, comprehending the relationships between different words within videos is a complex and challenging task, particularly for understanding sign language actions in videos, which further constrains the performance of previous methods. Recent methods address this challenge by generating video annotations, such as creating questions and answers for images. A promising approach is to fine-tune autoregressive language models trained with multi-input and self-attention mechanisms to facilitate understanding of sign language in videos. We introduce a bidirectional transformer language model, MISA (multi-input self-attention), to enhance solutions for VideoQA (video question answering) without relying on labeled annotations. Specifically, (1) one direction of the model generates descriptions for each frame of the video to learn from the frames and their descriptions, and (2) the other direction generates questions for each frame, then integrates inference with the first direction to produce questions that effectively identify sign language actions. Our proposed method outperforms recent VideoQA techniques by eliminating the need for manual labeling across various datasets, including CSL-Daily, PHOENIX14T, and PVSL (our dataset). Furthermore, it demonstrates competitive performance in low-data regimes and in fully supervised settings.
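As a rough illustration of the multi-input self-attention idea named in the abstract, the sketch below runs scaled dot-product self-attention over a joint sequence built from two inputs: hypothetical video-frame features and text-token features. This is a generic, minimal sketch, not the paper's MISA architecture; the dimensions, the concatenation scheme, and all variable names are assumptions for illustration only.

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a sequence of feature vectors."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
d = 16
# hypothetical multi-input: 8 frame features and 4 text-token features
frames = rng.normal(size=(8, d))
tokens = rng.normal(size=(4, d))
x = np.concatenate([frames, tokens], axis=0)  # joint sequence of length 12

wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(x, wq, wk, wv)
print(out.shape)  # (12, 16)
```

Because both modalities sit in one sequence, every frame position can attend to every text position and vice versa, which is the basic mechanism by which multi-input attention lets visual and linguistic features condition each other.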
Publisher:
International Journal of Advanced Computer Science and Applications
Keywords:
Multi-input; self-attention; deep learning models; video-based sign language; sign language recognition