PTIT Knowledge Portal


Sign Language Recognition With Self-Learning Fusion Model


Vũ Hoài Nam, Phạm Văn Cường, Hoàng Mậu Trung, Trần Tiến Công

Sign language recognition (SLR) is the task of recognizing human actions that represent the language, which is not only helpful for deaf–mute people but also a means for human–computer interaction. Although data from wearable sensors have been proven useful for this task, it is still difficult to collect such data for training deep fusion models. In this study, our contributions are twofold: 1) we collect and release a dataset for SLR consisting of both video and sensor data obtained from wearable devices, and 2) we propose the first self-learning fusion model for SLR, termed STSLR, which utilizes a portion of annotated data to simulate sensor embedding vectors. By virtue of the simulated sensor features, the video features from video-only data are enhanced, allowing the fusion model to recognize the annotated actions more effectively. We empirically demonstrate the superiority of STSLR over competitive benchmarks on our newly released dataset and on well-known publicly available ones.
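The fusion idea described in the abstract can be sketched in a few lines: a simulator, trained on the annotated video-plus-sensor subset, maps a video embedding to a pseudo sensor embedding, and the classifier operates on the concatenation of the two. The sketch below is purely illustrative; all names, dimensions, and the use of fixed random linear maps are assumptions, not the authors' STSLR implementation.

```python
import numpy as np

# Illustrative sketch of a self-learning fusion forward pass.
# All shapes and weights here are hypothetical stand-ins.
rng = np.random.default_rng(0)

VIDEO_DIM, SENSOR_DIM, NUM_CLASSES = 128, 32, 10

# A "simulator" mapping a video embedding to a pseudo sensor embedding.
# In the paper this would be learned from the annotated (video + sensor)
# subset; here it is a fixed random linear map for illustration only.
W_sim = rng.standard_normal((VIDEO_DIM, SENSOR_DIM)) * 0.1

def simulate_sensor(video_emb: np.ndarray) -> np.ndarray:
    """Produce a simulated sensor embedding from a video embedding."""
    return np.tanh(video_emb @ W_sim)

# Fusion classifier over the concatenated (video, simulated-sensor) features.
W_cls = rng.standard_normal((VIDEO_DIM + SENSOR_DIM, NUM_CLASSES)) * 0.1

def fused_logits(video_emb: np.ndarray) -> np.ndarray:
    """Classify a video-only sample using video + simulated sensor features."""
    sensor_emb = simulate_sensor(video_emb)
    fused = np.concatenate([video_emb, sensor_emb], axis=-1)
    return fused @ W_cls

# Video-only inference: no real sensor data is needed at this point.
video_emb = rng.standard_normal(VIDEO_DIM)
logits = fused_logits(video_emb)
predicted_class = int(np.argmax(logits))
```

The key property the sketch illustrates is that at inference time only the video stream is required: the sensor branch is filled in by the simulator, so video-only data can still benefit from sensor-informed fusion.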

Published in:

IEEE Sensors Journal


Publisher:

Institute of Electrical and Electronics Engineers Inc.


Keywords:

Sensors, Data models, Human activity recognition, Gesture recognition, Assistive technologies, Training, Sensor phenomena and characterization

Related papers

Nguyen Xuan Ha, Hoang Nhu Dong, Nguyen V Thang, Pham D An, Nguyen Duc Toan, Đặng Minh Tuấn
Nguyễn Thị Thu Hiên, Lê Thanh Thủy
Đào Thị Thúy Quỳnh, An Hồng Sơn, Nguyễn Hữu Quỳnh, Cù Việt Dũng, Ngô Quốc Tạo
Hoàng Văn Xiêm, Nguyễn Quang Sang, Bùi Thanh Hương, Vũ Hữu Tiến
Nguyễn Hồng Quân, Lê Trung Hiếu, Trần Trung Kiên, Hoàng Nhật Tân, Trần Thị Thanh Hải, Lê Thị Lan, Vũ Hải, Nguyễn Thanh Phương, Nguyễn Hữu Thanh, Phạm Văn Cường