PTIT Knowledge Portal

Multi-modal sensor fusion and federated learning for TinyML on resource-constrained IoT devices

Đỗ Phúc Hảo

The surge in IoT devices demands on-device intelligence for privacy-critical, latency-sensitive tasks like activity recognition. This paper presents a federated learning framework for multi-modal sensor fusion, specifically designed to operate under the tight resource constraints of TinyML platforms. Optimized for ARM Cortex-M microcontrollers with a sub-96 KB memory footprint, our framework employs a communication-efficient protocol using gradient sparsification and 8-bit quantization to drastically reduce uplink data requirements. We conduct a detailed comparative analysis of Early and Late Fusion strategies on the PAMAP2 dataset. Our results reveal a critical trade-off: while Early Fusion can achieve a marginally higher peak accuracy (95.75%), the more resource-efficient Late Fusion architecture ensures significantly faster convergence and greater training stability. This study highlights the feasibility of deploying robust, privacy-preserving TinyML models on low-power IoT devices and provides clear insights into selecting the optimal fusion architecture for such environments.
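The communication-efficient protocol described above combines gradient sparsification with 8-bit quantization to shrink each client's uplink payload. A minimal sketch of that idea, assuming top-k magnitude sparsification and uniform symmetric int8 quantization (the function names and the 5% sparsity ratio are illustrative, not the paper's exact implementation):

```python
import numpy as np

def sparsify_topk(grad, k_ratio=0.05):
    """Keep only the largest-magnitude fraction of gradient entries."""
    flat = grad.ravel()
    k = max(1, int(k_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of top-k entries
    return idx, flat[idx]

def quantize_int8(values):
    """Uniform symmetric 8-bit quantization of the surviving values."""
    scale = float(np.abs(values).max()) / 127.0 if values.size else 1.0
    scale = scale or 1.0  # guard against an all-zero update
    q = np.round(values / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale, idx, shape):
    """Server-side reconstruction of the sparse, quantized update."""
    out = np.zeros(int(np.prod(shape)), dtype=np.float32)
    out[idx] = q.astype(np.float32) * scale
    return out.reshape(shape)

rng = np.random.default_rng(0)
grad = rng.normal(size=(256, 64)).astype(np.float32)  # one layer's gradient

idx, vals = sparsify_topk(grad, k_ratio=0.05)
q, scale = quantize_int8(vals)
recon = dequantize(q, scale, idx, grad.shape)

# Uplink payload: int8 values + int32 indices vs. a dense float32 gradient
dense_bytes = grad.size * 4
sparse_bytes = q.size * 1 + idx.size * 4
print(f"compression ratio: {dense_bytes / sparse_bytes:.1f}x")
```

At a 5% sparsity ratio this yields roughly an order-of-magnitude reduction in uplink traffic per round, at the cost of a lossy update; the reconstruction error is bounded by the quantization step and the discarded low-magnitude entries.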

Published in:

International Journal of Parallel, Emergent and Distributed Systems

Keywords:

TinyML, multi-modal sensor fusion, federated learning, IoT edge devices, resource-constrained