
RL-HCR: A Reinforcement Learning Based Adaptive Leader Selection Framework for Energy-Efficient WSNs

Trần Huy Long

As critical infrastructure for IoT applications, wireless sensor networks (WSNs) must minimize energy consumption to extend the operational lifetime of the entire network. Energy-efficient protocols such as LEACH and PEGASIS have been developed over many years to address this challenge. However, the static cluster-head selection mechanisms in these original protocols yield suboptimal energy efficiency and limited adaptability, motivating a move toward dynamic leader selection. PEGABC, a PEGASIS variant that employs a metaheuristic algorithm for leader selection, reduces some local inefficiencies but incurs substantial re-clustering overhead and optimizes only short-term gains. To overcome these limitations, we propose RL-HCR (Reinforcement Learning-Guided Hierarchical Chain Routing), a lightweight RL framework that dynamically determines when to recluster while preserving the chain-based energy efficiency of PEGABC. RL-HCR achieves low inference cost and avoids wasteful reclustering. Simulation results under standard scenarios show that RL-HCR extends network lifetime by about 20% compared to PEGABC while reducing total energy consumption. These findings suggest that RL-HCR is a viable and scalable approach for adaptive, energy-efficient routing in wireless sensor networks.
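The abstract does not describe RL-HCR's internals, so the sketch below is only a hedged illustration of the general idea it states: a reinforcement-learning agent deciding, round by round, whether re-running leader selection ("reclustering") is worth its control overhead. The TinyWSN simulator, the state and reward definitions, and every constant are assumptions invented for this example, not details taken from the paper.

```python
# Illustrative sketch only (not the authors' implementation): a tabular
# Q-learning agent that chooses, once per data-gathering round, between
# keeping the current chain leader and paying the overhead of reclustering.
import random
from collections import defaultdict

class TinyWSN:
    """Toy WSN energy model with made-up constants, for illustration."""
    def __init__(self, n_nodes=100, init_energy=0.5):
        self.energy = [init_energy] * n_nodes
        self.round = 0

    def step(self, recluster: bool) -> float:
        """Advance one round; return total energy spent this round."""
        overhead = 0.002 if recluster else 0.0          # control-message cost
        staleness = 0.0 if recluster else 0.001 * (self.round % 10)  # stale-leader waste
        spent = 0.0
        for i, e in enumerate(self.energy):
            # Nodes farther along the chain pay slightly more per round.
            cost = 0.001 * (1 + 0.5 * i / len(self.energy)) + overhead + staleness
            self.energy[i] = max(0.0, e - cost)
            spent += min(e, cost)
        self.round += 1
        return spent

    def state(self):
        """Discretize residual-energy imbalance and round phase into a small state."""
        alive = [e for e in self.energy if e > 0]
        if not alive:
            return (0, 0)
        spread = (max(alive) - min(alive)) / (sum(alive) / len(alive) + 1e-9)
        return (min(int(spread * 10), 9), self.round % 10)

ACTIONS = (0, 1)  # 0 = keep current leader, 1 = recluster

def train(episodes=200, alpha=0.1, gamma=0.95, eps=0.1):
    """Epsilon-greedy tabular Q-learning; reward = negative energy drained."""
    q = defaultdict(float)
    for _ in range(episodes):
        net = TinyWSN()
        s = net.state()
        for _ in range(500):
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            spent = net.step(recluster=bool(a))
            s2 = net.state()
            reward = -spent
            best_next = max(q[(s2, x)] for x in ACTIONS)
            q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
            s = s2
    return q

q_table = train()
```

At deployment time such a table lookup is a single comparison per round, which is consistent with the abstract's claim of low inference cost, though the actual state, reward, and training procedure used by RL-HCR may differ entirely.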

Published in:


Publication date:

DOI:


Publisher:

Location:


Keywords:

Wireless Sensor Networks; PEGASIS; Reinforcement Learning; Energy Efficiency.