| Size: | Format: |
|---|---|
| 2.53 MB | Adobe PDF |
Authors
Advisor(s)
Abstract(s)
Wireless Sensor Actor Networks (WSAN) are a key enabler of Internet of Things applications that demand timely and reliable data exchange under dynamic conditions. Among the domains that benefit from these networks, precision agriculture stands out, requiring adaptive strategies for effective monitoring and control. This study proposes a reinforcement learning approach that leverages the Operationalization construct of the Self-Orchestrated Web of Things (SOrWoT) framework to enhance the adaptability of Things’ internal operations. The problem is formulated as a Markov Decision Process, and a Deep Q-Learning agent is trained in a custom simulation environment to identify the most suitable Operationalizations for optimizing data accuracy and latency under changing conditions and communication failures. The results show that during normal operation the agent favored parallel sensor data averaging to minimize read error, but after an actor failure and the consequent increase in sensor-to-actor distances, it adapted by prioritizing latency through faster Operationalization choices. Sensitivity analyses further confirmed the agent’s ability to adjust its policies in response to partial failures and to shifts in the relative importance of latency versus accuracy. These findings demonstrate that reinforcement learning can autonomously optimize WSAN performance, contributing to resilient and self-adaptive systems.
Description
Keywords
Internet of Things; Wireless sensor and actor networks; Optimization; Deep reinforcement learning
Educational Context
Citation
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
