A Learning-Based MAC for Energy Efficient Wireless Sensor Networks


  1. A Learning-Based MAC for Energy Efficient Wireless Sensor Networks
S. Galzarano (1,2), Prof. A. Liotta (2), Prof. G. Fortino (1)
(1) University of Calabria, Italy
(2) Eindhoven University of Technology, Netherlands

  2. Outline
• WSN & Machine learning
• Learning-based MAC
• Simulation results
• Conclusion & Future work

  3. Challenges in WSN
• Wireless ad hoc medium: unreliable, asymmetric or unidirectional links; restricted bandwidth
• Topology changes and mobility: mobile sink and/or nodes, failing nodes, new nodes joining
• Harsh environment: no physical access to the WSN once deployed, node failures
• Resource limitations: battery, processing, memory

  4. Challenges in WSN: Communication Stack
Layers: Application level, Routing and neighborhood management, Medium Access Control, Physical Layer
Related tasks: Clustering, Data Processing, Data Collection, Security, Event and target detection

  5. Medium Access Control
• Protocol layer providing a multiple-access control mechanism on a shared communication medium.
• A MAC protocol for WSN should use a radio wake-up/sleep schedule in order to:
– save energy
– reduce collisions (and hence also energy consumption and latency)
– reduce idle listening periods
– maximize throughput
– minimize latency
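To make the wake-up/sleep scheduling idea concrete, here is a minimal sketch of a fixed duty-cycled frame loop, the kind of static schedule that the learning approach in the following slides replaces with per-slot decisions. The slot count, the set of active slots and the `radio`/`mac` objects are illustrative assumptions, not details from the slides.

```python
# Minimal sketch of a fixed duty-cycled MAC schedule (illustrative only).
# SLOTS_PER_FRAME and ACTIVE_SLOTS are assumed values, not taken from the slides.

SLOTS_PER_FRAME = 10   # slots in one frame
ACTIVE_SLOTS = {0, 1}  # slots in which the radio is kept ON (fixed schedule)

def run_frame(radio, mac):
    """Run one frame: wake the radio only in the active slots, sleep otherwise."""
    for slot in range(SLOTS_PER_FRAME):
        if slot in ACTIVE_SLOTS:
            radio.on()
            mac.send_and_receive(slot)   # transmit queued packets, listen for traffic
        else:
            radio.off()                  # sleep: save energy, but risk missing packets
```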

  6. Research Objective
• Using Machine Learning to improve MAC performance in terms of energy efficiency, throughput and latency.

  7. Machine Learning in WSN
Machine Learning paradigms (Reinforcement Learning, Decision Trees, Genetic Algorithms, Swarm Intelligence, ...) have been successfully adopted to address various challenges:
• they can adapt and operate efficiently in dynamic environments
• they discover important correlations in sensor data
• they support more intelligent decision-making and autonomous control
R. V. Kulkarni, A. Forster, and G. K. Venayagamoorthy, “Computational Intelligence in Wireless Sensor Networks: A Survey,” IEEE Communications Surveys & Tutorials, vol. 13, no. 1, pp. 68–96, First Quarter 2011.

  8. Reinforcement Learning
• Usually the first choice when solving complex distributed problems in WSNs.
• Trial and error: learning by interacting with the environment:
– learning agents
– a pool of possible actions
– a goodness value for each action
– a reward function
• Each iteration: select one action, execute it, observe the reward, correct the goodness of the executed action.
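The trial-and-error cycle above can be summarized in a few lines. A minimal sketch, where `goodness`, `actions`, `execute`, `reward_of` and the epsilon-greedy selection are generic placeholders and assumptions rather than the protocol's actual interface:

```python
import random

def rl_step(goodness, actions, execute, reward_of, alpha=0.1, epsilon=0.1):
    """One trial-and-error iteration: select, execute, observe, correct."""
    # Select one action (epsilon-greedy: mostly the best-known action, sometimes explore)
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: goodness[a])
    execute(action)                      # execute the action in the environment
    r = reward_of(action)                # observe the reward
    # Correct the goodness of the executed action
    goodness[action] += alpha * (r - goodness[action])
    return action, r
```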

  9. Q-Learning
• The achieved total reward (Q-value) of taking a specific action at a given state is computed using:
Q_new(s) = Q_old(s) + α · [r − Q_old(s)]

  10. Q-Learning
• The achieved total reward (Q-value) of taking a specific action at a given state is computed using:
Q_new(s) = Q_old(s) + α · [r − Q_old(s)]
where Q_new(s) is the new Q-value, Q_old(s) the old Q-value, α the learning constant and r the immediate reward.
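As a worked example under the update rule reconstructed above (the numbers are illustrative, not from the slides): with learning constant α = 0.1, old Q-value 0.5 and immediate reward 1.0, the new Q-value is 0.5 + 0.1 · (1.0 − 0.5) = 0.55. A minimal sketch:

```python
def q_update(q_old, reward, alpha):
    """Q-value update as reconstructed from the slide annotations:
    new Q-value = old Q-value + learning constant * (immediate reward - old Q-value)."""
    return q_old + alpha * (reward - q_old)

# Illustrative values only:
new_q = q_update(q_old=0.5, reward=1.0, alpha=0.1)
print(round(new_q, 4))  # -> 0.55
```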

  11. Proposed Q-Learning based MAC
Adapt Q-learning to a radio wake-up/sleep scheduling:
• Learning agent → each node in the network

  12. Proposed Q-Learning based MAC
Adapt Q-learning to a radio wake-up/sleep scheduling:
• Learning agent → each node in the network
• State → slot s_k in the frame f
[figure: a frame divided into slots s_k]

  13. Proposed Q-Learning based MAC
Adapt Q-learning to a radio wake-up/sleep scheduling:
• Learning agent → each node in the network
• State → slot s_k in the frame f
• Possible actions → radio ON/OFF, for each slot
[figure: a frame divided into slots s_k]

  14. Proposed Q-Learning based MAC
Adapt Q-learning to a radio wake-up/sleep scheduling:
• Learning agent → each node in the network
• State → slot s_k in the frame f
• Possible actions → radio ON/OFF, for each slot
• Goodness of actions → Q(s_k)
[figure: a frame divided into slots s_k, each with its Q-value Q(s_k)]
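A minimal sketch of how this mapping might be realized on a node: one Q-value per slot, with the radio switched ON or OFF at the start of each slot based on that value. The threshold rule, the exploration step and the class interface are illustrative assumptions, not details given in the slides.

```python
import random

class QSlotScheduler:
    """Per-node scheduler: one Q-value per slot s_k, action = radio ON/OFF per slot."""

    def __init__(self, slots_per_frame, alpha=0.1, epsilon=0.05, threshold=0.0):
        self.q = [0.0] * slots_per_frame   # Q(s_k) for each slot in the frame
        self.alpha = alpha                 # learning constant
        self.epsilon = epsilon             # exploration probability (assumption)
        self.threshold = threshold         # turn the radio ON if Q(s_k) exceeds this

    def action(self, k):
        """Choose radio ON/OFF for slot s_k (True = ON)."""
        if random.random() < self.epsilon:
            return random.choice([True, False])   # occasional exploration
        return self.q[k] > self.threshold

    def update(self, k, reward):
        """Correct Q(s_k) with the per-slot reward observed at the end of the slot."""
        self.q[k] += self.alpha * (reward - self.q[k])
```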

  15. Reward
Reward signals (per slot):
• Received packets (+)
• Successfully transmitted packets (+)
• Over-heard packets (−)
• Expected packets (−)
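A sketch of how these per-slot signals might be combined into a single reward value. The counter names and the unit weights are illustrative assumptions; the slide only fixes the sign of each contribution.

```python
def slot_reward(received, transmitted, overheard, expected,
                w_rx=1.0, w_tx=1.0, w_oh=1.0, w_exp=1.0):
    """Per-slot reward: received and successfully transmitted packets count
    positively, over-heard and (missed) expected packets count negatively.
    The unit weights are assumptions, not values from the slides."""
    return (w_rx * received + w_tx * transmitted
            - w_oh * overheard - w_exp * expected)

# Example: 2 packets received, 1 transmitted, 3 overheard, 0 expected -> reward 0.0
print(slot_reward(received=2, transmitted=1, overheard=3, expected=0))
```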

  16. Simulation results
• Castalia / OMNeT++
• Comparison with two other RL-based MAC protocols

  17. Simulation results
Liu, Z., Elhanany, I.: RL-MAC: A reinforcement learning based MAC protocol for wireless sensor networks. International Journal of Sensor Networks 1, 117–124 (2006)
• Linear, star and mesh topologies
• Packet inter-arrival time between 1 and 10 seconds
• Max throughput between 20 and 200 bytes/sec (200-byte payload)
• Performance metrics: throughput, latency, energy efficiency

  18. Simulation results

  19. Simulation results

  20. Simulation results

  21. Simulation results

  22. Simulation results
Mihaylov, M., Tuyls, K., Nowé, A.: Decentralized learning in wireless sensor networks. In: Taylor, M.E., Tuyls, K. (eds.) ALA 2009. LNCS (LNAI), vol. 5924, pp. 60–73. Springer, Heidelberg (2010)
• A nodes-to-sink communication pattern has been used
• 2 pkt/sec
• Multipath ring routing
• Performance metrics: latency, packet delivery, energy efficiency

  23. Simulation results
Mihaylov, M., Tuyls, K., Nowé, A.: Decentralized learning in wireless sensor networks. In: Taylor, M.E., Tuyls, K. (eds.) ALA 2009. LNCS (LNAI), vol. 5924, pp. 60–73. Springer, Heidelberg (2010)

  24. Conclusions
• A Q-Learning approach has been successfully employed for a self-adapting MAC layer in WSNs;
• Simulation results show that it outperforms other RL-based MAC protocols.

  25. Future work
• Ongoing work:
– implementation on real sensor platforms;
– extensive experiments in varying real deployments.
• Dynamically update both the frame length and the number of slots on the basis of the network traffic.

  26. A Learning-Based MAC for Energy Efficient Wireless Sensor Networks
S. Galzarano, Prof. A. Liotta, Prof. G. Fortino
University of Calabria, Italy & Eindhoven University of Technology, Netherlands
Thank you! Any questions?
