
An Efficient Neural Network Architecture for Rate Maximization in Energy Harvesting Downlink Channels - PowerPoint PPT Presentation



  1. Presentation for IEEE ISIT 2020: An Efficient Neural Network Architecture for Rate Maximization in Energy Harvesting Downlink Channels. Communications & Machine Learning Lab, Seoul National University.

  2. Index I. Introduction II. System model III. Monotonicity of the Optimal Policy IV. Numerical results V. Conclusion

  3. I. Introduction

  4. I. Introduction ❖ Energy harvesting communications: the transmitter (Tx) serves multiple receivers (Rx) with no extra power source, harvesting energy from sources such as thermal energy, solar power, wind energy, and piezoelectric devices.

  5. I. Introduction ❖ Energy harvesting communications. Pros: energy is recycled, no external energy source is needed, and it enables green communications. Constraints: battery capacity is limited, and sufficient extra energy is not always guaranteed.

  6. I. Introduction ❖ Reinforcement learning with a function approximator. However, traditional reinforcement learning algorithms easily fall into local maxima when the transmitter receives inconsistent data, and using deep neural networks without proper grounds slows down forward propagation and hinders efficient network configuration.

  7. I. Introduction ❖ Main contribution: to eliminate ambiguity in network building. Since the optimal policy is an increasing function of its input features, we construct a function approximator that embeds the monotonic property of the value function. Without this structure, the learning agent wastes the computational resources required for a large DNN; with it, the agent has information about the optimal value function in advance.

  8. II. System Model

  9. II. System Model ❖ Notations: f^h_j denotes the harvested energy (i.i.d.), I_j the channel gains (i.i.d.), c_j the remaining battery, s_{l,(j)} the rate of the l-th user in time slot j, q_(j) the total power used in time slot j, S(I, q) the Shannon channel capacity, W(t_j) the value function, t = (f^h, I, c) the state, and rho(f^h, I, c) the power allocation policy. We consider a power allocation problem for the discounted sum-rate over an infinite horizon, with the system running in discrete slotted time and channel state information available at the transmitter (CSIT). A small sketch of these objects follows below.
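To make the notation concrete, here is a minimal sketch of the state tuple and the policy interface, assuming Python; the class and function names are my own, and the placeholder policy is illustrative only, not the optimal one.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class State:
    """State t = (f_h, I, c): harvested energy, channel gains, remaining battery."""
    f_h: float         # energy harvested in the current slot (i.i.d. across slots)
    I: np.ndarray      # channel gains of the K users (i.i.d. across slots)
    c: float           # remaining battery level

def policy(state: State) -> float:
    """Power allocation policy rho(f_h, I, c): returns the total transmit power q.
    Placeholder rule: spend a fixed fraction of the stored energy."""
    return 0.5 * state.c
```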

  10. II. System Model ❖ Broadcast channel. [The slide's equations give the total transmit power at time slot i and the achievable rate of the k-th user with successive interference cancellation (SIC).] A sketch of the SIC rate computation follows below.
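As a rough illustration of the achievable rate with SIC, the hedged sketch below computes per-user rates for a Gaussian broadcast channel with superposition coding, assuming unit noise power and rates in bits per channel use; the function name and the normalization are my own choices and may differ from the paper's exact expression.

```python
import numpy as np

def sic_rates(gains: np.ndarray, powers: np.ndarray) -> np.ndarray:
    """Achievable per-user rates in a Gaussian broadcast channel with
    superposition coding and SIC (noise power normalized to 1).

    gains  : channel gains of the K users
    powers : power assigned to each user's message; sum(powers) is the
             total transmit power q in the slot
    """
    order = np.argsort(-gains)        # strongest user first
    g, p = gains[order], powers[order]
    rates = np.empty_like(g, dtype=float)
    interference = 0.0                # power of messages the user cannot cancel
    for k in range(len(g)):
        # user k cancels all weaker users' messages; stronger users' messages remain
        rates[k] = np.log2(1.0 + g[k] * p[k] / (1.0 + g[k] * interference))
        interference += p[k]
    out = np.empty_like(rates)
    out[order] = rates                # restore the original user ordering
    return out
```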

  11. II. System Model ❖ Weighted sum-rate maximization problem. [The slide's equation gives the minimum power required to achieve a target rate tuple.] J. Yang, O. Ozel, and S. Ulukus, "Broadcasting with an energy harvesting rechargeable transmitter," IEEE Transactions on Wireless Communications, vol. 11, no. 2, pp. 571-583, 2012.

  12. II. System Model ❖ Problem formulation: maximize the weighted sum-rate achievable in the broadcast channel subject to the battery constraints, which determine the feasible transmit power in each slot (see the sketch below).
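The battery constraint can be pictured with a small sketch of one plausible single-slot battery update, under the assumptions that the transmit power cannot exceed the stored energy and that harvested energy overflowing the battery is lost; the variable names and the causality convention are assumptions, not taken from the paper.

```python
def battery_update(c: float, q: float, f_h: float, c_max: float) -> float:
    """One-slot battery dynamics.

    c     : remaining battery at the start of the slot
    q     : total transmit power spent in the slot (feasible only if 0 <= q <= c)
    f_h   : energy harvested during the slot
    c_max : battery capacity
    """
    assert 0.0 <= q <= c, "transmit power is limited by the remaining battery"
    return min(c - q + f_h, c_max)    # overflow beyond the capacity is lost
```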

  13. III. Monotonicity of the Optimal Policy

  14. III. Monotonicity of the Optimal Policy. The policy rho maps the state (f^h, I, c), i.e., harvested energy, channel gains, and remaining battery, to the total power q allocated in one time slot. The optimal policy has an equal or greater output (power) as an input variable grows. Why do we prove that the optimal policy is an increasing function? Because the argument proceeds in three steps: proving the increasing property of the optimal policy, building a lightweight monotonic neural network, and training it with a policy gradient method.

  15. III. Monotonicity of the Optimal Policy ❖ Increasing property of the optimal power allocation policy. The optimal power allocation policy satisfies Bellman's optimality equation. To derive the increasing property, note that if I' dominates I, the optimal policy is increasing when the following conditions are satisfied (Topkis's theorem). Condition 1: the objective has increasing differences in (q, I); that is, the change of the function from increasing I is larger when q is larger. Condition 2: the upper and lower bounds of the action space are increasing functions of the state.
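For reference, the standard definition of increasing differences behind Topkis's theorem, written here with generic arguments f(q, I) because the slide's exact objective did not survive extraction:

```latex
% A function f(q, I) has increasing differences in (q, I) if,
% for all q' \ge q and I' \succeq I,
f(q', I') - f(q, I') \;\ge\; f(q', I) - f(q, I).
% Topkis's theorem: if f has increasing differences in (q, I) and the
% feasible action set varies monotonically with the state (Condition 2),
% then q^*(I) = \arg\max_{q} f(q, I) is nondecreasing in I.
```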

  16. III. Monotonicity of the Optimal Policy ❖ Increasing property of the optimal power allocation policy. Condition 1: the objective has increasing differences in (q, I). Condition 2: the upper and lower bounds of the action space are increasing functions of the state. The upper and lower bounds of the action (transmit power) space do not depend on the channel gains; they depend only on the remaining battery of the transmitter, so Condition 2 is easily verified.

  17. III. Monotonicity of the Optimal Policy. Since Conditions 1 and 2 are satisfied, the optimal power allocation policy is an increasing function of the channel gains under the dominance condition stated on the slide. Similar arguments yield the increasing property with respect to the remaining input variables.

  18. III. Monotonicity of the Optimal Policy ❖ Monotonic neural network for the optimal policy [J. Sill, 1998]. The monotonic neural network has only one hidden layer with max-min activation functions, its weights are constrained to be positive, and it is updated through the well-known policy gradient scheme. A sketch follows below.
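Below is a minimal sketch of such a monotonic (min-max) network in the spirit of [J. Sill, 1998], assuming PyTorch; the class name, group sizes, the positivity parameterization (exponentiated weights), and the usage at the end are my own illustrative choices rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MonotonicNetwork(nn.Module):
    """Min-max monotonic network in the spirit of [Sill, 1998].

    A single hidden layer of `groups` x `units` linear units with positive
    weights, followed by a max within each group and a min across groups.
    Because max and min preserve monotonicity and every weight is positive,
    the output is nondecreasing in each input feature.
    """
    def __init__(self, in_dim: int, groups: int = 4, units: int = 4):
        super().__init__()
        self.log_w = nn.Parameter(0.1 * torch.randn(groups, units, in_dim))
        self.bias = nn.Parameter(torch.zeros(groups, units))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = torch.exp(self.log_w)                        # enforce positive weights
        z = torch.einsum('bi,gui->bgu', x, w) + self.bias
        return z.max(dim=2).values.min(dim=1).values     # max within groups, min across

# Hypothetical usage: map the state (f_h, I, c) to a transmit power and update
# the network with a policy gradient step on the observed sum-rate reward.
net = MonotonicNetwork(in_dim=3)
state = torch.tensor([[0.8, 1.2, 2.0]])   # (f_h, g, c) for a single-user example
power = net(state)
```

Constraining the weights to be positive is what encodes the increasing property established in Section III, so the approximator only searches over monotone policies and can remain much smaller than an unconstrained DNN.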

  19. IV. Numerical Results
