Adaptive proportional fair parameterization based LTE scheduling using continuous actor-critic reinforcement learning
Authors
Comşa, Ioan-Sorin
Zhang, Sijing
Aydin, Mehmet Emin
Chen, Jianping
Kuonen, Pierre
Wagen, Jean–Frédéric
Issue Date
2015-02-12
Abstract
Maintaining a desired trade-off between system throughput maximization and user fairness satisfaction constitutes a problem that is still far from being solved. In LTE systems, different trade-off levels can be obtained by using a proper parameterization of the Generalized Proportional Fair (GPF) scheduling rule. Our approach is able to find the best parameterization policy that maximizes the system throughput under different fairness constraints imposed by the scheduler state. The proposed method adapts and refines the policy at each Transmission Time Interval (TTI) by using a Multi-Layer Perceptron Neural Network (MLPNN) as a non-linear function approximator between the continuous scheduler state and the optimal GPF parameter(s). The MLPNN is trained using Continuous Actor-Critic Learning Automata Reinforcement Learning (CACLA RL). The double GPF parameterization optimization problem is addressed by using CACLA RL with two continuous actions (CACLA-2). Five reinforcement learning algorithms based on simple parameterization techniques are compared against the proposed approach. Simulation results indicate that CACLA-2 performs much better than any of the other candidates that adjust only one scheduling parameter, such as CACLA-1. CACLA-2 outperforms CACLA-1 by reducing the percentage of TTIs in which the system is considered unfair. By attenuating the fluctuations of the obtained policy, CACLA-2 achieves an enhanced throughput gain when severe changes occur in the scheduling environment, while at the same time maintaining the fairness optimality condition.
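To make the abstract's mechanism concrete, the sketch below illustrates the two ingredients it describes: a GPF scheduling metric whose exponents act as the tunable parameters, and a CACLA-style actor-critic update in which an MLPNN actor outputs those two exponents (the CACLA-2 case). This is a minimal illustration assuming the common GPF form r^alpha / R^beta; the network sizes, learning rates, reward shaping, and state features are assumptions for demonstration, not the authors' exact configuration.

```python
# Minimal sketch of a CACLA-style update driving GPF parameterization.
# All hyperparameters and the state/reward design are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def gpf_metric(inst_rate, avg_rate, alpha, beta):
    """Generalized Proportional Fair metric in its common form r^alpha / R^beta."""
    return inst_rate ** alpha / np.maximum(avg_rate, 1e-9) ** beta

class MLP:
    """One-hidden-layer perceptron used as a continuous function approximator."""
    def __init__(self, n_in, n_hidden, n_out, lr=1e-3):
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, x):
        self.x = x
        self.h = np.tanh(x @ self.W1 + self.b1)
        return self.h @ self.W2 + self.b2

    def backward(self, grad_out):
        # Gradient step on 0.5 * ||output - target||^2, with grad_out = output - target.
        dW2 = np.outer(self.h, grad_out)
        dh = grad_out @ self.W2.T * (1 - self.h ** 2)
        dW1 = np.outer(self.x, dh)
        self.W2 -= self.lr * dW2
        self.b2 -= self.lr * grad_out
        self.W1 -= self.lr * dW1
        self.b1 -= self.lr * dh

GAMMA, SIGMA = 0.95, 0.1
actor = MLP(n_in=4, n_hidden=16, n_out=2)   # outputs the two GPF exponents (CACLA-2)
critic = MLP(n_in=4, n_hidden=16, n_out=1)  # state-value estimate V(s)

def cacla_step(state, next_state, reward):
    """One CACLA update per TTI: the critic learns V(s); the actor moves toward
    the explored action only when the TD error is positive."""
    mean_action = actor.forward(state)
    explored = mean_action + rng.normal(0, SIGMA, size=mean_action.shape)
    v_s = critic.forward(state)[0]
    v_next = critic.forward(next_state)[0]
    td_error = reward + GAMMA * v_next - v_s
    # Critic: pull V(s) toward the TD target.
    critic.forward(state)
    critic.backward(np.array([v_s - (reward + GAMMA * v_next)]))
    # Actor (CACLA rule): update only if the exploratory action improved the value.
    if td_error > 0:
        actor.forward(state)
        actor.backward(mean_action - explored)
    return explored  # exponents fed into gpf_metric() for the next TTI
```

In this sketch the single-parameter variant (CACLA-1) would correspond to an actor with one output, while CACLA-2 adjusts both exponents jointly, which is the distinction the abstract draws between the two schemes.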
Citation
Comşa I, Zhang S, Aydin M, Chen J, Kuonen P, Wagen J (2014) 'Adaptive proportional fair parameterization based LTE scheduling using continuous actor-critic reinforcement learning', 2014 IEEE Global Communications Conference, Austin, Institute of Electrical and Electronics Engineers Inc.
Additional Links
https://ieeexplore.ieee.org/abstract/document/7037498
Type
Conference papers, meetings and proceedings
Language
en
ISBN
9781479935116
DOI
10.1109/GLOCOM.2014.7037498