Journal of Information Science and Engineering, Vol. 37, No. 3, pp. 517-533


Residual Network for Deep Reinforcement Learning with Attention Mechanism


HANHUA ZHU1 AND TOMOYUKI KANEKO2
1Graduate School of Interdisciplinary Information Studies
2Interfaculty in Information Studies
University of Tokyo
Tokyo, 113-0033 Japan
E-mail: zhu-hanhua@g.ecc.u-tokyo.ac.jp; kaneko@acm.org


Making decisions based on a good representation of the environment is advantageous in deep reinforcement learning (DRL). In this work, we propose a new network structure for DRL, Deep Residual Attention Reinforcement Learning (DRARL), which incorporates an attention-based structure into the network architecture of the Importance Weighted Actor-Learner Architecture (IMPALA). DRARL helps the model learn a better representation by directing its focus toward the crucial features. The effectiveness of DRARL was empirically evaluated on a subset of Atari games with three popular RL algorithms: IMPALA, PPO, and A2C. The experiments show that DRARL works robustly with all three algorithms and improves sample efficiency in seven out of ten games. Furthermore, visualization of the important features empirically shows that DRARL helps the model concentrate on the crucial features, thereby improving performance and sample efficiency.


Keywords: deep reinforcement learning, representation learning, attention mechanism, Atari games, visualization of reinforcement learning
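The abstract describes incorporating an attention-based structure into IMPALA's residual network. As a rough illustration of the general idea (not the paper's exact DRARL architecture, whose details appear in the full text), the following NumPy sketch shows a residual block whose branch output is re-weighted by a simple channel-attention step; all function names and the pooling/softmax formulation here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_attention(feat):
    """Re-weight each channel by a softmax over its global-average activation.

    feat: array of shape (channels, height, width).
    This is one simple form of feature attention; the paper's exact
    mechanism may differ.
    """
    pooled = feat.mean(axis=(1, 2))            # (channels,) summary per channel
    weights = softmax(pooled)                  # attention weights, sum to 1
    return feat * weights[:, None, None]       # scale each channel

def residual_attention_block(x, conv):
    """Residual block with an attention step on the convolutional branch.

    `conv` stands in for the block's convolutional transform (here any
    shape-preserving callable); the skip connection adds the input back.
    """
    branch = np.maximum(conv(x), 0.0)          # conv + ReLU
    branch = channel_attention(branch)         # emphasize crucial features
    return x + branch                          # residual (skip) connection
```

With an identity `conv` and a constant input, every channel receives equal attention weight, so the block reduces to `x + x / channels`; with a real learned convolution, channels carrying stronger activations are up-weighted, which is the intuition behind focusing the model on crucial features.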
