Making decisions based on a good representation of the environment is advantageous in deep reinforcement learning (DRL). In this work, we propose a new network structure for DRL, Deep Residual Attention Reinforcement Learning (DRARL), which incorporates an attention-based structure into the network of the Importance Weighted Actor-Learner Architecture (IMPALA). DRARL enables the model to learn a better representation by focusing on the crucial features of the input. We empirically evaluate DRARL on a subset of Atari games with three popular RL algorithms: IMPALA, PPO, and A2C. The experiments show that DRARL works robustly with all three algorithms and improves sample efficiency in seven out of ten games. Furthermore, visualizations of the important features show that DRARL helps the model concentrate on the crucial features, which accounts for the improved performance and sample efficiency.
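
To make the idea of a residual, attention-based structure concrete, the following is a minimal sketch of an attention-gated residual block that could sit inside an IMPALA-style convolutional encoder. It is written in PyTorch; the layer sizes, the 1x1-convolution-plus-sigmoid attention gate, and the (1 + mask) residual gating are illustrative assumptions, not the exact DRARL architecture proposed in the paper.

```python
# Illustrative sketch only: an attention-gated residual block for an
# IMPALA-style convolutional encoder. Layer sizes and the (1 + mask) gating
# are assumptions for illustration, not the paper's exact DRARL design.
import torch
import torch.nn as nn


class ResidualAttentionBlock(nn.Module):
    """Re-weights feature maps with a learned spatial attention mask."""

    def __init__(self, channels: int):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        # A 1x1 convolution followed by a sigmoid yields a per-location
        # attention mask with values in [0, 1].
        self.attention = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.trunk(x)
        mask = self.attention(features)
        # Residual attention: (1 + mask) * features preserves the original
        # signal while amplifying regions the mask marks as important.
        return x + (1.0 + mask) * features


if __name__ == "__main__":
    block = ResidualAttentionBlock(channels=32)
    frames = torch.randn(8, 32, 20, 20)  # batch of intermediate feature maps
    print(block(frames).shape)  # torch.Size([8, 32, 20, 20])
```

The residual form keeps the unattended features flowing through the block, so the attention mask can only amplify informative regions rather than suppress the signal entirely; this is one common way such a structure can be dropped into an existing policy network without retraining it from scratch.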