
Journal of Information Science and Engineering, Vol. 39, No. 2, pp. 691-709

You Only Learn One Representation: Unified Network for Multiple Tasks

1Institute of Information Science
Academia Sinica
Taipei, 115 Taiwan

2Elan Microelectronics Corporation
Hsinchu, 308 Taiwan

3Frontier Institute of Research for Science and Technology
National Taipei University of Technology
Taipei, 106 Taiwan
E-mail: kinyiu@iis.sinica.edu.tw; ihyeh@emc.com.tw; liao@iis.sinica.edu.tw

People “understand” the world via vision, hearing, touch, and past experience. Human experience can be acquired through normal learning (we call it explicit knowledge) or subconsciously (we call it implicit knowledge). Experiences acquired in either way are encoded and stored in the brain. Using this abundant experience as a huge database, human beings can effectively process data, even data they have never seen before. In this paper, we propose a unified network that encodes implicit knowledge and explicit knowledge together, just as the human brain learns knowledge through normal learning as well as subconscious learning. The unified network generates a unified representation that serves various tasks simultaneously. We can perform kernel space alignment, prediction refinement, and multi-task learning in a convolutional neural network. The results demonstrate that introducing implicit knowledge into the neural network benefits the performance of all tasks. We further analyze the implicit representation learned by the proposed unified network, and it shows great capability in capturing the physical meaning of different tasks. The source code of this work is at: https://github.com/WongKinYiu/yolor.
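As a rough illustration of the idea described above, implicit knowledge can be modeled as small learnable vectors that are combined with the explicit features computed by the network, for example by addition (prediction refinement) or by channel-wise multiplication (kernel space alignment). The sketch below is a minimal, hypothetical illustration of that combination; the function and variable names are our own and are not taken from the paper's released code.

```python
import numpy as np

rng = np.random.default_rng(0)

def combine_add(explicit, implicit_add):
    # Refine predictions by shifting each feature channel by a learned offset.
    return explicit + implicit_add

def combine_mul(explicit, implicit_mul):
    # Align kernel spaces by rescaling each feature channel by a learned gain.
    return explicit * implicit_mul

channels = 8
explicit = rng.standard_normal((2, channels))  # explicit features (batch, C)
implicit_add = np.zeros(channels)              # identity initialization: no shift
implicit_mul = np.ones(channels)               # identity initialization: no scale

combined = combine_mul(combine_add(explicit, implicit_add), implicit_mul)
# With identity initialization the combination leaves the features unchanged;
# training would then adapt implicit_add / implicit_mul per task.
```

Because the implicit vectors are initialized to the identity, the unified network starts out behaving exactly like the underlying explicit network, and the implicit representation is learned jointly during training.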

Keywords: unified network, representation learning, multiple task learning, image classification, object detection, multiple object tracking
