JISE



PEI GE, GAO HAI-CHANG, ZHOU XIN, CHENG NUO

Recent works have shown that convolutional neural networks (CNNs) are currently the most effective machine learning method for solving various computer vision problems. A key advantage of CNNs is that they extract features automatically; users do not need to know which features should be extracted for a given task. It is typically believed that the deeper a CNN is, the higher-level the features it can extract and the more powerful the resulting representations will be. Therefore, present-day CNNs are becoming substantially deeper. Previous works have shown, however, that not all features extracted by deep CNNs are useful. In this paper, we consider a simple question: how can the useless features be removed? We propose a simple pooling method, called feature pooling, to compress the features extracted in deep CNNs. In contrast to traditional CNNs, which feed the feature maps from the previous layer directly into the next layer, feature pooling compresses the features along the channel dimension, reconstructs the feature maps, and then sends them to the next layer. We evaluate feature pooling on two tasks: image classification and image denoising. Each task has a distinct network architecture and uses several benchmarks. Promising results are achieved in both tasks, especially image denoising, in which we obtain state-of-the-art results. These findings verify that feature pooling is a straightforward method for performing further feature compression in CNNs. We have also observed that feature pooling has several competitive advantages: it reduces the number of parameters, increases the compactness of the networks, and strengthens the representation power, with both high effectiveness and wide applicability.
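The abstract describes feature pooling only at a high level (compressing feature maps along the channel dimension before passing them on). A minimal sketch of one way such channel-wise compression could work is shown below; the function name, grouping scheme, and use of max-pooling are illustrative assumptions, not the authors' exact method:

```python
import numpy as np

def channel_pool(feature_maps, group_size=2):
    """Compress feature maps along the channel axis by max-pooling
    every `group_size` consecutive channels into one output channel.

    feature_maps: array of shape (C, H, W); C must be divisible by group_size.
    Returns an array of shape (C // group_size, H, W).
    """
    c, h, w = feature_maps.shape
    assert c % group_size == 0, "channel count must divide evenly into groups"
    # Group consecutive channels, then take the elementwise max within each group.
    grouped = feature_maps.reshape(c // group_size, group_size, h, w)
    return grouped.max(axis=1)

# Example: 8 channels of 4x4 feature maps compressed to 4 channels.
x = np.random.rand(8, 4, 4)
y = channel_pool(x, group_size=2)
print(y.shape)  # (4, 4, 4)
```

The compressed maps can then be fed to the next convolutional layer in place of the full channel stack, which is one way the parameter reduction claimed in the abstract could arise.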



Keywords: convolutional neural network, feature compression, pooling, image classification, image denoising