
Journal of Information Science and Engineering, Vol. 36, No. 5, pp. 1069-1078


Convolutional and Fully Connected Layer in DFN


MIAN MIAN LAU AND KING HANN LIM
Department of Electrical and Computer Engineering
Curtin University Malaysia
CDT 250, 98009 Miri, Sarawak, Malaysia
E-mail: mian.lau@postgrad.curtin.edu.my


The deep feedforward network (DFN) is the general structure underlying many well-known deep neural networks (DNNs) for image classification. Recent research emphasizes deeper and wider network architectures to achieve higher accuracy and lower misclassification rates. This paper investigates the effect of stacking three basic neural layer operations, i.e., the convolutional layer, the pooling layer, and the fully connected layer. As a result, a new convolutional deep feedforward network (C-DFN) framework is proposed. The C-DFN performs significantly better than the DFN, the deep belief network (DBN), and the convolutional deep belief network (C-DBN) on the MNIST, INRIA pedestrian, and Daimler pedestrian datasets. The convolutional layer acts as a trainable feature extractor that significantly improves network performance while reducing the number of trainable parameters by 14% compared with the DFN. With a trainable activation function such as PReLU, the C-DFN achieves an average misclassification rate of 9.22% across the three benchmark datasets.
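The abstract describes the C-DFN as a stack of convolutional, pooling, and fully connected layers with a trainable PReLU activation. As a rough illustration only, the sketch below (in PyTorch) shows one possible layer stack of this kind; the channel counts, kernel size, and hidden width are assumptions chosen for a 28x28 grayscale input such as MNIST, not the authors' exact configuration.

import torch
import torch.nn as nn

# Illustrative sketch: the exact C-DFN layer sizes are not given in the
# abstract, so all dimensions below are assumed for demonstration.
class CDFNSketch(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Convolutional layer as a trainable feature extractor, followed
        # by pooling, as described in the abstract.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5),   # 28x28 -> 24x24
            nn.PReLU(),                        # trainable activation
            nn.MaxPool2d(2),                   # 24x24 -> 12x12
        )
        # Fully connected layers form the deep feedforward part.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 12 * 12, 256),
            nn.PReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example forward pass on a dummy MNIST-sized batch.
if __name__ == "__main__":
    model = CDFNSketch()
    dummy = torch.randn(8, 1, 28, 28)
    print(model(dummy).shape)  # torch.Size([8, 10])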


Keywords: convolution network, deep feedforward network, fully connected network, stacking effect, convolutional deep belief network
