

Journal of Information Science and Engineering, Vol. 8, No. 2, pp. 283-304


A Fast Learning Multilayer Neural Model and Its Array Processor Implementation


Cheng-Chin Chiang and Hsin-Chia Fu
Department of Computer and Information Engineering 
National Chiao-Tung University 
Hsinchu, Taiwan 300, R.O.C.


    This paper advocates a novel method for achieving faster learning of Boolean functions in a multilayer neural network. The key new ingredient is the concept of the proposed floating positive/negative thresholds used to determine the output states of the output neurons. In a traditional multilayer perceptron (MLP), an output state is determined to be 1 or 0 according to whether its activation value exceeds a fixed target threshold. In the proposed approach, the state of an output neuron is determined by comparing the differences between its output activation and its two floating positive/negative thresholds. When the output activation is closer to the positive threshold (or negative threshold), the output state is interpreted as 1 (or 0). During the learning phase, the output activation and the two thresholds are adjusted as the weights are updated. Simulation results show that the number of learning iterations required for successful training by our model is significantly smaller than that required by a traditional multilayer perceptron on many different problems. In addition, we have also mapped this multilayer neural model onto a ring systolic array, which maximizes the strengths of VLSI in terms of intensive and pipelined computing and yet circumvents its limitations on communication.
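    The output-state decision rule described above can be expressed as a minimal Python sketch; the function name output_state and the example threshold values below are illustrative assumptions, not taken from the paper.

    def output_state(activation, pos_threshold, neg_threshold):
        # Interpret an output activation as a Boolean state by comparing
        # its distances to the floating positive and negative thresholds:
        # closer to the positive threshold -> 1, closer to the negative -> 0.
        if abs(activation - pos_threshold) < abs(activation - neg_threshold):
            return 1
        return 0

    # Example: with floating thresholds at 0.8 and 0.2, an activation of
    # 0.65 lies closer to the positive threshold, so the state is 1.
    print(output_state(0.65, pos_threshold=0.8, neg_threshold=0.2))  # prints 1

    How the two thresholds float, i.e., how they are adjusted along with the weights during learning, is part of the paper's training procedure and is not reproduced in this sketch.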


Keywords: neural network, multilayer perceptron, backpropagation learning algorithm, ring array processor, transputer
