

Journal of Information Science and Engineering, Vol. 9 No. 2, pp. 299-317


REchargeable Parallel Learning Architecture: REPLA


Wen-Kuang Chou and David Y. Y. Yun*
Department of Information Science 
Providence University 
Shalu 43309, Taiwan R.O.C. 
*Department of Electrical Engineering 
University of Hawaii at Manoa 
Honolulu, Hawaii 96822 U.S.A.


    It has been pointed out that current neural networks cannot learn unless extra computing resources are devoted to them [7]. In this paper, a REchargeable Parallel Learning Architecture (REPLA) is proposed to help hidden Markov models learn faster. It can be regarded as a learning heuristics controller (LHC) in the model L3 [7]. By introducing REPLA and parallel learning algorithms, the time complexity of learning is reduced from O(max(M,N)NT) to O(max(M,N)T) per iteration. For an N-parallelism architecture, REPLA achieves 100 percent processor utilization. The significance of REPLA lies in its faster learning and its reusability.
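The complexity figures above arise from the standard HMM training passes: with N states and T observations, each serial pass over the trellis performs an N-way sum per state per time step, and updating the N states in parallel removes one factor of N. As a rough illustration (this is a textbook forward-algorithm sketch, not the paper's REPLA implementation; the toy two-state model and all numbers below are invented for the example):

```python
def forward(pi, A, B, obs):
    """alpha[t][j] = P(o_1..o_t, state_t = j) for an N-state HMM.

    pi: initial state probabilities, A: state transition matrix,
    B: emission probabilities, obs: observation symbol indices.
    """
    N = len(pi)
    alpha = [[pi[j] * B[j][obs[0]] for j in range(N)]]
    for o in obs[1:]:
        prev = alpha[-1]
        # Each state j is updated independently of the others, so the N
        # inner sums can run on N processors: O(N^2 T) serial work becomes
        # O(N T) parallel time, mirroring the reduction from O(max(M,N)NT)
        # to O(max(M,N)T) per iteration claimed in the abstract.
        alpha.append([sum(prev[i] * A[i][j] for i in range(N)) * B[j][o]
                      for j in range(N)])
    return alpha

# Toy 2-state, 2-symbol model (illustrative values only).
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
alpha = forward(pi, A, B, [0, 1, 0])
print(round(sum(alpha[-1]), 5))  # likelihood of the observation sequence
```

Summing the last column of alpha gives the sequence likelihood used by Baum-Welch re-estimation, the iterative step whose per-iteration cost the abstract analyzes.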


Keywords: hierarchical neural models (L3), massively parallel architecture for recalling (MPAR), learning heuristics controller (LHC), hidden Markov models (HMM), computer-aided learning, parallel algorithms and architectures

  Full text: JISE_199302_07.pdf