It has been pointed out that current neural networks cannot learn without additional computing resources [7]. In this paper, a REchargeable Parallel Learning Architecture (REPLA) is proposed to speed up the learning of hidden Markov models. REPLA can be regarded as a learning heuristics controller (LHC) in the model L3 [7]. By introducing REPLA and parallel learning algorithms, the per-iteration time complexity of learning is reduced from O(max(M,N)NT) to O(max(M,N)T). For an N-parallelism architecture, REPLA achieves 100 percent processor utilization. The significance of REPLA lies in its faster learning and its reusability.
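To make the complexity claim concrete, the sketch below (an illustration under standard Baum-Welch assumptions, not the paper's REPLA implementation) shows the forward pass of HMM training for an N-state, M-symbol model over T observations. The transition matrix A, emission matrix B, initial distribution pi, and the function name are illustrative choices of this sketch; the point is that the per-state updates at each time step are mutually independent, which is what an N-processor architecture can exploit.

```python
import numpy as np

# Hypothetical illustration (not the paper's REPLA code): one forward
# pass of Baum-Welch training for an N-state HMM.
def forward(A, B, pi, obs):
    """A: (N, N) transitions, B: (N, M) emissions,
    pi: (N,) initial distribution, obs: length-T observation indices."""
    N = A.shape[0]
    T = len(obs)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        # Each alpha[t, j] depends only on alpha[t-1, :], not on the
        # other states at time t, so the N updates below can run on N
        # processors at once: a time step then costs O(N) rather than
        # O(N^2), giving the O(max(M,N)NT) -> O(max(M,N)T) reduction.
        for j in range(N):
            alpha[t, j] = (alpha[t - 1] @ A[:, j]) * B[j, obs[t]]
    return alpha

# Toy 2-state, 2-symbol model.
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.5, 0.5])
alpha = forward(A, B, pi, [0, 1, 0])
print(alpha[-1].sum())  # likelihood of the observation sequence
```

Because every processor handles exactly one state's update at every time step, none of the N processors is ever idle during the pass, which is the sense in which an N-parallelism architecture reaches full processor utilization.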