



Journal of Information Science and Engineering, Vol. 13, No. 2, pp. 235-266


'Conglomerate' Neural Network Architectures: The Way Ahead for Simulating Early Language Development


Syed Sibte Raza Abidi and Khurshid Ahmad*
School of Computer Sciences 
Universiti Sains Malaysia 
11800 Penang, MALAYSIA 
*Department of Computing Sciences 
University of Surrey 
Guildford, ENGLAND


    Neural networks provide a basis for studying child language development: such networks emphasise learning, and their design, built on a network of idealised 'neurons', simplifies questions related to the representation of linguistic and world knowledge. We report a simulation of some key aspects of child language development during infancy. We argue that in order to simulate uniquely human language learning, it is important to use a 'conglomerate' neural network architecture, one that integrates the collective strengths of a variety of neural networks in a principled fashion, so as to take into account the diverse nature of the inputs to and outputs from a child as well as the variety of inherent learning mechanisms involved in learning language. We present such a 'conglomerate' neural network architecture, ACCLAIM, which integrates both supervised and unsupervised learning algorithms to simulate the learning of concepts, words, conceptual and semantic relations, and simple word-order rules, thereby mimicking the production of child-like one-word and two-word language. The simulations carried out are 'language informed' in that realistic child language data have been used to train the neural networks.
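
    To make the notion of a 'conglomerate' architecture concrete, the sketch below shows one minimal way such an integration can be wired up: an unsupervised Kohonen-style self-organising map clusters 'perceptual' input vectors (concept formation), and its winning-unit coordinates are then fed to a small supervised network that associates clusters with word labels. This is an illustrative assumption for exposition only, not the ACCLAIM system described in the paper; the class names (TinySOM, TinyPerceptron), dimensions, and toy data are all hypothetical.

    # Minimal sketch of a conglomerate of unsupervised and supervised networks
    # (assumed, simplified setup -- not the authors' ACCLAIM implementation).
    import numpy as np

    rng = np.random.default_rng(0)

    class TinySOM:
        """Unsupervised 2-D self-organising map for clustering 'perceptual' vectors."""
        def __init__(self, rows, cols, dim, lr=0.5, sigma=1.0):
            self.w = rng.random((rows, cols, dim))
            self.lr, self.sigma = lr, sigma
            self.grid = np.stack(
                np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

        def winner(self, x):
            d = np.linalg.norm(self.w - x, axis=-1)
            return np.unravel_index(np.argmin(d), d.shape)

        def train(self, data, epochs=50):
            for _ in range(epochs):
                for x in data:
                    r, c = self.winner(x)
                    dist2 = ((self.grid - np.array([r, c])) ** 2).sum(axis=-1)
                    h = np.exp(-dist2 / (2 * self.sigma ** 2))[..., None]
                    self.w += self.lr * h * (x - self.w)   # pull neighbourhood towards input

    class TinyPerceptron:
        """Supervised one-layer network mapping map coordinates to word labels."""
        def __init__(self, n_in, n_out, lr=0.1):
            self.W = rng.normal(scale=0.1, size=(n_in, n_out))
            self.lr = lr

        def _softmax(self, z):
            e = np.exp(z - z.max(axis=-1, keepdims=True))
            return e / e.sum(axis=-1, keepdims=True)

        def train(self, X, Y, epochs=200):
            for _ in range(epochs):
                probs = self._softmax(X @ self.W)
                self.W -= self.lr * X.T @ (probs - Y) / len(X)   # cross-entropy gradient step

        def predict(self, x):
            return int(np.argmax(x @ self.W))

    # Toy 'perceptual' inputs for two concepts and their word labels (assumed data):
    # label 0 stands in for "ball", label 1 for "cup".
    data = np.vstack([rng.normal(0.2, 0.05, (20, 4)), rng.normal(0.8, 0.05, (20, 4))])
    labels = np.array([0] * 20 + [1] * 20)

    som = TinySOM(rows=4, cols=4, dim=4)
    som.train(data)                                        # unsupervised concept formation

    coords = np.array([som.winner(x) for x in data], dtype=float) / 3.0
    clf = TinyPerceptron(n_in=2, n_out=2)
    clf.train(coords, np.eye(2)[labels])                   # supervised word association

    print("predicted word index:",
          clf.predict(np.array(som.winner(data[0]), dtype=float) / 3.0))

    In this toy pipeline the unsupervised stage never sees word labels; only the compressed map coordinates reach the supervised stage, which loosely mirrors the division of labour between concept formation and word learning that the abstract describes.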


Keywords: conglomerate architecture, supervised & unsupervised learning, cognitive modelling, language development
