Neural networks provide a basis for studying child language development: they emphasise learning, and their design, a network of idealised 'neurons', simplifies questions about how linguistic and world knowledge are represented. We report a simulation of some key aspects of child language development during infancy. We argue that in order to simulate uniquely human language learning, it is important to use a 'conglomerate' neural network architecture that integrates, in a principled fashion, the collective strengths of a variety of neural networks, so as to take into account both the diverse nature of the inputs to and outputs from a child and the variety of learning mechanisms involved whilst learning language. We present such a 'conglomerate' architecture, ACCLAIM, which integrates supervised and unsupervised learning algorithms to simulate the learning of concepts, words, conceptual and semantic relations, and simple word-order rules, thus mimicking the production of child-like one-word and two-word language. The simulations are 'language informed' in that realistic child language data were used to train the networks.
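The abstract does not specify ACCLAIM's internals, so the following Python sketch is only a minimal illustration of what coupling an unsupervised and a supervised learner can look like: a winner-take-all competitive layer (a simplified stand-in for a self-organising map) clusters toy 'percepts' into concept units without labels, and a delta-rule perceptron then learns to name those concepts from labelled examples. The class names, feature sets, and data here are invented for illustration and are not taken from the paper.

```python
import numpy as np


class CompetitiveLayer:
    """Unsupervised component: winner-take-all competitive learning
    (a simplified stand-in for a self-organising map). Clusters
    perceptual feature vectors into 'concept' units without labels."""

    def __init__(self, n_units, lr=0.5):
        self.n_units, self.lr = n_units, lr
        self.weights = None

    def best_unit(self, x):
        return int(np.argmin(np.linalg.norm(self.weights - x, axis=1)))

    def train(self, data, epochs=50):
        # Initialise units on spread-out data points to avoid dead units.
        step = max(1, len(data) // self.n_units)
        self.weights = data[::step][: self.n_units].astype(float).copy()
        lr = self.lr
        for _ in range(epochs):
            for x in data:
                w = self.best_unit(x)
                self.weights[w] += lr * (x - self.weights[w])  # move winner toward input
            lr *= 0.95  # decay the learning rate


class Perceptron:
    """Supervised component: a one-layer perceptron trained with the
    delta rule to map concept-unit codes onto word labels."""

    def __init__(self, n_in, n_out, lr=0.1):
        self.W = np.zeros((n_out, n_in))
        self.lr = lr

    def train(self, xs, ys, epochs=100):
        for _ in range(epochs):
            for x, y in zip(xs, ys):
                out = (self.W @ x > 0.5).astype(float)  # thresholded output
                self.W += self.lr * np.outer(y - out, x)  # delta-rule update


# Hypothetical toy 'percepts': 4 features (round, small, furry, animate)
# for two concepts, roughly 'ball' and 'dog'.
percepts = np.array([[1.0, 1.0, 0.0, 0.0], [0.9, 1.0, 0.0, 0.1],
                     [0.0, 0.0, 1.0, 1.0], [0.1, 0.0, 1.0, 0.9]])
words = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])  # one-hot labels

concepts = CompetitiveLayer(n_units=2)
concepts.train(percepts)

# Each percept's winning concept unit, as a one-hot code, feeds the perceptron.
codes = np.eye(2)[[concepts.best_unit(x) for x in percepts]]
naming = Perceptron(n_in=2, n_out=2)
naming.train(codes, words)

# A novel percept activates a concept, which in turn retrieves a word.
probe = np.array([0.95, 0.9, 0.05, 0.0])
print(naming.W @ np.eye(2)[concepts.best_unit(probe)])  # highest score ~ 'ball'
```

The design point the sketch is meant to convey is the division of labour: the unsupervised layer organises raw perceptual input into concepts on its own, while the supervised layer needs explicit word-concept pairings, mirroring the abstract's claim that a single learning algorithm does not suit all of the inputs and mechanisms involved.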