An endeavour is made in this paper to describe a constructive modular neural network called the Growing Multi-Experts Network (GMN), which can approximate an unknown nonlinear function from observed input-output training data. In the GMN, the problem space is decomposed into overlapping regions by expertise domains, and the local expert models are graded according to their expertise level. The network output is computed by a smooth combination of local linear models. In order to avoid over-fitting, the GMN deploys a Redundant Experts Removal Algorithm to remove redundant local experts from the network. In addition, a Growing Neural Gas algorithm is used to generate an induced Delaunay triangulation that is highly desirable for optimal function approximation. The GMN is tested on four benchmark problems to compare its performance with other modeling approaches, and its performance compares favorably with existing techniques. The approach thus appears promising for determining an optimal network structure, with considerable potential yet to be exploited.
In this paper, we describe a self-organizing neural network model that
addresses the process of early lexical acquisition in young children. The growing lexicon is modeled by combined semantic word representations based on the distributional statistics of words and on grounded semantic features of words. Changing semantic word representations are assumed to model the maturation of word meaning and serve as inputs to the growing semantic map. The model was tested on a corpus of real child-directed parental speech; the resulting map demonstrates the emergence and reorganization of various word categories, as quantified by two measures.
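The semantic map described above is in the self-organizing map family. As a minimal sketch of how semantic word representations could drive map organization, the step below is a generic SOM update (best-matching unit plus Gaussian lattice neighbourhood), not the paper's growing-map algorithm; all names are illustrative assumptions.

```python
import numpy as np

def som_step(weights, grid, x, lr=0.5, sigma=1.0):
    """One SOM training step: find the best-matching unit (BMU) and
    pull it and its lattice neighbours toward the input vector x.

    weights : (n, d) unit weight vectors
    grid    : (n, 2) unit coordinates on the map lattice
    x       : (d,) input (e.g. a semantic word representation)
    """
    # BMU = unit whose weight vector is closest to the input
    bmu = np.argmin(np.sum((weights - x) ** 2, axis=1))
    # Gaussian neighbourhood on the lattice around the BMU
    gdist2 = np.sum((grid - grid[bmu]) ** 2, axis=1)
    h = np.exp(-gdist2 / (2.0 * sigma ** 2))
    # Move each unit toward x, scaled by learning rate and neighbourhood
    return weights + lr * h[:, None] * (x - weights)

# Tiny 2x2 map repeatedly shown one "word representation" vector
grid = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
rng = np.random.default_rng(0)
w = rng.random((4, 3))
for _ in range(50):
    w = som_step(w, grid, np.array([1.0, 0.0, 0.0]))
```

With many inputs drawn from different word categories, nearby map units come to respond to semantically similar words, which is the kind of category emergence and reorganization the two measures quantify.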