As an important class of artificial neural networks, the associative memory model can be employed to mimic human thinking and machine intelligence. In this paper, first, a multi-valued many-to-many Gaussian associative memory model (M3GAM) is proposed by introducing the Gaussian unidirectional associative memory model (GUAM) and the Gaussian bidirectional associative memory model (GBAM) into Hattori et al.'s multi-module associative memory model ((MMA)²). Second, the M3GAM's asymptotic stability is proved theoretically in both synchronous and asynchronous update modes, which ensures that the stored patterns become stable points of the M3GAM. Third, by substituting a general similarity metric for the negative squared Euclidean distance in the M3GAM, the generalized multi-valued many-to-many Gaussian associative memory model (GM3GAM) is presented, of which the M3GAM becomes a special case. Finally, we investigate the M3GAM's application in association-based image retrieval, and computer simulation results verify the M3GAM's robust performance.
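To make the recall principle behind such Gaussian associative memories concrete, the following minimal sketch shows a single-module, winner-take-all recall step in which the negative squared Euclidean distance drives a Gaussian similarity. The function name `gaussian_recall`, the width `sigma`, and the toy patterns are assumptions introduced only for illustration; the sketch does not reproduce the multi-module M3GAM dynamics or its stability analysis.

```python
import numpy as np

def gaussian_recall(query, keys, values, sigma=1.0):
    """Recall the value whose stored key is closest to `query` under Gaussian similarity."""
    d2 = np.sum((keys - query) ** 2, axis=1)       # squared Euclidean distance to each stored key
    activation = np.exp(-d2 / (2.0 * sigma ** 2))  # Gaussian similarity per stored key
    return values[np.argmax(activation)]           # winner-take-all recall

# Toy usage: three stored key/value pattern pairs; a noisy query still recalls the right value.
keys = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
values = np.array([0, 1, 2])
print(gaussian_recall(np.array([0.9, 0.1]), keys, values))   # -> 0
```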
This paper deals with a generalized automatic method for designing artificial neural network (ANN) structures. Designing the optimal ANN structure is one of the most important problems in many real applications. In this paper, two techniques for automatically finding an optimal ANN structure are proposed; they can be applied in real-time applications as well as in fast nonlinear processes. Both techniques use genetic algorithms (GAs). The first method designs a structure with one hidden layer; the optimal structure has been verified on a nonlinear model of an isothermal reactor. The second algorithm designs an ANN with an unlimited number of hidden layers, each containing one neuron; this structure has been verified on a highly nonlinear model of a polymerization reactor. The obtained results have been compared with those yielded by a fully connected ANN.
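As an illustration of the overall idea behind the first technique, the sketch below runs a toy genetic algorithm over the size of a single hidden layer, scoring each candidate by validation error on a synthetic nonlinear target. The fitness definition, GA operators, and population settings are assumptions made for the example; the isothermal-reactor model used in the paper is not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic nonlinear target standing in for a process model.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(400, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=400)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

def fitness(n_hidden):
    """Validation MSE of a one-hidden-layer MLP with `n_hidden` neurons (lower is better)."""
    net = MLPRegressor(hidden_layer_sizes=(int(n_hidden),), max_iter=500, random_state=0)
    net.fit(X_tr, y_tr)
    return np.mean((net.predict(X_va) - y_va) ** 2)

# Simple GA over the hidden-layer size: keep-the-best selection, averaging crossover, small mutation.
pop = rng.integers(1, 30, size=10)
for _ in range(4):
    scores = np.array([fitness(n) for n in pop])
    parents = pop[np.argsort(scores)[:5]]                 # keep the best half
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = rng.choice(parents, 2)
        children.append(max(1, int(round((a + b) / 2)) + rng.integers(-2, 3)))
    pop = np.concatenate([parents, children])
print("best hidden-layer size found:", pop[np.argmin([fitness(n) for n in pop])])
```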
In this paper, a multi-layer perceptron (MLP) neural network (NN) is put forward as an efficient tool for performing two tasks: 1) optimizing multi-objective problems and 2) solving non-linear systems of equations. Both tasks involve mathematical functions that are continuous and partially bounded. Previously, these two tasks were performed by recurrent neural networks and also by powerful algorithms such as evolutionary ones. In this study, a multi-dimensional structure in the output layer of the MLP-NN is utilized, as an innovative method, to implicitly optimize the multivariate functions through the network's energy optimization mechanism. To this end, the activation functions in the output layer are replaced with the multivariate functions to be optimized. The training parameters that are effective in the global search are surveyed, and it is demonstrated that an MLP-NN with a properly chosen dynamic learning rate is able to find globally optimal solutions. Finally, the efficiency of the MLP-NN, in terms of both speed and power, is investigated on several well-known experimental examples. In some of these examples, the proposed method gives clearly better globally optimal solutions than those reported in other references, and it shows completely satisfactory results in the remaining experiments.
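The following minimal sketch illustrates one reading of this idea: a tiny network maps a fixed input to a candidate solution, the objective function plays the role of the output-layer "activation", and training the weights with a decaying (dynamic) learning rate drives the candidate toward a minimizer. The sphere objective, the network shape, the numerical gradient, and the learning-rate schedule are all assumptions for the example, not the paper's exact energy mechanism.

```python
import numpy as np

def f(x):
    """Multivariate test objective (sphere function), standing in for the functions to be optimized."""
    return np.sum(x ** 2) + 1.0

rng = np.random.default_rng(1)
dim, hidden = 3, 8
# Flat parameter vector: w1 (hidden), w2 (hidden*dim), b2 (dim).
params = rng.normal(scale=0.5, size=hidden + hidden * dim + dim)

def candidate(p):
    w1 = p[:hidden]
    w2 = p[hidden:hidden + hidden * dim].reshape(hidden, dim)
    b2 = p[-dim:]
    h = np.tanh(w1 * 1.0)          # fixed scalar input of 1.0 feeding the hidden layer
    return h @ w2 + b2             # network output = candidate solution x

def loss(p):
    return f(candidate(p))         # the objective acts as the output-layer "activation"

# Gradient descent with a dynamic (decaying) learning rate; the gradient is taken numerically
# so the sketch works for any black-box objective.
eps = 1e-5
for t in range(500):
    grad = np.array([(loss(params + eps * e) - loss(params - eps * e)) / (2 * eps)
                     for e in np.eye(params.size)])
    lr = 0.5 / (1.0 + 0.01 * t)    # learning rate shrinks as training proceeds
    params -= lr * grad
print("minimum found near:", candidate(params), "f =", loss(params))
```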
In this paper, neural-network-based cryptology is performed. The system consists of two stages. In the first stage, pseudo-random numbers are generated with neural-network-based pseudo-random number generators (NPRNGs), and the results are tested for randomness using the National Institute of Standards and Technology (NIST) randomness tests. In the second stage, a neural-network-based cryptosystem is designed using the NPRNGs. In this cryptosystem, data encrypted by non-linear techniques is subjected to decryption attempts by means of two identical artificial neural networks (ANNs). With the first neural network, the non-linear encryption is modeled using its relation-building functionality. The encrypted data is then decrypted with the second neural network using its decision-making functionality.
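The sketch below gives a toy flavour of the first stage only: a small fixed-weight tanh network is evaluated on a quasi-randomly stepped input, one bit is harvested per evaluation, and the resulting bit stream is checked with the NIST SP 800-22 frequency (monobit) test. The weight shapes, seeding, and bit-extraction rule are assumptions for the illustration and are not the paper's NPRNG construction; only the monobit p-value formula follows the NIST specification.

```python
import math
import numpy as np

rng = np.random.default_rng(42)
W1, W2 = rng.normal(scale=2.0, size=(8, 3)), rng.normal(scale=2.0, size=(3, 8))

def neural_prng_bits(seed, n_bits):
    """Toy neural PRNG: one output bit per pass through a fixed-weight tanh network."""
    step = np.array([0.754877666, 0.569840291, 0.328788284])   # quasi-random input increments
    bits = np.empty(n_bits, dtype=int)
    for i in range(n_bits):
        x = (np.asarray(seed, dtype=float) + i * step) % 1.0   # wrapped, stepped input
        y = np.tanh(W2 @ np.tanh(W1 @ x))                      # one pass through the network
        bits[i] = int((abs(y).sum() * 1e4) % 1.0 > 0.5)        # keep a low-significance bit
    return bits

def nist_monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test: p-value of the +/-1 bit sum."""
    s = np.sum(2 * bits - 1)
    return math.erfc(abs(s) / math.sqrt(len(bits)) / math.sqrt(2))

bits = neural_prng_bits([0.1, 0.2, 0.3], 10_000)
print("monobit p-value:", nist_monobit_p_value(bits))   # NIST's pass criterion is p >= 0.01
```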
This work addresses the problem of overfitting the training data. We suggest smoothing the decision boundaries by eliminating border instances from the training set before training Artificial Neural Networks (ANNs); this is achieved using a variety of instance-reduction techniques. A large number of experiments were performed using 21 benchmark data sets from the UCI machine learning repository, both with and without the introduction of noise into the data sets. Our empirical results show that using a noise-filtering algorithm to remove border instances before training an ANN not only improves the classification accuracy but also speeds up the training process by reducing the number of training epochs. The effectiveness of the approach is more pronounced when the training data contains noisy instances.
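A minimal sketch of this workflow is given below, using Wilson's edited nearest neighbour (ENN) rule as one example of an instance-reduction filter and a synthetic noisy two-class problem in place of the UCI benchmarks; the choice of filter, data set, network size, and parameters are assumptions for the illustration.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Noisy two-class problem standing in for the benchmark data sets.
X, y = make_moons(n_samples=1000, noise=0.3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def edited_nearest_neighbour(X, y, k=3):
    """Drop instances whose label disagrees with the majority of their k nearest neighbours."""
    knn = KNeighborsClassifier(n_neighbors=k + 1).fit(X, y)   # +1: a point is its own neighbour
    neigh = knn.kneighbors(X, return_distance=False)[:, 1:]   # exclude the point itself
    keep = np.array([np.bincount(y[nb]).argmax() == yi for nb, yi in zip(neigh, y)])
    return X[keep], y[keep]

X_f, y_f = edited_nearest_neighbour(X_tr, y_tr)

# Train the same ANN on the raw and the filtered training sets and compare.
for name, (Xa, ya) in {"raw": (X_tr, y_tr), "filtered": (X_f, y_f)}.items():
    mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(Xa, ya)
    print(f"{name:9s} train size={len(ya):4d}  test acc={mlp.score(X_te, y_te):.3f}  "
          f"epochs={mlp.n_iter_}")
```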