In this short note, we introduce a new architecture for the spiking perceptron: the actual output is a linear combination of the firing time of the perceptron and the spiking intensity (the gradient of the state function) at the firing time. Numerical experiments show that this novel spiking perceptron can solve the XOR problem, whereas a classical spiking neuron usually needs a hidden layer to do so.
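As a rough illustration of this output rule, the sketch below computes a firing time and the state-function gradient at firing for a discretized spike-response model; the alpha-function kernel, the spike-time encoding of the inputs, and all parameter values are assumptions for illustration, not the paper's exact model.

```python
# Minimal sketch (assumed model: alpha-function kernel, numerically detected
# threshold crossing). Output = a * firing_time + b * state gradient at firing.
import numpy as np

def spiking_perceptron_output(spike_times, weights, a=1.0, b=1.0, theta=1.0):
    ts = np.linspace(0.0, 10.0, 1001)            # discretized time axis
    dt = ts[1] - ts[0]
    # State function: weighted sum of alpha-kernel responses to input spikes.
    kernel = lambda s: np.where(s > 0, s * np.exp(1.0 - s), 0.0)
    x = sum(w * kernel(ts - ti) for w, ti in zip(weights, spike_times))
    crossed = np.nonzero(x >= theta)[0]
    if len(crossed) == 0:
        return None                               # neuron never fires
    k = crossed[0]
    t_f = ts[k]                                   # firing time
    intensity = (x[k] - x[k - 1]) / dt            # gradient of x at t_f
    return a * t_f + b * intensity

# Example call with two input spike times (assumed encoding of logical inputs).
print(spiking_perceptron_output([1.0, 3.0], [0.8, 0.7]))
```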
An approximated gradient method for training Elman networks is considered. For a finite sample set, the error function is proved to decrease monotonically during training, and the approximated gradient of the error function tends to zero if the weight sequence is bounded. Furthermore, under a mild additional condition, the weight sequence itself is also proved to be convergent. A numerical example is given to support the theoretical findings.
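A minimal sketch of what one approximated-gradient step could look like. The key assumption, consistent with common usage of the term but not spelled out in the abstract, is that the previous hidden state is treated as a constant when differentiating, so no backpropagation through time is performed; the network sizes and learning rate are also illustrative.

```python
# Sketch of one approximated-gradient pass over a sequence for an Elman
# network: h_t = tanh(Wx x_t + Wh h_{t-1}), y_t = v . h_t, quadratic loss.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 2, 4
Wx = rng.normal(scale=0.5, size=(n_hid, n_in))    # input-to-hidden weights
Wh = rng.normal(scale=0.5, size=(n_hid, n_hid))   # recurrent weights
v  = rng.normal(scale=0.5, size=n_hid)            # hidden-to-output weights

def train_step(xs, ds, eta=0.05):
    """One pass over the sequence (xs, ds) with the approximated gradient."""
    global Wx, Wh, v
    h_prev = np.zeros(n_hid)
    for x, d in zip(xs, ds):
        h = np.tanh(Wx @ x + Wh @ h_prev)
        e = (v @ h) - d                    # output error
        gv = e * h
        gs = e * v * (1.0 - h**2)          # gradient w.r.t. pre-activation,
        Wx -= eta * np.outer(gs, x)        # with h_prev held fixed
        Wh -= eta * np.outer(gs, h_prev)   # (this is the approximation)
        v  -= eta * gv
        h_prev = h

# Toy usage on an assumed short sequence.
xs = [np.array([0., 1.]), np.array([1., 0.]), np.array([1., 1.])]
ds = [0.5, -0.5, 0.0]
for _ in range(200):
    train_step(xs, ds)
```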
This paper investigates a split-complex backpropagation algorithm with momentum (SCBPM) for complex-valued neural networks. Convergence results for SCBPM are proved under relaxed conditions and compared with existing results. Monotonicity of the error function during training is also guaranteed. Two numerical examples are given to support the theoretical findings.
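The sketch below illustrates one split-complex update with momentum for a single complex-valued neuron; the tanh activation applied separately to the real and imaginary parts, the quadratic error, and the learning-rate and momentum values are assumptions, not the paper's exact setup.

```python
# One SCBPM step for a single neuron: the activation acts on the real and
# imaginary parts of the net input separately (the "split-complex" approach),
# and the weight increment carries a momentum term tau * dw_prev.
import numpy as np

def scbpm_step(w, dw_prev, z, d, eta=0.1, tau=0.5):
    g = np.tanh
    gp = lambda s: 1.0 - np.tanh(s)**2
    u = np.dot(w, z)                        # complex net input
    eR = g(u.real) - d.real                 # error, real channel
    eI = g(u.imag) - d.imag                 # error, imaginary channel
    # Real-valued gradients of E = |y - d|^2 / 2 w.r.t. Re(w) and Im(w):
    gR =  eR * gp(u.real) * z.real + eI * gp(u.imag) * z.imag
    gI = -eR * gp(u.real) * z.imag + eI * gp(u.imag) * z.real
    dw = -eta * (gR + 1j * gI) + tau * dw_prev
    return w + dw, dw

# Toy usage with assumed data.
rng = np.random.default_rng(1)
w = rng.normal(size=3) + 1j * rng.normal(size=3)
dw = np.zeros(3, dtype=complex)
z = np.array([0.2 + 0.1j, -0.4 + 0.3j, 0.5 - 0.2j])
d = 0.3 - 0.1j
for _ in range(100):
    w, dw = scbpm_step(w, dw, z, d)
```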
Intuitionistic fuzzy sets (IFSs) are a generalization of fuzzy sets obtained by adding an additional attribute parameter, the non-membership degree. In this paper, a max-min intuitionistic fuzzy Hopfield neural network (IFHNN) is proposed by combining IFSs with Hopfield neural networks. The stability of the IFHNN is investigated. It is shown that for any given weight matrix and any given initial intuitionistic fuzzy pattern, the iteration process of the IFHNN converges to a limit cycle. Furthermore, under suitable extra conditions, it converges to a stable point within finitely many iterations. Finally, a kind of Lyapunov stability of the stable points of the IFHNN is proved: if the initial state of the network is close enough to a stable point, then the network states remain in a small neighborhood of that stable point. These stability results indicate the convergence of the memory process of the IFHNN. A numerical example is also provided to show the effectiveness of the Lyapunov stability of the IFHNN.
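A toy sketch of one synchronous IFHNN update under an assumed composition rule: a max-min composition on the membership degrees and a dual min-max composition on the non-membership degrees, with complementary non-membership weights. The paper's exact definition may differ; the pattern and weight values are made up for illustration.

```python
# Each network state is an intuitionistic fuzzy pattern (mu, nu) with
# mu_i + nu_i <= 1. Because each update selects among existing values via
# max/min, the state space is finite and the iteration must enter a cycle.
import numpy as np

def ifhnn_step(mu, nu, Wmu, Wnu):
    """One synchronous update of the pattern (mu, nu)."""
    n = len(mu)
    mu_next = np.array([np.max(np.minimum(Wmu[:, j], mu)) for j in range(n)])
    nu_next = np.array([np.min(np.maximum(Wnu[:, j], nu)) for j in range(n)])
    return mu_next, nu_next

# Iterate until a state repeats: a limit cycle (a stable point if period 1).
mu, nu = np.array([0.7, 0.2, 0.5]), np.array([0.2, 0.6, 0.4])
Wmu = np.array([[0.9, 0.3, 0.5], [0.4, 0.8, 0.2], [0.6, 0.1, 0.7]])
Wnu = 1.0 - Wmu                  # assumed complementary non-membership weights
seen = []
while (tuple(mu), tuple(nu)) not in seen:
    seen.append((tuple(mu), tuple(nu)))
    mu, nu = ifhnn_step(mu, nu, Wmu, Wnu)
print(len(seen), "states visited before the cycle closed")
```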
Autoencoder networks have been demonstrated to be efficient for unsupervised learning of representations of images, documents, and time series. Sparse representations can improve the interpretability of the input data and the generalization of a model by eliminating redundant features and extracting the latent structure of the data. In this paper, we use the L1/2 regularization method to enforce sparsity on the hidden representation of an autoencoder, achieving a sparse representation of the data. The performance of our approach in terms of unsupervised feature learning and supervised classification is assessed on the MNIST digit data set, the ORL face database, and the Reuters-21578 text corpus. The results demonstrate that the proposed autoencoder produces sparser representations and better reconstruction performance than the sparse autoencoder and the L1-regularized autoencoder. The new representation is also shown to be useful for improving the classification performance of a deep network.
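A minimal sketch of how an L1/2 penalty on the hidden activations could enter an autoencoder objective. The tanh encoder, the linear decoder, and the small smoothing constant eps (used here to tame the singular gradient of |h|^(1/2) at zero) are assumptions for illustration, not necessarily the paper's formulation.

```python
# Loss = reconstruction error + lam * sum_i |h_i|^(1/2), with the penalty
# smoothed near zero; gradients are derived by hand for a one-hidden-layer
# autoencoder with a tanh encoder and a linear decoder.
import numpy as np

def ae_loss_and_grads(x, W1, b1, W2, b2, lam=1e-3, eps=1e-4):
    h = np.tanh(W1 @ x + b1)                       # hidden representation
    r = (W2 @ h + b2) - x                          # reconstruction residual
    loss = 0.5 * np.sum(r**2) + lam * np.sum(np.sqrt(np.abs(h) + eps))
    gh = W2.T @ r                                  # backprop of the data term
    gh += lam * 0.5 * np.sign(h) / np.sqrt(np.abs(h) + eps)  # L1/2 term
    gs = gh * (1.0 - h**2)                         # through tanh
    return loss, {"W1": np.outer(gs, x), "b1": gs,
                  "W2": np.outer(r, h), "b2": r}

# Toy usage with assumed sizes.
rng = np.random.default_rng(0)
x = rng.normal(size=8)
W1 = rng.normal(scale=0.3, size=(4, 8)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.3, size=(8, 4)); b2 = np.zeros(8)
loss, grads = ae_loss_and_grads(x, W1, b1, W2, b2)
```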