An efficient training and pruning method based on the H∞ filtering algorithm is proposed for feedforward neural networks (FNNs). A weight importance measure that links a weight's salience to the prediction error sensitivity obtained from H∞ filtering training is first established, and a weight-salience-based pruning algorithm is then derived. Moreover, the performance of the H∞ filtering training algorithm is investigated based on the monotonicity property of the H∞ filtering Riccati equation and the initial value of the error covariance matrix. Simulation results show that the proposed approach is an effective training and pruning method for neural networks.
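The abstract does not spell out the salience measure or the filtering recursion, so the following Python sketch only illustrates the general idea of salience-based pruning on a toy FNN: plain gradient descent stands in for the H∞ filtering update, and the salience proxy (the increase in prediction error when a single weight is zeroed), the network size, and the data are assumptions made purely for illustration, not the paper's derivation.

import numpy as np

# Hypothetical sketch of weight-salience-based pruning on a toy FNN.
# Plain gradient descent stands in for the H-infinity filtering update,
# and the salience proxy below (error increase when a weight is zeroed)
# is an illustrative assumption, not the measure derived in the paper.

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) on [-pi, pi].
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
y = np.sin(X)

n_hidden = 20
W1 = rng.normal(0.0, 1.0, (1, n_hidden))   # input -> hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 1.0, (n_hidden, 1))   # hidden -> output weights
b2 = np.zeros(1)

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

# Train with batch gradient descent (a stand-in for the filtering update).
lr = 0.05
for _ in range(3000):
    H, out = forward(X)
    err = out - y
    dH = (err @ W2.T) * (1.0 - H ** 2)
    W2 -= lr * H.T @ err / len(X)
    b2 -= lr * err.mean(0)
    W1 -= lr * X.T @ dH / len(X)
    b1 -= lr * dH.mean(0)

# Salience proxy: increase in mean squared prediction error when a single
# hidden-to-output weight is removed, measured directly on the data.
H, out = forward(X)
base_mse = np.mean((out - y) ** 2)
saliency = np.empty(n_hidden)
for j in range(n_hidden):
    W2_try = W2.copy()
    W2_try[j] = 0.0
    saliency[j] = np.mean((H @ W2_try + b2 - y) ** 2) - base_mse

# Prune the lowest-salience connections and report the error change.
n_prune = 5
W2[np.argsort(saliency)[:n_prune]] = 0.0
_, out_pruned = forward(X)
print(f"MSE before pruning: {base_mse:.5f}, "
      f"after pruning {n_prune} weights: {np.mean((out_pruned - y) ** 2):.5f}")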
Determining the number of nodes in the hidden layers of a neural network is a fundamental and challenging problem. Various efforts have been made to study the relation between the approximation ability and the number of hidden nodes of specific neural networks, such as single-hidden-layer and two-hidden-layer feedforward neural networks with specific or conditional activation functions. For arbitrary feedforward neural networks, however, there are few theoretical results on this issue. This paper gives an upper bound on the number of nodes in each hidden layer of the most general feedforward neural networks, multilayer perceptrons (MLPs), from an algebraic point of view. First, we put forward the method of expansion linear spaces to investigate the algebraic structure and properties of the outputs of MLPs. It is then proved that, given k distinct training samples, for any MLP with k nodes in each hidden layer, if a certain optimization problem has solutions, the approximation error remains invariant when nodes are added to the hidden layers. Furthermore, it is shown that for any MLP whose output-layer activation function is bounded on R, at most k nodes in each hidden layer are needed to learn k training samples.
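To make the final claim concrete, the short numerical sketch below checks, under assumed choices, that k distinct samples can be fitted exactly with k hidden nodes: a single tanh hidden layer with random input weights generically gives a nonsingular k-by-k activation matrix, so output weights reproducing the targets follow from a linear solve. The single-hidden-layer setting, the random input weights, and the linear output layer are illustrative assumptions, not the algebraic construction used in the paper.

import numpy as np

# Numerical check of the memorisation claim: k distinct samples can be
# learned exactly with k hidden nodes.  A single tanh hidden layer with
# random input weights is assumed here purely for illustration.

rng = np.random.default_rng(1)

k = 10
X = rng.uniform(-1.0, 1.0, (k, 3))      # k distinct samples, 3 features
y = rng.normal(size=(k, 1))             # arbitrary real-valued targets

W1 = rng.normal(size=(3, k))            # input -> k hidden nodes
b1 = rng.normal(size=k)
H = np.tanh(X @ W1 + b1)                # k x k hidden activation matrix

# For distinct samples and random weights, H is generically nonsingular,
# so output weights that reproduce the targets exactly follow from a
# linear solve (zero approximation error on the k samples).
W2 = np.linalg.solve(H, y)

print("max |prediction - target|:", np.max(np.abs(H @ W2 - y)))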