This paper proposes an offline gradient method with smoothing L1/2 regularization for learning and pruning of pi-sigma neural networks (PSNNs). The original L1/2 regularization term is not smooth at the origin, since it involves the absolute value function; this causes oscillation in the computation and difficulty in the convergence analysis. We propose to replace the absolute value function with a smooth approximation, yielding a smoothing L1/2 regularization method for PSNNs. Numerical simulations show that the smoothing L1/2 regularization method eliminates the oscillation in computation and achieves better learning accuracy. We are also able to prove a convergence theorem for the proposed learning method.
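As a concrete illustration of the smoothing idea, the sketch below replaces |w| with a C² piecewise polynomial that agrees with |w| outside a small interval (−a, a). The specific polynomial, the smoothing radius a, and the penalty weight lam are illustrative assumptions, not necessarily the exact choices made in the paper.

```python
import numpy as np

def smooth_abs(w, a=0.1):
    # C^2 smoothing of |w|: equals |w| for |w| >= a, and the polynomial
    # -w^4/(8a^3) + 3w^2/(4a) + 3a/8 on (-a, a). Values and first two
    # derivatives match at w = +-a, so the kink at the origin is removed.
    w = np.asarray(w, dtype=float)
    inner = -w**4 / (8 * a**3) + 3 * w**2 / (4 * a) + 3 * a / 8
    return np.where(np.abs(w) >= a, np.abs(w), inner)

def smoothed_l_half(weights, lam=1e-3, a=0.1):
    # Smoothed L1/2 penalty: lam * sum_i f(w_i)^(1/2). It is
    # differentiable everywhere, so gradient descent no longer
    # oscillates when a weight crosses zero.
    return lam * np.sum(smooth_abs(weights, a) ** 0.5)
```

Because f(0) = 3a/8 > 0 and f′(0) = 0, the gradient of the penalty vanishes smoothly at the origin instead of jumping between ±1 as the subgradient of |w| does.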
In this paper, we introduce and study a new class of completely generalized nonlinear variational inclusions for fuzzy mappings and construct some new iterative algorithms. We prove the existence of solutions for this class of completely generalized nonlinear variational inclusions and the convergence of the iterative sequences generated by the algorithms.
Effect basic algebras (which correspond to lattice-ordered effect algebras) are studied. Their ideals are characterized (in the language of basic algebras), and a one-to-one correspondence between ideals and congruences is shown. Conditions under which the quotients are orthomodular lattices or MV-algebras are found.
In this paper, we investigate the convergence behavior of asymmetric Deffuant-Weisbuch (DW) models during opinion evolution. Building on the convergence of the asymmetric DW model, which generalizes the conventional DW model, we first propose a new concept, the separation time, to study the transient behavior of the opinion evolution. We then derive an upper bound on the expected separation time with the help of stochastic analysis. Finally, we illustrate through simulations how the separation time depends on the model parameters.
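A minimal sketch of the conventional (symmetric) DW dynamics that the paper generalizes; the confidence bound eps, the convergence parameter mu, and the population size below are illustrative assumptions. In the asymmetric variant, the two interacting agents may use different convergence parameters.

```python
import numpy as np

def dw_step(x, eps, mu, rng):
    # One Deffuant-Weisbuch interaction: pick two agents at random;
    # if their opinions differ by less than the confidence bound eps,
    # both move toward each other by the fraction mu.
    i, j = rng.choice(len(x), size=2, replace=False)
    d = x[j] - x[i]
    if abs(d) < eps:
        x[i] += mu * d
        x[j] -= mu * d

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=50)        # initial opinions in [0, 1]
for _ in range(5000):
    dw_step(x, eps=1.1, mu=0.5, rng=rng)  # eps > 1: every pair interacts
```

With eps > 1 every interaction succeeds, the dynamics reduce to pairwise averaging, and the population reaches consensus; smaller eps typically produces several separated opinion clusters instead.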
An approximated gradient method for training Elman networks is considered. For a finite sample set, the error function is proved to be monotonically decreasing in the training process, and the approximated gradient of the error function tends to zero if the weight sequence is bounded. Furthermore, under a mild additional condition, the weight sequence itself is also proved to be convergent. A numerical example is given to support the theoretical findings.
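An approximated gradient of this kind can be sketched as follows: in each pass the dependence of the previous hidden state on the weights is ignored, so the recurrent weight gradient is truncated to a single step. The toy data, network size, and learning rate are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sequence data: 20 input vectors and scalar targets (hypothetical).
xs = rng.normal(size=(20, 3))
ts = np.sin(np.arange(20) / 3.0)

n_h = 5
w_in = rng.normal(scale=0.3, size=(n_h, 3))   # input-to-hidden weights
w_rec = rng.normal(scale=0.3, size=(n_h, n_h))  # recurrent weights
w_out = rng.normal(scale=0.3, size=n_h)       # hidden-to-output weights

def run():
    # Forward pass of the Elman network; also record the previous hidden
    # states, which the approximated gradient treats as constants.
    h = np.zeros(n_h)
    hs, prevs = [], []
    for x in xs:
        prevs.append(h)
        h = np.tanh(w_in @ x + w_rec @ h)
        hs.append(h)
    return np.array(hs), np.array(prevs)

lr = 0.01
errors = []
for epoch in range(200):
    hs, prevs = run()
    ys = hs @ w_out
    e = ys - ts
    errors.append(0.5 * np.sum(e**2))
    # Approximated gradient: backpropagate only through the current
    # step, ignoring how prevs depends on the weights.
    d_h = (e[:, None] * w_out) * (1 - hs**2)
    w_out -= lr * hs.T @ e
    w_in -= lr * d_h.T @ xs
    w_rec -= lr * d_h.T @ prevs
```

On this toy problem the recorded error decreases over the epochs, in the spirit of (though of course not a substitute for) the monotonicity result.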
This paper investigates a split-complex backpropagation algorithm with momentum (SCBPM) for complex-valued neural networks. Convergence results for SCBPM are proved under relaxed conditions and compared with the existing results. Monotonicity of the error function during the training iteration process is also guaranteed. Two numerical examples are given to support the theoretical findings.
Consider the delay differential equation (1) ẋ(t) = g(x(t), x(t − r)), where r > 0 is a constant and g : ℝ² → ℝ is Lipschitz continuous. It is shown that if r is small, then the solutions of (1) have the same convergence properties as the solutions of the ordinary differential equation obtained from (1) by ignoring the delay.
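The closeness of the delayed and delay-free dynamics can be checked numerically. The sketch below uses a forward-Euler scheme with constant initial history and the hypothetical Lipschitz right-hand side g(u, v) = −v; none of these choices come from the paper.

```python
import numpy as np

def solve_dde(g, r, x0, T, dt=1e-3):
    # Forward-Euler solver for x'(t) = g(x(t), x(t - r)) with the
    # constant history x(t) = x0 for t <= 0.
    n = int(round(T / dt))
    lag = int(round(r / dt))
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        delayed = x[k - lag] if k >= lag else x0
        x[k + 1] = x[k] + dt * g(x[k], delayed)
    return x

g = lambda u, v: -v                          # hypothetical example
dde = solve_dde(g, r=0.01, x0=1.0, T=5.0)    # small delay
ode = solve_dde(g, r=0.0, x0=1.0, T=5.0)     # lag = 0: the delay-free ODE
```

For small r the two trajectories stay uniformly close, consistent with the convergence statement; for this g and larger r (near π/2) the delayed equation instead develops oscillations, so the smallness assumption matters.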
Two new time-dependent versions of div-curl results in a bounded domain Ω ⊂ ℝ³ are presented. We study the limit of the product vₖ·wₖ, where the sequences vₖ and wₖ belong to L². In the first theorem we assume that curl vₖ is bounded in the Lᵖ-norm and div wₖ is controlled in the Lʳ-norm. In the second theorem we suppose that curl wₖ is bounded in the Lᵖ-norm and div wₖ is controlled in the Lʳ-norm. In both cases the time derivative of wₖ is bounded in the H⁻¹-norm. The convergence (in the sense of distributions) of vₖ·wₖ to the product v·w of the weak limits of vₖ and wₖ is shown.
The paper is devoted to the convergence of double sequences and its application to products. In a convergence space we distinguish three types of double convergence and, correspondingly, three types of points. We give examples and describe their structure and properties. We investigate the relationship between the topological product and the convergence closure product of two Fréchet spaces. In particular, we give a necessary and sufficient condition for the topological product of two compact Hausdorff Fréchet spaces to be a Fréchet space.