Arithmetic networks include neural, Boolean and fuzzy networks. Assuming an acyclic structure, an arithmetic network can be decomposed. Our analysis yields three results: node unification, edge unification and network decomposition. Only 14 node types and 4 edge types are needed to realize a wide class of traditional arithmetic networks from the literature. The main result of our work is the splitting of competitive neurons (nodes) into distance and soft extreme nodes. A side result of the analysis is the use of node groups instead of layers, which enables grouping nodes of the same type while allowing long interconnections. The main aim of our work was to realize the system of arithmetic networks in the SQL language on any SQL server. The database realization enables not only saving, viewing and editing the network structures and parameters but also studying the response of archived networks. The learning process was not included because it is iterative in general and was unrealizable without loops on a database server at that time.
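The database realization described above can be sketched as follows: a minimal illustration, assuming a hypothetical two-table schema (node and edge tables are our own naming, not necessarily the paper's), with the network response computed in topological order thanks to acyclicity.

```python
import sqlite3

# Hypothetical schema sketch: an acyclic arithmetic network stored as node
# and edge tables; the response of an archived network is computed by a
# single pass in topological (here: id) order.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE node (id INTEGER PRIMARY KEY, type TEXT, param REAL);
CREATE TABLE edge (src INTEGER, dst INTEGER, weight REAL);
""")
# Tiny example network: two input nodes feeding one weighted-sum node.
con.executemany("INSERT INTO node VALUES (?,?,?)",
                [(1, "input", None), (2, "input", None), (3, "sum", 0.0)])
con.executemany("INSERT INTO edge VALUES (?,?,?)",
                [(1, 3, 0.5), (2, 3, 0.5)])

def response(inputs):
    """Evaluate node outputs; acyclicity lets us process nodes in id order."""
    out = dict(inputs)
    for nid, ntype, param in con.execute("SELECT * FROM node ORDER BY id"):
        if ntype == "input":
            continue
        acc = param or 0.0
        for src, w in con.execute(
                "SELECT src, weight FROM edge WHERE dst=?", (nid,)):
            acc += w * out[src]
        out[nid] = acc
    return out

print(response({1: 2.0, 2: 4.0})[3])  # weighted sum: 0.5*2 + 0.5*4 -> 3.0
```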
Noisy time series are typical results of observations or technical measurements. Noise reduction and preservation of the signal structure are contradictory but useful aims. Non-linear time series processing is a way to suppress non-Gaussian noise. Many-valued algebras enriched by the square root are able to realize operators close to weighted averages. Fuzzy data processing based on Łukasiewicz algebra [3] with square root satisfies the Lipschitz condition and thus constrains the sensitivity of the mapping. The paper presents a fuzzy neural network based on Modus Ponens [1] with fuzzy logic function [6] preprocessing in the hidden layer. All the fuzzy algorithms were realized in the Matlab system and in C++. The fuzzy processing is applied to the prediction of sunspot numbers. The systematic approach based on filter selection is combined with weight optimization.
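The basic Łukasiewicz operations on [0, 1], enriched by the square root, can be sketched as follows (a minimal illustration; the function names are ours, not the paper's, and the square-root term only hints at how weighted-average-like operators arise):

```python
import math

def lneg(a):     # Łukasiewicz negation
    return 1.0 - a

def lsum(a, b):  # bounded (Łukasiewicz) sum
    return min(1.0, a + b)

def lmul(a, b):  # bounded (Łukasiewicz) product
    return max(0.0, a + b - 1.0)

def lsqrt(a):    # the enriching square root operation
    return math.sqrt(a)

# Combining square roots with the bounded operations yields operators close
# to weighted means of the arguments, as the abstract above notes.
a, b = 0.5, 0.8
print(lsum(lmul(lsqrt(a), lsqrt(a)), 0.0))  # sqrt composed with bounded ops
```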
One of the useful areas of 2D image processing is called de-noising. When both the original (ideal) and the noisy image are available, the quality of de-noising is measurable. Our paper is focused on local 2D image processing using Łukasiewicz algebra with the square root function. The basic operators and functions in the given algebra are first defined and then analyzed. The first result of the fuzzy logic function (FLF) analysis is its decomposition and realization as a Łukasiewicz network (ŁN) with three types of processing nodes. The second result of FLF decomposition is the decomposed Łukasiewicz network (DŁN) with dyadic preprocessing and two types of processing nodes. The decomposition chain, which begins with the FLF and converts it to a ŁN and then to a DŁN, terminates as a Łukasiewicz artificial neural network (ŁANN) with dyadic preprocessing and only one type of processing node. The ŁANN is then able to learn its integer weights in the ANN style. We are able to realize a set of individual FLF filters as ŁANNs. Their preprocessing strategies are based on the pixel neighborhood, sorted list, Walsh list, and L-estimates. The quality of de-noising can be improved via compromise filtering. Two types of compromise de-noising filters are also realizable as ŁANNs. One of them is called the constrained referential neural network (CRNN). The other one is called the dyadic weight neural network (DWNN). The compromise filters operate with the values from the set of individual filters. Both CRNN and DWNN are able to increase the quality of image processing, as demonstrated on a biomedical MR image. All the calculations are realized in the Matlab environment.
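The L-estimate preprocessing strategy mentioned above can be sketched as follows: a minimal illustration (not the paper's ŁANN realization or its integer weights), where each pixel is replaced by a weighted sum of its sorted 3x3 neighborhood, with the median filter as the special case of weight 1 on the middle order statistic.

```python
# Sketch of an L-estimate local filter: weighted sum of the sorted
# 3x3 neighborhood values (border pixels are left unchanged).
def l_estimate_filter(img, weights):
    """img: 2D list of floats in [0, 1]; weights: 9 coefficients."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            neigh = sorted(img[i + di][j + dj]
                           for di in (-1, 0, 1) for dj in (-1, 0, 1))
            out[i][j] = sum(wk * v for wk, v in zip(weights, neigh))
    return out

# Median filter as a special L-estimate: weight 1 on the 5th order statistic.
median_w = [0, 0, 0, 0, 1, 0, 0, 0, 0]
img = [[0.1] * 5 for _ in range(5)]
img[2][2] = 1.0                      # a single impulse-noise pixel
den = l_estimate_filter(img, median_w)
print(den[2][2])  # the outlier is replaced by the neighborhood median: 0.1
```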
Image de-noising is a practical application of image processing.
Both linear and nonlinear filters are used for noise reduction. The filters realizable in Łukasiewicz algebra with square root were analyzed first and then used for 2D image de-noising. There is a set of quality measures recommended for the evaluation of de-noising quality. In the case of multiple quality measures we can find the best filter. The Pareto optimality principle and the AIA technique were used for this purpose. The procedures were demonstrated on a set of MRI biomedical images.
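The Pareto optimality principle mentioned above can be sketched as follows: a minimal illustration with hypothetical filter names and scores (two quality measures, higher is better), keeping only the filters not dominated in all measures.

```python
# Sketch of Pareto-front selection over de-noising filters scored by
# several quality measures; all names and values here are illustrative.
def pareto_front(scores):
    """Keep the filters not dominated by any other filter in all measures."""
    front = {}
    for name, s in scores.items():
        dominated = any(t != s and all(t[i] >= s[i] for i in range(len(s)))
                        for t in scores.values())
        if not dominated:
            front[name] = s
    return front

scores = {"median": (0.80, 0.60), "mean": (0.70, 0.70),
          "weighted": (0.75, 0.65), "identity": (0.60, 0.50)}
print(sorted(pareto_front(scores)))  # 'identity' is dominated by 'median'
```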
There are two basic types of artificial neural networks: the Multi-Layer Perceptron (MLP) and the Radial Basis Function network (RBF). The first type (MLP) consists of one type of neuron, which can be decomposed into a linear and a sigmoid part. The second type (RBF) consists of two types of neurons: radial and linear ones. The radial basis function is analyzed and then used for the decomposition of the RBF network. The resulting Perceptron Radial Basis Function network (PRBF) consists of two types of neurons: linear and extended sigmoid ones. Any RBF network can be directly converted to a four-layer PRBF network, while any MLP network with three layers can be approximated by a five-layer PRBF network. The new PRBF network is thus a generalization of the abilities of MLP and RBF networks. Learning strategies are also discussed. The new type of PRBF network and its learning via repeated local optimization is demonstrated on a numerical example together with RBF and MLP networks for comparison. This paper is organized as follows: basic properties of MLP and RBF neurons are summarized in the first two chapters. The third chapter includes a novel relationship between sigmoidal and radial functions, which is useful for RBF decomposition and generalization. The description of the new PRBF network, together with its properties, is the subject of the fourth chapter. Numerical experiments with a PRBF network and their results are given in the last chapters.
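The two neuron types compared above can be sketched as follows: a minimal illustration of an MLP neuron (linear part plus sigmoid) and a Gaussian RBF unit (radial function of the distance to a center); the parameters are illustrative, and the paper's extended sigmoid and conversion formulas are not reproduced here.

```python
import math

def mlp_neuron(x, w, b):
    """MLP neuron decomposed into a linear and a sigmoid part."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b   # linear part
    return 1.0 / (1.0 + math.exp(-s))              # sigmoid part

def rbf_unit(x, center, width):
    """RBF unit: Gaussian radial function of the distance to its center."""
    r2 = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-r2 / (2.0 * width ** 2))

x = [0.5, -0.2]
print(mlp_neuron(x, [1.0, 2.0], 0.1))
print(rbf_unit(x, [0.5, -0.2], 1.0))  # zero distance to center -> 1.0
```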
The Self Organized Mapping (SOM) is a kind of artificial neural network (ANN) which enables the self-organization of a pattern set in a space with Euclidean metrics. Thus, the traditional SOM consists of two layers: an input one with n nodes and an output one with H nodes. Every output node is characterized by its weight vector w_k ∈ R^n in this case. The absence of pattern coordinates in special cases is a good motivation for self-organization in a general metric space (U, d). Learning in the metric space is introduced on the cluster analysis problem, and a basic clustering algorithm is obtained. The relationship with the traditional ISODATA method and NP-completeness is proven. The direct generalization leads to SOM learning in the metric space, its algorithm, properties and NP-completeness. The SOM learning is based on an objective function and its batch minimization. Three estimates of the proposed objective function are included. They help to study the relationship with Kohonen batch learning, the cluster analysis and the convex programming task. The Matlab source code for the SOM in the metric space is available in the appendix. Two numeric examples demonstrate self-organization in the metric space of written words and in the metric space of functions.
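One batch step of self-organization in a general metric space (U, d) can be sketched as follows: a minimal illustration (a medoid-style simplification, not the paper's algorithm or its Matlab appendix code) on the metric space of written words with the Levenshtein edit distance; patterns are assigned to the nearest representative, and each representative is replaced by the cluster member minimizing the total distance, since no coordinates exist for averaging.

```python
# Levenshtein edit distance: the metric d on the space of written words.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def batch_step(patterns, reps, d):
    """One batch step: nearest-representative assignment, medoid update."""
    clusters = {r: [] for r in reps}
    for p in patterns:
        clusters[min(reps, key=lambda r: d(p, r))].append(p)
    return [min(c, key=lambda m: sum(d(m, p) for p in c)) if c else r
            for r, c in clusters.items()]

words = ["cat", "cast", "cart", "dog", "dot", "fog"]
reps = batch_step(words, ["cat", "dog"], levenshtein)
print(reps)  # the two representatives after one batch step
```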