We present a new Generalized Learning Vector Quantization classifier, called Optimally Generalized Learning Vector Quantization, based on a novel weight-update rule for learning labeled samples. The algorithm attains stable prototype/weight-vector dynamics expressed in terms of the estimated current and previous weights and their updates. The resulting weight-update term is then related to the proximity measure used by Generalized Learning Vector Quantization classifiers. The new algorithm and several major counterparts are tested and compared on synthetic and publicly available datasets. For both datasets studied, the new classifier outperforms its counterparts in training and testing accuracy, exceeding 80%, and in robustness against model parameter variation.
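The abstract does not spell out the proposed weight-update rule, so the following is only a minimal Python sketch of the baseline GLVQ update (the Sato-Yamada relative-distance gradient step) that such a rule would modify; the learning rate, the squared Euclidean distance, and the function name glvq_update are illustrative assumptions, not the authors' formulation.

    import numpy as np

    def glvq_update(x, y, prototypes, labels, lr=0.05):
        # Baseline GLVQ step (not the paper's modified rule): squared distances
        # from the sample x to every prototype.
        d = np.sum((prototypes - x) ** 2, axis=1)
        same = (labels == y)
        j = np.argmin(np.where(same, d, np.inf))   # closest prototype with the correct label
        k = np.argmin(np.where(~same, d, np.inf))  # closest prototype with a wrong label
        dj, dk = d[j], d[k]
        denom = (dj + dk) ** 2
        # Gradient step on the relative distance mu = (dj - dk) / (dj + dk):
        # the correct prototype is attracted to x, the wrong one repelled.
        prototypes[j] += lr * (dk / denom) * (x - prototypes[j])
        prototypes[k] -= lr * (dj / denom) * (x - prototypes[k])
        return prototypes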
A single-step information-theoretic algorithm that is able to identify possible clusters in a dataset is presented. The proposed algorithm represents the data scatter in terms of similarity-based data-point entropy and probability descriptions. Using these quantities, an information-theoretic association metric between data points, called mutual ambiguity, is defined and then employed to determine particular data points called cluster identifiers. A cluster relevance rule is defined to form the individual clusters corresponding to the identifiers determined in this way. Since cluster identifiers and the data points belonging to their clusters are identified without recursive or iterative search, the algorithm is single-step. The algorithm is tested and validated in experiments on synthetic and anonymized real datasets. Simulation results demonstrate that the proposed algorithm also exhibits statistically more reliable performance than major competing algorithms.
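The exact entropy, probability, and mutual-ambiguity definitions are not given in the abstract; the Python sketch below only illustrates the general single-step flavour of the approach, with assumed Gaussian similarities, an assumed entropy-based choice of cluster identifiers, and an assumed similarity-based relevance rule.

    import numpy as np

    def single_step_clusters(X, sigma=1.0, n_identifiers=3):
        # Pairwise Gaussian similarities (an assumption; the paper's similarity
        # measure may differ).
        d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        S = np.exp(-d2 / (2.0 * sigma ** 2))
        np.fill_diagonal(S, 0.0)
        P = S / S.sum(axis=1, keepdims=True)          # per-point probability description
        H = -np.sum(P * np.log(P + 1e-12), axis=1)    # similarity-based data-point entropy
        identifiers = np.argsort(H)[:n_identifiers]   # assumed rule: lowest-entropy points
        np.fill_diagonal(S, 1.0)                      # each identifier labels itself
        labels = identifiers[np.argmax(S[:, identifiers], axis=1)]
        return identifiers, labels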
An assembly neural network based on the binary Hebbian rule is proposed for pattern recognition. The network consists of several sub-networks, one per class to be recognized. Each sub-network consists of several neural columns, one per dimension of the signal space, so that the value of each signal component is encoded by the activity of adjacent neurons in the corresponding column. A new recognition algorithm is presented that realizes the nearest-neighbor method in the assembly neural network. Computer simulations of the network were performed, and the model was tested on a texture segmentation task. The experiments demonstrate that the network segments real-world texture images reasonably well.
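As an illustration of the column-wise coding and the nearest-neighbor readout, here is a small Python sketch; the coarse coding width, the overlap-based matching, and the function names are assumptions rather than the paper's exact binary Hebbian construction.

    import numpy as np

    def encode_component(x, n_neurons=50, width=3):
        # Encode a scalar in [0, 1] as the activity of a few adjacent neurons
        # in a column (assumed coarse coding; the paper's coding may differ).
        code = np.zeros(n_neurons, dtype=int)
        center = int(round(x * (n_neurons - 1)))
        code[max(0, center - width):min(n_neurons, center + width + 1)] = 1
        return code

    def encode_pattern(x_vec, **kw):
        return np.concatenate([encode_component(x, **kw) for x in x_vec])

    def recognize(query, stored_codes, stored_labels):
        # Nearest neighbour by overlap of binary codes.
        overlaps = stored_codes @ query
        return stored_labels[int(np.argmax(overlaps))]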
In the marketing area, new trends are emerging: customers are not only interested in the quality of products or delivered services, but also in a stimulating shopping experience. Creating and influencing customers' experiences has become a valuable differentiation strategy for retailers. Therefore, understanding and assessing customers' emotional responses to products and services represents an important asset. The purpose of this paper is to investigate whether the facial expressions customers show during product appreciation are positive or negative, and which types of emotions are related to product appreciation. We collected a database of emotional facial expressions by presenting a set of forty product-related pictures to a number of test subjects. Next, we analysed the obtained facial expressions by extracting both geometric and appearance features, and we modeled them in both an unsupervised and a supervised manner. Clustering techniques differentiated between positive and negative facial expressions in 78% of the cases. A more refined analysis of the different types of emotions, using several classification methods, achieved 84% accuracy for seven emotional classes and 95% for positive vs. negative.
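A minimal Python sketch of the two analysis stages is given below; the random placeholder features, the choice of k-means for the unsupervised stage, and the SVM for the supervised stage are assumptions for illustration only (the paper's geometric and appearance feature extraction and its set of classifiers are not reproduced here).

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    # Placeholder feature matrix standing in for geometric + appearance features,
    # with seven emotion labels and a derived positive/negative label.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 64))
    y_emotion = rng.integers(0, 7, size=200)
    y_valence = (y_emotion > 3).astype(int)

    # Unsupervised stage: two clusters intended to separate positive from
    # negative expressions.
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    # Supervised stage: one example classifier over the same features.
    print(cross_val_score(SVC(kernel="rbf"), X, y_emotion, cv=5).mean())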
The unsupervised learning of feature extraction in high-dimensional patterns is a central problem for the neural network approach. Feature extraction is a procedure that maps original patterns into a feature (or factor) space of reduced dimension. In this paper we demonstrate that Hebbian learning in a Hopfield-like neural network is a natural procedure for the unsupervised learning of feature extraction. Due to this learning, the factors become attractors of the network dynamics, and hence they can be revealed by random search. The neurodynamics is analysed by the single-step approximation, which is known [8] to be rather accurate for sparsely encoded Hopfield networks; the analysis is therefore restricted to the case of sparsely encoded factors. The accuracy of the single-step approximation is confirmed by computer simulations.
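For concreteness, a small Python sketch of Hebbian learning on sparse patterns and attractor retrieval by random search follows; the covariance-style weight rule, the winners-take-all thresholding, and all sizes are assumptions and do not reproduce the paper's single-step analysis.

    import numpy as np

    rng = np.random.default_rng(1)
    N, L, M, p = 200, 8, 500, 0.05     # neurons, factors, training patterns, sparseness
    factors = (rng.random((L, N)) < p).astype(float)

    # Each training pattern is a superposition of a few sparse factors.
    patterns = np.array([np.clip(factors[rng.choice(L, 3, replace=False)].sum(0), 0, 1)
                         for _ in range(M)])

    # Hebbian (covariance-style) learning on the patterns.
    C = patterns - patterns.mean(axis=0)
    W = C.T @ C / M
    np.fill_diagonal(W, 0.0)

    def run_dynamics(state, steps=30, k=int(round(p * N))):
        # Synchronous updates with a winners-take-all threshold that keeps the
        # activity at the factor sparseness level.
        for _ in range(steps):
            h = W @ state
            new = np.zeros_like(state)
            new[np.argsort(h)[-k:]] = 1.0
            if np.array_equal(new, state):
                break
            state = new
        return state

    # Random search: start from random sparse states; the states reached are
    # candidate factors (attractors).
    attractor = run_dynamics((rng.random(N) < p).astype(float))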
We consider the problem of separating noisy overcomplete sources from linear mixtures, i.e., we observe N mixtures of M > N sparse sources. We show that the "Sparse Coding Neural Gas" (SCNG) algorithm [8,9] can be employed to estimate the mixing matrix. Based on the learned mixing matrix, the sources are obtained by orthogonal matching pursuit. Using synthetically generated data, we evaluate the influence of (i) the coherence of the mixing matrix, (ii) the noise level, and (iii) the sparseness of the sources on the performance achieved at the representation level. Our results show that if the coherence of the mixing matrix and the noise level are sufficiently small and the underlying sources are sufficiently sparse, the sources can be estimated from the observed mixtures. To apply our method to real-world data, we try to reconstruct each single instrument of a jazz audio signal given only a two-channel recording. Furthermore, we compare our method to the well-known FastICA algorithm [4] and show that, in the case of sparse sources and in the presence of additive noise, our method provides a superior estimate of the mixing matrix.
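The SCNG learning of the mixing matrix is not reproduced here; the Python sketch below only shows the orthogonal matching pursuit step that recovers a sparse source vector once a mixing matrix A is available, on an assumed toy mixture with two channels and four sources.

    import numpy as np

    def omp(A, x, k):
        # Orthogonal matching pursuit: recover a k-sparse s with x ~= A s,
        # given an N x M mixing matrix A with unit-norm columns.
        residual, support = x.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            coef, *_ = np.linalg.lstsq(A[:, support], x, rcond=None)
            residual = x - A[:, support] @ coef
        s = np.zeros(A.shape[1])
        s[support] = coef
        return s

    # Toy overcomplete mixture: N = 2 observed channels, M = 4 sparse sources.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(2, 4))
    A /= np.linalg.norm(A, axis=0)
    s_true = np.zeros(4)
    s_true[1] = 1.5
    x = A @ s_true + 0.01 * rng.normal(size=2)
    print(omp(A, x, k=1))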