Neural networks have shown good results in detecting a given pattern in an image. In this paper, faster neural networks for pattern detection are presented. These processors are designed based on cross-correlation in the frequency domain between the input matrix and the input weights of the neural networks. The approach is developed to reduce the computation steps required by these faster neural networks for the searching process. A divide-and-conquer strategy is applied through matrix decomposition: each matrix is divided into small submatrices, and each submatrix is then tested separately by a single faster neural processor. Furthermore, faster pattern detection is obtained by using parallel processing techniques to test the resulting submatrices at the same time, employing the same number of faster neural networks. In contrast to faster neural networks alone, the speed-up ratio increases with the size of the input matrix when faster neural networks are combined with matrix decomposition. Moreover, the problem of local submatrix normalization in the frequency domain is solved, and the effect of matrix normalization on the speed-up ratio of pattern detection is discussed. Simulation results show that local submatrix normalization through weight normalization is faster than submatrix normalization in the spatial domain. The overall speed-up ratio of the detection process increases further because the normalization of the weights is performed offline.
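The following Python sketch illustrates the two ideas named in this abstract, frequency-domain cross-correlation between an input matrix and a weight kernel, and divide-and-conquer testing of submatrices. It is not the authors' implementation; the block size, function names, and the use of circular (unpadded) correlation are assumptions made for brevity.

```python
import numpy as np

def cross_correlate_fft(image, weights):
    """Cross-correlate an input matrix with a weight kernel via the frequency domain."""
    # Zero-pad the weights to the image size and take 2-D FFTs.
    F_image = np.fft.fft2(image)
    F_weights = np.fft.fft2(weights, s=image.shape)
    # Multiplying by the conjugate of the weight spectrum yields (circular)
    # cross-correlation; the real part is the correlation map over all shifts.
    return np.real(np.fft.ifft2(F_image * np.conj(F_weights)))

def detect_in_submatrices(image, weights, block=64):
    """Divide-and-conquer: split the input into submatrices and correlate each one."""
    h, w = image.shape
    maps = {}
    for i in range(0, h, block):
        for j in range(0, w, block):
            sub = image[i:i + block, j:j + block]
            if sub.shape[0] >= weights.shape[0] and sub.shape[1] >= weights.shape[1]:
                maps[(i, j)] = cross_correlate_fft(sub, weights)
    return maps
```

In a parallel setting, each entry of the returned dictionary could be computed by a separate processor, which is the idea behind testing all submatrices at the same time.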
Representative dimensionality reduction is an important step in the analysis of real-world data. Vast amounts of raw data are generated by cyber-physical and information systems in different domains, and they often combine high dimensionality, large volume, and a vague, loosely defined structure. The main goal of visual data analysis is an intuitive, comprehensible, efficient, and graphically appealing representation of the information and knowledge contained in such collections. To achieve an efficient visualisation, raw data need to be transformed into a refined form suitable for machine and human analysis. Various methods of dimension reduction and projection to low-dimensional spaces are used to accomplish this task. Sammon's projection is a well-known non-linear projection algorithm valued for its ability to preserve dependencies from the original high-dimensional data space in the low-dimensional projection space. Recently, it has been shown that bio-inspired real-parameter optimization methods can be used to implement Sammon's projection on data from the domain of social networks. This work investigates the ability of several advanced types of the differential evolution algorithm, as well as their parallel variants, to minimize the error function of Sammon's projection, and compares their results and performance with a traditional heuristic algorithm.
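As a minimal sketch of the optimization problem described here, the code below minimizes Sammon's stress, the weighted mismatch between pairwise distances in the original space and in the projection space, with SciPy's standard differential_evolution rather than the advanced or parallel variants studied in the paper. The random dataset, bounds, and parameter values are placeholders chosen only for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.spatial.distance import pdist

def sammon_stress(flat_projection, original_distances, n_points, dim):
    """Sammon's error: (1/sum d*_ij) * sum (d*_ij - d_ij)^2 / d*_ij over all pairs."""
    proj = flat_projection.reshape(n_points, dim)
    d_proj = pdist(proj)                 # pairwise distances in the projection space
    d_orig = original_distances          # pairwise distances in the original space
    eps = 1e-12                          # guards against division by zero
    return np.sum((d_orig - d_proj) ** 2 / (d_orig + eps)) / (np.sum(d_orig) + eps)

# Hypothetical usage on random high-dimensional data.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 10))            # 30 points in 10 dimensions
d_orig = pdist(X)
n, dim = X.shape[0], 2
bounds = [(-5.0, 5.0)] * (n * dim)       # search box for the flattened 2-D coordinates
result = differential_evolution(sammon_stress, bounds,
                                args=(d_orig, n, dim),
                                maxiter=200, seed=0)
projection = result.x.reshape(n, dim)    # low-dimensional coordinates
```

The error function itself is what all the compared algorithms minimize; only the optimizer and its parallelization differ.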
The architecture and operation of artificial neural networks (ANNs) are inspired by the human brain. The brain, with its highly parallel nature and immense computational power, remains a motivation for researchers. A single-system, single-processor approach is an unlikely way to model a neural network with large computational needs, and many approaches have been proposed that implement ANNs in parallel. These methods, however, do not consider differences in the processing power of the constituent units, and hence workload distribution among the nodes is not optimal. The human brain does not always have equal processing power among its neurons: a person with a disability in some part of the brain may still be able to perform every task, though with reduced capability, because the disability weakens the processing of certain parts. This inspires us to build a self-adaptive ANN system that optimally distributes computation among the nodes. The self-adaptive nature of the algorithm makes it possible to track dynamic changes in node performance. We use data, node, and layer partitioning in a hierarchical manner to evolve an architecture that combines the best features of these partitioning techniques. The adaptive hierarchical architecture enables performance optimisation under whatever conditions and for whatever problems the algorithm is used. The system was implemented and tested on 20 systems working in parallel. Besides the computational speed-up, the algorithm was able to monitor changes in performance and adapt accordingly.
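The core self-adaptive idea, redistributing work in proportion to each node's measured performance, can be sketched as below. This is a simplified illustration of proportional workload partitioning only, not the hierarchical data/node/layer partitioning scheme itself; the function names and the per-epoch re-measurement are assumptions.

```python
import time

def measure_throughput(process_fn, probe_batch):
    """Estimate a node's processing rate (items/second) on a small probe batch."""
    start = time.perf_counter()
    process_fn(probe_batch)
    elapsed = time.perf_counter() - start
    return len(probe_batch) / max(elapsed, 1e-9)

def partition_workload(n_items, throughputs):
    """Split n_items among nodes in proportion to their measured throughputs."""
    total = sum(throughputs)
    shares = [int(n_items * t / total) for t in throughputs]
    shares[-1] += n_items - sum(shares)   # absorb rounding remainder
    return shares

# Hypothetical example: three nodes of unequal speed, re-balanced each epoch
# by repeating the measurement and partitioning steps.
throughputs = [120.0, 80.0, 40.0]                # items/s from probe batches
print(partition_workload(10_000, throughputs))   # [5000, 3333, 1667]
```

Re-running the measurement before every redistribution is what lets such a scheme follow dynamic changes in node performance, in the spirit of the self-adaptive behaviour reported above.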