A new method to detect damage to crates of beverages is investigated. It is based on pattern recognition by an artificial neural network (ANN) with a feedforward multilayer-perceptron topology. The sorting criterion is obtained by mechanical vibration analysis, which provides characteristic frequency spectra for all possible damage cases and crate models. To support the network training, a large number of numerical data sets are calculated by the finite-element method (FEM). The combination of artificial neural networks with methods of numerical simulation is a powerful instrument to cover the broad range of possible damages. First results are discussed with respect to the influence of modelling inaccuracies of the finite-element model and the support of the ANN by training data obtained from numerical simulation. The feasibility of neuro-numerical ANN training is also considered.
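The workflow above, simulated spectra feeding a neural classifier, can be sketched in miniature. This is only an illustration: `toy_spectrum` is a crude stand-in for an FEM-computed vibration spectrum (a resonance peak that shifts when the crate is damaged), and a single logistic neuron replaces the paper's multilayer perceptron:

```python
import math
import random

random.seed(0)

def toy_spectrum(damaged, n_bins=16):
    # Stand-in for an FEM-computed vibration spectrum: a resonance
    # peak that shifts to a lower frequency bin when the crate is damaged.
    peak = 4 if damaged else 10
    return [math.exp(-0.5 * (k - peak) ** 2) + random.gauss(0, 0.02)
            for k in range(n_bins)]

def train_neuron(data, labels, lr=0.5, epochs=200):
    # Log-loss gradient descent for a single logistic neuron.
    w, b = [0.0] * len(data[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            z = max(-30.0, min(30.0, z))        # avoid overflow in exp
            g = 1.0 / (1.0 + math.exp(-z)) - y  # dLoss/dz
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# "FEM" training set: 20 intact (label 0) and 20 damaged (label 1) spectra
X = ([toy_spectrum(False) for _ in range(20)]
     + [toy_spectrum(True) for _ in range(20)])
y = [0] * 20 + [1] * 20
w, b = train_neuron(X, y)
accuracy = sum(predict(w, b, x) == t for x, t in zip(X, y)) / len(y)
```

Because the synthetic spectra are cleanly separable, the neuron learns the peak-shift criterion quickly; the real system replaces both the toy spectra and the single neuron with FEM output and a trained MLP.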
Combining classifiers into so-called Multiple Classifier Systems (MCSs) has gained a lot of interest in recent years. Researchers have developed a large variety of methods to exploit the strengths of individual classifiers. In this paper, we address the problem of how to implement a multi-class classifier as an ensemble of one-class classifiers. To improve the performance of a compound classifier, different individual classifiers (which may differ, e.g., in complexity, type, or training algorithm) can be combined, which can increase both its performance and robustness. Since a one-class classifier can only recognize a single class, it is quite difficult to build MCSs on the basis of one-class classifiers. We therefore introduce a new scheme for decision-making in MCSs based on a fuzzy inference system. Specifically, we address two important open problems in this context: model selection and combiner training. The classifiers' outputs, interpreted as supports for the given classes, are combined by means of a fuzzy engine. We are therefore interested in individual classifiers that can return supports for the given classes; there are no other restrictions on the classifiers used. The proposed model has been evaluated by computer experiments on several benchmark datasets in the Matlab environment. The results show that a fuzzy combination of binary classifiers can be a valuable classifier in its own right. Additionally, we indicate some application areas of the models and new research frontiers to be examined.
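The idea of combining per-class supports can be sketched as follows. This is not the paper's fuzzy inference system: the sketch aggregates supports with a simple algebraic-product t-norm (a common fuzzy conjunction), and the classifier outputs are made-up numbers:

```python
from functools import reduce

def combine_supports(supports, tnorm=lambda a, b: a * b):
    # Aggregate per-class supports from several classifiers with a
    # t-norm (algebraic product by default); return the winning class.
    n_classes = len(supports[0])
    fused = [reduce(tnorm, (s[c] for s in supports))
             for c in range(n_classes)]
    return max(range(n_classes), key=fused.__getitem__), fused

# Three classifiers, each returning supports for three classes
supports = [[0.7, 0.2, 0.1],
            [0.6, 0.3, 0.1],
            [0.4, 0.5, 0.1]]
label, fused = combine_supports(supports)
# the product t-norm favors class 0, which most classifiers support strongly
```

A full fuzzy engine would replace the fixed t-norm with trainable membership functions and inference rules, which is the combiner-training problem the paper addresses.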
In this paper, a deep learning-based method for earthquake prediction is proposed. Large-magnitude earthquakes, and the tsunamis they trigger, can kill thousands of people and cause millions of dollars of economic losses. The accurate prediction of large-magnitude earthquakes is a worldwide problem. In recent years, deep learning technology, which can automatically extract features from massive data, has been applied with great success to image recognition, natural language processing, object recognition, etc. We explore the application of deep learning technology to earthquake prediction and propose a deep learning method for continuous earthquake prediction using historical seismic events. First, we project the historical seismic events onto a topographic map. Taking Taiwan as an example, we generate the images of the dataset for deep learning and label each image "1" or "0", depending on whether an earthquake greater than M6 will occur in the upcoming 30 days. Second, we train our deep learning network model on the images of the dataset. Finally, we make earthquake predictions using the trained network model. The results show that we obtain the best performance when predicting earthquakes in the upcoming 30 days from the data of the past 120 days. We use the R score as the performance metric; the best R score is 0.303. Although this R score is not high in absolute terms, it is rather good compared with other datasets, so using the past 120 days' historical seismic events to predict the largest earthquake magnitude in the upcoming 30 days can be seen as capturing a pattern of Taiwan earthquakes. The proposed method performs well without manually designed feature vectors, as required in traditional neural network methods, and can be applied to earthquake prediction in other seismic zones.
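The windowing-and-labeling step described above can be sketched independently of the map projection and the network. The function name `make_samples` and the event list are illustrative; the paper renders each history window as a topographic-map image rather than keeping the raw event list:

```python
def make_samples(events, t_end, past=120, horizon=30, mag_thresh=6.0):
    # events: list of (day, magnitude) pairs.
    # For each window end t, pair the events of the past `past` days with
    # a label: 1 if an event of magnitude >= mag_thresh occurs in the
    # following `horizon` days, else 0.
    samples = []
    for t in range(past, t_end - horizon + 1):
        history = [(d, m) for d, m in events if t - past < d <= t]
        label = int(any(m >= mag_thresh for d, m in events
                        if t < d <= t + horizon))
        samples.append((history, label))
    return samples

events = [(10, 4.5), (50, 6.2), (130, 5.0), (200, 6.8)]
samples = make_samples(events, t_end=240)
```

Sliding the window by one day gives the "continuous" prediction stream; the 120/30-day split is the setting the paper found to work best.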
Artificial neural networks have gained increasing popularity as an alternative to statistical methods for the classification of remotely sensed images. Their advantage is that, when trained with representative training samples, they improve on statistical methods in terms of overall accuracy. However, when the distribution functions of the information classes are known, statistical classification algorithms work very well. To retain the advantages of both kinds of classifier, decision fusion is used to integrate the decisions of the individual classifiers. In this paper a new unsupervised neural network is proposed for the classification of multispectral images. Classification is first performed with Maximum Likelihood and Minimum-Distance-to-Means classifiers, followed by a neural network classifier, and the decisions of these classifiers are fused in a decision fusion center implemented with the majority-voting technique. The results show that the scheme is effective, with increased classification accuracy (98%) compared to the conventional methods.
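The majority-voting fusion center can be sketched per pixel. The tie-breaking rule (keep the first classifier's decision) is an assumption of this sketch, not stated in the abstract:

```python
from collections import Counter

def majority_vote(decisions):
    # Fuse the class decisions of several classifiers for one pixel;
    # on a tie, keep the first classifier's decision.
    counts = Counter(decisions)
    top_label, top_count = counts.most_common(1)[0]
    if sum(1 for c in counts.values() if c == top_count) > 1:
        return decisions[0]
    return top_label

# e.g. maximum-likelihood, minimum-distance and ANN decisions for one pixel
fused = majority_vote(["water", "urban", "water"])
```

Applying `majority_vote` over every pixel yields the fused classification map; with three voters, a tie can only occur when all three disagree.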
We summarize the main results on probabilistic neural networks recently published in a series of papers. Within the framework of statistical pattern recognition, we assume approximation of class-conditional distributions by finite mixtures of product components. The probabilistic neurons correspond to mixture components and can be interpreted in neurophysiological terms. In this way we can find a possible theoretical background for the functional properties of neurons. For example, the general formula for synaptic weights provides a statistical justification of the well-known Hebbian principle of learning. Similarly, the mean effect of lateral inhibition can be expressed by means of a formula proposed by Perez as a measure of the dependence tightness of the involved variables.
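The correspondence between neurons and mixture components can be illustrated with a two-component mixture of product-form Gaussians; each "neuron" outputs the posterior probability of its component. The unit variance and the specific means are assumptions of this sketch:

```python
import math

def component_density(x, means, var=1.0):
    # A product component: a product of one-dimensional Gaussians.
    return math.prod(
        math.exp(-(xi - mi) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
        for xi, mi in zip(x, means))

def neuron_outputs(x, weights, all_means):
    # Posterior component probabilities: the responses of the
    # probabilistic neurons to input x.
    joint = [w * component_density(x, m) for w, m in zip(weights, all_means)]
    total = sum(joint)
    return [j / total for j in joint]

out = neuron_outputs([0.1, -0.2], weights=[0.5, 0.5],
                     all_means=[[0.0, 0.0], [3.0, 3.0]])
# the outputs sum to 1 and the component nearest the input dominates
```

The normalization by the mixture total is where lateral inhibition enters: each neuron's response is suppressed by the responses of the others.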
Random Neural Networks (RNNs) are a class of Neural Networks (NNs) that can also be seen as a specific type of queueing network. They have been successfully used in several domains during the last 25 years: as queueing networks, to analyze the performance of resource sharing in many engineering areas; as learning tools and in combinatorial optimization, where they are seen as neural systems; and as models of neurological aspects of living beings. In this article we focus on their learning capabilities and, more specifically, present a practical guide for using the RNN to solve supervised learning problems. We give a general description of these models, using the terminology of queueing theory and the neural one almost interchangeably. We present the standard learning procedures used by RNNs, adapted from similar well-established techniques in the standard NN field. We describe in particular a set of learning algorithms covering techniques based on the use of first-order and then second-order derivatives. We also discuss some issues related to these models and present new perspectives on their use in supervised learning problems. The tutorial describes their most relevant applications and also provides an extensive bibliography.
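The queueing-theoretic flavor of the RNN shows in its activation: each neuron's steady-state firing probability is the ratio of its excitatory arrival rate to its service-plus-inhibition rate. A minimal fixed-point sketch, with illustrative two-neuron rates (not a tuned network from the tutorial):

```python
def rnn_activations(Lambda, lam, r, w_plus, w_minus, iters=100):
    # Fixed-point iteration for the steady-state firing probabilities
    #   q_i = lambda_i^+ / (r_i + lambda_i^-)
    # of a random neural network. w_plus[j][i] / w_minus[j][i] are the
    # excitatory / inhibitory rates from neuron j to neuron i;
    # Lambda / lam are external excitatory / inhibitory arrival rates.
    n = len(r)
    q = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            lp = Lambda[i] + sum(q[j] * w_plus[j][i] for j in range(n))
            lm = lam[i] + sum(q[j] * w_minus[j][i] for j in range(n))
            q[i] = min(1.0, lp / (r[i] + lm))
    return q

# Two neurons: external excitation into neuron 0, which excites neuron 1
q = rnn_activations(Lambda=[1.0, 0.0], lam=[0.0, 0.0], r=[2.0, 2.0],
                    w_plus=[[0.0, 1.0], [0.0, 0.0]],
                    w_minus=[[0.0, 0.0], [0.0, 0.0]])
```

The learning algorithms surveyed in the tutorial adjust `w_plus` and `w_minus` by first- or second-order gradient methods so that the fixed-point activations `q` match the training targets.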
Speaker identification is becoming an increasingly popular technology in today's society. Besides being cost-effective and producing a strong return on investment in all the defined business cases, speaker identification lends itself well to a variety of uses and implementations. These implementations can range from corridor security to safer driving to increased productivity. By focusing on the technology and companies that drive today's voice recognition and identification systems, we can survey current implementations and predict future trends.
In this paper, the one-dimensional discrete cosine transform (DCT) is used as a feature extractor to reduce signal information redundancy and to transfer the sampled human speech signal from the time domain to the frequency domain. Only a subset of these coefficients, those with large magnitude, is selected. These coefficients preserve the most important information of the speech signal, sufficient to recognize the original signal, and are then normalized globally. The normalized coefficients are fed to a multilayer momentum backpropagation neural network for classification. A very high recognition rate can be achieved using a very small number of coefficients, which are enough to reflect the characteristics of the speaker's voice.
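The feature-extraction step can be sketched with a direct (unoptimized) DCT-II. The toy single-cosine "frame", the number of kept coefficients, and the peak-normalization are illustrative choices, not the paper's exact settings:

```python
import math

def dct(signal):
    # One-dimensional DCT-II (unnormalized).
    N = len(signal)
    return [sum(x * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n, x in enumerate(signal))
            for k in range(N)]

def dct_features(signal, keep=4):
    # Keep the `keep` largest-magnitude coefficients (with their positions)
    # and normalize them by the global maximum magnitude.
    coeffs = dct(signal)
    idx = sorted(range(len(coeffs)), key=lambda k: -abs(coeffs[k]))[:keep]
    peak = max(abs(coeffs[k]) for k in idx)
    return [(k, coeffs[k] / peak) for k in sorted(idx)]

# Toy "speech frame": a low-frequency cosine, whose energy the DCT
# compacts into a few coefficients
frame = [math.cos(2 * math.pi * n / 32) for n in range(32)]
features = dct_features(frame, keep=4)
```

The energy-compaction property of the DCT is what makes a small normalized subset of coefficients a workable input vector for the backpropagation network.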
An artificial neural network (ANN) is trained to classify the voices of eight speakers, with five voice samples per speaker used in the learning phase. The network is tested using five other samples from the same speakers. During the learning phase, several parameters are varied: the number of selected coefficients, the number of hidden nodes, and the value of the momentum parameter. In the testing phase, the identification performance is computed for each value of these parameters.
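The parameter sweep over coefficients, hidden nodes, and momentum amounts to a grid search. A minimal sketch, where `fake_eval` is a stand-in for training and testing the network at one setting (its peak at 20 coefficients, 12 hidden nodes, and momentum 0.9 is purely illustrative):

```python
from itertools import product

def grid_search(evaluate, coeff_counts, hidden_sizes, momenta):
    # Evaluate every (coefficients, hidden nodes, momentum) combination
    # and return the best score together with its setting.
    best = None
    for n_coef, n_hidden, mom in product(coeff_counts, hidden_sizes, momenta):
        score = evaluate(n_coef, n_hidden, mom)
        if best is None or score > best[0]:
            best = (score, n_coef, n_hidden, mom)
    return best

# Stand-in evaluation with an assumed peak (purely illustrative numbers);
# in the real experiment this would train the ANN and return test accuracy
def fake_eval(n_coef, n_hidden, mom):
    return -abs(n_coef - 20) - abs(n_hidden - 12) - 10 * abs(mom - 0.9)

best = grid_search(fake_eval, [10, 20, 30], [8, 12, 16], [0.5, 0.7, 0.9])
```

Replacing `fake_eval` with a train-on-five / test-on-five routine per speaker reproduces the experimental protocol described above.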