In this work we report on the implementation of methods for processing signals from microelectrode arrays (MEAs) and their application to signals originating from two types of MEAs in order to detect putative neurons and sort them into subpopulations. We recorded electrical signals from firing neurons using titanium nitride (TiN) and boron-doped diamond (BDD) MEAs. In previous research we showed that these methods can detect neurons using commercially available TiN-MEAs. For the first time, we cultivated and recorded hippocampal neurons using a newly developed, custom-made multichannel BDD-MEA with 20 recording sites. We analysed the signals with the developed algorithms and used them to inspect firing bursts and to enable spike sorting. We did not observe any significant difference between BDD- and TiN-MEAs in the parameters that estimate spike-shape variability for each detected neuron. This result supports the hypothesis that we have detected real neurons, rather than noise, in the BDD-MEA signal. BDD materials with suitable mechanical, electrical and biocompatibility properties have large potential in novel therapies for neural pathologies, such as deep brain stimulation in Parkinson’s disease.
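The abstract does not specify the detection pipeline; as a minimal sketch of the general idea only, the Python fragment below detects putative spikes by threshold crossing on a single MEA channel and computes a simple spike-shape variability measure. The sampling rate, threshold multiplier and window length are illustrative assumptions, not values taken from the study.

```python
import numpy as np

def detect_spikes(signal, fs, thresh_mult=5.0, window_ms=2.0):
    """Detect putative spikes by threshold crossing on one MEA channel.

    The noise level is estimated from the median absolute deviation, a
    common choice for extracellular recordings; the multiplier and window
    length here are illustrative, not taken from the paper.
    """
    noise = np.median(np.abs(signal)) / 0.6745        # robust noise estimate
    threshold = -thresh_mult * noise                  # negative-going spikes
    half = int(window_ms * 1e-3 * fs / 2)

    crossings = np.where(signal < threshold)[0]
    spikes, last = [], -np.inf
    for idx in crossings:
        if idx - last > half and half <= idx < len(signal) - half:
            spikes.append(signal[idx - half:idx + half])  # extract waveform
            last = idx
    return np.array(spikes)

def shape_variability(waveforms):
    """Mean per-sample standard deviation of the aligned spike waveforms,
    a simple proxy for the spike-shape variability compared across MEAs."""
    return float(np.mean(np.std(waveforms, axis=0)))

# Toy usage on synthetic data: 1 s of noise at 30 kHz with one injected spike.
rng = np.random.default_rng(0)
trace = rng.normal(0, 1, 30000)
trace[5000:5003] -= 8
spikes = detect_spikes(trace, fs=30000)
print(spikes.shape, shape_variability(spikes) if len(spikes) else None)
```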
The purpose of feature selection in machine learning is at least two-fold: saving measurement acquisition costs and reducing the negative effects of the curse of dimensionality, with the aim of improving the accuracy of models and the classification rate of classifiers on previously unknown data. Yet it has recently been shown that the process of feature selection itself can be negatively affected by the very same curse of dimensionality: feature selection methods may easily over-fit or perform unstably. Such an outcome is unlikely to generalize well, and the resulting recognition system may fail to deliver the expected performance. In many tasks it is therefore crucial to employ additional mechanisms that make the feature selection process more stable and more resistant to the effects of the curse of dimensionality. In this paper we discuss three different approaches to reducing this problem. We present an algorithmic extension applicable to various feature selection methods, capable of reducing excessive feature-subset dependency not only on specific training data but also on specific criterion function properties. Further, we discuss the concept of criteria ensembles, where various criteria vote on feature inclusion or removal, and we provide a general definition of feature selection hybridization aimed at combining the advantages of dependent and independent criteria. The presented ideas are illustrated through examples and summarizing recommendations are given.
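A minimal sketch of the criteria-ensemble idea mentioned above, assuming a sequential forward search in which several wrapper criteria vote on which feature to add next; the classifiers, the voting rule and the dataset are illustrative choices, not the authors' exact scheme.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def forward_selection_ensemble(X, y, n_features, criteria):
    """Sequential forward selection where each criterion votes for the
    candidate feature it ranks best; the feature with most votes is added.
    This only illustrates the criteria-ensemble concept, not the paper's
    specific voting scheme."""
    selected = []
    remaining = list(range(X.shape[1]))
    while len(selected) < n_features and remaining:
        votes = {f: 0 for f in remaining}
        for crit in criteria:
            scores = {f: crit(X[:, selected + [f]], y) for f in remaining}
            votes[max(scores, key=scores.get)] += 1
        best = max(votes, key=votes.get)
        selected.append(best)
        remaining.remove(best)
    return selected

# Three wrapper criteria: cross-validated accuracy of different classifiers.
def make_criterion(clf):
    return lambda Xs, y: cross_val_score(clf, Xs, y, cv=3).mean()

X, y = make_classification(n_samples=200, n_features=15, n_informative=4,
                           random_state=0)
criteria = [make_criterion(GaussianNB()),
            make_criterion(KNeighborsClassifier(3)),
            make_criterion(DecisionTreeClassifier(random_state=0))]
print(forward_selection_ensemble(X, y, n_features=4, criteria=criteria))
```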
Artificial Neural Networks have gained increasing popularity as an alternative to statistical methods for the classification of remotely sensed images. The advantage of neural networks is that, if they are trained with representative training samples, they improve over statistical methods in terms of overall accuracy. However, if the distribution functions of the information classes are known, statistical classification algorithms work very well. To retain the advantages of both types of classifier, decision fusion is used to integrate their individual decisions. In this paper a new unsupervised neural network is proposed for the classification of multispectral images. Classification is initially achieved using Maximum Likelihood and Minimum-Distance-to-Means classifiers, followed by the neural network classifier, and the decisions of these classifiers are fused in a decision fusion center implemented with the Majority-Voting technique. The results show that the scheme is effective in terms of increased classification accuracy (98%) compared to the conventional methods.
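A minimal sketch of the described fusion scheme, assuming scikit-learn stand-ins: quadratic discriminant analysis for the Gaussian maximum-likelihood classifier, nearest centroid for minimum distance to means, a small MLP for the neural network, and per-sample majority voting with ties resolved in favour of the neural network. The digits dataset merely substitutes for multispectral pixels.

```python
import numpy as np
from sklearn.datasets import load_digits  # stand-in for multispectral pixels
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestCentroid
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

classifiers = [
    QuadraticDiscriminantAnalysis(reg_param=0.1),   # Gaussian maximum likelihood
    NearestCentroid(),                              # minimum distance to means
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
]
preds = np.array([clf.fit(X_tr, y_tr).predict(X_te) for clf in classifiers])

# Decision fusion center: per-sample majority vote; ties fall back to the MLP.
fused = []
for votes in preds.T:
    labels, counts = np.unique(votes, return_counts=True)
    fused.append(labels[np.argmax(counts)] if counts.max() > 1 else votes[-1])
fused = np.array(fused)

for name, p in zip(["ML", "MinDist", "MLP", "Fused"], list(preds) + [fused]):
    print(name, round(accuracy_score(y_te, p), 3))
```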
Feature reduction is an important issue in pattern recognition. Lower feature dimensionality can reduce the complexity and enhance the generalization ability of classifiers. In this paper we propose a new supervised dimensionality reduction method based on Locally Linear Embedding and Distance Metric Learning. First, in order to increase the interclass separability, a linear discriminant transformation learnt through distance metric learning is used to map the original data points to a new space. Then Locally Linear Embedding is applied to reduce the dimensionality of the data points. This process extends the traditional unsupervised Locally Linear Embedding to the supervised scenario in a clear and natural way. In addition, it can be seen as a general framework for developing new supervised dimensionality reduction algorithms from corresponding unsupervised methods. Extensive classification experiments on several real-world and artificial datasets show that the proposed method achieves results comparable to, or even better than, other state-of-the-art dimensionality reduction methods.
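A minimal sketch of the two-stage idea, assuming scikit-learn components: a plain linear discriminant projection stands in for the transformation learnt by distance metric learning, after which unsupervised Locally Linear Embedding reduces the transformed points and a k-NN classifier evaluates the embedding. The dataset and all hyperparameters are illustrative.

```python
from sklearn.datasets import load_digits
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipeline = make_pipeline(
    StandardScaler(),
    # Supervised linear map increasing interclass separability (LDA stands in
    # for the distance-metric-learning step described in the paper).
    LinearDiscriminantAnalysis(n_components=9),
    # Unsupervised LLE applied to the transformed points.
    LocallyLinearEmbedding(n_components=3, n_neighbors=15),
    KNeighborsClassifier(n_neighbors=5),
)
pipeline.fit(X_tr, y_tr)
print("test accuracy:", round(pipeline.score(X_te, y_te), 3))
```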
The paper presents a new methodology for finding and estimating the main features of a time series in order to reduce the number of its components (dimensionality reduction) and thereby compress the information contained in it while keeping the selected features invariant. The presented compression algorithm is based on estimating the truncated time-series components in such a way that the spectral functions of the original and the truncated time series are sufficiently close. Finally, a set of examples is presented to demonstrate the performance of the algorithm and to indicate applications of the presented methodology.
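A minimal sketch of the spectral-closeness idea, assuming a plain FFT representation: the series is compressed by keeping only the largest-magnitude Fourier coefficients needed to retain a chosen fraction of spectral energy, so that the truncated spectrum stays close to the original. The paper's actual estimator of the truncated components is not reproduced here.

```python
import numpy as np

def compress_by_spectrum(x, energy_kept=0.99):
    """Zero all but the largest-magnitude Fourier coefficients needed to
    retain a given fraction of spectral energy, keeping the truncated
    spectrum close to the original one.  Returns the reconstructed series
    and the number of kept coefficients."""
    X = np.fft.rfft(x)
    power = np.abs(X) ** 2
    order = np.argsort(power)[::-1]                   # strongest bins first
    cumulative = np.cumsum(power[order]) / power.sum()
    n_keep = int(np.searchsorted(cumulative, energy_kept) + 1)
    truncated = np.zeros_like(X)
    truncated[order[:n_keep]] = X[order[:n_keep]]
    return np.fft.irfft(truncated, n=len(x)), n_keep

# Toy usage: two sinusoids plus noise compress to a handful of coefficients.
t = np.linspace(0, 1, 1024, endpoint=False)
x = np.sin(2 * np.pi * 7 * t) + 0.5 * np.sin(2 * np.pi * 31 * t)
x += 0.05 * np.random.default_rng(1).normal(size=t.size)
x_hat, n_keep = compress_by_spectrum(x)
print(n_keep, float(np.max(np.abs(x - x_hat))))
```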