We summarize the main results on probabilistic neural networks recently published in a series of papers. Within the framework of statistical pattern recognition, we approximate the class-conditional distributions by finite mixtures of product components. The probabilistic neurons correspond to mixture components and can be interpreted in neurophysiological terms. In this way we can find a possible theoretical background for the functional properties of neurons. For example, the general formula for synaptic weights provides a statistical justification of the well-known Hebbian principle of learning. Similarly, the mean effect of lateral inhibition can be expressed by means of a formula proposed by Perez as a measure of the dependence tightness of the variables involved.
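To fix ideas, the class-conditional model referred to above can be written, in generic notation of our own choosing (the symbols $\omega$, $w_m$, $\mathcal{M}_\omega$ and $\theta_{mn}$ are not taken from the original text), as a finite mixture of product components,
\[
P(x \mid \omega) \;\approx\; \sum_{m \in \mathcal{M}_\omega} w_m \prod_{n=1}^{N} f_n(x_n \mid \theta_{mn}),
\qquad \sum_{m \in \mathcal{M}_\omega} w_m = 1,
\]
with the Bayesian decision obtained by maximizing $p(\omega)\,P(x \mid \omega)$ over the classes $\omega$. Each product component plays the role of one probabilistic neuron.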
Considering the statistical recognition of multidimensional binary observations, we approximate the unknown class-conditional probability distributions by multivariate Bernoulli mixtures. We show that both the parameter optimization and the resulting Bayesian decision-making can be realized by a probabilistic neural network having strictly modular properties. In particular, the process of learning based on the EM algorithm can be performed by means of a sequential autonomous adaptation of neurons involving only the information from the input synapses and the interior of the neurons. In this sense the probabilistic neural network can be designed automatically. The properties of the sequential strictly modular learning procedure are illustrated by numerical examples.
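As an illustration of the estimation problem described above, the following is a minimal sketch of batch EM for a multivariate Bernoulli mixture fitted to binary data; the function name, initialization, and stopping rule are our own choices, and the sketch deliberately omits the sequential, strictly modular adaptation scheme discussed in the papers.

```python
import numpy as np

def em_bernoulli_mixture(X, n_components, n_iter=50, seed=0, eps=1e-9):
    """Batch EM for a multivariate Bernoulli mixture on binary data X of shape (N, D)."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    w = np.full(n_components, 1.0 / n_components)          # mixture weights w_m
    p = rng.uniform(0.25, 0.75, size=(n_components, D))    # component parameters p_{mn}

    for _ in range(n_iter):
        # E-step: responsibilities q(m | x) for every observation (log domain for stability)
        log_comp = X @ np.log(p + eps).T + (1 - X) @ np.log(1 - p + eps).T
        log_joint = log_comp + np.log(w + eps)
        log_joint -= log_joint.max(axis=1, keepdims=True)
        q = np.exp(log_joint)
        q /= q.sum(axis=1, keepdims=True)

        # M-step: re-estimate mixture weights and Bernoulli parameters
        Nm = q.sum(axis=0)
        w = Nm / N
        p = (q.T @ X) / (Nm[:, None] + eps)
    return w, p
```

In this setting, one such mixture would be estimated per class, and a new binary observation would be assigned to the class maximizing the product of the class prior and the fitted mixture likelihood.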