The popularity of feed-forward neural networks such as multilayer perceptrons and radial basis function networks is to a large extent due to their universal approximation capability. This paper reviews the theoretical principles underlying this capability, together with the influence of the network architecture and of the distribution of the training data on it. It then explains how this influence can be exploited to improve the approximation capability of multilayer perceptrons by means of cross-validation and boosting. Although the impact of both methods on the approximation capability of feed-forward networks is known in theory, they are still not common in real-world applications. The paper therefore documents the usefulness of both methods in a detailed case study from materials science.
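As a rough illustration of how cross-validation and boosting can be combined with a multilayer perceptron in practice, the sketch below uses scikit-learn on synthetic regression data; it is not the paper's actual experimental setup, the data, hidden-layer sizes, and the `estimator` keyword (scikit-learn ≥ 1.2) are assumptions for illustration only.

```python
# Hypothetical sketch: cross-validated architecture selection for an MLP,
# followed by a boosted MLP ensemble (illustrative only; not the paper's setup).
from sklearn.datasets import make_regression
from sklearn.ensemble import AdaBoostRegressor
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.neural_network import MLPRegressor

# Synthetic data standing in for real measurements.
X, y = make_regression(n_samples=500, n_features=8, noise=0.1, random_state=0)

# Cross-validation: choose the hidden-layer size by 5-fold cross-validated error.
search = GridSearchCV(
    MLPRegressor(max_iter=2000, random_state=0),
    param_grid={"hidden_layer_sizes": [(10,), (20,), (40,)]},
    cv=5,
    scoring="neg_mean_squared_error",
)
search.fit(X, y)
print("selected architecture:", search.best_params_)

# Boosting: combine several small MLPs into an additive ensemble
# (AdaBoost.R2 as implemented in scikit-learn).
boosted = AdaBoostRegressor(
    estimator=MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
    n_estimators=10,
    random_state=0,
)
scores = cross_val_score(boosted, X, y, cv=5, scoring="neg_mean_squared_error")
print("boosted MLP cross-validated MSE:", -scores.mean())
```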