In this paper, a multi-layer perceptron (MLP) neural network (NN) is put forward as an efficient tool for two tasks: 1) optimization of multi-objective problems and 2) solving non-linear systems of equations. Both tasks involve continuous, partially bounded mathematical functions. These tasks have previously been tackled with recurrent neural networks as well as powerful algorithms such as evolutionary methods. In this study, the multi-dimensional structure of the MLP-NN output layer is exploited, as a novel approach, to implicitly optimize the multivariate functions through the network's energy-minimization mechanism. To this end, the activation functions of the output layer are replaced with the multivariate functions to be optimized. The training parameters that govern the global search are examined, and it is demonstrated that an MLP-NN with a properly chosen dynamic learning rate is able to find globally optimal solutions. Finally, the efficiency of the MLP-NN, in terms of both speed and effectiveness, is evaluated on several well-known benchmark examples. On some of these examples, the proposed method finds noticeably better global solutions than those reported in other references, while producing fully satisfactory results in the remaining experiments.
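
As a rough illustration of the mechanism described above, the following sketch lets an objective f(x) play the role of the output-layer activation of a tiny MLP and minimizes the resulting network energy E = f(x) by backpropagation with a decaying (dynamic) learning rate. The network architecture, the shifted sphere objective, and all hyperparameters are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Illustrative sketch only (assumed details, not the authors' exact formulation):
# a small MLP maps a fixed input to a candidate solution x, the objective f
# takes the role of the output-layer "activation", and the network energy
# E = f(x) is minimized by backpropagation with a decaying learning rate.

def f(x):
    # Example objective to minimize (placeholder, not from the paper):
    # a shifted sphere function with global minimum at (2, -1).
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2

def grad_f(x, eps=1e-6):
    # Numerical gradient (central differences) so any smooth objective can be plugged in.
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2.0 * eps)
    return g

rng = np.random.default_rng(0)
n_hidden, n_out = 8, 2
z = np.ones(1)                                   # fixed network input
W1 = rng.normal(scale=0.5, size=(n_hidden, 1))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_out, n_hidden))
b2 = np.zeros(n_out)

for t in range(5000):
    lr = 0.01 / (1.0 + 1e-3 * t)                 # dynamic (decaying) learning rate
    h = np.tanh(W1 @ z + b1)                     # hidden layer
    x = W2 @ h + b2                              # output layer = candidate solution
    gx = grad_f(x)                               # dE/dx with network energy E = f(x)
    dW2 = np.outer(gx, h)                        # backpropagate dE/dx to the weights
    db2 = gx
    dh = (W2.T @ gx) * (1.0 - h ** 2)
    dW1 = np.outer(dh, z)
    db1 = dh
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("candidate minimizer:", x, "objective value:", f(x))
```

With a convex objective such as the one above, this single run drives x toward the global minimum; for the multimodal benchmarks discussed later in the paper, the choice of dynamic learning rate and training parameters is what enables the search to escape poor local solutions.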