We investigate solutions provided by the finite-context predictive model called the neural prediction machine (NPM), built on the recurrent layer of two types of recurrent neural networks (RNNs). The first type is Elman's first-order simple recurrent network (SRN), trained for next-symbol prediction with the extended Kalman filter (EKF) technique. The second type is an interesting unsupervised counterpart to the “classical” SRN, namely a recurrent version of the Bienenstock, Cooper, and Munro (BCM) network, which performs a kind of time-conditional projection pursuit. As experimental data we chose a complex symbolic sequence containing both long- and short-memory structures. We compared the solutions achieved by both types of RNN with Markov models to determine whether training can improve on the initial solutions reached by the random (untrained) network dynamics, which can be interpreted as an iterated function system (IFS). The results of our simulations indicate that the SRN trained by EKF achieves better next-symbol prediction than its unsupervised counterpart. The recurrent BCM network can provide only a Markovian solution, which is unable to capture the long-memory structures in the sequence and therefore cannot outperform the SRN.
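To make the NPM idea concrete, the sketch below shows one common way such a finite-context predictor can be built on top of recorded recurrent-layer activations: the activation vectors are quantized into a finite set of predictive contexts (here via k-means, an assumption for illustration) and next-symbol probabilities are estimated by smoothed counting within each context. This is a minimal sketch under those assumptions, not the exact procedure or code of this paper; names such as `build_npm` are hypothetical.

```python
# Minimal, hypothetical sketch of a neural prediction machine (NPM):
# predictive contexts are obtained by clustering recurrent-layer activations,
# and next-symbol probabilities are estimated by counting within each cluster.
import numpy as np
from sklearn.cluster import KMeans

def build_npm(activations, next_symbols, n_symbols, n_clusters=16, seed=0):
    """activations: (T, d) recurrent states; next_symbols: (T,) ints in [0, n_symbols)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(activations)
    counts = np.ones((n_clusters, n_symbols))            # Laplace smoothing
    for c, s in zip(km.labels_, next_symbols):
        counts[c, s] += 1
    probs = counts / counts.sum(axis=1, keepdims=True)   # P(next symbol | context)
    return km, probs

def npm_neg_log_likelihood(km, probs, activations, next_symbols):
    """Average negative log-likelihood of held-out next symbols under the NPM."""
    contexts = km.predict(activations)
    return -np.mean(np.log(probs[contexts, next_symbols]))
```

Evaluating such an NPM on the recurrent states of an untrained network (whose dynamics act as an IFS) versus a trained one gives a direct way to quantify how much, if anything, training adds over the Markovian baseline.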