Extreme learning machine (ELM) is an emerging method for training single-hidden-layer feedforward neural networks (SLFNs) that offers extremely fast training, easy implementation, and good generalization performance. This work presents effective ensemble procedures that combine ELMs by exploiting diversity. A large number of ELMs are first trained in three different scenarios: the original input feature space, a feature subset obtained by forward selection, and different random subsets of features. The best combination of ELMs is then constructed according to an exact ranking of the trained models, and the less useful networks are discarded. Experimental results on several regression problems show that robust ensemble approaches that exploit diversity can effectively improve performance compared with the standard ELM algorithm and other recent ELM extensions.
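To make the procedure concrete, below is a minimal sketch of the random-feature-subset scenario: each candidate ELM is trained with random hidden weights and a least-squares (Moore-Penrose pseudoinverse) output layer, the candidates are ranked by validation error, and only the top-ranked models are kept for a simple-average ensemble. The tanh activation, MSE ranking criterion, averaging combiner, and function names such as `train_elm` are illustrative assumptions, not details confirmed by the paper.

```python
import numpy as np

def train_elm(X, y, n_hidden, rng):
    """Train one ELM: random hidden layer, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (not tuned)
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations (assumed tanh)
    beta = np.linalg.pinv(H) @ y                 # output weights via Moore-Penrose pseudoinverse
    return W, b, beta

def predict_elm(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

def build_ensemble(X_tr, y_tr, X_val, y_val,
                   n_models=100, n_keep=10, n_hidden=50, seed=0):
    """Train many ELMs on random feature subsets, rank them by validation
    MSE (an assumed ranking criterion), and keep only the best few."""
    rng = np.random.default_rng(seed)
    n_features = X_tr.shape[1]
    candidates = []
    for _ in range(n_models):
        # Random feature subset: one of the three diversity scenarios.
        k = rng.integers(1, n_features + 1)
        feats = rng.choice(n_features, size=k, replace=False)
        model = train_elm(X_tr[:, feats], y_tr, n_hidden, rng)
        mse = np.mean((predict_elm(model, X_val[:, feats]) - y_val) ** 2)
        candidates.append((mse, feats, model))
    candidates.sort(key=lambda c: c[0])          # rank models by validation error
    return candidates[:n_keep]                   # discard the less useful networks

def predict_ensemble(ensemble, X):
    """Combine the retained ELMs by simple averaging (assumed combiner)."""
    preds = [predict_elm(model, X[:, feats]) for _, feats, model in ensemble]
    return np.mean(preds, axis=0)
```

Ranking on a held-out validation set rather than the training data is what lets the procedure detect and discard networks that add no diversity or accuracy to the ensemble.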