In this paper, we propose a novel hybrid metaheuristic algorithm that integrates a Threshold Accepting (TA) algorithm with a traditional Particle Swarm Optimization (PSO) algorithm. The TA acts as a catalyst that accelerates the convergence of PSO towards the optimal solution. In this hybrid, at the end of every PSO iteration the TA is invoked probabilistically to refine the worst particle of that iteration, i.e., the particle lagging furthest behind in the search, so that it enters the next iteration from an improved position. The robustness of the proposed approach has been tested on 34 unconstrained optimization problems taken from the literature. The proposed hybrid demonstrates superior performance in terms of function evaluations and success rate over 30 independent simulation runs.
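As a rough illustration of the hybrid scheme described above, the sketch below couples a standard PSO loop with a short TA refinement of the worst particle at the end of each iteration. All names and parameter values (swarm size, inertia and acceleration coefficients, TA invocation probability, threshold schedule, step size) are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch of a PSO + Threshold Accepting (TA) hybrid; parameters are illustrative.
import numpy as np

def threshold_accepting(f, x, threshold=1.0, decay=0.9, steps=20, step_size=0.1, rng=None):
    """Local TA refinement: accept a random move unless it worsens f by more than the threshold."""
    rng = np.random.default_rng() if rng is None else rng
    fx = f(x)
    for _ in range(steps):
        cand = x + rng.uniform(-step_size, step_size, size=x.shape)
        f_cand = f(cand)
        if f_cand - fx < threshold:          # TA acceptance rule
            x, fx = cand, f_cand
        threshold *= decay                    # gradually tighten the threshold
    return x, fx

def hybrid_pso_ta(f, dim, bounds=(-5.0, 5.0), swarm=30, iters=200,
                  w=0.7, c1=1.5, c2=1.5, ta_prob=0.3, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (swarm, dim))                 # positions
    v = np.zeros((swarm, dim))                            # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()                # global best
    for _ in range(iters):
        r1, r2 = rng.random((swarm, dim)), rng.random((swarm, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
        # Probabilistic TA refinement of this iteration's worst particle,
        # so it starts the next iteration from an improved position.
        if rng.random() < ta_prob:
            worst = int(np.argmax(vals))
            refined, _ = threshold_accepting(f, x[worst], rng=rng)
            x[worst] = np.clip(refined, lo, hi)
    return g, f(g)

if __name__ == "__main__":
    sphere = lambda z: float(np.sum(z ** 2))
    best, best_val = hybrid_pso_ta(sphere, dim=10)
    print(best_val)
```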
In this paper, we propose a primal interior-point method for large sparse generalized minimax optimization. After a short introduction, where the problem is stated, we derive the basic equations of the Newton method applied to the KKT conditions and propose a primal interior-point method. Next we describe the basic algorithm and give further implementation details covering numerical differentiation, variable metric updates, and the decrease of the barrier parameter. Using standard weak assumptions, we prove that this algorithm is globally convergent if a bounded barrier is used. Then, using stronger assumptions, we prove that it is also globally convergent for the logarithmic barrier. Finally, we present results of computational experiments confirming the efficiency of the primal interior-point method for special cases of generalized minimax problems.
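To illustrate the barrier and Newton ideas mentioned above, the sketch below applies Newton steps to the logarithmic-barrier function of a tiny dense minimax problem with affine component functions, decreasing the barrier parameter geometrically. It is only a toy instance of the classical minimax special case: it uses exact derivatives and dense linear algebra rather than the numerical differentiation, variable metric updates, and sparse structure discussed in the paper, and all names and parameter values are illustrative assumptions.

```python
# Toy primal log-barrier Newton iteration for minimax with affine f_i(x) = a_i^T x + b_i.
# A sketch of the barrier idea only; NOT the authors' large sparse algorithm.
import numpy as np

def minimax_barrier_newton(A, b, mu0=1.0, mu_min=1e-8, tol=1e-8, max_newton=50):
    """Minimize max_i (A[i] @ x + b[i]) via Newton steps on
    B(x, z) = z - mu * sum(log(z - A @ x - b)), with mu driven to zero."""
    m, n = A.shape
    x = np.zeros(n)
    z = float(np.max(A @ x + b) + 1.0)        # strictly feasible start: z > f_i(x)
    mu = mu0
    while mu > mu_min:
        for _ in range(max_newton):
            s = z - (A @ x + b)               # slacks, must stay positive
            g_x = (mu / s) @ A                # gradient of B w.r.t. x
            g_z = 1.0 - mu * np.sum(1.0 / s)  # gradient of B w.r.t. z
            w = mu / s**2
            H_xx = A.T @ (w[:, None] * A)
            H_xz = -A.T @ w
            H_zz = np.sum(w)
            H = np.block([[H_xx, H_xz[:, None]], [H_xz[None, :], np.array([[H_zz]])]])
            g = np.append(g_x, g_z)
            d = np.linalg.solve(H, -g)        # Newton direction in (x, z)
            # fraction-to-the-boundary step keeping all slacks positive
            ds = d[-1] - A @ d[:-1]           # change of slacks along d
            neg = ds < 0
            alpha = 1.0 if not np.any(neg) else min(1.0, 0.95 * np.min(-s[neg] / ds[neg]))
            x, z = x + alpha * d[:-1], z + alpha * d[-1]
            if np.linalg.norm(g) < tol:
                break
        mu *= 0.1                             # decrease the barrier parameter
    return x, float(np.max(A @ x + b))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A, b = rng.standard_normal((8, 3)), rng.standard_normal(8)
    x_star, val = minimax_barrier_newton(A, b)
    print(val)
```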
In this paper, we propose a primal interior-point method for large sparse minimax optimization. After a short introduction, the complete algorithm is presented and important implementation details are given. We prove that this algorithm is globally convergent under standard mild assumptions; thus, large sparse nonconvex minimax optimization problems can be solved successfully. The results of extensive computational experiments reported in this paper confirm the efficiency and robustness of the proposed method.
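For reference, one standard primal (logarithmic-barrier) reformulation of the minimax problem addressed here is sketched below; the notation (f_i, z, mu) is generic and not necessarily that of the paper.

```latex
% Minimax problem and one standard logarithmic-barrier reformulation
% (generic notation; not necessarily the exact formulation of the paper).
\begin{aligned}
\min_{x}\ \max_{1\le i\le m} f_i(x)
  \quad\Longleftrightarrow\quad
  &\min_{x,\,z}\ z \quad \text{subject to } f_i(x)\le z,\ i=1,\dots,m,\\
\text{log-barrier subproblem:}\quad
  &\min_{x,\,z}\ B_\mu(x,z)= z-\mu\sum_{i=1}^{m}\log\bigl(z-f_i(x)\bigr),
  \qquad \mu\downarrow 0 .
\end{aligned}
```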
In this report, we propose a new recursive matrix formulation of limited memory variable metric methods. This approach can be used for an arbitrary update from the Broyden class (and some other updates) and for approximating both the Hessian matrix and its inverse. The new recursive formulation requires approximately 4mn multiplications and additions per iteration, which makes it comparable with other efficient limited memory variable metric methods. Numerical experiments with Algorithm 1, proposed in this report, confirm its practical efficiency.
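For comparison, the classical L-BFGS two-loop recursion below applies a limited-memory inverse-Hessian approximation to a vector in roughly 4mn multiplications and additions, the figure quoted above. It is shown only as the familiar baseline; it is not the new recursive matrix formulation (Algorithm 1) proposed in this report, and the variable names are generic.

```python
# Classical L-BFGS two-loop recursion (shown as a baseline, not Algorithm 1 of the report).
import numpy as np

def lbfgs_two_loop(grad, s_list, y_list):
    """Return H_k @ grad, where H_k is the L-BFGS inverse-Hessian approximation built
    from the stored pairs s_i = x_{i+1} - x_i, y_i = g_{i+1} - g_i (assumed s_i^T y_i > 0)."""
    if not s_list:                            # no curvature pairs yet: use H_0 = I
        return grad.copy()
    q = grad.copy()
    alphas = []
    rhos = [1.0 / float(y @ s) for s, y in zip(s_list, y_list)]
    # first loop: newest pair to oldest
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        alpha = rho * float(s @ q)
        q -= alpha * y
        alphas.append(alpha)
    # scaled initial inverse Hessian H_0 = gamma * I
    s_last, y_last = s_list[-1], y_list[-1]
    gamma = float(s_last @ y_last) / float(y_last @ y_last)
    r = gamma * q
    # second loop: oldest pair to newest
    for s, y, rho, alpha in zip(s_list, y_list, rhos, reversed(alphas)):
        beta = rho * float(y @ r)
        r += (alpha - beta) * s
    return r
```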