The gravitational search algorithm (GSA), inspired by the law of gravity and mass interactions, locates promising regions of complex search spaces through the interaction of individuals in a population of particles. Although GSA has proven effective in both science and engineering, it still suffers from premature convergence, especially on complex problems. In this paper, we propose a new hybrid algorithm, GA-GSA, which integrates a genetic algorithm (GA) with GSA to avoid premature convergence and to improve the search ability of GSA. In GA-GSA, crossover and mutation operators from GA are introduced into GSA to help the search jump out of local optima. To demonstrate the search ability of the proposed GA-GSA, 23 complex benchmark test functions were employed, including unimodal and multimodal high-dimensional test functions as well as multimodal test functions with fixed dimensions. Wilcoxon signed-rank tests were also used for statistical analysis of the results obtained by PSO, GSA, and GA-GSA. Experimental results demonstrate that the proposed algorithm is both efficient and effective.
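The hybridization described above can be sketched as one iteration that combines a GSA gravitational move with GA operators. This is a minimal illustrative sketch, not the paper's implementation; all parameter values (G, mutation rate, mutation scale) are assumptions.

```python
import random

def ga_gsa_step(positions, fitnesses, velocities, G=1.0, pm=0.1):
    """One hybrid iteration: a GSA gravitational move followed by GA-style
    crossover and mutation, introduced to help agents escape local optima.
    Minimisation is assumed: lower fitness means a better agent."""
    n, d = len(positions), len(positions[0])
    best, worst = min(fitnesses), max(fitnesses)
    # Masses: better (lower) fitness yields larger normalised mass.
    raw = [(worst - f) / (worst - best + 1e-12) for f in fitnesses]
    total = sum(raw) + 1e-12
    masses = [r / total for r in raw]
    new_pos = []
    for i in range(n):
        acc = [0.0] * d
        for j in range(n):
            if j == i:
                continue
            dist = sum((positions[j][k] - positions[i][k]) ** 2
                       for k in range(d)) ** 0.5 + 1e-12
            for k in range(d):
                # Gravitational pull of agent j on agent i.
                acc[k] += (random.random() * G * masses[j]
                           * (positions[j][k] - positions[i][k]) / dist)
        velocities[i] = [random.random() * velocities[i][k] + acc[k]
                         for k in range(d)]
        new_pos.append([positions[i][k] + velocities[i][k] for k in range(d)])
    # GA operators: one-point crossover between two random agents ...
    if d > 1:
        a, b = random.sample(range(n), 2)
        cut = random.randrange(1, d)
        new_pos[a] = new_pos[a][:cut] + new_pos[b][cut:]
    # ... and Gaussian mutation with per-coordinate probability pm.
    for i in range(n):
        for k in range(d):
            if random.random() < pm:
                new_pos[i][k] += random.gauss(0.0, 0.1)
    return new_pos, velocities
```

Repeating this step while tracking the best agent found gives the basic optimisation loop.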
The functional structure of our new network is not preset; instead, it emerges stochastically.
The anatomical structure of our model consists of two input “neurons”, several hundred to five thousand hidden-layer “neurons”, and one output “neuron”.
The process proper is based on iteration, i.e., a mathematical operation governed by a set of rules in which repetition progressively approximates the desired result.
Each iteration begins with data being introduced into the input layer and processed according to a particular algorithm in the hidden layer; it then continues with the computation of certain, as yet very crude, configurations of images regulated by a genetic code, and ends with the selection of the 10% most accomplished “offspring”. The next iteration begins by applying these new, most successful variants of the results, i.e., the descendants, to the continued process of image perfection. The ever new variants (descendants) of the genetic algorithm are always generated randomly; the deterministic rule then only requires the choice of the best 10% of all the variants available (in our case, 20 optimal variants out of 200).
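The deterministic selection rule above (keep the best 10% of randomly generated variants) amounts to simple truncation selection. A minimal sketch, assuming lower fitness scores are better:

```python
def select_elite(population, fitness, fraction=0.10):
    """Truncation selection as described in the text: keep only the best
    `fraction` of the randomly generated variants (e.g. 20 out of 200).
    `fitness` returns a score where lower is better."""
    k = max(1, int(len(population) * fraction))
    return sorted(population, key=fitness)[:k]
```

With 200 candidate variants, exactly 20 survive into the next iteration.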
The stochastic model is marked by a number of characteristics: the initial conditions are determined by varying the variance of the data dispersion, and the evolution of the network organisation is controlled by genetic rules of a purely stochastic nature; Gaussian-distributed noise proved to be the best “organiser”.
Another analogy between artificial networks and neuronal structures lies in the use of time in network algorithms.
For that reason, we gave our network organisation a kind of temporal development: rather than being instantaneous, the connection between the artificial elements and neurons consumes a certain number of time units per synapse or, better said, per contact between the preceding and subsequent neurons. The latency of neurons, natural and artificial alike, is very important as it enables feedback action.
Our network becomes organised under the effect of considerable noise, after which the amount of noise must subside. If, however, the network evolution gets stuck in a local minimum, the amount of noise has to be increased again. While this makes the network organisation waver, it also increases the likelihood that the crisis in the local minimum will abate and that the state of the network's self-organisation will improve substantially.
Our system allows for constant state-of-the-network reading by means of establishing the network energy level, i.e., basically ascertaining the progress of the network's rate of success in self-organisation. This is the principal parameter for detecting any jam in a local minimum, and it serves as input information for the formator algorithm, which regulates the level of noise in the system.
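The noise-regulation rule described here resembles simulated-annealing reheating and can be sketched as follows. The window size, tolerance, decay, and boost factors are illustrative assumptions, not values from the text.

```python
def regulate_noise(energy_history, noise, decay=0.95, boost=2.0,
                   window=5, tol=1e-4):
    """A sketch of the 'formator' rule: let the noise subside while the
    network energy keeps improving, but raise it again when the energy
    stagnates, i.e. when the network is jammed in a local minimum."""
    if len(energy_history) >= window:
        recent = energy_history[-window:]
        if max(recent) - min(recent) < tol:   # energy has stopped changing
            return noise * boost              # reheat to escape the minimum
    return noise * decay                      # otherwise the noise subsides
```

Called once per iteration with the running energy readings, this yields the wavering-then-subsiding noise profile the text describes.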
Combinatorial optimization is a discipline of decision making in the case of discrete alternatives. The Genetic Neighborhood Search (GNS) is a hybrid method for these combinatorial optimization problems. The main feature of the approach is the iterative use of local search on extended neighborhoods, where the better solution found becomes the center of a new extended neighborhood; the algorithm stops when the center remains the better solution. We propose using a genetic algorithm to explore the extended neighborhoods. This GA is characterized by its method of evaluating the fitness of individuals and by using two new operators. Computational experience with the Symmetric TSP shows that this approach is robust with respect to the starting point and that high-quality solutions are obtained in a reasonable time.
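The GNS loop described above can be sketched generically. Here `extended_neighborhood` stands in for the GA-driven exploration of the paper; this is a skeleton under stated assumptions, not the TSP-specific implementation.

```python
def gns(start, extended_neighborhood, cost):
    """Skeleton of the Genetic Neighborhood Search loop: the better
    solution found becomes the center of the next extended neighborhood,
    and the search stops when the center itself is the best available."""
    center = start
    while True:
        best = min(extended_neighborhood(center), key=cost, default=center)
        if cost(best) < cost(center):
            center = best     # re-centre the search on the improvement
        else:
            return center     # center beats its whole neighborhood: stop
```

For example, minimising (x - 7)^2 over the integers with a unit-step neighborhood walks the center from 0 to 7 and then terminates.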
The social foraging behavior of Escherichia coli bacteria has recently been studied by several researchers to develop a new algorithm for distributed optimization control. The Bacterial Foraging Optimization Algorithm (BFOA), as it is called now, has many features analogous to classical Evolutionary Algorithms (EA). Passino [1] pointed out that the foraging algorithms can be integrated in the framework of evolutionary algorithms. In this way BFOA can be used to model some key survival activities of the population, which is evolving. This article proposes a hybridization of BFOA with another very popular optimization technique of current interest called Differential Evolution (DE). The computational chemotaxis of BFOA, which may also be viewed as a stochastic gradient search, has been coupled with DE type mutation and crossing over of the optimization agents. This leads to the new hybrid algorithm, which has been shown to overcome the problems of slow and premature convergence of both the classical DE and BFOA over several benchmark functions as well as real world optimization problems.
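The DE half of the hybrid above, the mutation and crossover coupled with BFOA's chemotactic step, corresponds to the standard DE/rand/1/bin operators. A minimal sketch; F and CR are common DE defaults, not values taken from the article.

```python
import random

def de_trial_vector(pop, i, F=0.5, CR=0.9):
    """DE/rand/1/bin: mutate with a scaled difference of two random agents
    added to a third, then apply binomial crossover against the target."""
    d = len(pop[i])
    # Three distinct agents, none of them the target i.
    r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
    donor = [pop[r1][k] + F * (pop[r2][k] - pop[r3][k]) for k in range(d)]
    j_rand = random.randrange(d)  # guarantees at least one donor component
    return [donor[k] if (random.random() < CR or k == j_rand) else pop[i][k]
            for k in range(d)]
```

In the hybrid, a trial vector like this would replace a bacterium after its chemotactic move whenever it improves the objective value.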
The study of the analogy between natural evolution and technical object design dates back more than 50 years. The genetic algorithm (GA) is considered a stochastic heuristic (or meta-heuristic) optimisation method. GA is best used for solving multidimensional optimisation problems for which analytical solutions are unknown (or extremely complex) and for which efficient numerical methods are not known either. GAs are inspired by the adaptive and evolutionary mechanisms of living organisms, but they do not copy the natural process precisely. The paper describes the main terms, principles, and original implementation details of GA. The main goal of this paper is to help readers use proper GAs in the field of technical object design.
Time series forecasting, such as stock price prediction, is one of the most important challenges in the financial area, as the data are unsteady and contain noisy variables affected by many factors. This study applies a hybrid of the Genetic Algorithm (GA) and the Artificial Neural Network (ANN) technique to develop a method for predicting stock prices and time series. In this method, the GA output values are fed to a developed ANN algorithm to correct errors at exact points. The analysis suggests that GA and ANN together can increase accuracy in fewer iterations. The analysis is conducted on the 200-day main index as well as on five companies listed on the NASDAQ. Applied to the Apple stock dataset, the proposed method, based on a hybrid of GA and Back Propagation (BP) algorithms, achieves a 99.99% improvement in SSE and a 90.66% improvement in runtime compared to traditional methods. These results demonstrate the performance, speed, and accuracy of the proposed approach.
Recently, the support vector machine (SVM) has been receiving increasing attention in the field of regression estimation due to its remarkable characteristics, such as good generalization performance, the absence of local minima, and a sparse representation of the solution. However, within the SVM framework, there are very few established approaches for identifying important features. Selecting significant features from all candidate features is the first step in regression estimation; this procedure can improve network performance, reduce network complexity, and speed up network training.
This paper investigates the use of saliency analysis (SA) and the genetic algorithm (GA) in SVMs for selecting important features in the context of regression estimation. SA measures the importance of features by evaluating the sensitivity of the network output with respect to the feature input. The derivation of this sensitivity in terms of partial derivatives in SVMs is presented, and a systematic approach to removing irrelevant features based on the sensitivity is developed. GA is an efficient search method based on the mechanics of natural selection and population genetics. A simple GA is used in which all features are mapped onto binary chromosomes, with a bit of "1" representing the inclusion of a feature and a bit of "0" representing its absence. The performances of SA and GA are tested on two simulated non-linear time series and five real financial time series. The experiments show that, with the simulated data, GA and SA detect the same true feature set from the redundant feature set, and the SA method is also insensitive to the choice of kernel function. With the real financial data, GA and SA select different subsets of the features; both selected feature sets achieve higher generalization performance in SVMs than the full feature set, and the generalization performance of the feature sets selected by GA and SA is similar. All the results demonstrate that both SA and GA are effective in SVMs for identifying important features.
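The binary-chromosome encoding described above, together with the standard GA operators it implies, can be sketched as follows. The feature names and the mutation rate are illustrative assumptions.

```python
import random

def decode(chromosome, features):
    """The encoding described in the text: bit 1 includes a feature,
    bit 0 excludes it."""
    return [f for f, bit in zip(features, chromosome) if bit == 1]

def one_point_crossover(a, b):
    """Standard one-point crossover on two binary chromosomes."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(chromosome, pm=0.05):
    """Bit-flip mutation with per-gene probability pm."""
    return [1 - bit if random.random() < pm else bit for bit in chromosome]
```

Each chromosome's fitness would then be the SVM's validation performance on the decoded feature subset.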
This paper deals with a generalized automatic method for designing artificial neural network (ANN) structures. Designing the optimal ANN is one of the most important problems in many real applications. In this paper, two techniques for automatically finding an optimal ANN structure are proposed. They can be applied in real-time applications as well as to fast nonlinear processes. Both techniques use genetic algorithms (GA). The first proposed method deals with designing a structure with one hidden layer; the optimal structure has been verified on a nonlinear model of an isothermal reactor. The second algorithm allows designing an ANN with an unlimited number of hidden layers, each containing one neuron; this structure has been verified on a highly nonlinear model of a polymerization reactor. The obtained results have been compared with the results yielded by a fully connected ANN.
Web 2.0 has led to the expansion and evolution of web-based communities that enable people to share information and communicate on shared platforms. The inclination of individuals towards other individuals with similar choices, decisions, and preferences to connect in a social network prompts the development of groups or communities. The identification of community structure is one of the most challenging tasks and has received a great deal of attention from researchers. Network community structure detection can be expressed as an optimisation problem. The selected objective function captures the intuition of a community as a group of nodes in which intra-group connections are much denser than inter-group connections. However, this problem often cannot be well solved by traditional optimisation methods due to the inherent complexity of network structure; therefore, evolutionary algorithms have been embraced to deal with the community detection problem. Many objective functions have been proposed to capture the notion of quality of a network community. In this paper, we assessed the performance of four important objective functions, namely Modularity, Modularity Density, Community Score, and Community Fitness, on real-world benchmark networks using a Genetic Algorithm (GA). The measure taken to assess the quality of partitions is NMI (Normalized Mutual Information). From the experimental results, we found that the communities identified by these objectives have different characteristics and that Modularity Density outperformed the other three objective functions by uncovering the true community structure of the networks. The experimental results give researchers direction on choosing an objective function to measure the quality of community structure in various domains, such as social, biological, information, and technological networks.
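The first of the four objectives named above, Newman's Modularity, has the standard definition Q = (1/2m) Σ_ij (A_ij − k_i k_j / 2m) δ(c_i, c_j). A direct (unoptimised) sketch for an undirected graph:

```python
def modularity(adj, community):
    """Newman's modularity Q for an undirected graph. `adj` maps each node
    to the set of its neighbours; `community` maps each node to its
    community label. A_ij is the adjacency indicator and k_i the degree."""
    m = sum(len(nbrs) for nbrs in adj.values()) / 2.0  # number of edges
    q = 0.0
    for i in adj:
        for j in adj:
            if community[i] != community[j]:
                continue                      # the delta term is zero
            a_ij = 1.0 if j in adj[i] else 0.0
            q += a_ij - len(adj[i]) * len(adj[j]) / (2.0 * m)
    return q / (2.0 * m)
```

A GA for community detection would use a score like this (or one of the other three objectives) as the fitness of a candidate partition.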
In this study, the Active Learning Method (ALM), a novel fuzzy modeling approach, is compared with a Support Vector Machine (SVM) optimized by a simple Genetic Algorithm (GA), a well-known data-driven model, for long-term simulation of daily streamflow in the Karoon River. Daily discharge data from 1991 to 1996 and from 1996 to 1999 were used for training and testing of the models, respectively. The Nash-Sutcliffe, Bias, R2, MPAE, and PTVE values of the ALM model with 16 fuzzy rules were 0.81, 5.5 m3 s-1, 0.81, 12.9%, and 1.9%, respectively. In the same order, these criteria for the optimized SVM model were 0.8, -10.7 m3 s-1, 0.81, 7.3%, and -3.6%, respectively. The results show appropriate and acceptable simulation by both ALM and the optimized SVM. The optimized SVM is a well-known method for runoff simulation whose capabilities have been demonstrated; the similarity between the ALM and optimized SVM results therefore implies the ability of ALM for runoff modeling. In addition, ALM training is easier and more straightforward than the training of many other data-driven models such as the optimized SVM, and ALM is able to identify and rank the effective input variables for runoff modeling. Given these results and its abilities and properties, ALM merits introduction as a new method for runoff modeling.