This paper presents a new approach to 3D object recognition that uses an Octree model library (OML) I and II together with a fast search algorithm. The fast search algorithm, which finds the four pairs of feature points needed to estimate the viewing direction, relies on an effective two-level database. The method is based on matching the object contour to reference occluded shapes from 49 and 118 viewing directions at the two levels. The initially best-matched viewing direction is then calibrated by searching for the four pairs of corresponding feature points between the input image and the image projected along the estimated viewing direction. At this point, the input shape is recognized by matching it to the projected shape. The computational complexity of the proposed method is shown to be O(n^2) in the worst case, compared with O(m^4 n^2) for the simple combinatorial method, where n and m denote the number of feature points of the 3D model object and the 2D object, respectively.
Over the years, one of the challenges of rule-based expert systems has been evolving a compact and consistent knowledge base with fewer rules that are relevant to the application domain, in order to enhance the comprehensibility of the expert system. An unwieldy, hard-to-maintain rule base increases the space complexity of the knowledge base and reduces the rule access rate, which impedes system response time. In this paper, a hybrid of fuzzy rule-mining interestingness measures and a fuzzy expert system is exploited as a means of solving this problem of unwieldiness and maintenance complication in rule-based expert systems. To validate the concept, Coronary Heart Disease risk-ratio determination is used as the case study. Results of a fuzzy expert system with fewer rules and a fuzzy expert system with a large number of rules are presented for comparison. Moreover, the effect of the fuzzy linguistic variable risk ratio is investigated; this brings the expert system's recommendations closer to human perception.
Finding reducts is one of the key problems in the growing range of applications of rough set theory, and it is also one of the bottlenecks of the rough set methodology. Population-based reduction approaches are attractive for finding multiple reducts in decision systems, which can then be used to generate multi-knowledge and to improve decision accuracy. In this paper, we design a multi-swarm synergetic optimization algorithm (MSSO) for rough set reduction and multi-knowledge extraction. It is a multi-swarm search approach in which different individuals tend to encode different reducts. The approach discovers the best feature combinations efficiently by observing the change of the positive region as the particles proceed through the search space. The performance of our approach is evaluated and compared with Standard Particle Swarm Optimization (SPSO) and Genetic Algorithms (GA). Empirical results illustrate that the approach can be applied effectively to multiple-reduct problems and multi-knowledge extraction.
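The fitness that guides such a swarm search is typically the rough-set dependency degree: the fraction of objects whose decision is fully determined by a candidate attribute subset (the positive region). The sketch below, using a toy decision table and names that are illustrative assumptions rather than the paper's exact design, shows how this measure identifies a reduct.

```python
from collections import defaultdict

# Toy decision table: condition attribute values, decision in the last column.
TABLE = [
    (1, 0, 1, 'yes'),
    (1, 1, 1, 'yes'),
    (0, 0, 1, 'no'),
    (0, 1, 0, 'no'),
    (1, 1, 0, 'yes'),
]

def dependency(attrs):
    """Fraction of objects in the positive region of the decision
    with respect to the chosen condition attributes."""
    blocks = defaultdict(list)
    for row in TABLE:
        key = tuple(row[a] for a in attrs)   # indiscernibility class
        blocks[key].append(row[-1])
    # an object is in the positive region if its class is decision-consistent
    consistent = sum(len(b) for b in blocks.values() if len(set(b)) == 1)
    return consistent / len(TABLE)

print(dependency([0, 1, 2]))  # 1.0 — full attribute set is consistent
print(dependency([0]))        # 1.0 — attribute 0 alone preserves it: a reduct
```

A particle encoding the subset {0} would receive maximal dependency with minimal size, so the swarm converges on it; different swarms can converge on different subsets, yielding multiple reducts.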
Writer Identification (WI) is an area of pattern recognition that has attracted the attention of many researchers. Recently, its main focus has been forensic and biometric applications; for example, writing style can be used as a biometric feature for authenticating an individual's uniqueness. Existing work in WI concentrates on the feature extraction and classification tasks needed to identify handwritten authorship. However, additional steps must be performed to obtain a better representation of the input prior to classification: the features extracted for a writer come in various representations, which degrades classification performance. This paper discusses an additional process that can transform these various representations into a better representation of the individual features characterizing the Individuality of Handwriting, in order to improve identification performance in WI.
This paper provides a theoretical proof that, for a certain class of functions whose partial derivatives have the same form with respect to every variable, the optimum (minimum or maximum) occurs at a point where all the variables take the same value. This result helps researchers working with high-dimensional functions to reduce the computational burden, since the search then has to be performed with respect to only one variable.
Digital Watermarking (DW) based on computational intelligence (CI) is currently attracting considerable interest from the research community. This article provides an overview of the research progress in applying CI methods to the problem of DW. The scope of this review encompasses the core methods of CI, including rough sets (RS), fuzzy logic (FL), artificial neural networks (ANNs), genetic algorithms (GAs), swarm intelligence (SI), and hybrid intelligent systems, a growing research area in CI. The research contributions in each field are systematically summarized and compared to highlight promising new research directions. The findings of this review should provide useful insights into the current DW literature and serve as a good reference for anyone interested in applying CI approaches to DW systems or related fields.
Job Scheduling in Computational Grids is gaining importance due to the need for efficient large-scale Grid-enabled applications. Among the optimization techniques designed for this problem, the Genetic Algorithm (GA) is a popular class of solution methods. As GAs are high-level algorithms, specific algorithms can be designed by choosing the genetic operators as well as evolutionary strategies such as Steady-State GAs and Struggle GAs. In this paper we focus on Struggle GAs and their tuning for scheduling independent jobs in computational grids. Our results show that a careful hash implementation for computing the similarity of solutions alleviates the computational burden of the Struggle GA and performs better than standard similarity measures. This is particularly interesting for the scheduling problem in Grid systems, which, due to their changeability over time, impose demanding time restrictions on computing the assignment of jobs to resources.
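A Struggle GA replaces the population member most similar to each offspring, which normally costs a pairwise similarity scan per insertion. The hashing idea can be sketched as follows, with the schedule encoding (one machine index per job), the ETC matrix, and the load-profile signature all being illustrative assumptions rather than the paper's exact design.

```python
JOBS, MACHINES = 6, 3

# ETC[j][m]: expected time to compute job j on machine m (toy values)
ETC = [
    [4.0, 2.0, 8.0],
    [3.0, 6.0, 1.0],
    [5.0, 5.0, 5.0],
    [2.0, 9.0, 4.0],
    [7.0, 1.0, 3.0],
    [6.0, 4.0, 2.0],
]

def makespan(schedule):
    """Schedule = machine index per job; fitness = maximum machine load."""
    load = [0.0] * MACHINES
    for job, m in enumerate(schedule):
        load[m] += ETC[job][m]
    return max(load)

def signature(schedule):
    """O(n) hash key: the sorted, rounded load profile. Schedules with
    near-identical load shapes land in the same bucket, replacing the
    O(population) pairwise similarity scan of the standard Struggle GA."""
    load = [0.0] * MACHINES
    for job, m in enumerate(schedule):
        load[m] += ETC[job][m]
    return tuple(round(l) for l in sorted(load))

def struggle_insert(pop, schedule):
    """Struggle replacement: the new schedule displaces its most similar
    (here: hash-identical) incumbent only if its makespan is better."""
    key = signature(schedule)
    incumbent = pop.get(key)
    if incumbent is None or makespan(schedule) < makespan(incumbent):
        pop[key] = schedule

pop = {}
struggle_insert(pop, [0, 2, 1, 0, 1, 2])
struggle_insert(pop, [1, 2, 0, 0, 1, 2])
print(len(pop), min(makespan(s) for s in pop.values()))
```

Bucketing by a coarse signature trades exact nearest-neighbour replacement for constant-time lookup, which matters under the tight scheduling deadlines of dynamic Grid systems.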