Swarm intelligence is an emerging field with wide-reaching application opportunities in optimization, analysis and machine learning. While swarm systems have proven very effective when applied to a variety of problems, swarm-based methods for computer vision have received little attention. This paper proposes a swarm system capable of extracting and exploiting the geometric properties of objects in images for fast and accurate recognition. In this approach, computational agents move over an image and affix themselves to relevant features, such as edges and corners. The resulting feature profile is then processed by a classification subsystem to categorize the object. The system has been tested on images containing several simple geometric shapes at a variety of noise levels and evaluated on the accuracy of its predictions. The swarm system classifies shapes accurately even at high image noise levels, demonstrating that this approach to object recognition is robust and reliable.
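A minimal sketch of the idea, assuming a simple hill-climbing movement rule on an image-gradient edge map (the authors' actual agent dynamics and feature definitions are not specified here); the settled agent positions stand in for the feature profile passed to the classification subsystem:

```python
import numpy as np

def swarm_feature_profile(image, n_agents=200, n_steps=50, rng=None):
    """Toy sketch: agents wander over the image and settle on high-gradient
    pixels (edge-like features); their final positions form a crude geometric
    feature profile. The movement rule is an assumption, not the paper's."""
    rng = np.random.default_rng(rng)
    gy, gx = np.gradient(image.astype(float))
    edge_strength = np.hypot(gx, gy)                      # simple edge map
    h, w = image.shape
    pos = np.stack([rng.integers(0, h, n_agents),
                    rng.integers(0, w, n_agents)], axis=1)
    for _ in range(n_steps):
        # propose a random neighbouring move for every agent
        step = rng.integers(-1, 2, size=pos.shape)
        cand = np.clip(pos + step, [0, 0], [h - 1, w - 1])
        # accept a move only if it increases local edge strength
        # (agents on strong edges effectively "affix" themselves)
        better = edge_strength[cand[:, 0], cand[:, 1]] >= edge_strength[pos[:, 0], pos[:, 1]]
        pos[better] = cand[better]
    return pos

# Example: agents concentrate along the border of a bright square.
img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0
print(swarm_feature_profile(img, rng=0)[:5])
```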
This article addresses the problem of improving a classifier of handwritten letters from historical alphabets, using letter classification algorithms and transliterating the letters to Latin. We apply the approach to the Palmyrene alphabet, a complex script in which some letters are very similar to one another. We created a mobile application for the Palmyrene alphabet that can transliterate letters that are handwritten or supplied as photographs. Initially, the core of the application was based on MobileNet, but its classification results were not satisfactory. In this article, we propose an improved, better-performing convolutional neural network architecture for the handwritten-letter classifier used in our mobile application. The proposed architecture raises the accuracy of the handwritten model from 0.6893 to 0.9821, about 1.42 times the accuracy of the original MobileNet. Our future plans include improving the photographic model as well.
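For illustration only, a compact convolutional network of the kind that could replace MobileNet for single-letter classification; the layer sizes, input resolution, and class count below are assumptions, not the architecture proposed in the article:

```python
import torch
import torch.nn as nn

class SmallLetterCNN(nn.Module):
    """Hypothetical compact CNN for single-character classification.
    All layer sizes are illustrative; the article's actual architecture differs."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32x32 -> 16x16
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Sanity check on a dummy grayscale 64x64 batch; 22 classes assumed
# (nominal Palmyrene letter count; the dataset's class count may differ).
model = SmallLetterCNN(num_classes=22)
print(model(torch.zeros(4, 1, 64, 64)).shape)  # -> torch.Size([4, 22])
```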
The dataset contains 409,679 images belonging to 772 snake species from 188 countries and all continents (386,006 labelled images intended for development and 23,673 unlabelled images for testing). In addition, we provide a simple train/val split (90% / 10%) for validating preliminary results while preserving the same species distributions. Furthermore, we prepared a compact subset (70,208 images) for fast prototyping. The test set consists of 23,673 images submitted to the iNaturalist platform within the first four months of 2021.
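As a sketch of how such a species-stratified 90/10 split could be reproduced (the dataset ships its own split; the metadata frame and column names below are assumptions):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical metadata; in practice this would be loaded from the dataset's own files.
meta = pd.DataFrame({
    "image_path": ["a.jpg", "b.jpg", "c.jpg", "d.jpg"] * 25,
    "species":    ["Thamnophis sirtalis", "Thamnophis sirtalis",
                   "Achalinus formosanus", "Achalinus formosanus"] * 25,
})

# Stratifying on the species column keeps per-species proportions equal
# in the 90% train and 10% validation partitions.
train_df, val_df = train_test_split(
    meta, test_size=0.10, stratify=meta["species"], random_state=42
)
print(len(train_df), len(val_df))
```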
All data were gathered from online biodiversity platforms (i.e., iNaturalist, HerpMapper) and further extended with data scraped from Flickr. The dataset has a heavily long-tailed class distribution: the most frequent species (Thamnophis sirtalis) is represented by 22,163 images, while the least frequent (Achalinus formosanus) has just 10.
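A short sketch of how the long tail can be inspected and turned into inverse-frequency class weights, a common mitigation for such imbalance (not something prescribed by the dataset itself); the metadata filename and column name are assumptions:

```python
import pandas as pd

# Assumed file and column: a training metadata CSV with a "binomial" species column.
meta = pd.read_csv("train_metadata.csv")
counts = meta["binomial"].value_counts()
print(counts.head(1))   # head of the distribution, e.g. Thamnophis sirtalis
print(counts.tail(1))   # tail of the distribution, e.g. Achalinus formosanus

# Inverse-frequency weights: rare species get proportionally larger weights.
weights = (counts.sum() / (len(counts) * counts)).to_dict()
```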