• Visual Loop-Closure Detection via Prominent Feature Tracking

    Loop-closure detection (LCD) has become an essential part of any simultaneous localization and mapping (SLAM) framework, as it provides a means to rectify the drift error that typically accumulates along a robot's trajectory. In this article we propose an LCD method based on tracked visual features, combined with a signal peak-trace filtering approach for loop-closure identification. In particular, local binary features are first extracted and tracked through consecutive frames. In this way, visual words are generated online and, in turn, form an incremental bag-of-visual-words (BoVW) vocabulary. Loop closures (LCs) result from a classification method that considers current and past peaks in the similarity matrix. The system discerns the movement of these peaks to determine whether they correspond to true-positive detections or to background noise. The proposed peak-trace filtering technique provides exceptional robustness to noisy signals, enabling the use of only a handful of local visual features per image and thus resulting in a considerably downsized visual vocabulary.
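
    A minimal Python sketch of the peak-trace idea (an illustration under assumptions, not the authors' implementation): each incoming frame contributes one row of the similarity matrix, and a peak whose index drifts smoothly across consecutive rows is kept as a loop-closure candidate, while peaks that jump incoherently are rejected as noise. The parameter names max_jump and min_len are hypothetical.

        import numpy as np

        def peak_trace_filter(similarity_rows, max_jump=2, min_len=5):
            """Classify per-frame similarity peaks as loop closures or noise.

            similarity_rows: iterable of 1-D arrays, one row of the
            similarity matrix per incoming frame. A genuine loop closure
            produces a peak whose index drifts smoothly across frames;
            noise peaks jump around. max_jump and min_len are illustrative
            values, not the paper's actual parameters.
            """
            prev_peak, trace_len = None, 0
            closures = []
            for frame, row in enumerate(similarity_rows):
                peak = int(np.argmax(row))
                if prev_peak is not None and abs(peak - prev_peak) <= max_jump:
                    trace_len += 1    # peak moved coherently: extend the trace
                else:
                    trace_len = 1     # incoherent jump: start a new trace
                if trace_len >= min_len:
                    closures.append((frame, peak))  # stable trace: report a loop closure
                prev_peak = peak
            return closures

    Because only the coherence of the peak's motion is tested, the filter tolerates a noisy similarity signal, which is what allows each image to be described by few local features.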

    Authors
    Ioannis Tsampikos Papapetros, Vasiliki Balaska, Antonios Gasteratos

    Journal / Conference
    Journal of Intelligent & Robotic Systems
    Publication Date
    March 12th, 2022

  • Do Neural Network Weights Account for Classes Centers?

    The exploitation of deep neural networks (DNNs) as descriptors in feature-learning tasks has enjoyed considerable popularity over the past few years. This trend focuses on the development of effective loss functions that ensure both high feature discrimination among different classes and a low geodesic distance between the feature vectors of a given class. The vast majority of contemporary works base their formulation on an empirical assumption about the feature space of a network's last hidden layer, namely that the weight vector of a class accounts for its geometrical center in the studied space. The article at hand follows a theoretical approach and shows that this hypothesis is not always met, a fact that raises stability issues in the training procedure of a DNN, as demonstrated in our experimental study. Consequently, a specific symmetry is proposed and studied, both analytically and empirically, that satisfies the above assumption and addresses the established convergence issues. More specifically, the proposed symmetry requires that all weight vectors be of unit length and coplanar, and that their vector sum equal zero. Such a layout is proven to yield a more stable learning curve than those achieved by popular models in the field of feature learning.
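
    As a concrete illustration (a hypothetical construction for this summary, not one taken from the paper, which treats the symmetry analytically), the three properties can be realized by placing the C class-weight vectors at the C-th roots of unity inside a fixed two-dimensional plane of the feature space. The Python sketch below builds such a layout and checks the properties numerically.

        import numpy as np

        def symmetric_weights(num_classes, dim):
            """Weight vectors that are unit-length, coplanar, and zero-sum.

            The num_classes vectors are placed at the roots of unity inside
            the plane spanned by the first two axes of the dim-dimensional
            feature space. This is one concrete layout satisfying the
            proposed symmetry, used here purely for illustration.
            """
            angles = 2.0 * np.pi * np.arange(num_classes) / num_classes
            W = np.zeros((num_classes, dim))
            W[:, 0], W[:, 1] = np.cos(angles), np.sin(angles)
            return W

        W = symmetric_weights(num_classes=10, dim=128)
        assert np.allclose(np.linalg.norm(W, axis=1), 1.0)  # unit vectors
        assert np.allclose(W.sum(axis=0), 0.0)              # vector sum is zero
        # coplanar by construction: every vector lies in the span of axes 0 and 1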

    Authors
    Ioannis Kansizoglou, Loukas Bampis, Antonios Gasteratos

    Journal / Conference
    IEEE Transactions on Neural Networks and Learning Systems
    Publication Date
    March 8th, 2022