Do Neural Network Weights Account for Classes Centers?

The exploitation of deep neural networks (DNNs) as descriptors in feature-learning challenges has enjoyed apparent popularity over the past few years. This trend focuses on the development of effective loss functions that ensure both high feature discrimination among different classes and low geodesic distance between the feature vectors of a given class. The vast majority of contemporary works base their formulation on an empirical assumption about the feature space of a network's last hidden layer, namely that the weight vector of a class accounts for its geometrical center in the studied space. The article at hand follows a theoretical approach and indicates that this hypothesis is not always met. As shown in our experimental study, this fact raises stability issues in the training procedure of a DNN. Consequently, a specific symmetry is proposed and studied both analytically and empirically that satisfies the above assumption, addressing the established convergence issues. More specifically, the proposed symmetry requires that all weight vectors be of unit norm, coplanar, and sum to the zero vector. Such a layout is proven to ensure a more stable learning curve compared against those achieved by popular models in the field of feature learning.
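
One way to picture the proposed symmetry is the following minimal sketch, which is an illustrative assumption rather than the paper's prescribed parameterization: for C classes, place the weight vectors at the C-th roots of unity inside a fixed two-dimensional plane of the d-dimensional feature space. Each vector then has unit norm, all vectors are coplanar, and their vector sum is zero (for C >= 2). The choice of plane and basis below is arbitrary and purely for demonstration.

```python
import numpy as np

def symmetric_class_weights(num_classes: int, feature_dim: int) -> np.ndarray:
    """Build num_classes weight vectors that are unit-norm, coplanar,
    and sum to the zero vector, by placing them at the C-th roots of
    unity inside the plane spanned by the first two feature axes.

    Illustrative construction only (assumes num_classes >= 2 and
    feature_dim >= 2); not the authors' exact parameterization.
    """
    angles = 2.0 * np.pi * np.arange(num_classes) / num_classes
    weights = np.zeros((num_classes, feature_dim))
    weights[:, 0] = np.cos(angles)  # first in-plane coordinate
    weights[:, 1] = np.sin(angles)  # second in-plane coordinate
    return weights

if __name__ == "__main__":
    W = symmetric_class_weights(num_classes=5, feature_dim=128)
    print(np.allclose(np.linalg.norm(W, axis=1), 1.0))  # unit norm: True
    print(np.allclose(W.sum(axis=0), 0.0))              # zero vector sum: True
```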

Authors
Ioannis Kansizoglou, Loukas Bampis, Antonios Gasteratos

Journal
IEEE Transactions on Neural Networks and Learning Systems

Publication Date
March 8th, 2022
