Quantifying the separability of data classes in neural networks

Schilling A, Maier A, Gerum R, Metzner C, Krauß P (2021)


Publication Type: Journal article, Original article

Publication year: 2021

Journal: Neural Networks

Volume: 139

Page Range: 278-293

DOI: 10.1016/j.neunet.2021.03.035

Abstract

We introduce the Generalized Discrimination Value (GDV) that measures, in a non-invasive manner, how well different data classes separate in each given layer of an artificial neural network. It turns out that, at the end of the training period, the GDV in each given layer L attains a highly reproducible value, irrespective of the initialization of the network's connection weights. In the case of multi-layer perceptrons trained with error backpropagation, we find that classification of highly complex data sets requires a temporary reduction of class separability, marked by a characteristic ‘energy barrier’ in the initial part of the GDV(L) curve. Even more surprisingly, for a given data set, the GDV(L) runs through a fixed ‘master curve’, independently of the total number of network layers. Finally, due to its invariance with respect to dimensionality, the GDV may serve as a useful tool to compare the internal representational dynamics of artificial neural networks with different architectures for neural architecture search or network compression, or even with brain activity, in order to decide between different candidate models of brain function.
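
Illustrative implementation

The abstract does not spell out the GDV formula itself. The sketch below shows one plausible NumPy implementation of such a measure, assuming a definition of the form used in the paper: per-dimension z-scoring, the mean intra-class pairwise distance compared against the mean inter-class pairwise distance, and a 1/sqrt(D) scaling that provides the dimensionality invariance mentioned in the abstract (more negative values indicating better class separation). The function name compute_gdv and the 0.5 factor in the normalization are assumptions made for illustration, not taken verbatim from the publication.

import numpy as np

def compute_gdv(X, labels):
    """Sketch of a GDV-like class-separability measure (hypothetical implementation).

    X      : (N, D) array of N points in D dimensions, e.g. one layer's activations
    labels : (N,) array of class labels; each class should contain at least two points
    Returns a scalar; more negative values indicate better class separation.
    """
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    N, D = X.shape

    # Z-score each dimension separately; the factor 0.5 is an assumed
    # normalization choice. Zero-variance dimensions are left unscaled.
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0] = 1.0
    S = 0.5 * (X - mu) / sigma

    classes = np.unique(labels)

    def mean_pairwise(A, B=None):
        # Mean Euclidean distance over all point pairs.
        if B is None:  # intra-class case: exclude self-distances
            d = np.linalg.norm(A[:, None, :] - A[None, :, :], axis=-1)
            n = len(A)
            return d.sum() / (n * (n - 1))
        d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
        return d.mean()

    # Mean intra-class distance, averaged over classes.
    intra = np.mean([mean_pairwise(S[labels == c]) for c in classes])
    # Mean inter-class distance, averaged over all unordered class pairs.
    inter = np.mean([mean_pairwise(S[labels == a], S[labels == b])
                     for i, a in enumerate(classes) for b in classes[i + 1:]])

    # Scale by sqrt(D) so values are comparable across layers of
    # different dimensionality.
    return (intra - inter) / np.sqrt(D)

Applied layer by layer to the activations of a trained network, a function of this kind would trace out the GDV(L) curve discussed in the abstract.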

How to cite

APA:

Schilling, A., Maier, A., Gerum, R., Metzner, C., & Krauß, P. (2021). Quantifying the separability of data classes in neural networks. Neural Networks, 139, 278-293. https://doi.org/10.1016/j.neunet.2021.03.035

MLA:

Schilling, Achim, et al. "Quantifying the separability of data classes in neural networks." Neural Networks 139 (2021): 278-293.
