
Proceedings Paper

The structure of spaces of neural network functions

Paper Abstract

We analyse spaces of deep neural networks with a fixed architecture. We demonstrate that, when interpreted as sets of functions, spaces of neural networks exhibit many unfavourable properties: they are highly non-convex and not closed with respect to Lp-norms, for 0 &lt; p &lt; ∞ and all commonly-used activation functions. They are also not closed with respect to the L∞-norm for almost all practically-used activation functions; here, the (parametric) ReLU is the only exception. Finally, we show that the map that sends a family of neural network weights to the associated functional representation of the network is not inverse stable for any practically-used activation function.
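The non-convexity claim can be made concrete with a toy example (this is an illustrative sketch, not a construction from the paper): the functions relu(x) and relu(-x) are both realized by a one-neuron ReLU network x ↦ a·relu(w·x + b), but their midpoint |x|/2 has kinks on both sides of 0 and cannot be realized with a single ReLU neuron. Averaging the weights instead collapses to the zero function, showing how far the parameter-to-function map is from linear.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def one_neuron(x, a, w, b):
    # A one-hidden-neuron ReLU network: x -> a * relu(w*x + b)
    return a * relu(w * x + b)

x = np.linspace(-2.0, 2.0, 101)

f1 = one_neuron(x, 1.0, 1.0, 0.0)    # relu(x)
f2 = one_neuron(x, 1.0, -1.0, 0.0)   # relu(-x)

# Midpoint in *function* space: (relu(x) + relu(-x)) / 2 = |x| / 2,
# which needs two ReLU neurons to represent.
midpoint_fn = 0.5 * (f1 + f2)

# Midpoint in *parameter* space: a = 1, w = 0, b = 0 gives the zero function.
avg_params = one_neuron(x, 1.0, 0.0, 0.0)

print(np.max(np.abs(midpoint_fn - np.abs(x) / 2)))  # 0.0: midpoint is |x|/2
print(np.max(np.abs(midpoint_fn - avg_params)))     # 1.0: the two differ
```

The gap between the two midpoints is one symptom of the non-convexity and non-linearity of these function spaces that the paper studies in detail.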

Paper Details

Date Published: 9 September 2019
PDF: 8 pages
Proc. SPIE 11138, Wavelets and Sparsity XVIII, 111380F (9 September 2019); doi: 10.1117/12.2528313
Author Affiliations:
Philipp Petersen, Univ. of Oxford (United Kingdom)
Mones Raslan, Technische Univ. Berlin (Germany)
Felix Voigtlaender, Catholic Univ. Eichstätt-Ingolstadt (Germany)

Published in SPIE Proceedings Vol. 11138:
Wavelets and Sparsity XVIII
Dimitri Van De Ville; Manos Papadakis; Yue M. Lu, Editor(s)

© SPIE