
Proceedings Paper

Relating information complexity and training in deep neural networks
Author(s): Alex Gain; Hava Siegelmann

Paper Abstract

Deep neural networks (DNNs) can be costly to train, and if test error is too large, retraining may be required unless lifelong-learning methods are applied. Crucial to learning at the edge, without access to powerful cloud computing, is a notion of problem difficulty for non-standard data domains. While it is known that training is harder for classes that are more entangled, the complexity of individual data points has not previously been studied as an important contributor to training dynamics. We analyze data points by their information complexity and relate that complexity to test error. We elucidate the training dynamics of DNNs, demonstrating that high-complexity data points contribute to network error, and that training consists of two important aspects: (1) minimization of error due to high-complexity data points, and (2) margin decrease where classes are entangled. Whereas data complexity may be ignored when training in the cloud, it must be considered part of the setting when training at the edge.
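The abstract does not specify how information complexity is measured or how it is related to error; as an illustrative sketch only, the snippet below uses a compression-based proxy (zlib length, a hypothetical stand-in for the paper's complexity measure) and correlates it with per-example test loss.

    # Sketch: correlate a compression-based complexity proxy for each test
    # point with the trained model's per-example loss. The complexity measure
    # here (zlib length) is an assumption, not the paper's definition.
    import zlib
    import numpy as np

    def complexity(x: np.ndarray) -> int:
        """Approximate information complexity as compressed byte length."""
        return len(zlib.compress(x.astype(np.float32).tobytes()))

    def complexity_error_correlation(X_test, per_example_loss):
        """Pearson correlation between data-point complexity and test loss."""
        c = np.array([complexity(x) for x in X_test])
        e = np.asarray(per_example_loss, dtype=np.float64)
        return np.corrcoef(c, e)[0, 1]

    # Usage (hypothetical arrays): X_test holds test inputs, losses holds the
    # per-example test losses from a trained DNN. A positive correlation would
    # be consistent with high-complexity points contributing more to the error.
    # rho = complexity_error_correlation(X_test, losses)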

Paper Details

Date Published: 13 May 2019
PDF: 9 pages
Proc. SPIE 10982, Micro- and Nanotechnology Sensors, Systems, and Applications XI, 109822H (13 May 2019); doi: 10.1117/12.2520172
Author Affiliations:
Alex Gain, The Johns Hopkins Univ. (United States)
Hava Siegelmann, Univ. of Massachusetts Amherst (United States)


Published in SPIE Proceedings Vol. 10982:
Micro- and Nanotechnology Sensors, Systems, and Applications XI
Thomas George; M. Saif Islam, Editor(s)

© SPIE