Deep Learning by Josh Patterson & Adam Gibson

Author: Josh Patterson & Adam Gibson
Language: English
Format: epub
Publisher: O'Reilly Media, Inc.
Published: 2017-08-09T16:00:00+00:00


Understanding the Debug Output During Training

During training, we’ll see command-line output such as the following:

21:36:00.358 [main] INFO o.d.o.l.ScoreIterationListener - Score at iteration 0 is 0.5154157920151949

The value at the end of the line is the loss function averaged over the examples in the current mini-batch. The scale of this number, and where it begins and ends during training, depends on the loss function chosen in the network architecture: mean squared error (MSE), for example, produces progress scores on a different scale than a negative log likelihood loss function.

Here’s the easiest way to understand this number: “low is good, high is bad,” and we want to see it generally drop over time.
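To make the "mini-batch average" concrete, here is a small plain-Java sketch (not DL4J; the predictions and labels are made-up values) that computes this score for two loss functions, showing that MSE and negative log likelihood land on different scales for the same batch:

```java
// Illustrative sketch: the score reported each iteration is the loss
// averaged over the examples in the current mini-batch.
public class BatchScore {

    // Mean squared error, averaged over the batch
    static double mse(double[] pred, double[] label) {
        double sum = 0.0;
        for (int i = 0; i < pred.length; i++) {
            double d = pred[i] - label[i];
            sum += d * d;
        }
        return sum / pred.length;
    }

    // Negative log likelihood for binary labels, averaged over the batch
    static double nll(double[] prob, double[] label) {
        double sum = 0.0;
        for (int i = 0; i < prob.length; i++) {
            sum += -(label[i] * Math.log(prob[i])
                    + (1.0 - label[i]) * Math.log(1.0 - prob[i]));
        }
        return sum / prob.length;
    }

    public static void main(String[] args) {
        // Hypothetical network outputs and true labels for a batch of 3
        double[] pred  = {0.8, 0.3, 0.6};
        double[] label = {1.0, 0.0, 1.0};
        System.out.printf("MSE score: %.4f%n", mse(pred, label)); // 0.0967
        System.out.printf("NLL score: %.4f%n", nll(pred, label)); // 0.3635
    }
}
```

The absolute numbers differ, but both behave the same way during training: as the predictions move toward the labels, the per-batch average drops.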

To turn on this debugging output, add the following line to an example:
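The code listing itself did not survive this extraction. In DL4J, the usual way to attach this listener is a one-liner (a sketch assuming a network object named `model`; the constructor argument is the print frequency in iterations):

```java
import org.deeplearning4j.optimize.listeners.ScoreIterationListener;

// Print the mini-batch score to the log every iteration (frequency 1)
model.setListeners(new ScoreIterationListener(1));
```

A larger frequency value (e.g., 10) prints the score only every N iterations, which keeps the log readable on long training runs.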
