Mastering Deep Learning Fundamentals with Python: The Absolute Ultimate Guide for Beginners To Expert and Step By Step Guide to Understand Python Programming Concepts by Wilson Richard

Author: Wilson, Richard
Language: eng
Format: epub
Published: 2019-07-08T16:00:00+00:00


Even though our analysis of CNNs is progressing well, something is still missing: where do the features come from? And how do we set the weights of our fully connected layers? If everything had to be chosen by hand, CNNs would be far less popular than they are. Fortunately, a very important concept does this work for us: backpropagation.

To be able to use backpropagation, we need a collection of images whose correct categories are already known. This means that some charitable (and patient) souls have analyzed thousands of images in advance and labelled each one with its category, X or O.

These images are then used with a CNN that has not yet been trained, meaning that every pixel of every feature and every weight of every fully connected layer is initialized to a random value. After that, we feed the images to the CNN, one by one.

For each image the CNN analyzes, a vote is obtained. The number of classification errors we make then tells us about the quality of our features and weights. The features and weights can be adjusted to reduce that error: each value is nudged up or down, and the network's error is recomputed after each change. Whichever adjustment lowers the error is kept.

After doing this for every pixel of every feature in every convolutional layer, and for every weight in every fully connected layer, the new weights give an answer that works slightly better for that image.

This process is then repeated with each of the other labelled images.
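The adjust-and-keep-if-better loop described above can be sketched in plain Python. Everything here is a toy illustration, not real backpropagation: the "images" are short made-up pixel lists, the "network" is a single weighted vote rather than a CNN, and the step size is arbitrary.

```python
import random

random.seed(0)

# Toy labelled "images": 4-pixel vectors labelled +1 ("X") or -1 ("O").
# These values are invented purely to illustrate the idea.
dataset = [
    ([1, 0, 0, 1], +1), ([1, 1, 0, 1], +1), ([0.9, 0, 0.1, 1], +1),
    ([0, 1, 1, 0], -1), ([0, 1, 0.9, 0], -1), ([0.1, 1, 1, 0.2], -1),
]

def classify(weights, pixels):
    """A 'vote': weighted sum of the pixels, whose sign picks the category."""
    score = sum(w * p for w, p in zip(weights, pixels))
    return +1 if score >= 0 else -1

def error(weights):
    """Fraction of the labelled set that is misclassified."""
    return sum(classify(weights, x) != y for x, y in dataset) / len(dataset)

# Start from random weights, as the text describes.
weights = [random.uniform(-1, 1) for _ in range(4)]
initial = error(weights)

# Nudge each weight up and down; keep any change that lowers the error.
step = 0.1
for _ in range(50):
    for i in range(len(weights)):
        for delta in (+step, -step):
            candidate = weights[:]
            candidate[i] += delta
            if error(candidate) < error(weights):
                weights = candidate
```

Because a change is kept only when it reduces the error, the final error can never be worse than the initial one; with enough labelled examples, the weights settle on values that work for most of them.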

Elements that appear in only a few images are quickly forgotten, but patterns found regularly across a large number of images are retained in the features and connection weights. With enough labelled images, these values stabilize at a set that works quite well for a wide variety of cases.

As is certainly apparent, backpropagation is another computationally expensive step, and another motivation for specialized computer hardware.

Hyperparameters

Unfortunately, not every aspect of CNNs is as intuitive to learn and understand as what we have seen so far. There is still a long list of parameters that must be set by hand for a CNN to achieve good results.

For each convolutional layer, how many features should one choose? How many pixels should each feature span?

For each pooling layer, which window size should we choose? Which stride?

For each additional fully connected layer, how many hidden neurons should it have?

In addition to these parameters, there are higher-level architectural choices to consider: how many layers of each type should be included, and in what order? Some deep learning models have more than a hundred layers, which makes the number of possibilities extremely large.
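To get a feel for how quickly these choices multiply, the following sketch enumerates a small hypothetical grid over the hyperparameters mentioned above. The specific names and values are made up for illustration; real searches involve far larger spaces.

```python
from itertools import product

# Hypothetical search grid for the choices listed above.
grid = {
    "features_per_conv_layer": [8, 16, 32],
    "feature_size_px": [3, 5, 7],
    "pool_window": [2, 3],
    "pool_stride": [1, 2],
    "hidden_neurons": [64, 128, 256],
    "num_conv_layers": [1, 2, 3],
}

# Every combination of the values above is one candidate configuration.
combos = list(product(*grid.values()))
print(len(combos))  # 3 * 3 * 2 * 2 * 3 * 3 = 324 configurations
```

Even six small choices already yield hundreds of configurations; add layer ordering and depth up to a hundred layers, and exhaustive search becomes hopeless, which is why only a small fraction of possible designs has ever been tried.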

With so many possible combinations and permutations, only a small fraction of the possible configurations have been tested so far. The different designs of CNNs are generally


