Learning in Python: Study Data Science and Machine Learning including Modern Neural Networks produced in Python, Theano, and TensorFlow by K K.M

Author: K, K.M.
Language: eng
Format: epub
Published: 2021-03-08T16:00:00+00:00


A neural network in Theano

First, I will define my inputs, targets, and weights (the weights will be shared variables):

thX = T.matrix('X')
thT = T.matrix('T')
W1 = theano.shared(np.random.randn(D, M), 'W1')
W2 = theano.shared(np.random.randn(M, K), 'W2')

Notice I've added a "th" prefix to the Theano variables, since I will call my actual data, which are Numpy arrays, X and T.

Recall that M is the number of units in the hidden layer.
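
These snippets assume NumPy and Theano have already been imported and that the dimensions and learning rate are defined; a minimal sketch of that setup (the specific numbers below are placeholders of my own, not values from the book):

import numpy as np
import theano
import theano.tensor as T

D = 2        # number of input features (placeholder)
M = 4        # number of hidden units (placeholder)
K = 3        # number of output classes (placeholder)
lr = 0.0001  # learning rate used in the update expressions below (placeholder)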

Next, I define the feedforward operation:

thZ = T.tanh(thX.dot(W1))
thY = T.nnet.softmax(thZ.dot(W2))

T.tanh is a nonlinear function like the sigmoid, but it ranges between -1 and +1.
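
If it helps to see the relationship concretely, tanh is just a rescaled sigmoid; a quick numerical check (my own illustration, not from the book):

import numpy as np

a = np.linspace(-3, 3, 7)
sigmoid = lambda x: 1 / (1 + np.exp(-x))
# tanh(a) equals 2*sigmoid(2a) - 1: a sigmoid stretched to run from -1 to +1
print np.allclose(np.tanh(a), 2 * sigmoid(2 * a) - 1)   # True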

Next, I define my cost function and my prediction function (the latter is used to calculate the classification error later):

cost = -(thT * T.log(thY)).sum()
prediction = T.argmax(thY, axis=1)
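
For this cost to make sense, thT should be fed an N x K indicator ("one-hot") matrix of targets, so the sum only picks up log thY at the correct class for each row. A hypothetical helper for building such a matrix (the name y2indicator and the variables Ytrain/Ytest are my own labels for the integer targets; the book may do this differently):

def y2indicator(y, K):
    # y is a length-N vector of integer class labels in 0..K-1
    N = len(y)
    ind = np.zeros((N, K))
    for n in xrange(N):
        ind[n, y[n]] = 1
    return ind

Ytrain_ind = y2indicator(Ytrain, K)
Ytest_ind = y2indicator(Ytest, K)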

Next, I define my update expressions. (Notice how Theano can calculate gradients for you!)

update_W1 = W1 - lr*T.grad(cost, W1)
update_W2 = W2 - lr*T.grad(cost, W2)
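
In equation form, each expression is just one step of gradient descent on the cost J, with learning rate eta = lr:

$$W_1 \leftarrow W_1 - \eta \frac{\partial J}{\partial W_1}, \qquad W_2 \leftarrow W_2 - \eta \frac{\partial J}{\partial W_2}$$

T.grad builds the symbolic gradient expressions for us, so we never have to derive them by hand.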

I create the train function, just like in the basic example above:

train = theano.function(
    inputs=[thX, thT],
    updates=[(W1, update_W1), (W2, update_W2)],
)

I also create a prediction function to give me the cost and predictions on my test set, so that I can later calculate the error rate and classification rate.

get_prediction = theano.function(
    inputs=[thX, thT],
    outputs=[cost, prediction],
)
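
Since the whole point of get_prediction is to measure performance on the test set, it is handy to have a small error-rate helper as well; a sketch of my own (not necessarily the book's exact function):

def error_rate(p, t):
    # p: predicted labels, t: true labels (integer vectors of the same length)
    return np.mean(p != t)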

As in the previous section, I do a for-loop where I just call train() over and over until convergence. (Note that at the minimum the derivative will be 0, so at that point the weights will not change anymore.) This code uses a technique called "batch gradient descent", which iterates over batches of the training set one at a time, instead of the entire training set. This is a "stochastic" technique, meaning we expect that, over a very large number of samples coming from the same distribution, we will converge to a value that is optimal for all of them.
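
The loop below assumes the batch bookkeeping is already set up; a minimal sketch of that setup (the specific values are placeholders of my own, not the book's):

max_iter = 20                           # full passes over the training data (placeholder)
batch_sz = 500                          # mini-batch size (placeholder)
n_batches = Xtrain.shape[0] / batch_sz  # Python 2 integer division
print_period = 10                       # check the test cost every 10 batches (placeholder)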

for i in xrange(max_iter):
    for j in xrange(n_batches):
        Xbatch = Xtrain[j*batch_sz:(j*batch_sz + batch_sz),]
        Ybatch = Ytrain_ind[j*batch_sz:(j*batch_sz + batch_sz),]
        train(Xbatch, Ybatch)
        if j % print_period == 0:
            cost_val, prediction_val = get_prediction(Xtest, Ytest_ind)
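
Inside the if-block, one would typically use these values to track progress; a sketch of how that might look (my own formatting, using the error_rate helper and Ytest labels assumed above, not the book's exact code):

            err = error_rate(prediction_val, Ytest)
            print "Cost / error at i=%d, j=%d: %.3f / %.3f" % (i, j, cost_val, err)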


