Neural Networks and TensorFlow - Deep Learning Series [Part 5]

in programming •  7 years ago  (edited)

[Image: Neural Networks and Tensorflow - 5 - Backpropagation]


In the previous lesson we discussed the gradient descent optimizer. This optimizer modifies the parameters of our model - in most cases the weights and the bias - to minimize the loss or cost function, and hence to improve the performance of the algorithm.
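To make the update rule concrete, here is a minimal sketch of gradient descent on a hypothetical one-parameter loss, L(w) = (w - 3)^2 (an invented example, not from the lesson). Each step moves the parameter a small amount against the gradient:

```python
# Gradient descent on a toy loss L(w) = (w - 3)**2, minimized at w = 3.
def loss_grad(w):
    return 2.0 * (w - 3.0)  # dL/dw

w = 0.0               # initial parameter value
learning_rate = 0.1
for _ in range(100):  # repeated update: w <- w - lr * dL/dw
    w -= learning_rate * loss_grad(w)

print(round(w, 4))    # w has converged very close to 3
```

The same rule applies to every weight and bias in a network; the only hard part is computing the gradient for each of them, which is exactly what backpropagation does.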

So, in this lesson we're going to look at backpropagation, or how the optimization process takes place. When the computation has reached the output, we compare the predicted output (obtained through forward propagation) with the real output (from our labels). The difference between the two gives us the value of the cost function.

Then, the error is propagated backwards through the network, and the weights and the bias are adjusted in order to minimize the loss or cost function. Reducing this function brings the predicted output closer to the real output, thus improving the performance of the algorithm.
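The forward pass, cost, backward pass, and update can all be sketched in a few lines of NumPy. This is a minimal illustration, not the lesson's actual code: a single sigmoid neuron trained on the logical AND function (an invented toy dataset), using the sigmoid + cross-entropy combination whose gradient with respect to the pre-activation simplifies to (prediction - label):

```python
import numpy as np

np.random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy dataset: the logical AND function (a stand-in for real data).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [0.], [0.], [1.]])

W = np.random.randn(2, 1) * 0.1   # weights
b = np.zeros(1)                   # bias
lr = 1.0

for _ in range(5000):
    # Forward propagation: compute the predicted output.
    y_hat = sigmoid(X @ W + b)
    # Backpropagation: for sigmoid + cross-entropy, the gradient of the
    # cost w.r.t. z = XW + b simplifies to (y_hat - y), averaged here.
    d_z = (y_hat - y) / len(X)
    dW = X.T @ d_z                # gradient w.r.t. the weights
    db = d_z.sum(axis=0)          # gradient w.r.t. the bias
    # Gradient descent update of weights and bias.
    W -= lr * dW
    b -= lr * db

print((y_hat > 0.5).astype(int).ravel())  # predictions match the labels
```

In a deeper network the same chain rule is applied layer by layer, multiplying each layer's local derivative as the error flows backwards; frameworks like TensorFlow automate exactly this bookkeeping.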

Please watch the full lesson below for a slightly more technical explanation of backpropagation, which is a crucial concept in deep learning and neural networks.



To stay in touch with me, follow @cristi


Cristi Vlad Self-Experimenter and Author


Basics, excellent!

At some point, I would LOVE it if you could do a video on how to build a network using LSTMs at a low level, like with Numpy and stuff. I use them all the time, but I can't wrap my head around how they do the unrolling trick or compute the gradient. Thanks!

That is definitely in the works. We'll first go through convolutional neural networks.