In this lesson on deep learning with TensorFlow, we will be looking at feedforward neural networks.
More specifically, we'll be discussing how they are made of artificial neurons and how these neurons receive inputs, process them, and send signals forward.
In TensorFlow, as in other neural network libraries, the convention is that the inputs entering the network are multiplied by weights, and a bias term is added to the result. This convention makes both the computation and the optimization of the network much more efficient.
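As a rough sketch of that weighted-sum-plus-bias convention, here it is in plain NumPy (the specific numbers are made up for illustration; the actual code from the lesson is in the video and the GitHub link):

```python
import numpy as np

# Hypothetical layer: 3 inputs feeding 2 neurons.
inputs = np.array([0.5, -1.2, 2.0])      # input vector x
weights = np.array([[0.1, 0.4],
                    [-0.3, 0.2],
                    [0.5, -0.1]])        # weight matrix W (3 inputs x 2 neurons)
biases = np.array([0.05, -0.02])         # one bias term per neuron

# Pre-activation output of the layer: z = x . W + b
z = inputs @ weights + biases
print(z)
```

Expressing the whole layer as one matrix multiplication plus a bias vector, rather than looping over neurons, is what makes the computation efficient on modern hardware.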
Here we also discuss the three most common activation functions applied inside the neurons:
- the sigmoid activation
- the hyperbolic tangent
- the rectified linear unit
Each of these functions has its place, and some are better suited to certain types of projects than others. To be honest, the two that I've used the most (in computer vision) are sigmoid and ReLU. Anyway, we'll discuss activation functions in more detail when the time comes. Here we're only introducing them and briefly mentioning how they work and what they look like. Please watch the video for the complete lesson:
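For a quick picture of how the three activations behave, here is a minimal NumPy sketch (not the lesson's TensorFlow code, just the standard definitions of each function):

```python
import numpy as np

def sigmoid(z):
    # Squashes any real input into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # Squashes input into (-1, 1); zero-centered, unlike sigmoid
    return np.tanh(z)

def relu(z):
    # Passes positive inputs through unchanged, clips negatives to 0
    return np.maximum(0.0, z)

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z))   # values between 0 and 1, with sigmoid(0) = 0.5
print(tanh(z))      # values between -1 and 1
print(relu(z))      # [0. 0. 2.]
```

Notice that ReLU is the cheapest to compute, which is one reason it is so popular in deep networks.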
To stay in touch with me, follow @cristi
Cristi Vlad, Self-Experimenter and Author
Creative posts can add insight. Success always, @cristi.
GitHub link not opening. Can you provide some other link for the code?
Click on "show more" to see the full GitHub link. If you click on it directly, it's truncated.