Overview of Neural Networks and the use of Activation functions

Pre-requisite: Types of Activation Functions used in Machine Learning

The mechanism of a neural network is loosely similar to that of the human brain. When the network is fed a lot of data, it tries to separate the useful from the useless, much like how human brains work.

Activation functions are the algorithms that neurones use to sort useful data from useless data. An activation function determines a neurone's output based on its input. So, if the input itself is useless and the algorithm cannot make sense of it, the output is consequently useless as well. A neurone is saturated when its output shows little or no variation regardless of the input, given the activation function used; in some cases, saturated neurones become dead neurones. Saturation means all outputs will be roughly equal, making subsequent outputs not very useful. Saturation and dead neurones make it hard for a model to learn during backward propagation.

Since I have introduced the term propagation, a brief explanation is in order. Propagation in AI is simply the flow of computation for making decisions and learning. There are two kinds: forward propagation and backward propagation. Forward propagation runs from the input layer to the output layer; it is the process of the model doing what it was designed to do. It works by passing information from layer to layer: each layer has a set of neurones with activation functions whose outputs feed the neurones in the subsequent layer. Backward propagation runs from the output layer back to the input layer.
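As a minimal sketch of the ideas above (the network shape, weights, and biases here are made up purely for illustration), forward propagation is each layer computing a weighted sum of the previous layer's outputs and applying an activation function, and saturation shows up as the sigmoid's gradient collapsing toward zero for large inputs:

```python
import math

def sigmoid(x):
    # Squashes any real input into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    # Derivative of the sigmoid, used during backward propagation
    s = sigmoid(x)
    return s * (1.0 - s)

def forward(inputs, layers):
    # Forward propagation: each layer's neurones take a weighted sum
    # of the previous layer's outputs, add a bias, and apply the
    # activation function; the result feeds the next layer.
    activations = inputs
    for weights, biases in layers:
        activations = [
            sigmoid(sum(w * a for w, a in zip(neuron_w, activations)) + b)
            for neuron_w, b in zip(weights, biases)
        ]
    return activations

# Toy 2-input -> 2-hidden -> 1-output network (arbitrary weights)
layers = [
    ([[0.5, -0.3], [0.8, 0.2]], [0.1, -0.1]),  # hidden layer
    ([[1.2, -0.7]], [0.05]),                   # output layer
]
print(forward([1.0, 0.5], layers))

# Saturation: for large |x| the sigmoid's gradient nearly vanishes,
# so backward propagation barely updates the weights feeding it.
print(sigmoid_grad(0.0))   # healthy gradient: 0.25
print(sigmoid_grad(12.0))  # saturated: roughly 6e-6
```

The gradient comparison at the end is the whole story of saturation: a neurone sitting at 0 has plenty of gradient to learn from, while one pushed far into the flat tail of the sigmoid passes almost nothing back during backward propagation.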