Single-layer neural network training
In this tutorial we will discuss the mathematical basis of single-layer neural network training methods.

Gradient descent method

The gradient descent method finds a local extremum (minimum or maximum) of a function by moving along the gradient of the error function. According to the gradient descent method, the weights and thresholds of the neurons are updated by the formulas:

\begin{equation} w_{ij}(t+1)=w_{ij}(t)-\alpha \frac{\partial E}{\partial w_{ij}} \end{equation}

\begin{equation} b_{i}(t+1)=b_{i}(t)-\alpha \frac{\partial E}{\partial b_{i}} \end{equation}

Here, $E$ is the error function and $\alpha$ is the learning rate of the training algorithm.

Delta rule

The delta rule, also called the Widrow-Hoff learning rule, was introduced by Bernard Widrow and Marcian Hoff to minimize the error over all training patterns. It minimizes the squared error of the neural network output, determined by the formula $E=\frac{1}{2}(Y-d)^2$, where $d$ is the target output and $Y$ is the actual output of the network.
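The delta rule follows directly from the gradient descent formulas above. As a worked step (a sketch, assuming a single output neuron with linear activation $Y=\sum_j w_{j}x_{j}+b$, as in Widrow and Hoff's original ADALINE), substituting $E=\frac{1}{2}(Y-d)^2$ into the gradient gives:

\begin{equation} \frac{\partial E}{\partial w_{j}}=(Y-d)\,x_{j}, \qquad \frac{\partial E}{\partial b}=Y-d \end{equation}

so the updates become $w_{j}(t+1)=w_{j}(t)-\alpha (Y-d)\,x_{j}$ and $b(t+1)=b(t)-\alpha (Y-d)$.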
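Below is a minimal Python sketch of this training loop for one linear neuron. The dataset, learning rate, and epoch count are hypothetical placeholders chosen for illustration, not values from the article:

import numpy as np

# Hypothetical training set: the targets follow d = x1 + x2,
# so a single linear neuron can fit them exactly.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
d = np.array([0.0, 1.0, 1.0, 2.0])

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=2)   # weights w_j
b = 0.0                             # threshold (bias) b
alpha = 0.1                         # learning rate

for epoch in range(50):
    for x, target in zip(X, d):
        y = w @ x + b               # neuron output Y (linear activation assumed)
        err = y - target            # (Y - d)
        w -= alpha * err * x        # w(t+1) = w(t) - alpha * dE/dw
        b -= alpha * err            # b(t+1)  = b(t) - alpha * dE/db

print(w, b)                         # approaches w = [1, 1], b = 0

Each inner step applies the two gradient descent update formulas with the delta-rule gradients derived above, one training pattern at a time.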