BP, Gradient descent and Generalisation
For each training pattern presented to a multilayer neural network, we can compute the error:
e(p) = yd(p) - y(p)
Sum-Squared Error: squaring this error and summing across all n patterns gives the SSE, a good measure of the overall performance of the network.
The SSE depends on the weights and thresholds.
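A minimal sketch of the SSE computation (the function name and the example values are illustrative, not from the original notes):

```python
import numpy as np

def sum_squared_error(y_desired, y_actual):
    """Sum-squared error over all training patterns: sum of (yd(p) - y(p))^2."""
    errors = np.asarray(y_desired) - np.asarray(y_actual)
    return float(np.sum(errors ** 2))

# Desired vs. actual outputs for three patterns
yd = [1.0, 0.0, 1.0]
y = [0.9, 0.2, 0.8]
print(sum_squared_error(yd, y))  # close to 0.09 = 0.01 + 0.04 + 0.04
```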
Back-propagation
Back-propagation is a "gradient descent" training algorithm.
Steps:
1. Calculate the error for a single pattern.
2. Compute the weight changes that produce the greatest reduction in error, by following the error gradient (steepest slope).
This is only possible with differentiable activation functions (e.g. the sigmoid).
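The sigmoid is the standard example of such a function, since its derivative can be written in terms of its own output (a small sketch, not part of the original notes):

```python
import math

def sigmoid(x):
    """Logistic sigmoid: smooth, differentiable everywhere."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    """Derivative s'(x) = s(x) * (1 - s(x)), used in the BP weight updates."""
    s = sigmoid(x)
    return s * (1.0 - s)

print(sigmoid(0.0))             # 0.5
print(sigmoid_derivative(0.0))  # 0.25
```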
Gradient descent is only approximate, because training proceeds pattern by pattern rather than on the full error surface.
Gradient descent may not reach the true global error minimum; instead it may get stuck in a "local" minimum.
A common remedy: add a momentum term to the weight updates.
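One way to sketch the momentum idea: each weight change mixes in a fraction of the previous change, so the search keeps moving through flat regions and shallow local dips. The function name, learning rate, and momentum coefficient below are illustrative assumptions:

```python
def momentum_update(w, velocity, gradient, lr=0.1, beta=0.9):
    """One gradient-descent step with a momentum term.

    velocity accumulates a decaying sum of past gradients (beta is the
    momentum coefficient); the weight moves along this smoothed direction.
    """
    velocity = beta * velocity - lr * gradient
    return w + velocity, velocity

# Minimise f(w) = w^2 (gradient 2w) starting from w = 1.0
w, v = 1.0, 0.0
for _ in range(100):
    w, v = momentum_update(w, v, gradient=2.0 * w)
print(w)  # oscillates toward the minimum at 0
```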