Multilayer perceptron
Latest revision as of 11:25, 22 December 2009
== General ==

A multilayer perceptron is a feedforward artificial neural network: the signal flows from the input layer through the hidden layers to the output layer. During training, the error correction of the weights is done in the opposite direction by the backpropagation algorithm.
== Activation ==

First a cumulative input is calculated by the following equation:
:<math>net_{k} = \sum^{n}_{m=1} w_{mk} \cdot i_{m}</math>
Considering the BIAS value the equation is:
:<math>net_{k} = \sum^{n}_{m=1} w_{mk} \cdot i_{m} + BIAS</math>
Sigmoid activation function:
:<math>o_{k} = \frac{1}{1 + e^{-net_{k}}}</math>
Hyperbolic tangent activation function:
:<math>o_{k} = \tanh(net_{k})</math>
using an output range between -1 and 1, or
:<math>o_{k} = \frac{\tanh(net_{k}) + 1}{2}</math>
using an output range between 0 and 1.
:<math>net_{k}</math> cumulative input
:<math>w_{mk}</math> weight of input <math>m</math>
:<math>i_{m}</math> value of input <math>m</math>
:<math>n</math> number of inputs
:<math>k</math> number of neuron
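The activation step described above can be sketched as follows. This is a minimal illustrative Python sketch, not the PHP implementation the article refers to; the function names are chosen here:

```python
import math

def net_input(weights, inputs, bias=0.0):
    # cumulative input: net_k = sum over m of w_mk * i_m, optionally plus a BIAS value
    return sum(w * i for w, i in zip(weights, inputs)) + bias

def sigmoid(net):
    # sigmoid activation, output range (0, 1)
    return 1.0 / (1.0 + math.exp(-net))

def tanh_activation(net):
    # hyperbolic tangent activation, output range (-1, 1)
    return math.tanh(net)

def tanh_activation_01(net):
    # rescaled hyperbolic tangent, output range (0, 1)
    return (math.tanh(net) + 1.0) / 2.0
```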
== Error of neural network ==

If the neural network is initialized with random weights, it does not yet produce the expected output, so training is necessary. During supervised training, known inputs and their corresponding output values are presented to the network, which makes it possible to compare the real output with the desired output. The error is described by the following equation:
:<math>E = {1\over2} \sum^{n}_{i=1} (t_{i} - o_{i})^{2}</math>
:<math>E</math> network error
:<math>n</math> count of input patterns
:<math>t_{i}</math> desired output
:<math>o_{i}</math> calculated output
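The error equation translates directly into code. A minimal Python sketch (illustrative, not the PHP implementation):

```python
def network_error(targets, outputs):
    # E = 1/2 * sum over i of (t_i - o_i)^2
    return 0.5 * sum((t - o) ** 2 for t, o in zip(targets, outputs))
```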
== Backpropagation ==

The learning algorithm of a single layer perceptron is easy compared to that of a multilayer perceptron, because only the output layer is directly connected to the output, not the hidden layers. Calculating the right weights of the hidden layers is therefore mathematically difficult. The delta value for changing the weights of a hidden neuron is described by the following equation:
:<math>\Delta w_{ij} = -\alpha \cdot {\partial E \over \partial w_{ij}} = \alpha \cdot \delta_{j} \cdot x_{i}</math>
:<math>E</math> network error
:<math>\Delta w_{ij}</math> delta value of the weight <math>w_{ij}</math> of the connection from neuron <math>i</math> to neuron <math>j</math>
:<math>\alpha</math> learning rate
:<math>\delta_{j}</math> the error of neuron <math>j</math>
:<math>x_{i}</math> input of neuron <math>i</math>
:<math>t_{j}</math> desired output of output neuron <math>j</math>
:<math>o_{j}</math> real output of output neuron <math>j</math>
== Programming solution of backpropagation ==

In this PHP implementation of the multilayer perceptron the following algorithm is used for the weight changes in the hidden layers and the output layer.

=== Weight change of output layer ===

:<math>\Delta w_{k} = o_{k} \cdot (a_{k} - o_{k}) \cdot (1 - o_{k})</math>
:<math>w_{mk} = w_{mk} + \alpha \cdot i_{m} \cdot \Delta w_{k}</math>
:<math>k</math> neuron k
:<math>o</math> output
:<math>i</math> input
:<math>a</math> desired output
:<math>w</math> weight
:<math>m</math> weight m
:<math>\alpha</math> learning rate
:momentum
For the hidden layers the delta value of a neuron is calculated from the delta values of the following layer:
:<math>\Delta w_{k} = o_{k} \cdot (1 - o_{k}) \cdot \sum^{n}_{l=1} (\Delta w_{l} \cdot w_{kl})</math>
:<math>k</math> neuron k
:<math>l</math> neuron l
:<math>w</math> weight
:<math>m</math> weight m
:<math>i</math> input
:<math>o</math> output
:<math>n</math> count of neurons
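The delta rules above can be sketched in Python. The output-layer delta follows the article's equation; the hidden-layer sum and the weight update `w_mk = w_mk + alpha * i_m * delta_k` are assumed here from standard backpropagation, since the original update equation did not survive:

```python
def delta_output(o_k, a_k):
    # delta of an output neuron: o_k * (a_k - o_k) * (1 - o_k)
    return o_k * (a_k - o_k) * (1.0 - o_k)

def delta_hidden(o_k, deltas_next, weights_next):
    # delta of a hidden neuron: o_k * (1 - o_k) * sum over l of (delta_l * w_kl)
    total = sum(d * w for d, w in zip(deltas_next, weights_next))
    return o_k * (1.0 - o_k) * total

def update_weight(w_mk, alpha, i_m, delta_k):
    # assumed update rule: w_mk = w_mk + alpha * i_m * delta_k
    return w_mk + alpha * i_m * delta_k
```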
=== Momentum ===

To avoid oscillating weight changes, a momentum factor is defined, so that each calculated weight change also takes the previous weight change into account.
=== Overfitting ===

To avoid overfitting of the neural network, this PHP implementation finishes the training procedure once the real output value lies within a fault tolerance of 1 per cent of the desired output value.
=== Choosing learning rate and momentum ===

The proper choice of learning rate and momentum is made by experience. Both values range between 0 and 1. This PHP implementation uses a default value of 0.5 for the learning rate and 0.95 for the momentum. Neither value may be zero; otherwise no weight change would happen and the network would never reach an errorless level. These factors can be changed at runtime.
=== Dynamic learning rate ===

To make the network converge faster to its lowest error, using a dynamic learning rate may be a good way.
:<math>w_{mk} = w_{mk} + \alpha \cdot \gamma \cdot i_{m} \cdot \Delta w_{k}</math>
:<math>\alpha</math> learning rate
:momentum
:<math>\gamma</math> dynamic learning rate factor
:<math>k</math> neuron k
:<math>w</math> weight
:<math>m</math> weight m
:<math>i</math> input
=== Weight decay ===

Normally the weights grow to large values, but in fact this is not necessary, and with large weights the network convergence may take too long. The weight decay algorithm tries to avoid large weights.

The weight change algorithm without weight decay is the following:
:<math>\Delta w_{i}(t) = \alpha \cdot \frac{\partial E(t)}{\partial w_{i}(t)}</math>
By subtracting a value proportional to the last weight, the weight change is reduced in relation to the last weight:
:<math>\Delta w_{i}(t) = \alpha \cdot \frac{\partial E(t)}{\partial w_{i}(t)} - \lambda \cdot w_{i}(t-1)</math>
:<math>w</math> weight
:<math>i</math> neuron
:<math>E</math> error function
:<math>t</math> time (training step)
:<math>\alpha</math> learning rate
:<math>\lambda</math> weight decay factor
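A Python sketch of the decayed weight change. The subtraction of a fraction of the last weight follows the description above; the symbol for the decay factor did not survive rendering, so `decay` is the assumed name here:

```python
def weight_change(alpha, grad, w_prev, decay):
    # Delta w_i(t) = alpha * dE(t)/dw_i(t) - decay * w_i(t-1)
    # with decay = 0 this is plain backpropagation
    return alpha * grad - decay * w_prev
```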
=== Quick propagation algorithm ===

The Quickprop algorithm calculates the weight change by using a quadratic function of the error. Two error values at two different weights are taken as the two points of a secant. Relating this secant to a quadratic function, its minimum can be calculated. The x-coordinate of the minimum point is the new weight value.
:<math>S(t) = \frac{\partial E}{\partial w_{i}(t)}</math>
:<math>\Delta w_{i}(t) = \alpha \cdot \frac{\partial E}{\partial w_{i}(t)}</math> (normal backpropagation)
:<math>\frac{\Delta w_{i}(t)}{\alpha} = \frac{\partial E}{\partial w_{i}(t)}</math>
:<math>S(t) = \frac{\partial E}{\partial w_{i}(t)} = \frac{\Delta w_{i}(t)}{\alpha}</math>
:<math>\Delta w_{i}(t) = \frac{S(t)}{S(t-1) - S(t)} \cdot \Delta w_{i}(t-1)</math> (quick propagation)
:<math>w</math> weight
:<math>i</math> neuron
:<math>E</math> error function
:<math>t</math> time (training step)
:<math>\alpha</math> learning rate
To avoid too big changes, the maximum weight change is limited by the following equation:
:<math>\Delta w_{i}(t) \leq \mu \cdot \Delta w_{i}(t-1)</math>
:<math>\mu = [1.75 .. 2.25]</math>
:<math>w</math> weight
:<math>i</math> neuron
:<math>t</math> time (training step)
:<math>\mu</math> maximal weight change factor
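The secant step and the change limit can be combined in a short Python sketch. It assumes the standard Quickprop secant formula and applies the limit symmetrically:

```python
def quickprop_step(s_t, s_prev, dw_prev, mu=1.75):
    # secant-based step: Delta w(t) = S(t) / (S(t-1) - S(t)) * Delta w(t-1)
    dw = s_t / (s_prev - s_t) * dw_prev
    # limit the change to at most mu times the previous change
    limit = mu * abs(dw_prev)
    return max(-limit, min(limit, dw))
```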
=== RProp (Resilient Propagation) ===

The RProp algorithm refers only to the direction (sign) of the gradient.

<math>\Delta w_{ij}(t) = \begin{cases}
-\Delta p_{ij}, & \text{if } \frac{\partial E}{\partial w_{ij}} > 0 \\
+\Delta p_{ij}, & \text{if } \frac{\partial E}{\partial w_{ij}} < 0 \\
0, & \text{if } \frac{\partial E}{\partial w_{ij}} = 0
\end{cases}</math>

<math>\Delta p_{ij}(t) = \begin{cases}
\alpha^+ \cdot \Delta w_{ij}(t-1), & \text{if } \frac{\partial E}{\partial w_{ij}}(t-1) \cdot \frac{\partial E}{\partial w_{ij}}(t) > 0 \\
\alpha^- \cdot \Delta w_{ij}(t-1), & \text{if } \frac{\partial E}{\partial w_{ij}}(t-1) \cdot \frac{\partial E}{\partial w_{ij}}(t) < 0 \\
\Delta w_{ij}(t-1), & \text{if } \frac{\partial E}{\partial w_{ij}}(t-1) \cdot \frac{\partial E}{\partial w_{ij}}(t) = 0
\end{cases}</math>

:<math>\alpha</math> learning rate
:<math>w</math> weight
:<math>p</math> weight change
:<math>\alpha^+ = 1.2</math>
:<math>\alpha^- = 0.5</math>
:<math>\Delta w(0) = 0.5</math>
:<math>\Delta w(t)_{max} = 50</math>
:<math>\Delta w(t)_{min} = 0</math>
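A Python sketch of one RProp update for a single weight, using the constants listed above. The function returns both the weight change and the adapted step size; the names are chosen here:

```python
def rprop_step(grad, grad_prev, p_prev, inc=1.2, dec=0.5,
               p_max=50.0, p_min=0.0):
    # adapt the step size p depending on whether the gradient kept its sign
    if grad_prev * grad > 0:
        p = min(p_prev * inc, p_max)
    elif grad_prev * grad < 0:
        p = max(p_prev * dec, p_min)
    else:
        p = p_prev
    # the weight moves against the sign of the current gradient
    if grad > 0:
        return -p, p
    if grad < 0:
        return p, p
    return 0.0, p
```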
=== RProp+ ===

If the mathematical sign of the gradient changes, the RProp+ algorithm subtracts the previous weight change <math>\Delta w_{ij}(t-2)</math> from the last weight change <math>\Delta w_{ij}(t-1)</math>.

<math>\Delta w_{ij}(t) = \begin{cases}
\alpha^+ \cdot \Delta w_{ij}(t-1), & \text{if } \frac{\partial E}{\partial w_{ij}}(t-1) \cdot \frac{\partial E}{\partial w_{ij}}(t) > 0 \\
\Delta w_{ij}(t-1) - \Delta w_{ij}(t-2), & \text{if } \frac{\partial E}{\partial w_{ij}}(t-1) \cdot \frac{\partial E}{\partial w_{ij}}(t) < 0 \\
\Delta w_{ij}(t-1), & \text{if } \frac{\partial E}{\partial w_{ij}}(t-1) \cdot \frac{\partial E}{\partial w_{ij}}(t) = 0
\end{cases}</math>
=== iRProp+ ===

iRProp+ is an improved RProp+ algorithm with a small change: before the previous weight change is subtracted, the network error is calculated and compared. If the network error increased from <math>E(t-2)</math> to <math>E(t-1)</math>, the RProp+ procedure is applied. Otherwise no change is made, because if <math>E(t-1)</math> is lower than <math>E(t-2)</math> the weight change seems to be correct and the network is converging.

<math>\Delta w_{ij}(t) = \begin{cases}
\alpha^+ \cdot \Delta w_{ij}(t-1), & \text{if } \frac{\partial E}{\partial w_{ij}}(t-1) \cdot \frac{\partial E}{\partial w_{ij}}(t) > 0 \\
\Delta w_{ij}(t-1) - \Delta w_{ij}(t-2), & \text{if } \frac{\partial E}{\partial w_{ij}}(t-1) \cdot \frac{\partial E}{\partial w_{ij}}(t) < 0 \text{ and } E(t) > E(t-1) \\
\Delta w_{ij}(t-1), & \text{if } \frac{\partial E}{\partial w_{ij}}(t-1) \cdot \frac{\partial E}{\partial w_{ij}}(t) = 0
\end{cases}</math>
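The extra error check that distinguishes iRProp+ from RProp+ can be isolated in a small Python sketch (illustrative names; only the sign-change case is shown):

```python
def irprop_plus_change(dw_last, dw_before_last, grad, grad_prev,
                       err, err_prev):
    # on a gradient sign change, subtract the previous weight change
    # only if the network error actually increased
    if grad_prev * grad < 0 and err > err_prev:
        return dw_last - dw_before_last
    return dw_last
```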
== Binary and linear input ==

If binary input is used, the input value is simply 0 for false and 1 for true.
:<math>0 : False</math>
:<math>1 : True</math>
Using linear input values, normalization is needed:
:<math>i = \frac{f - f_{min}}{f_{max} - f_{min}}</math>
:<math>i</math> input value for neural network
:<math>f</math> real world value
This PHP implementation supports input normalization.
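The normalization equation as a Python sketch (illustrative, not the PHP implementation):

```python
def normalize_input(f, f_min, f_max):
    # i = (f - f_min) / (f_max - f_min), maps [f_min, f_max] to [0, 1]
    return (f - f_min) / (f_max - f_min)
```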
== Binary and linear output ==

The interpretation of output values only makes sense for the output layer and depends on the use of the neural network. If the network is used for classification, binary output is used; binary has two states, true and false. The network always produces linear output values, therefore these values have to be converted to binary values:
:<math>o < 0.5 : False</math>
:<math>o \geq 0.5 : True</math>
:<math>o</math> output value
If linear output is used, the output values have to be denormalized to the real value range the network was trained for:
:<math>f = o \cdot (f_{max} - f_{min}) + f_{min}</math>
:<math>f</math> real world value
:<math>o</math> real output value of neural network
The same normalization equation as for the input values is used for the desired output values while training the network:
:<math>o = \frac{f - f_{min}}{f_{max} - f_{min}}</math>
:<math>o</math> desired output value for neural network
:<math>f</math> real world value
This PHP implementation supports output normalization.
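The output conversions above as a Python sketch (illustrative, not the PHP implementation):

```python
def denormalize_output(o, f_min, f_max):
    # f = o * (f_max - f_min) + f_min, maps [0, 1] back to [f_min, f_max]
    return o * (f_max - f_min) + f_min

def binary_output(o):
    # o < 0.5 : False, o >= 0.5 : True
    return o >= 0.5
```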