Artificial Neural Network - Hopfield Networks
The Hopfield neural network was invented by Dr. John J. Hopfield in 1982. It consists of a single layer containing one or more fully connected recurrent neurons. The Hopfield network is commonly used for auto-association and optimization tasks.
Discrete Hopfield Network
A discrete Hopfield network operates in a discrete-time fashion; in other words, the input and output patterns are discrete vectors, which can be either binary (0, 1) or bipolar (+1, −1) in nature. The network has symmetrical weights with no self-connections, i.e., wij = wji and wii = 0.
Architecture
Following are some important points to keep in mind about the discrete Hopfield network −
· This model consists of neurons with one inverting and one non-inverting output.
· The output of each neuron should be an input to the other neurons, but not an input to itself.
· Weight/connection strength is represented by wij.
· Connections can be excitatory as well as inhibitory. A connection is excitatory if the output of the neuron is the same as its input; otherwise it is inhibitory.
· Weights should be symmetrical, i.e. wij = wji
The output from Y1 going to Y2, Yi and Yn carries the weights w12, w1i and w1n, respectively. Similarly, the other arcs carry their own weights.
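As a quick illustration of these constraints, the following sketch (NumPy assumed; the neuron count and random values are arbitrary) builds a small weight matrix that satisfies symmetry and has no self-connections:

```python
import numpy as np

# Hypothetical 4-neuron network: a random symmetric weight matrix
# with a zero diagonal, i.e. w_ij = w_ji and w_ii = 0.
rng = np.random.default_rng(0)
a = rng.standard_normal((4, 4))
W = (a + a.T) / 2          # enforce symmetry: w_ij = w_ji
np.fill_diagonal(W, 0.0)   # enforce no self-connections: w_ii = 0

assert np.allclose(W, W.T)
assert np.allclose(np.diag(W), 0.0)
```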
Training Algorithm
During training of the discrete Hopfield network, the weights are updated. Since the input vectors can be either binary or bipolar, the weight updates for both cases are given by the following relations.
Case 1 − Binary input patterns
For a set of binary patterns s(p), p = 1 to P
Here, s(p) = s1(p), s2(p),..., si(p),..., sn(p)
Weight Matrix is given by
$$w_{ij} = \sum_{p=1}^{P} \left[2s_i(p) - 1\right]\left[2s_j(p) - 1\right] \quad \text{for } i \neq j$$
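This rule maps each binary entry to ±1 before taking outer products. A minimal NumPy sketch, assuming the patterns are stored as rows of a 0/1 array (the function name and example patterns are illustrative only):

```python
import numpy as np

def hopfield_weights_binary(patterns):
    """Hebbian weights from binary (0/1) patterns s(p), p = 1..P.

    Implements w_ij = sum_p [2*s_i(p) - 1][2*s_j(p) - 1] with w_ii = 0.
    """
    S = np.asarray(patterns)       # shape (P, n), entries in {0, 1}
    B = 2 * S - 1                  # map 0/1 to -1/+1 before the outer products
    W = B.T @ B                    # sum of outer products over all P patterns
    np.fill_diagonal(W, 0)         # no self-connections (w_ii = 0)
    return W

# Example: store two hypothetical 4-bit binary patterns.
W = hopfield_weights_binary([[1, 0, 1, 0],
                             [1, 1, 0, 0]])
```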
Case 2 − Bipolar input patterns
For a set of bipolar patterns s(p), p = 1 to P
Here, s(p) = s1(p), s2(p),..., si(p),..., sn(p)
Weight Matrix is given by
$$w_{ij} = \sum_{p=1}^{P} s_i(p)\, s_j(p) \quad \text{for } i \neq j$$
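The bipolar rule is the same Hebbian sum without the 0/1-to-±1 mapping. A corresponding NumPy sketch (again, the names and example patterns are illustrative):

```python
import numpy as np

def hopfield_weights_bipolar(patterns):
    """Hebbian weights from bipolar (+1/-1) patterns s(p), p = 1..P.

    Implements w_ij = sum_p s_i(p) * s_j(p) with w_ii = 0.
    """
    S = np.asarray(patterns)       # shape (P, n), entries in {-1, +1}
    W = S.T @ S                    # sum of outer products over all P patterns
    np.fill_diagonal(W, 0)         # no self-connections (w_ii = 0)
    return W

W = hopfield_weights_bipolar([[1, -1, 1, -1],
                              [1, 1, -1, -1]])
```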
Testing Algorithm
Step 1 − Initialize the weights, which are obtained from the training algorithm based on the Hebbian principle.
Step 2 − Perform steps 3-9, as long as the activations of the network have not converged.
Step 3 − For each input vector X, perform steps 4-8.
Step 4 − Set the initial activation of the network equal to the external input vector X as follows −
$$y_i = x_i \quad \text{for } i = 1 \text{ to } n$$
Step 5 − For each unit Yi, perform steps 6-9.
Step 6 − Calculate the net input of the network as follows −
$$y_{in_i} = x_i + \sum_{j} y_j w_{ji}$$
Step 7 − Apply the activation as follows over the net input to calculate the output −
$$y_i = \begin{cases} 1 & \text{if } y_{in_i} > \theta_i \\ y_i & \text{if } y_{in_i} = \theta_i \\ 0 & \text{if } y_{in_i} < \theta_i \end{cases}$$
Here $\theta_i$ is the threshold.
Step 8 − Broadcast this output yi to all other units.
Step 9 − Test the network for convergence.
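The steps above can be sketched in Python roughly as follows (NumPy assumed). The random visiting order, the binary 0/1 activations, and the sweep-based stopping test are one reasonable reading of steps 2-9, not the only possible one:

```python
import numpy as np

def hopfield_recall(W, x, theta=0.0, max_sweeps=100):
    """Asynchronous recall following steps 1-9 with binary (0/1) activations."""
    x = np.asarray(x, dtype=float)
    y = x.copy()                                  # step 4: y_i = x_i
    theta = np.broadcast_to(theta, y.shape)       # threshold theta_i per unit
    for _ in range(max_sweeps):                   # step 2: repeat until converged
        changed = False
        for i in np.random.permutation(len(y)):   # step 5: update one unit at a time
            y_in = x[i] + y @ W[:, i]             # step 6: x_i + sum_j y_j w_ji
            if y_in > theta[i]:                   # step 7: threshold activation
                new = 1.0
            elif y_in < theta[i]:
                new = 0.0
            else:
                new = y[i]                        # unchanged when net input equals theta_i
            if new != y[i]:
                y[i] = new                        # step 8: new output is visible to all units
                changed = True
        if not changed:                           # step 9: converged, no unit changed
            break
    return y
```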
Energy Function Evaluation
An energy function is defined as a function that is a bounded and non-increasing function of the state of the system.
The energy function Ef, also called the Lyapunov function, determines the stability of the discrete Hopfield network and is characterized as follows −
$$E_f = -\frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} y_i y_j w_{ij} - \sum_{i=1}^{n} x_i y_i + \sum_{i=1}^{n} \theta_i y_i$$
Condition − In a stable network, whenever the state of a node changes, the above energy function will decrease.
Suppose node i changes its state from $y_i^{(k)}$ to $y_i^{(k+1)}$. Then the energy change $\Delta E_f$ is given by the following relation
$$\Delta E_f = E_f\!\left(y_i^{(k+1)}\right) - E_f\!\left(y_i^{(k)}\right)$$
$$= -\left(\sum_{j=1}^{n} w_{ij}\, y_j^{(k)} + x_i - \theta_i\right)\left(y_i^{(k+1)} - y_i^{(k)}\right)$$
$$= -(net_i)\, \Delta y_i$$
Here $\Delta y_i = y_i^{(k+1)} - y_i^{(k)}$.
The change in energy depends on the fact that only one unit can update its activation at a time.
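As a small sketch of this quantity, the energy can be evaluated directly from the formula above (NumPy assumed; the function name is illustrative):

```python
import numpy as np

def hopfield_energy(W, y, x, theta=0.0):
    """Discrete Hopfield energy:
    E_f = -1/2 * sum_ij y_i y_j w_ij - sum_i x_i y_i + sum_i theta_i y_i
    """
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    theta = np.broadcast_to(theta, y.shape)
    return -0.5 * (y @ W @ y) - x @ y + theta @ y
```

Evaluating this before and after each single-unit update in the recall sketch above should give a non-increasing sequence of values, which is exactly the stability condition stated earlier.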
Continuous Hopfield Network
In comparison with the discrete Hopfield network, the continuous network has time as a continuous variable. It is also used for auto-association and optimization problems such as the travelling salesman problem.
Model − The model or architecture can be built up by adding electrical components such as amplifiers, which map the input voltage to the output voltage over a sigmoid activation function.
Energy Function Evaluation
$$E_f = \frac{1}{2} \sum_{i=1}^{n} \sum_{\substack{j=1 \\ j \neq i}}^{n} y_i y_j w_{ij} - \sum_{i=1}^{n} x_i y_i + \frac{1}{\lambda} \sum_{i=1}^{n} \sum_{\substack{j=1 \\ j \neq i}}^{n} w_{ij}\, g_{ri} \int_{0}^{y_i} a^{-1}(y)\, dy$$
Here λ is the gain parameter and $g_{ri}$ is the input conductance.
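The section does not spell out the state equations of the continuous model, but a commonly used form can be sketched with a simple Euler integration. The time constant, gain, step size, and sigmoid amplifier characteristic below are illustrative assumptions, not values given in the text:

```python
import numpy as np

def continuous_hopfield(W, x, lam=1.0, tau=1.0, dt=0.01, steps=1000):
    """Euler-integration sketch of an assumed continuous Hopfield dynamics:
        tau * du_i/dt = -u_i + sum_j w_ij y_j + x_i,   y_i = a(lam * u_i)
    where a is a sigmoid amplifier characteristic.
    """
    x = np.asarray(x, dtype=float)
    u = np.zeros_like(x)                          # internal (input) voltages
    for _ in range(steps):
        y = 1.0 / (1.0 + np.exp(-lam * u))        # sigmoid output voltages
        u += dt * (-u + W @ y + x) / tau          # Euler step of the state equation
    return 1.0 / (1.0 + np.exp(-lam * u))
```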