
Simulated Annealing

 

Simulated annealing is a more sophisticated method for finding the global minimum of an error surface. In contrast to Monte Carlo learning, only one weight or bias is changed per learning cycle. Depending on the error development and a system temperature, this change is accepted or rejected. One advantage of simulated annealing is that learning is unlikely to get stuck in local minima.

At the beginning of learning the temperature T is set to a start value T0. Each training cycle consists of the following four steps.

  1. Change one weight or bias at random within a range [-Delta, Delta].

  2. Calculate the net error as the sum of the given error function over all patterns.

  3. Accept the change if the error decreased; if the error increased by dE, accept it only with the probability p given by:

        p = exp(-dE / T)

  4. Decrease the temperature by a cooling factor deg (0 < deg < 1):

        T := deg * T
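The four steps above can be sketched as a single training loop. This is an illustrative sketch, not the SNNS implementation: the names T0, delta, and deg for the start temperature, the half-width of the random weight change, and the cooling factor are assumptions, and error stands for whichever net error function the variant uses.

```python
import math
import random

def sim_ann_cycle(weights, error, T, delta=0.1):
    """One simulated annealing training cycle (sketch).

    weights: mutable list of all weights and biases
    error:   callable computing the net error over all patterns
    T:       current system temperature
    delta:   half-width of the random weight change (assumed parameter)
    """
    old_error = error(weights)
    i = random.randrange(len(weights))      # pick one weight or bias
    change = random.uniform(-delta, delta)  # step 1: random change
    weights[i] += change
    new_error = error(weights)              # step 2: net error
    dE = new_error - old_error
    # step 3: always accept a decrease; accept an increase dE > 0
    # only with probability p = exp(-dE / T)
    if dE > 0 and random.random() >= math.exp(-dE / T):
        weights[i] -= change                # reject: undo the change
    return weights

def train(weights, error, T0=1.0, deg=0.99, cycles=1000):
    """Run repeated cycles, cooling the temperature after each one."""
    T = T0
    for _ in range(cycles):
        sim_ann_cycle(weights, error, T)
        T *= deg                            # step 4: decrease temperature
    return weights
```

For example, minimizing the summed squared magnitude of two weights drives both toward zero as the temperature falls and the process becomes greedy.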

The three implemented simulated annealing functions differ only in the way the net error is calculated. Sim_Ann_SS calculates a summed squared error like the backpropagation learning functions; Sim_Ann_WTA calculates a winner-takes-all error; and Sim_Ann_WWTA calculates a winner-takes-all error plus a term corresponding to the confidence of the winner-takes-all decision.
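The two basic error measures can be sketched as follows. This is a hedged illustration: outputs and targets are assumed to be per-pattern lists of output-unit activations, and the extra confidence term of Sim_Ann_WWTA is omitted because its exact form is not given here.

```python
def summed_squared_error(outputs, targets):
    """SS error: squared differences summed over all output
    units of all patterns (as in backpropagation)."""
    return sum((o - t) ** 2
               for out_p, tgt_p in zip(outputs, targets)
               for o, t in zip(out_p, tgt_p))

def winner_takes_all_error(outputs, targets):
    """WTA error: count the patterns whose most active output
    unit differs from the most active target unit."""
    errors = 0
    for out_p, tgt_p in zip(outputs, targets):
        winner = max(range(len(out_p)), key=lambda i: out_p[i])
        wanted = max(range(len(tgt_p)), key=lambda i: tgt_p[i])
        if winner != wanted:
            errors += 1
    return errors
```

With two patterns where only the first picks the correct winner, the WTA error is 1 while the SS error accumulates every unit's squared deviation.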



Niels.Mache@informatik.uni-stuttgart.de
Tue Nov 28 10:30:44 MET 1995