Back Propagation (BP) Networks are the quintessential Neural Nets. Probably eighty percent of nets used today are of this type. Strictly speaking, though, Back Propagation is the learning or training method, rather than the network structure itself. The network operates in the same way as the type we’ve looked at in part 1 — you apply the inputs and calculate an output exactly as described. What the Back Propagation part does is allow you to change the weights, so that the network learns and gives you the output you want. The weights which the network starts off with are simply set to small random numbers (say between –1 and +1).
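To make the starting point concrete, here is a minimal sketch in Python of what the text describes: weights initialised to small random numbers between –1 and +1, and a forward pass (weighted sum, then activation) exactly as in part 1. The layer sizes, the sigmoid activation, and all function names here are illustrative assumptions, not anything fixed by the article.

```python
import math
import random

def make_weights(n_inputs, n_neurons):
    # Start every weight as a small random number between -1 and +1,
    # as the text suggests.
    return [[random.uniform(-1.0, 1.0) for _ in range(n_inputs)]
            for _ in range(n_neurons)]

def sigmoid(x):
    # A common choice of activation for BP networks (an assumption here;
    # part 1 of the series defines the actual neuron used).
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, weights):
    # Apply the inputs and calculate each neuron's output:
    # weighted sum of the inputs, passed through the activation.
    return [sigmoid(sum(w * x for w, x in zip(neuron, inputs)))
            for neuron in weights]

weights = make_weights(2, 3)        # 2 inputs feeding a layer of 3 neurons
outputs = forward([0.5, -0.2], weights)
```

Back Propagation's job, covered next, is to nudge those random starting weights towards values that produce the output you want.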