Understanding the Neurons in Neural Networks (Part 2)
Logical Neurons
In the previous article, we saw how researchers gradually approximated the behavior of the biological neuron. The real breakthrough in artificial neurons came with the multilayer perceptron (MLP) and the use of backpropagation to teach it to classify inputs. Using a from-scratch implementation of an MLP in Processing, we also showed how it works and how it adjusts its weights to learn. Here we return to the experiments of the past to teach our neural network how logic gates function and to check whether our MLP is capable of learning the XOR function — the classic test case, since XOR is not linearly separable and therefore cannot be learned by a single-layer perceptron.
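To make the idea concrete, here is a minimal sketch of the same technique in plain Java rather than Processing: a 2-2-1 MLP with sigmoid activations, trained on the XOR truth table by per-sample backpropagation. This is not the article's code — the class name, the fixed starting weights, and the learning rate are all illustrative assumptions.

```java
// Minimal 2-2-1 MLP learning XOR via backpropagation (illustrative sketch,
// not the article's Processing implementation).
public class XorMlp {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    // Small asymmetric starting weights (arbitrary, chosen to break symmetry):
    // input->hidden weights (2x2), hidden biases, hidden->output weights, output bias.
    static double[][] w1 = {{0.5, -0.4}, {0.3, 0.6}};
    static double[]   b1 = {0.1, -0.2};
    static double[]   w2 = {0.7, -0.5};
    static double     b2 = 0.05;

    static double[] hidden = new double[2];  // last hidden activations (for backprop)

    // Forward pass: input -> hidden layer -> single output, all sigmoid.
    public static double forward(double[] x) {
        for (int j = 0; j < 2; j++)
            hidden[j] = sigmoid(w1[0][j] * x[0] + w1[1][j] * x[1] + b1[j]);
        return sigmoid(w2[0] * hidden[0] + w2[1] * hidden[1] + b2);
    }

    // Stochastic gradient descent on squared error, one sample at a time.
    public static void train(double[][] xs, double[] ys, int epochs, double lr) {
        for (int e = 0; e < epochs; e++) {
            for (int s = 0; s < xs.length; s++) {
                double out  = forward(xs[s]);
                double dOut = (out - ys[s]) * out * (1 - out);  // error signal at output
                for (int j = 0; j < 2; j++) {
                    // Propagate the error back through w2 (before updating it).
                    double dHid = dOut * w2[j] * hidden[j] * (1 - hidden[j]);
                    w2[j]    -= lr * dOut * hidden[j];
                    w1[0][j] -= lr * dHid * xs[s][0];
                    w1[1][j] -= lr * dHid * xs[s][1];
                    b1[j]    -= lr * dHid;
                }
                b2 -= lr * dOut;
            }
        }
    }

    public static void main(String[] args) {
        double[][] xs = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        double[]   ys = {0, 1, 1, 0};
        train(xs, ys, 20000, 0.5);
        for (double[] x : xs)
            System.out.println((int) x[0] + " XOR " + (int) x[1] + " -> " + forward(x));
    }
}
```

After enough epochs the outputs should approach the XOR truth table; because the hidden layer gives the network a non-linear intermediate representation, it can carve out the two regions that a single perceptron cannot separate.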