#2 Coding Multilayer Perceptrons
Many real-life problems are easy to solve because they come down to two choices: for any new decision, you go with either Choice 1 or Choice 2. In machine learning, if two classes can be divided by a straight line, the problem is linearly separable. If not, it is not linearly separable.
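A classic illustration, sketched here with scikit-learn's Perceptron (my choice of example, not from the article): a single straight-line classifier learns AND perfectly but cannot learn XOR, because XOR's classes cannot be divided by one line.

```python
# A minimal sketch: a single-layer perceptron handles AND (linearly
# separable) but fails on XOR (not linearly separable).
import numpy as np
from sklearn.linear_model import Perceptron

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])  # AND: one straight line separates the classes
y_xor = np.array([0, 1, 1, 0])  # XOR: no single line separates the classes

for name, y in [("AND", y_and), ("XOR", y_xor)]:
    clf = Perceptron(max_iter=1000, tol=None).fit(X, y)
    print(name, "accuracy:", clf.score(X, y))  # AND -> 1.0, XOR -> below 1.0
```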
In deep learning, things get more complicated. Now we want to predict something from N inputs, where N can be arbitrarily large. And after all the calculations and training, we want to generate Z outputs, which can also be arbitrarily many.
The most famous example is the MNIST handwritten digit dataset. Here, the inputs are images of handwritten digits from 0 to 9, and the output tells us the probability of each digit. For example, there might be a 0.3 chance the image is a 3, a 0.5 chance it is an 8, and a 0.2 chance it is a 6. All these probabilities add up to 1.
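Probabilities like these typically come from a softmax output layer, which turns the network's raw scores into positive values that sum to 1. A minimal sketch, with made-up scores:

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(z - np.max(z))
    return e / e.sum()

scores = np.array([1.0, 2.5, 1.6])  # hypothetical raw outputs (logits)
probs = softmax(scores)
print(probs, probs.sum())           # the probabilities sum to 1.0
```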
In artificial neural networks, a multilayer perceptron (MLP) has three kinds of layers: an input layer, one or more hidden layers, and an output layer.
Input: the raw features fed into the network. For MNIST, these are the 784 pixel values of a 28x28 digit image.
Desired Output: the target the network should learn to produce. For MNIST, this is a probability for each of the 10 possible digits.
Hidden Layers: the layers between input and output. Each neuron computes a weighted sum of the previous layer's outputs and passes it through a nonlinear activation function.
Feedforward Process: data flows in one direction, from the input layer through the hidden layers to the output layer, with each layer transforming the signal before handing it on, as the sketch below shows.
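To make the feedforward process concrete, here is a minimal NumPy sketch of a single pass through an MLP sized like the MNIST example above (784 inputs, one hidden layer of 64 units chosen arbitrarily, 10 output probabilities). The weights are random, so the output is meaningless; it only illustrates the mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(784)                      # one flattened 28x28 input image

# Randomly initialized weights and biases for the two layers.
W1, b1 = rng.standard_normal((64, 784)) * 0.01, np.zeros(64)
W2, b2 = rng.standard_normal((10, 64)) * 0.01, np.zeros(10)

h = np.maximum(0, W1 @ x + b1)           # hidden layer: weighted sum + ReLU
logits = W2 @ h + b2                     # output layer: weighted sum
probs = np.exp(logits - logits.max())
probs /= probs.sum()                     # softmax: 10 probabilities summing to 1
print(probs)
```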
Project
Using Keras (the TensorFlow high-level API), a standard model workflow looks like this:
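In outline: define the model, compile it with a loss and optimizer, fit it to training data, then evaluate and predict. Here is a minimal runnable sketch of that workflow; the layer sizes, epoch count, and the random dummy data are placeholders of mine, not from the original clip:

```python
import numpy as np
from tensorflow import keras

# Dummy data just to make the skeleton runnable; shapes are placeholders.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((100, 8)), rng.random(100)
X_test, y_test = rng.random((20, 8)), rng.random(20)

# 1. Define the architecture.
model = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1),
])

# 2. Compile with a loss and an optimizer.
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# 3. Fit on the training data.
model.fit(X_train, y_train, epochs=5, validation_split=0.1, verbose=0)

# 4. Evaluate on held-out data, then predict.
print(model.evaluate(X_test, y_test, verbose=0))
preds = model.predict(X_test)
```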
I used the California Housing dataset to predict house values:
Exploring the dataset: When I plotted latitude and longitude against house prices, this is how they relate. As expected, Los Angeles and San Francisco are hubs where prices are highest.
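A sketch of how such a plot can be produced, assuming the dataset is loaded with scikit-learn's fetch_california_housing (the article's actual plotting code may differ):

```python
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_california_housing

# Load the dataset as a DataFrame; the target is the median house value.
data = fetch_california_housing(as_frame=True)
df = data.frame

# Color each district by its median house value.
plt.scatter(df["Longitude"], df["Latitude"], c=df["MedHouseVal"],
            cmap="viridis", s=5, alpha=0.5)
plt.colorbar(label="Median house value")
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.show()
```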
Model building: I divided the dataset into training and test sets and built a model with three layers. You can see clips of the code here:
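A sketch of what that code might look like, assuming the scikit-learn loader for the dataset; the layer widths and hyperparameters are my guesses for illustration, not taken from the repo:

```python
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from tensorflow import keras

# Load the data and split it into train and test sets.
X, y = fetch_california_housing(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Standardize the features; neural networks train better on scaled inputs.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Three layers: two hidden ReLU layers and a single linear output unit.
model = keras.Sequential([
    keras.Input(shape=(X_train.shape[1],)),
    keras.layers.Dense(30, activation="relu"),
    keras.layers.Dense(30, activation="relu"),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
history = model.fit(X_train, y_train, epochs=20, validation_split=0.1)
```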
This is the final interpretation of the model:
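Continuing the sketch above, the final step is evaluating on the held-out test set; the metric values you get depend on your run and are not the article's numbers:

```python
# Evaluate on the test set and make a few sample predictions.
test_loss, test_mae = model.evaluate(X_test, y_test, verbose=0)
print(f"Test MSE: {test_loss:.3f}, Test MAE: {test_mae:.3f}")

sample_pred = model.predict(X_test[:3])
print("Predicted values:", sample_pred.ravel())
print("Actual values:   ", y_test[:3])
```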
GitHub repo: MLP for California Housing Price Regression problem