Machine Learning Using Perceptron Neural Networks

By Matthew Loong

Neural Networks

The artificial neuron was conceived between the 1940s and the 1950s, inspired by biology: scientists had discovered that the brain is made up of a network of neuron cells, with electric pulses firing from the input dendrites, through the soma, to the axon terminals or nerve endings.


This concept was adopted in machine learning as the perceptron. Similar to dendrites, the perceptron rule takes inputs (x1, x2, ..., xn) along with a bias input, e.g. 1, assigns each an initial weight (w0, w1, w2, ..., wn), sums the weighted inputs and passes the result to an activation function. The activation function determines whether the net input is above or below the threshold and fires the corresponding output; during training, the error between the fired output and the expected output is propagated back to adjust the weights, and the process repeats until the outputs are correct (or a stopping criterion is reached).
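To make the rule concrete, here is a minimal sketch in plain Python of the update just described (purely illustrative and separate from the Weka demonstration later; the learning rate, epoch count and toy data are arbitrary choices):

```python
# A minimal sketch of the perceptron rule described above (illustrative only).

def step(net):
    """Step activation: fire 1 if the net input reaches the threshold (0), else fire 0."""
    return 1 if net >= 0.0 else 0

def train_perceptron(samples, targets, learning_rate=0.1, epochs=20):
    """Learn weights w0 (bias weight), w1, ..., wn with the error-driven update rule."""
    n_inputs = len(samples[0])
    weights = [0.0] * (n_inputs + 1)            # w0 is the weight on the bias input (1)
    for _ in range(epochs):
        for x, target in zip(samples, targets):
            net = weights[0] + sum(w * xi for w, xi in zip(weights[1:], x))
            output = step(net)
            error = target - output             # 0 if correct, +1 or -1 if wrong
            weights[0] += learning_rate * error           # adjust the bias weight
            for j, xi in enumerate(x, start=1):
                weights[j] += learning_rate * error * xi  # adjust each input weight
    return weights

# Tiny illustration: learn the logical AND of two binary inputs.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w = train_perceptron(X, y)
print(w)   # learned weights; exact values depend on the learning rate and epochs
```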


A very simple example of a single-layer perceptron without hidden layers can be seen below.

(Image: worked example of a single-layer perceptron. Source: SUTD)
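The original worked-example images are not reproduced here, so as a stand-in, here is one hand-worked pass of the rule on a single training point, with made-up numbers:

```python
# One hand-worked update step with made-up numbers (illustrative only).
x = (1.0, 0.5)               # inputs x1, x2
weights = [0.0, 0.2, -0.4]   # w0 (bias weight), w1, w2
target = 0                   # the output we expected for this input
learning_rate = 0.1

net = weights[0] + weights[1] * x[0] + weights[2] * x[1]   # 0.0 + 0.2 - 0.2 = 0.0
output = 1 if net >= 0.0 else 0                            # fires 1, which is wrong here
error = target - output                                    # 0 - 1 = -1
weights = [w + learning_rate * error * xi for w, xi in zip(weights, (1.0, *x))]
print(weights)   # approximately [-0.1, 0.1, -0.45]: each weight nudged against the error
```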

With hidden layers and many more weights, i.e. a multi-layer perceptron (MLP), it can get pretty complex.

(Image: multi-layer perceptron with hidden layers. Source: TensorFlow)
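As a rough feel for what an MLP involves in code, here is a minimal sketch using scikit-learn (an assumption of this sketch only; the demonstration below uses Weka, not scikit-learn, and the layer size, learning rate and toy data are arbitrary):

```python
# Minimal multi-layer perceptron sketch using scikit-learn (not Weka).
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification

# Toy data standing in for a real problem: 8 numeric features, binary class.
X, y = make_classification(n_samples=500, n_features=8, random_state=42)

mlp = MLPClassifier(
    hidden_layer_sizes=(8,),   # one hidden layer of 8 neurons; add entries for more layers
    learning_rate_init=0.01,   # step size used when errors are propagated back to the weights
    max_iter=1000,
    random_state=42,
)
mlp.fit(X, y)
print("Training accuracy:", mlp.score(X, y))
```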

Demonstration Using Weka

I will now show how the failure of a compressor can be predicted using a perceptron in Weka. You can download Weka for free from the link below.

You can also download the sample data set (compressor.csv) with 4999 instances, used in this demonstration, from the link below.

Open File In Weka

In the Weka GUI Chooser, select Explorer.


Click on the Preprocess tab and select Open File. Choose where you saved compressor.csv and click Open.


The attributes will be populated along with the visual histogram.
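If you want a quick look at the same attributes outside Weka, a small pandas sketch can do it too (this assumes pandas is installed and that compressor.csv sits in the working directory; no column names are hard-coded since they come from the file itself):

```python
import pandas as pd

# Load the same CSV that Weka is reading.
df = pd.read_csv("compressor.csv")

print(df.shape)        # expect (4999, number_of_attributes)
print(df.dtypes)       # attribute names and types, as listed in Weka's Preprocess tab
print(df.describe())   # summary statistics per numeric attribute
```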


Run MLP

Click on the Classify tab and, under Choose, select weka > classifiers > functions > MultilayerPerceptron.


Select a test option. For this example I have chosen a percentage split of 66%, which means that 66% of the data will be used to train the model, while the remaining 34% (1700 instances) will be used to test the trained model. To generate a CSV output of the predictions, click on More options..., under Output predictions choose CSV, click on the CSV text box and set outputFile to the directory where you want to save the output file.
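For readers more comfortable with code, the same 66/34 percentage split can be sketched with scikit-learn; this only illustrates what the option does and assumes, as Weka does by default, that the class label is the last column of compressor.csv:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("compressor.csv")

# Assume the last column is the class label, as Weka does by default.
X = df.iloc[:, :-1]
y = df.iloc[:, -1]

# 66% of the 4999 instances train the model; the remaining 1700 test it.
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.66, random_state=1)
print(len(X_train), len(X_test))   # roughly 3299 and 1700
```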


Click OK and Start to run the model. When it is complete, the bird at the bottom right corner will stop moving and the results will be generated. From the confusion matrix in the results, we can see that there are 1261 true positives, 433 true negatives, 1 false positive and 5 false negatives out of a total of 1700 test instances, which translates to 99.6% accuracy.
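As a quick sanity check on that figure, the accuracy follows directly from the four counts in the confusion matrix:

```python
# Accuracy from the confusion matrix reported by Weka.
tp, tn, fp, fn = 1261, 433, 1, 5
total = tp + tn + fp + fn      # 1700 test instances
accuracy = (tp + tn) / total   # 1694 / 1700
print(f"{accuracy:.1%}")       # 99.6%
```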


Examining the output CSV file, we can see the predictions.


The 6 instances predicted incorrectly can be seen below.
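To pull those rows out of the prediction file programmatically, here is a hedged pandas sketch; the file name and the actual/predicted column names are assumptions (Weka's CSV output predictions usually carry an inst#, actual, predicted, error, prediction header, but check your own file):

```python
import pandas as pd

# Path and column names are assumptions; adjust them to match your output file.
preds = pd.read_csv("predictions.csv")

# Keep only the rows where the predicted class differs from the actual class.
misclassified = preds[preds["actual"] != preds["predicted"]]
print(len(misclassified))   # expect 6 for the run described above
print(misclassified)
```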


Viewing The Perceptron Diagram

By clicking on MultilayerPerceptron and setting GUI to True, you can view the perceptron model. You can add hidden layers and change the learning rate as well.


