Artificial Neural Network (ANN): Learning by Training

A neural network is a machine learning model inspired by the neurons of the human brain. The brain consists of billions of neurons that send and process electrical and chemical signals; a network of simulated neurons forms an artificial neural network.

An Artificial Neural Network (ANN) is an information-processing system made up of a large number of connected processing units that work together to extract meaningful results from data.

Below we train a network to complete a rather simple task using the Backpropagation Algorithm (BA), the standard training method for supervised learning.

The backpropagation algorithm consists of two phases: forward propagation and backward propagation. During forward propagation, the input is fed into the neural network, and the network calculates the output. During backward propagation, the error between the predicted output and the actual output is calculated, and the weights and biases of each neuron are adjusted to reduce the error.
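The forward-propagation phase can be sketched in a few lines of R. This is a minimal illustration, not the internals of any particular package: the layer sizes (1 input, 3 hidden neurons, 1 output) and the sigmoid activation are assumed for the example.

```r
# Minimal sketch of forward propagation through one hidden layer.
sigmoid <- function(z) 1 / (1 + exp(-z))

forward <- function(x, W1, b1, W2, b2) {
  hidden <- sigmoid(W1 %*% x + b1)   # hidden-layer activations
  as.numeric(W2 %*% hidden + b2)     # output neuron (linear here)
}

set.seed(1)
W1 <- matrix(rnorm(3), nrow = 3); b1 <- rnorm(3)   # input -> hidden
W2 <- matrix(rnorm(3), nrow = 1); b2 <- rnorm(1)   # hidden -> output
forward(0.5, W1, b1, W2, b2)                       # one predicted value
```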

BA adjusts the weights and biases of the neural network based on the error between the predicted output and the actual output. It works by propagating the error backwards through the network, from the output layer to the input layer, using gradient descent optimization to adjust the weights and biases of each neuron along the way. The gradient descent method calculates the gradient of the error function with respect to the weights and biases, and then updates the weights and biases in the direction of the negative gradient, which reduces the error.
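The gradient-descent update described above can be shown on the simplest possible case: a single linear neuron ŷ = w·x + b with squared error E = (ŷ − y)². The learning rate, starting values, and training pair below are illustrative assumptions.

```r
# One neuron trained by repeated gradient-descent steps.
x <- 2; y <- 10          # a single training example
w <- 0.5; b <- 0         # initial weight and bias
eta <- 0.05              # learning rate (assumed)

for (step in 1:100) {
  y_hat  <- w * x + b              # forward pass
  grad_w <- 2 * (y_hat - y) * x    # dE/dw
  grad_b <- 2 * (y_hat - y)        # dE/db
  w <- w - eta * grad_w            # move against the gradient
  b <- b - eta * grad_b
}
c(w = w, b = b, prediction = w * x + b)   # prediction converges toward y
```

Backpropagation applies exactly this update to every weight and bias in the network, with the chain rule carrying the error gradient backwards through the layers.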


We will use the R language, as its package ecosystem provides considerable depth, and we opt for the ‘neuralnet’ package.


#The objective is to train the network to perform a cube root transformation
#We start by randomly choosing 50 numbers within a range from 0 to 100

> traininginput <- data.frame(runif(50, min=0, max=100))

> trainingoutput <- (traininginput)^(1/3)


#Bind the data

> trainingdata <- cbind(traininginput,trainingoutput)

> colnames(trainingdata) <- c("Input","Output")


#Train the neural network

#Going to have 1 hidden layer with 10 neurons

#Threshold is a numeric value specifying the threshold for the partial derivatives of the error function as stopping criteria

> net.sq <- neuralnet(Output~Input,trainingdata, hidden=10, threshold=0.01)


#A view of the neural network

> plot(net.sq)

#We test the neural network on the training data

> testdata <- data.frame((1:5)^3)

> net.results <- compute(net.sq, testdata)


#A preview of the results

> print(net.results$net.result)

             [,1]
[1,] 0.9018869094
[2,] 1.9922174192
[3,] 3.0012337351
[4,] 4.0025262073
[5,] 4.9005253347


#Results formatted as actual vs. neural-net predictions

> cleanoutput <- cbind(testdata,(testdata)^(1/3), data.frame(net.results$net.result))

> colnames(cleanoutput) <- c("Input","Expected Output","Neural Net Output")

> print(cleanoutput)

  Input Expected Output Neural Net Output
1     1               1         0.9939239
2     8               2         1.9890896
3    27               3         2.9986523
4    64               4         4.0008094
5   125               5         4.8912122


Neural networks perform well on both linear and nonlinear data, and they keep working even if one or a few units fail to respond. However, implementing large, effective neural-network software demands substantial processing and storage resources. Given that neural networks learn from the data they analyse, their biggest value-add is in automating repetitive tasks.
