Class 22 - NEURAL NETWORKS

Notes from the AI Basic Course by Irfan Malik & Dr Sheraz Naseer (Xeven Solutions)

To inspire others, you have to be a role model.

First work on yourself, then inspire others.

Today's lecture is very important in the practical world.

If you have a strong base and solid concepts, the probability of winning projects becomes much higher.

Be a practitioner or gain the latest industry exposure.

HYPERPARAMETERS:

Not learned from the data.

e.g. the learning rate

Tuned through trial & error plus experience.

Gradient Descent:

Gradient means slope; descent means moving downward. Gradient descent iteratively moves the model's parameters downhill along the slope of the loss function.

Learning Rate:

A hyperparameter that controls the step size during gradient descent optimization. It determines how quickly the model adjusts its parameters in the direction that reduces the loss.

A higher learning rate might lead to faster convergence but risks overshooting, while a lower rate might slow down convergence.

Think of it as the jump size in gradient descent while moving towards the low point. To converge means to reach that low point.
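To make the idea concrete, here is a minimal sketch of gradient descent on a toy function, written for these notes (the loss f(w) = w*w and the learning rate value are illustrative assumptions, not from the lecture):

def gradient(w):
    return 2 * w              # derivative of the toy loss f(w) = w*w

w = 5.0                       # arbitrary starting point
lr = 0.1                      # learning rate: the "jump size" hyperparameter

for step in range(20):
    w = w - lr * gradient(w)  # take a step downhill along the slope

print(w)                      # close to 0, the low point of f(w) = w*w

Setting lr to a large value such as 1.5 makes the updates overshoot and diverge instead of converging, which is exactly the risk described above.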

Batch Size:

A batch is a subset of the training dataset used in each iteration of the training process. Instead of processing the entire dataset at once, we divide it into smaller batches.
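As a rough sketch of what that means in practice (the dataset here is made up for illustration):

import numpy as np

X = np.random.rand(1000, 10)   # 1,000 samples with 10 features each (assumed)
batch_size = 32

# One pass over all batches = one epoch.
for start in range(0, len(X), batch_size):
    batch = X[start:start + batch_size]
    # ...forward pass, loss, and parameter update would happen here...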

Number of Epochs:

The number of times the entire training dataset is seen by the model during training.

Too few epochs might result in underfitting, while too many epochs can lead to overfitting.
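Putting batch size and epochs together, a quick worked calculation using the assumed numbers from the batch sketch above:

import math

num_samples = 1000
batch_size = 32
epochs = 10

steps_per_epoch = math.ceil(num_samples / batch_size)  # 32 batches per epoch
total_updates = steps_per_epoch * epochs               # 320 parameter updates
print(steps_per_epoch, total_updates)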

Activation Functions (Neurons):

An activation function is applied to the output of each neuron to introduce non-linearity.

Common activation functions include:

ReLU (Rectified Linear Unit)

Sigmoid

Softmax

If you don't know which activation to use in a hidden layer, use ReLU.

The Sigmoid neuron is for binary classification.

A network with a single Sigmoid output is effectively logistic regression.

Softmax:

Used to differentiate between multiple classes (multi-class classification).

In these notes, the terms neuron and activation function are used interchangeably.
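As a minimal sketch written for these notes (the input vector is an assumption), the three activations can be defined directly in NumPy:

import numpy as np

def relu(x):
    # ReLU: keeps positive values, zeroes out negatives
    return np.maximum(0, x)

def sigmoid(x):
    # Sigmoid: squashes any value into (0, 1); used for binary classification
    return 1 / (1 + np.exp(-x))

def softmax(x):
    # Softmax: turns scores into probabilities that sum to 1,
    # to differentiate between multiple classes
    e = np.exp(x - np.max(x))  # subtract the max for numerical stability
    return e / e.sum()

z = np.array([-1.0, 0.5, 2.0])
print(relu(z))     # [0.  0.5 2. ]
print(sigmoid(z))  # each value between 0 and 1
print(softmax(z))  # probabilities summing to 1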

A suitable learning rate varies from project to project.

KERAS:

A high-level API that comes built into TensorFlow, providing ready-made building blocks such as layers, optimizers, and activation functions.
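To tie the hyperparameters above together, here is a minimal Keras sketch written for these notes; the layer sizes, learning rate, batch size, epoch count, and random placeholder data are all illustrative assumptions:

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Random placeholder data so the sketch runs end to end:
# 1,000 samples, 10 features, 3 classes (all assumed).
X_train = np.random.rand(1000, 10)
y_train = np.random.randint(0, 3, size=(1000,))

model = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(10,)),  # hidden layer: ReLU
    layers.Dense(3, activation="softmax"),                   # output layer: Softmax
])

model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.001),  # learning rate
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# batch_size and epochs are the other hyperparameters discussed above.
model.fit(X_train, y_train, batch_size=32, epochs=10)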

Google Colab Link:

https://colab.research.google.com/drive/1mA1-Qp4wLfJ9heprE0KR-X_UIhfrieNN

#AI #artificialintelligence #datascience #irfanmalik #drsheraz #xevensolutions #neuralnetworks #hamzanadeem
