Machine Learning & Models

Welcome once again! Today we make machine learning simpler for you, and I'll be giving you a quick rundown of the main machine learning models. Let's get started by noting that machine learning models can broadly be divided into supervised and unsupervised categories. We will explore each of them and the main varieties within each.

Number one: supervised learning. It involves learning a function that maps an input to an output based on example input-output pairs. For example, if we have a dataset of two variables, one being age (the input) and the other being shoe size (the output), we could use a supervised learning model to predict a person's shoe size based on their age. Within supervised learning there are two sub-categories.
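To make that concrete, here is a minimal sketch in Python using scikit-learn, assuming a made-up age/shoe-size dataset purely for illustration (the numbers are not from this article):

```python
# A minimal supervised-learning sketch: learn a mapping from example
# input-output pairs (age -> shoe size), then predict on a new input.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Hypothetical input-output pairs: age (input) and shoe size (output).
ages = np.array([[5], [7], [9], [11], [13], [15], [17], [19]])
shoe_sizes = np.array([28, 30, 33, 35, 37, 39, 41, 42])

# Hold out a few pairs to check how well the learned mapping generalises.
X_train, X_test, y_train, y_test = train_test_split(
    ages, shoe_sizes, test_size=0.25, random_state=0
)

model = LinearRegression()
model.fit(X_train, y_train)            # learn the mapping from the example pairs
print(model.predict([[12]]))           # predict the shoe size of a 12-year-old
print(model.score(X_test, y_test))     # how well it does on unseen pairs
```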

One is regression and the other is classification. In a regression model, we predict a target value based on independent predictors; that means you can use it to find the relationship between a dependent variable and one or more independent variables. In regression models the output is continuous. Some of the most common types of regression model include, number one, linear regression, which is simply finding a line that best fits the data. Its extensions include multiple linear regression, which finds a plane of best fit, and polynomial regression, which finds a curve of best fit.
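Here is a hedged sketch of line-of-best-fit versus curve-of-best-fit, again on made-up age/shoe-size numbers; the degree-2 polynomial is just an illustrative choice:

```python
# Linear regression (a line of best fit) vs. polynomial regression
# (a curve of best fit) on the same toy data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

X = np.array([[5], [7], [9], [11], [13], [15], [17], [19]])
y = np.array([28, 30, 33, 35, 37, 39, 41, 42])

# Simple linear regression: fit a line. With more than one input column,
# the same estimator fits a plane (multiple linear regression).
line = LinearRegression().fit(X, y)

# Polynomial regression: expand the features, then fit linearly -> a curve.
curve = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)

print(line.predict([[12]]), curve.predict([[12]]))
```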


Next, let's see the decision tree. It looks like a flowchart where each square is called a node, and in general, the more nodes you have, the more accurate your decision tree will be.

The third type is the random forest. It is an ensemble learning technique that builds on decision trees: it creates multiple decision trees using bootstrapped samples of the original data, randomly selecting a subset of variables at each step of each tree. The model then takes the mode of the individual trees' predictions (or their average, for regression), and by relying on this "majority wins" approach it reduces the risk of error from any single tree.

Next is the neural network. It is quite popular and is a multi-layered model inspired by the human brain, like the neurons in our heads. Each circle represents a node: the blue circles represent the input layer, the black circles represent the hidden layers, and the green circle represents the output layer. Each node in a hidden layer represents a function that the input passes through, ultimately leading to the output in the green circle.
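As a rough sketch, the snippet below fits a decision tree, a random forest, and a small neural network to the same toy regression problem; the synthetic sine-wave data and the hyperparameters are assumptions made purely for illustration:

```python
# Tree-based models vs. a small neural network on a toy regression task.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor   # many bootstrapped trees, averaged
from sklearn.neural_network import MLPRegressor      # input -> hidden layers -> output

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))                # one noisy input feature
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)

tree = DecisionTreeRegressor(max_depth=4).fit(X, y)                  # a single tree of nodes
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                   random_state=0).fit(X, y)

for name, model in [("tree", tree), ("forest", forest), ("net", net)]:
    print(name, model.predict([[2.5]]))
```

The forest typically smooths out the single tree's errors because each of its trees sees a slightly different bootstrap sample of the data.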


Next is classification. With the regression types covered, let's jump to classification, where the output is discrete. Some of the most common types of classification models include, first, logistic regression, which is similar to linear regression but is used to model the probability of a finite number of outcomes, typically two. Next is the support vector machine (SVM). It is a supervised classification technique whose objective is to find a hyperplane in n-dimensional space that distinctly separates the data points into classes.
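Below is a minimal sketch of both classifiers using scikit-learn; the iris dataset and the linear kernel are illustrative assumptions, not anything prescribed here:

```python
# Two common classifiers on a small labelled dataset: the output is a
# discrete class rather than a continuous value.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Logistic regression: models the probability of each discrete class.
log_reg = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Support vector machine: finds a separating hyperplane (linear kernel here).
svm = SVC(kernel="linear").fit(X_train, y_train)

print(log_reg.score(X_test, y_test), svm.score(X_test, y_test))
```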


Naive Bayes is a probabilistic machine learning model used for classification tasks; the crux of the classifier is Bayes' theorem. Coming up next are decision trees, random forests, and neural networks: these models follow the same logic as previously explained, the only difference being that the output here is discrete rather than continuous.
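A minimal Naive Bayes sketch, again on the illustrative iris dataset:

```python
# Naive Bayes: applies Bayes' theorem with an independence assumption
# between features to estimate per-class probabilities.
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

nb = GaussianNB().fit(X_train, y_train)
print(nb.predict_proba(X_test[:3]))   # class probabilities from Bayes' theorem
print(nb.score(X_test, y_test))       # accuracy of the discrete predictions
```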


Now let's jump over to unsupervised learning. Unlike supervised learning, unsupervised learning is used to draw inferences and find patterns from input data without reference to labelled outcomes. The two main methods used in unsupervised learning are clustering and dimensionality reduction. Clustering involves grouping data points; it's frequently used for customer segmentation, fraud detection, and document classification. Common clustering techniques include k-means clustering, hierarchical clustering, mean-shift clustering, and density-based clustering. While each technique uses a different method to find clusters, they all aim to achieve the same thing.
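Here is a minimal k-means sketch on synthetic "blob" data; the generated data and the choice of three clusters are assumptions made for illustration:

```python
# k-means clustering: group unlabelled points into k clusters.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)   # unlabelled input

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:10])        # cluster assignment of the first 10 points
print(kmeans.cluster_centers_)    # the learned cluster centres
```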

Coming up next is dimensionality reduction. It is the process of reducing the dimensions of your feature set, or, to state it simply, reducing the number of features. Most dimensionality reduction techniques can be categorised as either feature elimination or feature extraction. A popular method of dimensionality reduction is called principal component analysis, or PCA. Obviously there's a ton of complexity behind each of these models, but this quick rundown should give you a good map of the landscape.
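As a quick sketch, here is PCA used as feature extraction, projecting the four iris features down to two principal components (the dataset and the component count are illustrative choices):

```python
# PCA as feature extraction: compress many features into a few components
# that capture most of the variance in the data.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)

pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)        # 4 features -> 2 components

print(X_reduced.shape)                  # (150, 2)
print(pca.explained_variance_ratio_)    # variance captured by each component
```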
