Understanding Model Evaluation Metrics: Accuracy, Precision, Recall and F1 Score
Mukesh Verma
Digital Transformation Leader | Cloud Infrastructure Architect | CyberSecurity | Cloud FinOps | AI-ML | Lifelong Learner
Confusion Matrix: A table used to visualize the performance of a classification model by comparing predicted labels against actual labels. Its four cells are defined below; a short counting sketch in Python follows the list.
True Positive (TP): Correctly predicted positive instances.
True Negative (TN): Correctly predicted negative instances.
False Positive (FP): Incorrectly predicted positive instances (Type I error).
False Negative (FN): Incorrectly predicted negative instances (Type II error).
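As a quick illustration, here is a minimal Python sketch that counts the four confusion-matrix cells for a binary classifier. The y_true and y_pred lists are hypothetical example labels, not output from any real model.

```python
# Minimal sketch: counting confusion-matrix cells for a binary classifier.
# y_true and y_pred are hypothetical example labels (1 = positive, 0 = negative).
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # correct positives
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # correct negatives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # Type I errors
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # Type II errors

print(f"TP={tp}, TN={tn}, FP={fp}, FN={fn}")  # TP=4, TN=4, FP=1, FN=1
```

In practice, scikit-learn's confusion_matrix returns the same counts as a 2x2 array.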
Accuracy: The proportion of correctly classified instances out of the total instances. Note that accuracy alone can be misleading when the classes are imbalanced.
Accuracy = (TP + TN) / (TP + TN + FP + FN).
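For example, with hypothetical counts TP = 40, TN = 45, FP = 5, FN = 10 (illustrative numbers only), the formula works out as follows:

```python
# Accuracy from hypothetical confusion-matrix counts (illustrative values)
tp, tn, fp, fn = 40, 45, 5, 10
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"Accuracy: {accuracy:.2f}")  # 0.85
```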
Precision: Out of all instances predicted as positive, how many are actually positive.
Precision = TP / (TP + FP).
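Continuing with the same hypothetical counts, precision looks only at the instances the model predicted as positive:

```python
# Precision from the same hypothetical counts: TP = 40, FP = 5
tp, fp = 40, 5
precision = tp / (tp + fp)
print(f"Precision: {precision:.2f}")  # 0.89
```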
Recall: Out of all actual positive instances, how many were correctly predicted as positive.
Recall = TP / (TP + FN).
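Recall, by contrast, looks only at the instances that are actually positive. With the same hypothetical counts:

```python
# Recall from the same hypothetical counts: TP = 40, FN = 10
tp, fn = 40, 10
recall = tp / (tp + fn)
print(f"Recall: {recall:.2f}")  # 0.80
```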
F1 Score: The harmonic mean of precision and recall. It provides a balanced measure when there is an uneven class distribution.
F1 = 2 * (Precision × Recall) / (Precision + Recall).
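Putting the two together with the same hypothetical numbers gives the sketch below; scikit-learn's precision_score, recall_score, and f1_score compute all of these directly from the label lists.

```python
# F1: harmonic mean of the precision and recall computed above
precision = 40 / (40 + 5)   # ~0.89
recall = 40 / (40 + 10)     # 0.80
f1 = 2 * (precision * recall) / (precision + recall)
print(f"F1: {f1:.2f}")  # 0.84
```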
Points to ponder before selecting your best model: