Transfer Learning: Brain Tumor MRI Binary Classification Using Deep Learning Models VGG16 and InceptionV3
Introduction
In my recent deep learning project, I worked on a brain tumor classification task using a dataset of MRI images labeled as either tumor or normal. To tackle this challenge, I employed two prominent deep learning models: VGG16 and InceptionV3 (GoogLeNet). The goal was to compare the performance of these architectures in identifying brain tumors through transfer learning.
Data Preprocessing and Splitting
Before feeding the data into the models, I first preprocessed the dataset. This involved storing the MRI images and their respective labels in arrays. Each image was resized to 224x224 pixels, a common input size for both VGG16 and InceptionV3. The dataset was then split, with 67% of the data allocated for training and 33% for testing, so the models could be evaluated on unseen data.
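The preprocessing and split described above can be sketched as follows. This is a minimal sketch: the random arrays stand in for the actual MRI images (which would be loaded from disk and resized to 224x224), and the variable names are illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Illustrative stand-in for the MRI dataset: 100 random "images" with
# binary labels (1 = tumor, 0 = normal). In the real project these come
# from the MRI files, each resized to 224x224 pixels with 3 channels.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(100, 224, 224, 3), dtype=np.uint8)
labels = rng.integers(0, 2, size=100)

# Split: 67% training / 33% testing, stratified so both classes appear
# in both splits in roughly the same proportion.
X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.33, random_state=42, stratify=labels
)
```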
Transfer Learning
To make the models suitable for my classification task, I applied transfer learning: each network's pretrained convolutional base was frozen, and a new classification head was added and trained on the MRI data.
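The core transfer-learning pattern — freeze the pretrained base, train only a new head — can be sketched with Keras. The tiny base network below is only a stand-in for the real pretrained model (so the sketch runs without downloading weights), and the layer sizes are illustrative.

```python
from tensorflow.keras import layers, models

# Stand-in for a pretrained convolutional base (VGG16 / InceptionV3 in
# the actual project, loaded there with ImageNet weights).
base = models.Sequential(
    [
        layers.Input(shape=(224, 224, 3)),
        layers.Conv2D(8, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
    ],
    name="frozen_base",
)

# Step 1: freeze the base so its weights stay fixed during training.
base.trainable = False

# Step 2: add a small trainable classification head for the binary task.
inputs = layers.Input(shape=(224, 224, 3))
x = base(inputs, training=False)  # run the frozen base in inference mode
x = layers.Dense(16, activation="relu")(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # tumor vs. normal
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```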
Model Training
VGG16 Model:
For the VGG16 model, I froze the pretrained convolutional base and trained a custom classification head on top for the binary tumor/normal decision.
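A hedged sketch of this setup is below. Note two assumptions: the project itself uses `weights="imagenet"` (here `weights=None` avoids the pretrained-weight download so the sketch runs offline), and the head configuration (256 units, 0.5 dropout) is illustrative rather than the post's exact architecture.

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# VGG16 convolutional base without its ImageNet classifier head.
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep the (pretrained) feature extractor fixed

# Custom head for the binary tumor/normal decision.
inputs = layers.Input(shape=(224, 224, 3))
x = base(inputs, training=False)
x = layers.Flatten()(x)
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```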
InceptionV3 Model:
For the InceptionV3 model, I applied the same approach: the pretrained layers were frozen and custom classification layers were trained on top.
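The InceptionV3 variant follows the same pattern; the sketch below uses global average pooling before the dense head, which is a common (assumed, not confirmed) choice for Inception-style bases. As before, `weights=None` stands in for the project's `weights="imagenet"` so the sketch runs without a download.

```python
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

# InceptionV3 base; 224x224 inputs are accepted when include_top=False.
base = InceptionV3(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the (pretrained) base

inputs = layers.Input(shape=(224, 224, 3))
x = base(inputs, training=False)
x = layers.GlobalAveragePooling2D()(x)  # collapse spatial feature maps
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # tumor vs. normal
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```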
Accuracy
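Test accuracy for a binary classifier like this is the fraction of thresholded predictions that match the true labels. A minimal numpy sketch with made-up probabilities (not the project's actual outputs, which would come from `model.predict` on the 33% test split):

```python
import numpy as np

# Illustrative sigmoid outputs and ground-truth labels (1 = tumor, 0 = normal).
probs = np.array([0.91, 0.12, 0.78, 0.33, 0.65, 0.07])
y_true = np.array([1, 0, 1, 0, 0, 0])

# Threshold the probabilities at 0.5, then compare with the labels.
y_pred = (probs >= 0.5).astype(int)
accuracy = float((y_pred == y_true).mean())  # 5 of 6 correct here
```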
Conclusion
In conclusion, this project gave me comprehensive hands-on experience in several critical areas of deep learning. First, in data preprocessing, I learned the importance of not only resizing images to meet the models' input requirements but also carefully labeling the data to reduce inconsistency in the dataset; proper labeling ensures accurate training and helps avoid bias in the model's predictions. Additionally, transfer learning and model customization enhanced my understanding of how to adapt pretrained models to specific tasks. By freezing the base layers and incorporating custom layers, I was able to effectively reuse the models' existing knowledge while tailoring them to my dataset.
In terms of fine-tuning, I explored different optimizers like AdamW and Adam, and also experimented with various loss functions such as categorical crossentropy and sparse categorical crossentropy. This allowed me to better understand how fine adjustments in these parameters could lead to significant improvements in model performance. Fine-tuning the architecture and optimizers enabled me to optimize the learning process and achieve higher accuracy during both training and validation.
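The two loss functions mentioned above differ only in how the labels are encoded: categorical crossentropy expects one-hot labels, while sparse categorical crossentropy expects integer class indices; on the same predictions they yield the same value. A sketch with illustrative numbers:

```python
import numpy as np

# Softmax outputs for 3 samples over the two classes (normal, tumor).
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8],
                  [0.6, 0.4]])
y_int = np.array([0, 1, 0])      # integer labels (sparse form)
y_onehot = np.eye(2)[y_int]      # one-hot labels (categorical form)

# Sparse categorical crossentropy: -log of the true class's probability.
sparse_cce = -np.log(probs[np.arange(len(y_int)), y_int]).mean()

# Categorical crossentropy: sum over classes weighted by the one-hot label.
cce = -(y_onehot * np.log(probs)).sum(axis=1).mean()
```

Since the one-hot vector zeroes out every class but the true one, the two reductions pick out exactly the same probabilities, so the losses coincide; the choice between them is purely about label format.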