Exploring Tuberculosis Binary Classification with Transfer Learning Using VGG16 and InceptionV3 (GoogLeNet Family) Architectures

I recently worked on an exciting deep learning project in which I classified tuberculosis from chest X-ray images. Leveraging transfer learning, I explored both the VGG16 architecture and InceptionV3 (from the GoogLeNet/Inception family) to achieve high accuracy in distinguishing tuberculosis X-rays from normal ones.

Dataset & Preprocessing

The dataset contained a total of 1400 images, with 700 tuberculosis and 700 normal X-ray images. To prepare the data for model training:

  • I resized the images to 224x224 pixels for consistency.
  • Created two arrays: one to store the images and another for the labels.
  • Applied label encoding to convert the data into numerical format, making it ready for training.
  • The dataset was split into 67% for training and 33% for testing, ensuring a balanced evaluation.
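The preprocessing steps above can be sketched as follows. This is a minimal illustration, not the exact code from the project: the folder layout, dataset path, and random seed are assumptions.

```python
import os
import numpy as np
from PIL import Image
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split

IMG_SIZE = 224  # resize target used in the article

def load_dataset(root):
    """Read X-rays into one array and their class names into another.

    Assumed folder layout: root/Tuberculosis/*.png and root/Normal/*.png
    """
    images, labels = [], []
    for label in ("Tuberculosis", "Normal"):
        folder = os.path.join(root, label)
        for fname in os.listdir(folder):
            img = Image.open(os.path.join(folder, fname)).convert("RGB")
            images.append(
                np.asarray(img.resize((IMG_SIZE, IMG_SIZE)), dtype="float32") / 255.0
            )
            labels.append(label)
    return np.stack(images), np.array(labels)

def encode_and_split(X, y, test_size=0.33, seed=42):
    """Label-encode class names and split into train/test sets."""
    y_enc = LabelEncoder().fit_transform(y)  # "Normal" -> 0, "Tuberculosis" -> 1
    return train_test_split(X, y_enc, test_size=test_size,
                            random_state=seed, stratify=y_enc)

# Usage (dataset path is an assumption):
# X, y = load_dataset("TB_dataset")
# X_train, X_test, y_train, y_test = encode_and_split(X, y)
```

Stratifying the split keeps the 700/700 class balance intact in both the training and test sets.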

Dataset Images


Splitting the dataset into training and testing sets and performing label encoding

Modeling with VGG16 and GoogLeNet (InceptionV3)

I chose to experiment with two powerful convolutional neural networks (CNNs):

VGG16 Architecture:

  • I used a pre-trained VGG16 model, freezing the last 4 layers and adding custom layers on top.
  • Fine-tuned the model with Adam and AdamW optimizers for optimal performance.
  • Trained the model over 6 epochs, achieving impressive results: 99% training accuracy and 100% validation accuracy.
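A sketch of this setup is shown below. It assumes the common reading of "fine-tuning the last 4 layers" (all base layers frozen except the final four); the head sizes and learning rate are my own illustrative choices, not the project's exact values.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

def build_vgg16(input_shape=(224, 224, 3), weights="imagenet"):
    """VGG16 base with a small custom classification head on top."""
    base = VGG16(include_top=False, weights=weights, input_shape=input_shape)
    for layer in base.layers[:-4]:   # freeze everything but the last 4 layers
        layer.trainable = False
    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),   # assumed head size
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # binary: TB vs. normal
    ])
    # AdamW (tf.keras.optimizers.AdamW) can be swapped in here as well
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# model = build_vgg16()
# history = model.fit(X_train, y_train, epochs=6,
#                     validation_data=(X_test, y_test))
```

A low learning rate matters here: fine-tuning pre-trained weights with a large step size can quickly destroy the ImageNet features the model starts from.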

Loading the VGG16 base model and architecture


Adding layers


Training Model


Training accuracy and validation accuracy

InceptionV3 Architecture (GoogLeNet family):

  • Similarly, I applied transfer learning to the InceptionV3 model, modifying the layers to suit my dataset.
  • After training, the model achieved 89% training accuracy and 91% validation accuracy.
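The InceptionV3 variant follows the same pattern. Again a hedged sketch: the article does not state which layers were modified, so here the whole backbone is frozen and the pooled features feed an assumed small head.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

def build_inceptionv3(input_shape=(224, 224, 3), weights="imagenet"):
    """Frozen InceptionV3 backbone with a custom binary head."""
    base = InceptionV3(include_top=False, weights=weights,
                       input_shape=input_shape)
    base.trainable = False  # freeze the pre-trained backbone
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),       # pool instead of Flatten
        layers.Dense(128, activation="relu"),  # assumed head size
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# model = build_inceptionv3()
# history = model.fit(X_train, y_train, validation_data=(X_test, y_test))
```

Global average pooling is the usual choice after an Inception backbone: it collapses each feature map to one value, keeping the head small and less prone to overfitting than a full Flatten.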

Loading Base Model InceptionV3


Training Model


Training and validation accuracy
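The accuracy figures above come from Keras's training history; as a sanity check, test-set accuracy can also be recomputed directly by thresholding the sigmoid outputs (a minimal sketch; `model` is any trained Keras-style model):

```python
import numpy as np

def binary_accuracy(model, X_test, y_test, threshold=0.5):
    """Fraction of test X-rays whose thresholded sigmoid output
    matches the true label (0 = normal, 1 = tuberculosis)."""
    probs = np.asarray(model.predict(X_test)).ravel()
    preds = (probs >= threshold).astype(int)
    return float(np.mean(preds == np.asarray(y_test)))
```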

In conclusion, VGG16 performed exceptionally well, reaching near-perfect accuracy, while InceptionV3 also delivered solid results, demonstrating that both architectures are robust for medical image classification tasks. Transfer learning combined with effective optimizers such as Adam and AdamW proved crucial in fine-tuning and maximizing model performance.

This project has greatly strengthened my understanding of deep learning and transfer learning, and has provided valuable insights into optimizing models for real-world applications. I look forward to exploring more advanced architectures and working with other medical imaging datasets in the future.



Adeela Shahid

Computer Science Engineer | Intern at SAK Doha, Qatar | JavaScript | Web Development | Graphic Design |

5 months ago

Very helpful, Ahmad. I enjoyed reading your article. Your detailed analysis of both architectures effectively highlights their unique strengths. I particularly appreciated your discussion of the importance of these types of models. Great job! MashaAllah!

Susan Stewart

Sales Executive at HINTEX

5 months ago

That sounds like an important analysis!
