#11 Finding Nemo: Exploring pre-trained Keras models
Now that we have a foundation in the basic models, let's experiment with them on a classification task: detecting anemone fish (clownfish).
For this task, I explored the following pre-trained models: VGG19, ResNet50, and Inception V3.
Before seeing the results, I expected ResNet50 to outperform the others.
Let's have a look at how the experiment went!
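All three models share the same tf.keras.applications inference pipeline, which the comparison below relies on. Here is a minimal sketch of that pipeline; the random "image" is a stand-in for a clownfish photo, and weights=None is used so the sketch runs without downloading the ImageNet weights. To reproduce the actual experiment you would pass weights="imagenet" and a real image.

```python
import numpy as np
import tensorflow as tf

# Build ResNet50 without pre-trained weights; pass weights="imagenet"
# to get the actual ImageNet-trained classifier (~100 MB download).
model = tf.keras.applications.ResNet50(weights=None)

# A random 224x224 RGB array stands in for a loaded fish photo here.
image = np.random.randint(0, 256, size=(224, 224, 3)).astype("float32")

# Each model family has its own preprocess_input (mean subtraction here).
batch = tf.keras.applications.resnet50.preprocess_input(image[None, ...])

# One softmax probability per ImageNet class.
preds = model.predict(batch, verbose=0)
print(preds.shape)  # (1, 1000)
```

The same three steps (load, preprocess, predict) apply to VGG19 and Inception V3, with each model's own preprocessing function swapped in.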
Sample images:
Results
Both of these models are taken from the paper mentioned in the image below.
In the paper, the effect of increasing depth on accuracy is explored. The convolutional filters are very small (3 x 3). In this work, they evaluated very deep convolutional networks (up to 19 weight layers) for large-scale image classification. It was demonstrated that the representation depth is beneficial for classification accuracy and that state-of-the-art performance on the ImageNet challenge dataset can be achieved using a conventional ConvNet architecture with substantially increased depth.
They also showed that their models generalised well to a wide range of tasks and datasets, matching or outperforming more complex recognition pipelines built around less deep image representations.
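The paper's two headline design choices, very small filters and substantially increased depth, can be checked directly on the Keras VGG19 implementation. A sketch, again with weights=None to avoid the weight download:

```python
import tensorflow as tf

# Architecture alone is enough here; no pre-trained weights needed.
vgg19 = tf.keras.applications.VGG19(weights=None)

conv_layers = [l for l in vgg19.layers
               if isinstance(l, tf.keras.layers.Conv2D)]
dense_layers = [l for l in vgg19.layers
                if isinstance(l, tf.keras.layers.Dense)]

# Every convolution uses the paper's very small 3x3 receptive field.
print(all(l.kernel_size == (3, 3) for l in conv_layers))  # True

# 16 conv + 3 fully connected layers = the 19 weight layers in the name.
print(len(conv_layers), len(dense_layers))  # 16 3
```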
ResNet50
Inception V3
The output for Inception V3 is quite interesting. First, it misclassified a pufferfish as a goldfish. Second, there is a visible pattern in its accuracy when detecting fish other than anemone fish.
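One practical difference worth keeping in mind when comparing the Inception V3 results: unlike VGG19 and ResNet50, which default to 224 x 224 inputs, the Keras Inception V3 defaults to 299 x 299, and its preprocess_input scales pixels to [-1, 1] instead of subtracting ImageNet channel means. A sketch, with weights=None to stay offline:

```python
import numpy as np
import tensorflow as tf

inception = tf.keras.applications.InceptionV3(weights=None)
print(inception.input_shape)  # (None, 299, 299, 3)

image = np.random.randint(0, 256, size=(299, 299, 3)).astype("float32")
batch = tf.keras.applications.inception_v3.preprocess_input(image[None, ...])

# Inception preprocessing maps [0, 255] pixel values into [-1, 1].
print(float(batch.min()) >= -1.0, float(batch.max()) <= 1.0)  # True True
```

Feeding 224 x 224 images preprocessed for VGG into Inception V3 (or vice versa) silently degrades predictions, which is one mundane explanation to rule out before reading too much into per-model misclassifications.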
Discussion
Why VGG19 (possibly) outperformed the others: