Betta fish classification using transfer learning and fine-tuning of CNN models

(1) Rihwan Munif Mail (Universitas Ahmad Dahlan, Indonesia)
(2) * Adhi Prahara Mail (Universitas Ahmad Dahlan, Indonesia)
*corresponding author

Abstract


Betta fish, known as Siamese fighting fish, are in demand because of their beauty and distinctive characteristics. Betta fish varieties such as Crowntail, Halfmoon, Doubletail, Spadetail, Plakat, Veiltail, Paradise, and Rosetail are hard to recognize without prior knowledge. Therefore, transfer learning of Convolutional Neural Network (CNN) models was proposed to classify betta fish from images. The transfer learning process used VGG16, MobileNet, and InceptionV3 models pre-trained on ImageNet and fine-tuned them on the betta fish dataset. The models were trained on 461 images, validated on 154 images, and tested on 156 images. The results show that the InceptionV3 model excels with an accuracy of 0.94, compared to VGG16 and MobileNet, which achieve 0.93 and 0.92 accuracy, respectively. With this accuracy, the trained model can be used in betta fish recognition applications to help people easily identify betta fish from an image.
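The transfer-learning setup described in the abstract can be sketched as follows. This is a minimal illustration, assuming TensorFlow/Keras is available; the head architecture, class count (the eight varieties listed above), and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of transfer learning with an ImageNet-pretrained InceptionV3 backbone.
# Head layers and hyperparameters below are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

# Eight varieties named in the abstract: Crowntail, Halfmoon, Doubletail,
# Spadetail, Plakat, Veiltail, Paradise, Rosetail.
NUM_CLASSES = 8

def build_model(num_classes=NUM_CLASSES, input_shape=(299, 299, 3)):
    # Load InceptionV3 pre-trained on ImageNet, without its classification head.
    base = tf.keras.applications.InceptionV3(
        weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False  # freeze the backbone for the initial transfer stage

    # New classification head trained on the betta fish dataset.
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# For the fine-tuning stage, unfreeze (part of) the backbone and re-compile
# with a much lower learning rate before continuing training, e.g.:
#   base.trainable = True
#   model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
#                 loss="categorical_crossentropy", metrics=["accuracy"])
```

The same two-stage pattern (frozen-backbone training followed by low-learning-rate fine-tuning) applies unchanged to the VGG16 and MobileNet backbones compared in the paper.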

Keywords


Betta Fish; Classification; Transfer Learning; CNN; Computer Vision

   

DOI

https://doi.org/10.31763/sitech.v5i1.1378
      


References


[1] F. Shidiq, E. W. Hidayat, and N. I. Kurniati, “Application of K-nearest Neighbor (Knn) Method to Determine Cupang Fish Using Canny Edge Detection and Invariant Moment,” J. Tek. Inform., vol. 3, no. 1, pp. 11–20, 2022, doi: 10.37058/innovatics.v3i2.3093.

[2] C. C. F. Pleeging and C. P. H. Moons, “Potential welfare issues of the Siamese fighting fish (Betta splendens) at the retailer and in the hobbyist aquarium,” Vlaams Diergeneeskd. Tijdschr., vol. 86, no. 4, p. 213, Aug. 2017, doi: 10.21825/vdt.v86i4.16182.

[3] N. Rafi, Z. Zainuddin, and I. Nurtanio, “Betta Fish Classification Using Faster R-CNN Approach with Multi-Augmentation,” in 2024 International Seminar on Intelligent Technology and Its Applications (ISITIA), Jul. 2024, pp. 500–505, doi: 10.1109/ISITIA63062.2024.10668167.

[4] I. J. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. Cambridge, MA, USA: MIT Press, 2016. [Online]. Available at: http://alvarestech.com/temp/deep/Deep%20Learning%20by%.

[5] D. M. Hibban and W. F. Al Maki, “Classification of Ornamental Betta Fish Using Convolutional Neural Network Method and Grabcut Segmentation,” in 2021 International Conference on Data Science and Its Applications (ICoDSA), Oct. 2021, pp. 102–109, doi: 10.1109/ICoDSA53588.2021.9617213.

[6] X. Lan, J. Bai, M. Li, and J. Li, “Fish Image Classification Using Deep Convolutional Neural Network,” in Proceedings of the 2020 International Conference on Computers, Information Processing and Advanced Education, Oct. 2020, pp. 18–22, doi: 10.1145/3419635.3419643.

[7] D. V, J. R, D. M, N. S, H. M, and S. Johnson, “Fish Classification Using Convolutional Neural Network,” in 2023 International Conference on Research Methodologies in Knowledge Management, Artificial Intelligence and Telecommunication Engineering (RMKMATE), Nov. 2023, pp. 1–4, doi: 10.1109/RMKMATE59243.2023.10369927.

[8] N. Hasan, S. Ibrahim, and A. Aqilah Azlan, “Fish diseases detection using convolutional neural network (CNN),” Int. J. Nonlinear Anal. Appl., vol. 13, no. 1, pp. 1977–1984, Mar. 2022. [Online]. Available at: https://ijnaa.semnan.ac.ir/article_5839.html.

[9] K. Auliasari, M. Wasef, and M. Kertaningtyas, “Leveraging VGG16 for Fish Classification in a Large-Scale Dataset,” Brill. Res. Artif. Intell., vol. 3, no. 2, pp. 316–328, Dec. 2023, doi: 10.47709/brilliance.v3i2.3270.

[10] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” 3rd Int. Conf. Learn. Represent. (ICLR 2015), Apr. 2015. [Online]. Available at: http://www.robots.ox.ac.uk/.

[11] M.-H. Chen, T.-H. Lai, Y.-C. Chen, and T.-Y. Chou, “A Robust Fish Species Classification Framework: FRCNN-VGG16-SPPNet,” Apr. 2023, doi: 10.21203/RS.3.RS-2825927/V1.

[12] A. G. Howard et al., “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications,” Apr. 2017. [Online]. Available at: http://arxiv.org/abs/1704.04861.

[13] P. D. Hung and N. N. Kien, “SSD-Mobilenet Implementation for Classifying Fish Species,” in Advances in Intelligent Systems and Computing, vol. 1072, Springer, Cham, 2020, pp. 399–408, doi: 10.1007/978-3-030-33585-4_40.

[14] E. Suharto, Suhartono, A. P. Widodo, and E. A. Sarwoko, “The use of mobilenet v1 for identifying various types of freshwater fish,” J. Phys. Conf. Ser., vol. 1524, no. 1, p. 012105, Apr. 2020, doi: 10.1088/1742-6596/1524/1/012105.

[15] X. Liu, Z. Jia, X. Hou, M. Fu, L. Ma, and Q. Sun, “Real-time Marine Animal Images Classification by Embedded System Based on Mobilenet and Transfer Learning,” in OCEANS 2019 - Marseille, Jun. 2019, vol. 2019-June, pp. 1–5, doi: 10.1109/OCEANSE.2019.8867190.

[16] G. Yu, L. Wang, M. Hou, Y. Liang, and T. He, “An adaptive dead fish detection approach using SSD-MobileNet,” in 2020 Chinese Automation Congress (CAC), Nov. 2020, pp. 1973–1979, doi: 10.1109/CAC51589.2020.9326648.

[17] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., vol. 2016-December, pp. 770–778, Dec. 2016, doi: 10.1109/CVPR.2016.90.

[18] M. Mathur and N. Goel, “FishResNet: Automatic Fish Classification Approach in Underwater Scenario,” SN Comput. Sci., vol. 2, no. 4, p. 273, Jul. 2021, doi: 10.1007/s42979-021-00614-8.

[19] X. Xu, W. Li, and Q. Duan, “Transfer learning and SE-ResNet152 networks-based for small-scale unbalanced fish species identification,” Comput. Electron. Agric., vol. 180, p. 105878, Jan. 2021, doi: 10.1016/j.compag.2020.105878.

[20] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the Inception Architecture for Computer Vision,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2016, vol. 2016-December, pp. 2818–2826, doi: 10.1109/CVPR.2016.308.

[21] O. Russakovsky et al., “ImageNet Large Scale Visual Recognition Challenge,” Int. J. Comput. Vis., vol. 115, no. 3, pp. 211–252, Dec. 2015, doi: 10.1007/s11263-015-0816-y.

[22] C. Tan, F. Sun, T. Kong, W. Zhang, C. Yang, and C. Liu, “A survey on deep transfer learning,” in Artificial Neural Networks and Machine Learning--ICANN 2018: 27th International Conference on Artificial Neural Networks, Rhodes, Greece, October 4-7, 2018, Proceedings, Part III 27, 2018, pp. 270–279, doi: 10.1007/978-3-030-01424-7_27.

[23] M. Grandini, E. Bagli, and G. Visani, “Metrics for Multi-Class Classification: an Overview,” arXiv, pp. 1–17, Aug. 2020. [Online]. Available at: https://arxiv.org/abs/2008.05756v1.




Copyright (c) 2024 Rihwan Munif, Adhi Prahara

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

___________________________________________________________
Science in Information Technology Letters
ISSN 2722-4139
Published by Association for Scientific Computing Electrical and Engineering (ASCEE)
W : http://pubs2.ascee.org/index.php/sitech
E : sitech@ascee.org, andri@ascee.org, andri.pranolo.id@ieee.org
