Palm oil classification using deep learning

(1) Abdulrazak Yahya Saleh* (Universiti Malaysia Sarawak, Malaysia)
(2) Ermawatih Liansitim (Universiti Malaysia Sarawak, Malaysia)
*corresponding author

Abstract


Deep convolutional neural networks (CNNs) are an established and dominant class of models for image classification. This study applies deep learning to classify the ripeness of oil palm fruit and analyses its accuracy. A CNN was used to classify 628 images into two classes. Trained for 5 epochs, the network achieved a promising classification accuracy of 98%, outperforming previous methods. In summary, the study successfully solved an image classification problem by detecting and differentiating the ripeness of oil palm fruit.
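
For illustration, the pipeline described in the abstract (a CNN trained for 5 epochs on a two-class ripeness dataset) can be sketched in Keras/TensorFlow as below. This is a minimal sketch, not the authors' published code: the 128x128 input resolution, layer widths, optimizer, and the data/ripe vs. data/unripe folder layout are assumptions made for the example; only the two-class setup and the 5-epoch training budget come from the abstract.

    # Minimal sketch of a binary ripeness CNN classifier (Keras/TensorFlow).
    # Image size, layer sizes, and directory layout are illustrative assumptions.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    IMG_SIZE = (128, 128)   # assumed input resolution
    BATCH_SIZE = 32

    # Assumed folder layout: data/ripe/*.jpg and data/unripe/*.jpg
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "data", image_size=IMG_SIZE, batch_size=BATCH_SIZE,
        validation_split=0.2, subset="training", seed=42)
    val_ds = tf.keras.utils.image_dataset_from_directory(
        "data", image_size=IMG_SIZE, batch_size=BATCH_SIZE,
        validation_split=0.2, subset="validation", seed=42)

    model = models.Sequential([
        layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # two classes: ripe vs. unripe
    ])

    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])

    # The abstract reports promising results after only 5 epochs.
    model.fit(train_ds, validation_data=val_ds, epochs=5)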


Keywords


Classification; Deep Convolutional Neural Network; Palm Oil Ripeness

   

DOI

https://doi.org/10.31763/sitech.v1i1.1
      





Copyright (c) 2020 Abdulrazak Yahya Saleh, Ermawatih Liansitim

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

___________________________________________________________
Science in Information Technology Letters
ISSN 2722-4139
Published by Association for Scientific Computing Electrical and Engineering (ASCEE)
W : http://pubs2.ascee.org/index.php/sitech
E : sitech@ascee.org, andri@ascee.org, andri.pranolo.id@ieee.org
