(2) Jyothi Thomas (CHRIST(Deemed to be University), India)
*corresponding author
Abstract
Artificial intelligence-assisted cancer detection has changed the realm of diagnostic precision. This study proposes a segmentation network using artificial intelligence to accurately segment the cervix region and acetowhite lesions in cervigram images, addressing the shortage of skilled colposcopists and streamlining the training process. A computational approach is employed to develop and train a deep learning model specifically tailored for cervix region and acetowhite lesion segmentation in cervigram images. A dataset acquired in collaboration with the Kidwai Memorial Cancer Research Institute is used to build the model. Cervigram images are collected for training and validation, and a deep learning architecture is constructed and trained on annotated datasets. The segmentation network, based on the EfficientNet architecture and atrous spatial pyramid pooling, is designed to accurately identify and delineate the target regions, with performance evaluated using precision, accuracy, recall, Dice score, and specificity metrics. The proposed segmentation network achieves a precision of 0.7387±0.1541, accuracy of 0.9291, recall of 0.7912±0.1439, Dice score of 0.7431±0.1506, and specificity of 0.9589±0.0131, indicating its reliability and robustness in segmenting cervix regions and acetowhite lesions in cervigram images. This research demonstrates the feasibility and effectiveness of artificial intelligence-based computational models for cervix region and acetowhite lesion segmentation in cervigram images. It provides a foundation for further investigations into classifying cervix malignancy using AI techniques, potentially enhancing early detection and treatment of cervical cancer while addressing the shortage of skilled professionals in the field.
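As a reader's aid, the five metrics reported in the abstract can be reproduced from a predicted binary mask and its ground-truth annotation with a few lines of NumPy. The function below is an illustrative sketch only (the function name and the `eps` smoothing term are our choices), not the authors' evaluation code.

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Pixel-wise metrics for a binary segmentation mask.

    pred, target: array-likes of 0/1 values with the same shape
    (predicted mask and ground-truth annotation).
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    tp = np.sum(pred & target)      # true-positive pixels
    tn = np.sum(~pred & ~target)    # true-negative pixels
    fp = np.sum(pred & ~target)     # false-positive pixels
    fn = np.sum(~pred & target)     # false-negative pixels
    eps = 1e-8                      # guards against division by zero on empty masks
    return {
        "precision":   tp / (tp + fp + eps),
        "recall":      tp / (tp + fn + eps),   # a.k.a. sensitivity
        "specificity": tn / (tn + fp + eps),
        "accuracy":    (tp + tn) / (tp + tn + fp + fn + eps),
        "dice":        2 * tp / (2 * tp + fp + fn + eps),
    }
```

Note that the Dice score, 2TP / (2TP + FP + FN), is the harmonic mean of precision and recall computed over pixels, which is why the reported Dice (0.7431) sits between the reported precision (0.7387) and recall (0.7912).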
Keywords: Segmentation; Cervical cancer; Colposcope; Artificial intelligence; Deep learning
DOI: https://doi.org/10.31763/aet.v3i1.1345
Copyright (c) 2024 Lalasa Mukku, Jyothi Thomas
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Applied Engineering and Technology
ISSN: 2829-4998
Email: aet@ascee.org | andri.pranolo.id@ieee.org
Published by: Association for Scientific Computing Electronics and Engineering (ASCEE)
Organized by: Association for Scientific Computing Electronics and Engineering (ASCEE), Universitas Negeri Malang, Universitas Ahmad Dahlan