Dental segmentation via enhanced YOLOv8 and image processing techniques
Abstract
By blending computer-aided medical systems with cutting-edge privacy technologies, healthcare providers can deliver more personalized, effective care while maintaining high standards of data security and patient trust. Dental segmentation, the computer-vision task of accurately outlining dental structures in images, remains challenging: traditional methods, particularly convolutional neural networks (CNNs), have not reached high accuracy in this area owing to suboptimal performance and computational inefficiency. The goal of image segmentation is to group pixels on the basis of visual properties, such as color, texture, intensity, or spatial proximity, in order to identify and delineate the boundaries of distinct objects or regions within an image. In this paper, the You Only Look Once (YOLOv8) algorithm is improved to segment teeth with high accuracy and high execution speed. The number of layers in YOLOv8 is increased, since segmentation accuracy depends on the number of layers used to extract features from the image (the backbone) and the number of prediction layers (the head). In addition, the size of the layers is decreased to raise execution speed. The novelty of this paper lies in the proposed YOLOv8 model together with the Proposed Activation Function (PAF). The dataset (top view) was collected at a dental clinic, where 526 dental images were taken from different patients. The best accuracy reached 99.561% when the enhanced YOLOv8 segmentation model was applied to the dental dataset. It can be concluded that the improved YOLOv8 model increases dental-segmentation accuracy compared to previous research because it relies on the proposed PAF, which increases the separation between the features extracted by the layers of the proposed model and thereby enables it to distinguish teeth from surrounding structures significantly better.
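To illustrate the abstract's definition of segmentation as grouping pixels by visual properties, the sketch below clusters pixel intensities with a minimal k-means loop. This is a toy illustration only, not the paper's method: the function name, the synthetic image, and the deterministic centroid initialization are all assumptions introduced here for the example.

```python
import numpy as np

def kmeans_pixel_segmentation(image, k=2, iters=10):
    """Group pixels into k clusters by intensity.

    A minimal illustration of segmentation as pixel grouping;
    not the enhanced YOLOv8 model described in the paper.
    """
    pixels = image.reshape(-1, 1).astype(float)
    # Deterministic initialization: spread centroids over the
    # observed intensity range (an assumption for this sketch).
    centroids = np.linspace(pixels.min(), pixels.max(), k).reshape(-1, 1)
    for _ in range(iters):
        # Assign each pixel to its nearest centroid.
        labels = np.argmin(np.abs(pixels - centroids.T), axis=1)
        # Move each centroid to the mean of its assigned pixels.
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = pixels[labels == c].mean()
    return labels.reshape(image.shape)

# Synthetic 8x8 image: a bright "tooth" region on a dark background.
img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 200
mask = kmeans_pixel_segmentation(img, k=2)
```

With two well-separated intensity levels, the two clusters recover the bright region and the background exactly; real dental images need the learned features the paper's model provides, which is why a CNN-based segmenter is used instead of raw intensity clustering.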
This work is licensed under a Creative Commons Attribution 4.0 International License.
References
A. M. Khan, “Image Segmentation Methods: A Comparative Study,” Int. J. Soft Comput. Eng., vol. 3, no. 4, pp. 84–92, 2013.
T. Bonny et al., “Dental bitewing radiographs segmentation using deep learning-based convolutional neural network algorithms,” Oral Radiol., vol. 40, no. 2, pp. 165–177, 2024, doi: 10.1007/s11282-023-00717-3.
G. Rubiu et al., “Teeth Segmentation in Panoramic Dental X-ray Using Mask Regional Convolutional Neural Network,” Appl. Sci., vol. 13, no. 13, 2023, doi: 10.3390/app13137947.
X. Xu, C. Liu, and Y. Zheng, “3D Tooth Segmentation and Labeling Using Deep Convolutional Neural Networks,” IEEE Trans. Vis. Comput. Graph., vol. PP, p. 1, May 2018, doi: 10.1109/TVCG.2018.2839685.
F. R. S. Teles et al., “Tooth Detection and Numbering in Panoramic Radiographs Using YOLOv8-Based Approach,” in Wireless Mobile Communication and Healthcare, 2024, pp. 239–253.
A. Fatima et al., “Deep Learning-Based Multiclass Instance Segmentation for Dental Lesion Detection,” Healthcare, vol. 11, no. 3, 2023, doi: 10.3390/healthcare11030347.
S. Vinayahalingam et al., “Intra-oral scan segmentation using deep learning,” BMC Oral Health, vol. 23, no. 1, p. 643, 2023, doi: 10.1186/s12903-023-03362-8.
Z. Kong et al., “Automated maxillofacial segmentation in panoramic dental x-ray images using an efficient encoder-decoder network,” IEEE Access, vol. 8, pp. 207822–207833, 2020, doi: 10.1109/ACCESS.2020.3037677.
E. Shaheen et al., “A novel deep learning system for multi-class tooth segmentation and classification on cone beam computed tomography. A validation study,” J. Dent., vol. 115, p. 103865, 2021, doi: 10.1016/j.jdent.2021.103865.
E. Kaya, H. G. Gunec, S. S. Gokyay, S. Kutal, S. Gulum, and H. F. Ates, “Proposing a CNN Method for Primary and Permanent Tooth Detection and Enumeration on Pediatric Dental Radiographs,” J. Clin. Pediatr. Dent., vol. 46, no. 4, pp. 293–298, 2022, doi: 10.22514/1053-4625-46.4.6.
S. Helli and A. Hamamcı, “Tooth Instance Segmentation on Panoramic Dental Radiographs Using U-Nets and Morphological Processing,” Düzce Üniversitesi Bilim ve Teknol. Derg., vol. 10, no. 1, pp. 39–50, 2022, doi: 10.29130/dubited.950568.
Z. Chen, S. Chen, and F. Hu, “CTA-UNet: CNN-transformer architecture UNet for dental CBCT images segmentation,” Phys. Med. Biol., vol. 68, no. 17, pp. 0–13, 2023, doi: 10.1088/1361-6560/acf026.
T. H. Farook, F. H. Saad, S. Ahmed, and J. Dudley, “Clinical Annotation and Segmentation Tool (CAST) Implementation for Dental Diagnostics,” Cureus, vol. 15, no. 11, 2023, doi: 10.7759/cureus.48734.
C. Deepho et al., “Toward the Development of an Oral-diagnosis Framework: A Case Study of Teeth Segmentation and Numbering in Bitewing Radiographs via YOLO Models,” in 2024 IEEE International Conference on Cybernetics and Innovations (ICCI), 2024.
M. K. Dhar, M. Deb, D. Madhab, and Z. Yu, “A Deep Learning Approach to Teeth Segmentation and Orientation from Panoramic X-rays,” 2023, [Online]. Available: https://github.com/mrinal054/Instance_teeth_segmentation.
É. da S. Rocha and P. T. Endo, “A Comparative Study of Deep Learning Models for Dental Segmentation in Panoramic Radiograph,” Appl. Sci., vol. 12, no. 6, 2022, doi: 10.3390/app12063103.
A. A. Ali, M. K. Hussein, and M. A. Subhi, “A Classifier-Driven Deep Learning Clustering Approach to Enhance Data Collection in MANETs,” Mesopotamian Journal of CyberSecurity, vol. 4, no. 3, pp. 36–45, 2024.
W. K. Jummar, A. M. Sagheer, and H. M. Saleh, “Authentication System Based on Fingerprint Using a New Technique for ROI selection,” Babylonian Journal of Artificial Intelligence, vol. 2024, pp. 102–117, Aug. 2024.
H. M. S. Saleeh, H. Marouane, and A. Fakhfakh, “A Novel Deep Learning Approach for Detecting Types of Attacks in the NSL-KDD Dataset,” BJN, vol. 2024, pp. 171–181, Sep. 2024.
O. M. Hammad, I. Smaoui, A. Fakhfakh, and M. M. Hashim, “Recent advances in digital image masking techniques, future challenges and trends: a review,” SHIFRA, vol. 2024, pp. 67–73, May 2024, doi: 10.70470/SHIFRA/2024/008.