
A Deep-Learning Approach to Automatic Tooth Caries Segmentation on Panoramic Radiographs of Children in Primary Dentition, Mixed Dentition, and Permanent Dentition

A peer-reviewed article of this preprint also exists.

Submitted: 01 April 2024
Posted: 01 April 2024

Abstract
Objectives: The purpose of this study was to evaluate the effectiveness of dental caries segmentation on panoramic radiographs taken from children in primary dentition, mixed dentition, and permanent dentition using Artificial Intelligence (AI) models developed with a deep learning method. Methods: This study used 6075 panoramic radiographs taken from children aged between 4 and 14 years to develop the AI model. The radiographs included in the study were divided into three groups: primary dentition (n: 1857), mixed dentition (n: 1406), and permanent dentition (n: 2812). A U-Net model implemented with the PyTorch library was used for segmentation of caries lesions. A confusion matrix was used to evaluate model performance. Results: In the primary dentition group, the sensitivity, precision, and F1 scores calculated using the confusion matrix were 0.8525, 0.9128, and 0.8816, respectively. In the mixed dentition group, the sensitivity, precision, and F1 scores were 0.7377, 0.9192, and 0.8185, respectively. In the permanent dentition group, the sensitivity, precision, and F1 scores were 0.8271, 0.9125, and 0.8677, respectively. In the total group including primary, mixed, and permanent dentition, the sensitivity, precision, and F1 scores were 0.8269, 0.9123, and 0.8675, respectively. Conclusion: Deep learning-based AI models are promising tools for the detection and diagnosis of caries on panoramic radiographs taken from children in different dentitions.
Keywords: 
Subject: Public Health and Healthcare - Other

1. Introduction

Dental caries is a common chronic infectious condition that affects many children, adolescents, and adults worldwide [1,2]. Although dental caries usually progresses slowly, in the absence of appropriate early intervention it can become a serious health issue causing pain, infection, and tooth loss [3]. In clinical dentistry, caries detection involves determination of treatment, assessment of the level of caries risk, and application of preventive methods, and is very important in guiding clinical planning [4]. Successful treatment requires timely and accurate diagnosis. Various diagnostic methods are used, including digital subtraction radiography (DSR), optical coherence tomography (OCT), electrical conductivity measurement (ECM), ultrasonic imaging, fibre-optic transillumination (FOTI), laser fluorescence, and quantitative light-induced fluorescence (QLF) [5]. The interpretation of the images acquired by these methods is limited by inter-rater disagreement, and no single method alone can diagnose caries on the entire tooth surface. The ideal method for diagnosing dental caries has not yet been found. In this quest, interest in caries detection with computer-aided image analysis is increasing.
The popularity of panoramic radiography as an extraoral method has increased owing to its low radiation dose, shorter acquisition time, ease of application, and greater patient comfort [6]. However, extraoral imaging methods are associated with distortion and magnification of images [7]. Panoramic radiography alone is inferior to bitewing radiography in the diagnosis of caries [6,8]. However, with technological developments in panoramic radiography devices, it has now become competitive with intraoral imaging in the diagnosis of caries [9]. Intraoral radiography requires more patient cooperation than extraoral techniques; hence, paediatric and disabled patients would benefit greatly from an extraoral imaging system.
Artificial Intelligence (AI) methodologies, specifically deep learning-based convolutional neural networks (CNNs), have shown good performance in computer vision tasks including object, face, and activity tracking, recognition, three-dimensional mapping, and localisation [10]. Image processing and image recognition procedures have been applied in medical image segmentation and diagnosis. U-Net is a convolutional network architecture used for fast and precise segmentation of biomedical images, and it has been reported to achieve successful results on medical image datasets. The U-Net architecture can be trained on datasets with fewer images and still provide precise segmentation. However, research on the application of deep CNN infrastructure and studies on caries diagnostic methods in dentistry have not yet reached a common conclusion [11]. This study was performed to evaluate the efficacy of an AI application developed using deep learning methods for dental caries diagnosis on panoramic radiographs of children in primary, mixed, and permanent dentition.

2. Materials and Methods

Patient Selection

This study was approved by the XXXX University Medical Faculty Clinical Research Ethics Committee (decision no. 04/30). Panoramic radiographs of 6075 paediatric patients aged 5–14 years that were available in the radiology archive of XXXXXXX were included in the present study. As this was a retrospective archival study, patient consent was not obtained. Panoramic radiographs containing any artefacts were excluded from the study dataset. Panoramic radiographs with caries lesions deep enough to be visible on the radiograph were selected. Panoramic radiographs with orthodontic appliances, restorations (stainless steel crowns, space maintainers), and periodontal and periapical lesions were also included in the dataset. The panoramic radiographs were divided into three groups: primary dentition, mixed dentition, and permanent dentition. In addition, all radiographs were evaluated together as a single category.

Radiographic Data

All images used in this study were acquired at 65 kVp, 8 mA and 16 s using the Planmeca Promax 2D Panoramic system (Planmeca, Helsinki, Finland).

Image Evaluation

Each caries lesion on the panoramic images was annotated with a polygonal tool by a research assistant (E.A.) with 3 years of experience and a pedodontist (M.K.) with 10 years of experience using the Colabeler labeling software (MacGenius, Blaze Software, California, USA) (Figure 1). All panoramic radiographs were first evaluated separately by the two specialists and then re-evaluated together to reach a final consensus decision. All panoramic radiographs on which the specialists did not agree were excluded from the dataset to minimise the possibility of missing caries lesions on the panoramic radiographs.

Deep Convolutional Neural Network Architecture

The U-Net architecture, which is used for semantic segmentation tasks, was employed as the deep convolutional neural network architecture. Our encoder-decoder network consisted of four block levels, each containing two convolutional layers, with a max-pooling layer in the encoding part and up-convolutional layers in the decoding part. The blocks had 32, 64, 128, and 256 convolutional filters, respectively, and the bottleneck layer contained 512 convolutional filters [12].
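To make the architecture concrete, the sketch below shows a minimal PyTorch U-Net with the block structure described above (four encoder/decoder levels with 32, 64, 128, and 256 filters and a 512-filter bottleneck). It is an illustrative reconstruction, not the authors' code; the kernel sizes, activations, and use of transposed convolutions for up-sampling are assumptions.

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: one block level of the U-Net
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class UNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=1, filters=(32, 64, 128, 256), bottleneck=512):
        super().__init__()
        self.encoders = nn.ModuleList()
        ch = in_ch
        for f in filters:                                  # encoding path
            self.encoders.append(double_conv(ch, f))
            ch = f
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(ch, bottleneck)
        self.upconvs, self.decoders = nn.ModuleList(), nn.ModuleList()
        ch = bottleneck
        for f in reversed(filters):                        # decoding path
            self.upconvs.append(nn.ConvTranspose2d(ch, f, 2, stride=2))
            self.decoders.append(double_conv(f * 2, f))    # *2: skip connection concatenated
            ch = f
        self.head = nn.Conv2d(ch, out_ch, 1)               # 1x1 conv -> caries mask logits

    def forward(self, x):
        skips = []
        for enc in self.encoders:
            x = enc(x)
            skips.append(x)
            x = self.pool(x)
        x = self.bottleneck(x)
        for up, dec, skip in zip(self.upconvs, self.decoders, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))
        return self.head(x)

# A 512 x 1024 single-channel radiograph yields per-pixel caries logits of the same size.
model = UNet()
logits = model(torch.randn(1, 1, 512, 1024))
```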

Model Pipeline

The Python open-source programming language (v.3.6.1; Python Software Foundation, Wilmington, DE, USA) and the PyTorch library (version 1.4.0) were used for model development. Model training was conducted on a computer equipped with 16 GB RAM and an NVIDIA GeForce GTX 1060Ti graphics card. Prior to training, all panoramic radiographs were resized from 2943 x 1435 to 1024 x 512 pixels.
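As a minimal illustration of this preprocessing step (the file format, grayscale conversion, and [0, 1] scaling are assumptions; the paper only specifies the resizing), an image can be loaded and resized as follows:

```python
import numpy as np
import torch
from PIL import Image

def load_panoramic(path, size=(1024, 512)):
    """Load a panoramic radiograph and resize it from its native resolution
    (e.g. 2943 x 1435) to 1024 x 512 pixels, scaled to [0, 1]."""
    img = Image.open(path).convert("L")          # single-channel grayscale
    img = img.resize(size, Image.BILINEAR)       # size is (width, height)
    arr = np.asarray(img, dtype=np.float32) / 255.0
    return torch.from_numpy(arr).unsqueeze(0)    # tensor of shape (1, 512, 1024)

# x = load_panoramic("panoramic_0001.png")       # hypothetical file name
```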

Primary Dentition:

The dataset consisted of 1857 images, with 1497 images (12203 labels) in the training set, 180 images (1276 labels) in the validation set, and 180 images (1551 labels) in the testing set. A single tooth could carry more than one label, as caries lesions in separate areas were labelled separately. The panoramic radiographs were randomly distributed. The PyTorch U-Net model was trained for 200 epochs; epoch 149 showed the best performance and was therefore used in the model (Figure 2).

Mixed Dentition:

The dataset consisted of 1406 images, with 1126 images (6252 labels) in the training set, 140 images (674 labels) in the validation set, and 140 images (760 labels) in the testing set. The images were randomly distributed. The PyTorch U-Net model was trained for 200 epochs; epoch 176 showed the best performance and was therefore used in the model (Figure 2).

Permanent Dentition:

The dataset consisted of 2812 images, with 2242 images (10152 labels) in the training set, 285 images (1130 labels) in the validation set, and 285 images (1102 labels) in the testing set. The images were randomly distributed. The PyTorch U-Net model was trained for 200 epochs; epoch 155 showed the best performance and was therefore used in the model (Figure 2).

Total (Primary Dentition + Mixed Dentition + Permanent Dentition)

The dataset consisted of 4875 images, with 2242 images (28014 labels) in the training set, 600 images (3567 labels) in the validation set, and 600 images (3463 labels) in the testing set. The images were randomly distributed. The PyTorch U-Net model was trained for 100 epochs; epoch 75 showed the best performance and was therefore used in the model (Figure 5).
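The schematic training loop below illustrates how a fixed number of epochs can be run and the best-performing checkpoint retained, as described for each group above. The loss function, optimiser, learning rate, and data loaders are assumptions made for illustration; they are not taken from the paper.

```python
import copy
import torch
from torch import nn, optim

def train_unet(model, train_loader, val_loader, epochs=200, device="cuda"):
    """Train for a fixed number of epochs and keep the weights from the epoch
    with the lowest validation loss (e.g. epoch 149 for the primary dentition model)."""
    model = model.to(device)
    criterion = nn.BCEWithLogitsLoss()                 # binary mask: caries vs. background
    optimizer = optim.Adam(model.parameters(), lr=1e-4)
    best_loss, best_state, best_epoch = float("inf"), None, -1

    for epoch in range(1, epochs + 1):
        model.train()
        for images, masks in train_loader:
            images, masks = images.to(device), masks.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), masks)
            loss.backward()
            optimizer.step()

        model.eval()                                   # evaluate on the validation set
        val_loss = 0.0
        with torch.no_grad():
            for images, masks in val_loader:
                images, masks = images.to(device), masks.to(device)
                val_loss += criterion(model(images), masks).item()
        val_loss /= max(len(val_loader), 1)

        if val_loss < best_loss:                       # remember the best epoch
            best_loss, best_epoch = val_loss, epoch
            best_state = copy.deepcopy(model.state_dict())

    model.load_state_dict(best_state)
    return model, best_epoch
```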

Statistical Analysis

A confusion matrix was used to assess model performance. The counts used in this matrix were as follows: TP (true positive), caries lesions correctly detected; FP (false positive), regions incorrectly identified as caries; and FN (false negative), caries lesions missed by the model. The metrics used to evaluate model success were as follows: precision, the proportion of positive predictions that are correct (TP/(TP + FP)); sensitivity, the proportion of actual positives that the model detects (TP/(TP + FN)); and F1 score, the harmonic mean of precision and sensitivity.
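These formulas can be applied directly to the confusion-matrix counts reported in the Results; for example, the primary dentition counts from Table 1 (TP = 1006, FP = 96, FN = 174) reproduce the reported scores to within rounding:

```python
def segmentation_metrics(tp, fp, fn):
    """Precision, sensitivity (recall), and F1 score from confusion-matrix counts."""
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return precision, sensitivity, f1

precision, sensitivity, f1 = segmentation_metrics(tp=1006, fp=96, fn=174)
print(f"precision={precision:.4f} sensitivity={sensitivity:.4f} F1={f1:.4f}")
# precision=0.9129 sensitivity=0.8525 F1=0.8817 (reported: 0.9128, 0.8525, 0.8816)
```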

3. Results

The success of the AI model in caries diagnosis was evaluated in each of the groups.

Primary Dentition:

Among the 1276 caries labels on 180 images in the testing set, the AI system evaluated 1006 as TP, 96 as FP and 174 as FN (Figure 3). The sensitivity, precision and F1 score calculated using the confusion matrix were 0.8525, 0.9128 and 0.8816, respectively (Table 1).

Mixed Dentition:

Among the 674 caries labels on 140 images in the testing set, the AI system evaluated 467 as TP, 41 as FP and 166 as FN (Figure 3). The sensitivity, precision and F1 score calculated using the confusion matrix were 0.7377, 0.9192 and 0.8185, respectively (Table 1).

Permanent Dentition:

Among the 1130 caries labels on 285 images in the testing set, the AI system evaluated 866 as TP, 83 as FP and 181 as FN (Figure 3). The sensitivity, precision and F1 score calculated using the confusion matrix were 0.8271, 0.9125 and 0.8677, respectively (Table 1).

Total (Primary Dentition + Mixed Dentition + Permanent Dentition):

Among the 3463 caries labels on 600 images in the testing set, the AI system evaluated 2653 as TP, 255 as FP and 555 as FN (Figure 3). The sensitivity, precision and F1 score calculated using the confusion matrix were 0.8269, 0.9123 and 0.8675, respectively (Table 1). The area under the curve (AUC) was 0.76 (Figure 4).

4. Discussion

If dental caries is not detected correctly and early, the lesion may gradually extend through the enamel into the dentin and even the tooth pulp, resulting in severe pain and consequently the loss of dental function. Artificial intelligence-based systems are often used in dentistry for the design of automated software to facilitate diagnosis and data management [13]. These are often clinical decision support systems that help and guide professionals to make better decisions. Such systems have been used to improve diagnosis, treatment planning, and prediction of prognosis [14]. This study was performed to examine the success of an artificial intelligence application developed using deep learning in the diagnosis of dental caries on panoramic radiographs of primary, mixed, and permanent dentition.
Various diagnostic methods are being developed and improved to overcome clinical and radiographic diagnostic limitations [5]. The techniques now used in clinical settings include digital subtraction radiography (DSR), optical coherence tomography (OCT), laser fluorescence, electrical conductivity measurement (ECM), ultrasonic imaging methods, digital imaging fibre-optic transillumination (DIFOTI) and quantitative light-induced fluorescence (QLF) [15,16]. Takeshita et al. demonstrated that DSR had high sensitivity and specificity in diagnosing interproximal caries [17]. In this method, however, it is important to acquire standard and good quality radiographs via film holders. The use of artificial intelligence has great potential for eliminating errors that may not be noticed or may be overlooked by the human eye [18]. Laitala et al. evaluated the validity of the DIFOTI method by comparison with visual inspection and bitewing radiography, but found that the method had low sensitivity and was subjective [19]. Subjectivity in a method prevents the application of a standard procedure for that method. In the present study, we reduced subjectivity through an artificial intelligence system developed using deep learning on standardised panoramic radiographs. DIAGNOdent Pen, a laser fluorescence (LF) device with no X-ray exposure, is used for caries detection [20]. However, it has been reported that LF-derived scores are weakly associated with caries histology [21]. In addition, this LF device can produce FP responses as it is affected by discolouration of the tooth surface and dental plaque [22,23]. Radiographs reflect structural changes in the tooth without being affected by discolouration or plaque. This feature can increase the reliability of the results achieved on panoramic images. The study by Mansour et al. using LF and OCT diagnostic methods established that LF could detect caries at restoration margins, but not underneath restorations [24]. These differences among caries detection methods suggest that the reliability of a method alone is not sufficient [25].
Panoramic radiography is one of the most preferred methods for patient evaluation in routine paediatric examination, as it is well tolerated by children and provides an image covering the entire mouth [26]. Compared with bitewings, panoramic radiography can increase the accuracy and reliability of caries diagnosis through artificial intelligence applications, as these radiographs provide the data needed by deep learning methods as a whole.
A review by Schwendicke et al. reported that classification and segmentation could be performed using CNNs on periapical, bitewing, CBCT, and panoramic radiographs for the detection of caries and anatomical structures, and that the most used method was panoramic radiography [27]. Although radiographic methods such as bitewing radiography are commonly used in caries detection, these methods only detect caries in a limited area and are therefore insufficient for assessing caries in all teeth, which is possible with panoramic radiography [28]. Vinayahalingam et al. [29] obtained demonstrable accuracy in their study on the classification of caries in third molars on panoramic radiographs using deep learning. The present study evaluated caries detection via the application of artificial intelligence to panoramic radiographs, which provide information about all teeth for caries risk assessment.
In machine learning, and especially in statistical classification problems, the confusion matrix, also known as an error matrix, is a specific table layout that allows visualisation of the performance of an algorithm by summarising predicted and actual instances [30]. Yasa et al. used a confusion matrix in their study and evaluated the performance of a model using TP, FP, and FN, but not true negative (TN), as metrics [31]. The present study also employed a confusion matrix using TP, FP, and FN to evaluate the performance of caries detection. TN could not be counted because the presented AI model was developed to segment caries lesions: only decayed teeth were labelled on the panoramic images, and healthy teeth were not labelled in any way. In future studies, AI models should be developed to classify teeth as carious or non-carious, and cascade networks should be developed to classify teeth and segment caries lesions. U-Net is a convolutional network architecture used for fast and precise segmentation of biomedical images [32]. Nishitani et al. reported that the U-Net deep learning algorithm is suitable for segmentation of teeth on panoramic images [33]. Therefore, in the present study, the U-Net model, which has a high rate of success in medical image segmentation, was preferred for segmentation in the deep learning model.
Major deep learning libraries consist of layer-based frameworks, such as Caffe, and graph-based frameworks, such as PyTorch, TensorFlow and MXNet [34]. Torch is an open source library developed to support deep learning and machine learning [35]. This library is used frequently in image processing [36] and has been shown to simplify complex operations [37]. Therefore, the present study used the Python open-source programming language and PyTorch deep learning library, which were shown to be successful in the development of artificial intelligence models.
There are studies in the literature in which AI has been used in the detection of dental caries; however, the number of these studies needs to increase in order to reach a common conclusion. Lee et al. reported that dental caries could be detected with deep learning-based CNN applications on 3000 periapical images, with a diagnostic accuracy of 82.0%, sensitivity of 81.0%, and specificity of 83.0% in premolars and molars [38]. Schwendicke et al. used DIAGNOcam and detected caries on 217 images using deep CNNs [39]. Devito et al. applied a multilayer artificial neural network for proximal caries diagnosis on bitewing radiographs of 160 extracted teeth [40]. The present study used 6075 panoramic images. This large number of images in our dataset increases the reliability of our results compared with previous studies.
In the present study, the sensitivity, precision and F1 score were high for primary and permanent dentition, while these scores were lower for mixed dentition. High scores for permanent and primary dentition may have resulted from a clearer reading of images due to the uniform dentition in permanent dentition and the smaller size of the permanent tooth germs in primary dentition compared to the germs in mixed dentition. In mixed dentition, developing permanent tooth germs and root resorption in primary teeth may have adversely affected the image clarity. This may explain the higher sensitivity rate for primary and permanent dentition than for mixed dentition.
This study had some limitations. Application of a method in clinical procedures requires achieving results of ≥ 90% [41]; our AI method needs to be improved to achieve such results. In addition, our findings were not compared with different radiographic caries detection methods. Therefore, the use of more cases to train deep learning-based CNN systems, as well as more advanced algorithms, will increase the success of caries detection on panoramic radiographs and ensure a place for these systems in routine clinical practice. Because comparisons of AI applications in dentistry are lacking, comparative studies are required. In the present study, a cascade network was not developed; to address this limitation, cascade AI networks that classify teeth and segment caries lesions should be developed in future studies. In addition, histological confirmation of caries and further extension of the labelled data are required to overcome the limitations of the presented model. Finally, comparing this study with a clinical caries detection method may provide clearer results.

5. Conclusions

The deep learning-based artificial intelligence algorithm reported here showed average performance in detecting dental caries on panoramic radiographs. Prospective studies should focus on caries staging. The promising results of this study on the use of artificial intelligence to interpret dental radiographic images will encourage further studies of this issue.

Main Points

This study sheds light on the use of artificial intelligence, a current topic, in dentistry. It is also one of the first studies on children's panoramic radiographs (OPTs) and contributes results from a large number of OPT scans to the literature. In this study, the dentition of the children was evaluated as a whole as well as separately (permanent dentition, mixed dentition, and primary dentition). The results are promising for the use of artificial intelligence in dentistry. This study, in which panoramic films were evaluated, suggests that artificial intelligence can minimise problems that occur during examination in paediatric dentistry.

Author Contributions

Methodology, Esra Asci, İbrahim Sevki Bayrakdar, Ozer Celik and Hasan Basri Bircan; Software, Munevver Kilic; Validation, Kenan Cantekin; Writing – review & editing, Kaan Orhan. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Featherstone, JD. The science and practice of caries prevention. J Am Dent Assoc. 2000, 131, 887–899. [Google Scholar] [CrossRef] [PubMed]
  2. Selwitz RH, Ismail AI, Pitts NB. Dental caries. The Lancet. 2007, 369, 51–59. [Google Scholar]
  3. Mortensen D, Dannemand K, Twetman S, Keller MK. Detection of non-cavitated occlusal caries with impedance spectroscopy and laser fluorescence: an in vitro study. Open Dent J. 2014, 8, 28–32. [Google Scholar] [CrossRef] [PubMed]
  4. Baelum V, Heidmann J, Nyvad B. Dental caries paradigms in diagnosis and diagnostic research. Eur J Oral Sci. 2006, 114, 263–277.
  5. Korkut B, Tagtekin DA, Yanikoglu F. Early diagnosis of dental caries and new diagnostic methods: QLF, Diagnodent, Electrical Conductance and Ultrasonic System. EÜ Dişhek Fak Derg. 2011, 32, 55–67. [Google Scholar]
  6. Akkaya N, Kansu O, Kansu H, Cagirankaya L, Arslan U. Comparing the accuracy of panoramic and intraoral radiography in the diagnosis of proximal caries. Dentomaxillofacial Radiology. 2006, 35, 170–174. [Google Scholar] [CrossRef] [PubMed]
  7. Kamburoğlu K, Kolsuz E, Murat S, Yüksel S, Özen T. Proximal caries detection accuracy using intraoral bitewing radiography, extraoral bitewing radiography and panoramic radiography. Dentomaxillofacial Radiology. 2012, 41, 450–459. [Google Scholar] [CrossRef] [PubMed]
  8. Flint DJ, Paunovich E, Moore WS, Wofford DT, Hermesch CB. A diagnostic comparison of panoramic and intraoral radiographs. Oral Surgery, Oral Medicine, Oral Pathology, Oral Radiology, and Endodontology. 1998, 85, 731–735. [Google Scholar] [CrossRef] [PubMed]
  9. Farman, AG. There are good reasons for selecting panoramic radiography to replace the intraoral full-mouth series. Oral surgery, oral medicine, oral pathology, oral radiology, and endodontics. 2002, 94, 653–655. [Google Scholar] [CrossRef]
  10. Sklan JE, Plassard AJ, Fabbri D, Landman BA. Toward content-based image retrieval with deep convolutional neural networks. Proc SPIE Int Soc Opt Eng. 2015, 9417.
  11. Lee J-H, Kim D-h, Jeong S-N, Choi S-H. Diagnosis and prediction of periodontally compromised teeth using a deep learning-based convolutional neural network algorithm. J Periodontal Implant Sci. 2018, 48, 114–123. [Google Scholar] [CrossRef] [PubMed]
  12. Orhan K, Yazici G, Kolsuz ME, Kafa N, Bayrakdar IS, Çelik Ö. An Artificial Intelligence Hypothetical Approach for Masseter Muscle Segmentation on Ultrasonography in Patients With Bruxism. Journal of Advanced Oral Research. 2021.
  13. Schleyer TK, Thyvalikakath TP, Spallek H, Torres-Urquidy MH, Hernandez P, Yuhaniak J. Clinical computing in general dentistry. J Am Med Inform Assoc. 2006, 13, 344–352. [Google Scholar] [CrossRef] [PubMed]
  14. Mendonça, EA. Clinical decision support systems: perspectives in dentistry. J Dent Educ. 2004, 68, 589–597. [Google Scholar]
  15. Sridhar N, Tandon S, Rao N. A comparative evaluation of DIAGNOdent with visual and radiography for detection of occlusal caries: an in vitro study. Indian J Dent Res. 2009, 20, 326. [Google Scholar] [CrossRef] [PubMed]
  16. Schneiderman A, Elbaum M, Shultz T, Keem S, Greenebaum M, Driller J. Assessment of dental caries with digital imaging fiber-optic transillumination (DIFOTI): in vitro study. Caries Res. 1997, 31, 103–110. [Google Scholar] [CrossRef] [PubMed]
  17. Takeshita WM, Iwaki LCV, Da Silva MC, Iwaki Filho L, Queiroz ADF, Geron LBG. Comparison of the diagnostic accuracy of direct digital radiography system, filtered images, and subtraction radiography. Contemp Clin Dent. 2013, 4, 338–342. [Google Scholar] [CrossRef] [PubMed]
  18. Park WJ, Park J-B. History and application of artificial neural networks in dentistry. Eur J Dent. 2018, 12, 594. [Google Scholar] [CrossRef]
  19. Laitala M-L, Piipari L, Sämpi N, et al. Validity of digital imaging of fiber-optic transillumination in caries detection on proximal tooth surfaces. Int J Dent. 2017, 2017, 6. [Google Scholar]
  20. Diniz MB, Leme AFP, de Sousa Cardoso K, Rodrigues JdA, Cordeiro RdCL. The efficacy of laser fluorescence to detect in vitro demineralization and remineralization of smooth enamel surfaces. Photomed Laser Surg. 2009, 27, 57–61. [Google Scholar] [CrossRef] [PubMed]
  21. Jablonski-Momeni A, Ricketts DN, Rolfsen S, et al. Performance of laser fluorescence at tooth surface and histological section. Lasers Med Sci. 2011, 26, 171–178. [Google Scholar] [CrossRef] [PubMed]
  22. Rodrigues JA, Diniz MB, Josgrilberg ÉB, Cordeiro RC. In vitro comparison of laser fluorescence performance with visual examination for detection of occlusal caries in permanent and primary molars. Lasers Med Sci 2009, 24, 501–506. [Google Scholar] [CrossRef] [PubMed]
  23. Lussi A, Reich E. The influence of toothpastes and prophylaxis pastes on fluorescence measurements for caries detection in vitro. Eur J Oral Sci. 2005, 113, 141–144. [Google Scholar] [CrossRef] [PubMed]
  24. Mansour S, Ajdaharian J, Nabelsi T, Chan G, Wilder-Smith P. Comparison of caries diagnostic modalities: A clinical study in 40 subjects. Lasers Surg Med 2016, 48, 924–928. [Google Scholar] [CrossRef] [PubMed]
  25. Lussi, A. Comparison of different methods for the diagnosis of fissure caries without cavitation. Caries Res. 1993, 27, 409–416. [Google Scholar] [CrossRef] [PubMed]
  26. Bayram M, Yıldırım M, Adnan K, Seymen F. Pedodonti Anabilim Dali'nda Başlangiç Muayenesinde Alinan Panoramik Radyografilerin Değerlendirilmesi. Istanbul Univ Dishekim Fak Derg. 2011, 45, 41–47. [Google Scholar]
  27. Schwendicke F, Golla T, Dreher M, Krois J. Convolutional neural networks for dental image diagnostics: A scoping review. J Dentistry. 2019, 91, 103226. [Google Scholar] [CrossRef] [PubMed]
  28. Kamburoğlu K, Kolsuz E, Murat S, Yüksel S, Özen T. Proximal caries detection accuracy using intraoral bitewing radiography, extraoral bitewing radiography and panoramic radiography. Dentomaxillofac Radiol 2012, 41, 450–459. [Google Scholar] [CrossRef] [PubMed]
  29. Vinayahalingam S, Kempers S, Limon L, et al. Classification of caries in third molars on panoramic radiographs using deep learning. Sci Rep. 2021, 11, 1–7. [Google Scholar]
  30. Stehman, SV. Selecting and interpreting measures of thematic classification accuracy. Remote Sens Environ. 1997, 62, 77–89. [Google Scholar] [CrossRef]
  31. Yasa Y, Çelik Ö, Bayrakdar IS, et al. An artificial intelligence proposal to automatic teeth detection and numbering in dental bite-wing radiographs. Acta Odontol Scand 2020, 79, 275–281. [Google Scholar]
  32. Ozturk O, Sarıtürk B, Seker DZ. Comparison of Fully Convolutional Networks (FCN) and U-Net for Road Segmentation from High Resolution Imageries. IJEGEO. 2020, 7, 272–279. [Google Scholar] [CrossRef]
  33. Nishitani Y, Nakayama R, Hayashi D, Hizukuri A, Murata K. Segmentation of teeth in panoramic dental X-ray images using U-Net with a loss function weighted on the tooth edge. Radiol Phys Technol. 2021, 14, 1–6. [Google Scholar]
  34. Gao X, Ramezanghorbani F, Isayev O, Smith JS, Roitberg AE. TorchANI: A free and open source PyTorch-based deep learning implementation of the ANI neural network potentials. J Chem Inf Model. 2020, 60, 3408–3415. [Google Scholar] [CrossRef] [PubMed]
  35. Kızrak MA, Bolat B. Derin öğrenme ile kalabalık analizi üzerine detaylı bir araştırma. Bilişim Teknolojileri Dergisi. 2018, 11, 263–286. [Google Scholar] [CrossRef]
  36. Collobert R, Kavukcuoglu K, Farabet C. Torch7: A matlab-like environment for machine learning. 2011.
  37. Yoo SH, Geng H, Chiu TL, et al. Deep learning-based decision-tree classifier for COVID-19 diagnosis from chest X-ray imaging. Front Med (Lausanne) 2020, 7, 427. [Google Scholar]
  38. Lee J-H, Kim D-H, Jeong S-N, Choi S-H. Detection and diagnosis of dental caries using a deep learning-based convolutional neural network algorithm. Journal of dentistry. 2018, 77, 106–111. [Google Scholar] [CrossRef]
  39. Schwendicke F, Elhennawy K, Paris S, Friebertshäuser P, Krois J. Deep learning for caries lesion detection in near-infrared light transillumination images: A pilot study. J Dentistry. 2020, 92, 103260. [Google Scholar] [CrossRef] [PubMed]
  40. Devito KL, de Souza Barbosa F, Felippe Filho WN. An artificial multilayer perceptron neural network for diagnosis of proximal dental caries. Oral Surg Oral Med Oral Pathol Oral Radiol Endod 2008, 106, 879–884. [Google Scholar] [CrossRef] [PubMed]
  41. Özgür B, Ünverdi GE, Çehreli Z. Diş Çürüğünün Tespitinde Geleneksel ve Güncel Yaklaşımlar. Turkiye Klinikleri Journal of Pediatric Dentistry-Special Topics. 2018, 4, 1–9. [Google Scholar]
Figure 1. Caries labeling on panoramic images with a polygonal tool using the Colabeler labeling software (MacGenius, Blaze Software, California, USA).
Figure 2. Diagram of the development stages of the AI models for primary dentition, mixed dentition and permanent dentition.
Figure 3. Caries segmentation using AI model on panoramic radiographs of children in primary dentition, mixed dentition, and permanent dentition.
Figure 4. ROC and Precision-Recall Curve for total caries segmentation model including primary dentition, mixed dentition and permanent dentition.
Figure 5. Diagram of the development stages of the AI model for the total group (primary, mixed, and permanent dentition).
Table 1. Caries segmentation performance of the AI model on panoramic radiographs of children, calculated using the confusion matrix, in primary dentition, mixed dentition, permanent dentition, and the total group.
Metrics and Measurements Primary Dentition Mixed Dentition Permanent Dentition Total (Primary+Mixed+Permanent)
True positive (TP) 1006 467 866 2653
False positive (FP) 96 41 83 255
False negative (FN) 174 166 181 555
Sensitivity 0.8525 0.7377 0.8271 0.8269
Precision 0.9128 0.9192 0.9125 0.9123
F1 score 0.8816 0.8185 0.8677 0.8675
Note: The English in this document has been checked by at least two professional editors, both native speakers of English. For a certificate, please see: http://www.textcheck.com/certificate/83DGHG
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.