Preprint
Article

Development of The AI Pipeline for Corneal Opacity Detection


A peer-reviewed article of this preprint also exists.

Submitted: 20 February 2024
Posted: 20 February 2024

Abstract
Ophthalmological services face global inadequacies, especially in low- and middle-income countries, which are marked by a shortage of both practitioners and equipment. This study employed a portable slit-lamp microscope with video capabilities and cloud storage for more equitable global distribution of diagnostic resources. To enhance accessibility and quality of care, this study targeted corneal opacity, a global cause of blindness. To improve the efficiency of online diagnosis, an AI pipeline was developed to detect corneal opacity from anterior segment videos. First, we extracted image frames from the videos and trained a convolutional neural network (CNN) model on them. Second, we manually annotated the frames to extract only the corneal margins, adjusted the contrast with CLAHE, and retrained the CNN model. Finally, we performed semantic segmentation of the cornea using the annotated data. The results showed an accuracy of 0.8 for the raw image frames and 0.96 for the extracted corneal margins; Dice and IoU were both 0.94 for semantic segmentation of the corneal margins. While corneal opacity detection from video frames seemed challenging in the early stages of this study, manual annotation, corneal extraction, and CLAHE contrast adjustment significantly improved accuracy. Integrating manual annotation into the AI pipeline through semantic segmentation achieved high accuracy in detecting corneal opacity.
Subject: Medicine and Pharmacology - Ophthalmology

1. Introduction

Despite the global increase in the number of ophthalmologists, a significant shortage remains in developing countries [1]. This shortage is compounded by limited access to appropriate surgical technologies and diagnostic tools [2,3]. Deployment of local ophthalmologists is considered a cost-effective solution; however, the scarcity of professionals in developing countries remains a challenge [4].
The Smart Eye Camera (SEC) [5], used in this study to photograph the anterior segment of the eye, was invented and developed by a practicing ophthalmologist to address problems encountered in ophthalmic care in Japan and in developing countries, and it has been successfully put into practical use as an approved medical device. The SEC is a smartphone attachment that enables observation of various anterior segment structures of the eye, including the eyelid, conjunctiva, cornea, anterior chamber, iris, lens, and anterior vitreous, mirroring the functionality of conventional slit-lamp microscopy [6,7]. Furthermore, the SEC facilitates the preliminary estimation and identification of several anterior segment pathologies, such as cataracts [8], primary angle closure [9], allergic conjunctivitis [10], and dry eye disease [11,12]. Its integration with smartphone technology not only enhances accessibility but also potentially expands the scope of ophthalmologic diagnostics in various settings. Additionally, an image-filing system using a dedicated application enables remote ophthalmic care. The development of the SEC has made it possible for anyone to perform eye examinations at any time, regardless of location. Ophthalmologists currently diagnose anterior segment videos sent via the cloud, and we are conducting research and development on AI-based diagnosis to support them.
Deep learning has been applied in various ways to diagnose conditions that affect the anterior segment of the eye. Applications range from detecting angle closure in anterior segment optical coherence tomography (AS-OCT) images to diagnosing dry eye disease (DED) and identifying peripheral anterior synechia (PAS). For instance, a deep learning system was developed for angle-closure detection in AS-OCT images, which surpassed previous methods by utilizing a multilevel deep network that captured subtle visual cues from the global anterior segment structure, the local iris region, and the anterior chamber angle (ACA) patch [13]. Another study evaluated a deep learning-based method to autonomously detect DED in AS-OCT images, which showed promising results compared with standard clinical dry eye tests [14]. Deep learning classifiers have also been used to measure peripheral anterior synechia based on swept-source optical coherence tomography (SS-OCT) images, demonstrating good diagnostic performance for gonioscopic angle closure and moderate performance for PAS detection [15]. In addition, deep learning classifiers have been developed to detect gonioscopic angle closure and primary angle closure disease (PACD) based on fully automated analysis of AS-OCT images, showing effective detection capabilities [16]. Another study focused on the diagnostic performance of deep learning for predicting plateau iris in patients with primary angle-closure disease using AS-OCT images, revealing high predictive performance [17]. Finally, a deep learning model was developed for automated detection of eye laterality in anterior segment photographs, which achieved high accuracy and outperformed human experts [18]. In summary, deep learning has shown significant potential in the diagnosis of various anterior eye conditions, offering automated, accurate, and noninvasive methods that could enhance clinical evaluations and improve access to eye care in high-risk populations [13,14,15,16,17,18].
Deep learning models have also shown significant promise in biomedicine more broadly, particularly for the diagnosis of systemic diseases; however, several challenges accompany their application. One primary concern is the need to guarantee the performance of deep learning systems once they are deployed in a clinical setting: the very flexibility and power of deep learning make it difficult to ensure consistent and reliable outcomes [19]. Moreover, there is a critical need to establish trust among stakeholders, including clinicians and regulators, who require transparent and interpretable decision-making processes. The complexity of deep learning models often leads to a ’black box’ scenario in which the rationale behind predictions is not easily understood or explained, and this lack of transparency can hinder the adoption of deep learning in clinical practice [19]. In ophthalmology, deep learning has demonstrated potential in automated image analysis for detecting diseases such as diabetic retinopathy, age-related macular degeneration, and glaucoma. Despite high accuracy in initial studies, further testing and research are necessary to validate these technologies clinically, which highlights the challenge of moving from research and development to practical clinical application [20].
A systematic review and meta-analysis comparing the diagnostic accuracy of deep learning algorithms and healthcare professionals found that, while deep learning models can match the performance of healthcare professionals, studies providing externally validated results are scarce. The review also identified the prevalent issue of poor reporting in deep learning studies, which undermines the ability to reliably interpret diagnostic accuracy; new reporting standards that address the unique challenges of deep learning are essential for improving the quality of future studies and fostering greater confidence in the technology [21]. In summary, the challenges of applying deep learning models to diagnose systemic diseases include ensuring reliable performance, establishing trust through transparency and interpretability, clinically validating the technology, and improving the quality of reporting in deep learning studies [19,20,21].
Deep learning models have likewise shown significant promise in ophthalmology, particularly for the detection and diagnosis of ocular diseases, but they have several limitations that must be considered. One primary limitation is the need for further testing and clinical validation. Although deep learning models have demonstrated high accuracy in automated image analysis of fundus photographs and optical coherence tomography images, additional research is required to validate these technologies in clinical settings [22]. Another limitation is the lack of disease specificity and of generalizability. Despite the decent performance reported in previous studies, most deep learning models developed to identify systemic diseases from ocular data lack the specificity required for individual diseases and are not yet generalizable to the broader public for real-world applications [23]. Furthermore, deep learning models can be computationally expensive, and deploying them on edge devices may pose a challenge. This is particularly relevant given the variety of available models and the potential need to combine models to solve a given task; the computational demands of these models may limit their practicality in certain clinical settings [24]. Lastly, while deep learning models can predict the development of diseases such as glaucoma with reasonable accuracy, they may miss certain cases, especially eyes with visual field abnormalities but without glaucomatous optic neuropathy. This indicates that, although deep learning models are powerful tools, they cannot yet fully replace the nuanced judgment of trained medical professionals [25]. In summary, while deep learning models hold great potential for revolutionizing the diagnosis of ocular diseases, their limitations in terms of clinical validation, disease specificity, computational demands, and potential to miss certain cases must be addressed before they can be fully integrated into clinical practice.
The purpose of this study was to develop an AI pipeline that determines the presence of corneal opacity from anterior segment videos captured with a portable slit-lamp microscope, using deep learning techniques.

2. Materials and Methods

This study was conducted in strict accordance with the principles of the Declaration of Helsinki. Ethical approval for the study protocol was obtained from the Institutional Ethics Review Board of the Minamiaoyama Eye Clinic, Tokyo, Japan (IRB No. 15000127; Approval No. 202101). Owing to the retrospective design of the study and the use of anonymized data, the board waived the requirement for written informed consent from the participants.
Anterior segment videos were captured using a portable slit-lamp microscope (Smart Eye Camera, SEC; SLM-i07/SLM-i08SE, OUI Inc., Tokyo, Japan; 13B2X10198030101/13B2X10198030201) (Figure 1). By attaching this device to a smartphone, eye examinations can be performed in the same way as with existing slit-lamp microscopes. Evidence from several regions shows that the device does not require battery replacement or charging, is easy to carry, and exhibits the same performance and safety as existing medical devices [6,7].
Data acquisition for this study was centralized at a single ophthalmological facility, the Yokohama Keiai Eye Clinic. The recordings, systematically obtained between July 2020 and December 2021, were collated on a dedicated cloud server, constituting the dataset for this study. During recording, skilled ophthalmologists directed the SEC toward the patient’s cornea, leveraging the device’s white diffused light to facilitate clear visualization. The video capture protocol was aligned with the methodologies conventionally associated with slit-lamp microscopes, ensuring standardization of the visual data. To further emulate routine clinical assessment conditions, patients were advised to avoid blinking during video recording, enhancing the consistency and clinical relevance of the collected data.
First, 30 diffuse light videos were decomposed into image frames, and frames that could be used as validation data were selected. A total of 5996 images (1617 positive and 4379 negative frames) were used to detect corneal opacity. The resolution of all images was 1280 × 720 pixels. Using these verified images, we attempted to detect corneal opacity by image classification. EfficientNet-B4 [26] was used as the convolutional neural network (CNN) model and cross-validation was performed, but almost no positive frames were detected correctly.
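As an illustration of this first step, the following is a minimal sketch of decomposing a video into image frames with OpenCV; the file paths and the sampling stride are illustrative assumptions, not details reported in this study.

```python
# Minimal frame-extraction sketch with OpenCV (paths and stride are assumptions).
import cv2
from pathlib import Path

def extract_frames(video_path: str, out_dir: str, stride: int = 5) -> int:
    """Save every `stride`-th frame of the video as a PNG; return the count saved."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()  # BGR NumPy array, shape (720, 1280, 3) for this dataset
        if not ok:
            break
        if index % stride == 0:
            cv2.imwrite(f"{out_dir}/frame_{index:05d}.png", frame)
            saved += 1
        index += 1
    cap.release()
    return saved
```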
Therefore, to improve detection accuracy, we manually annotated the cornea, the region of interest (ROI), extracted an image of only the cornea, adjusted the contrast using contrast-limited adaptive histogram equalization (CLAHE) [27], and performed image classification again using EfficientNet-B4 as the CNN model. The data structures used for learning and prediction are presented in Table 1. The proportion of underlying diseases in the data is shown in the pie chart in Figure 2, the cornea extraction procedure in Figure 3, and the contrast change before and after CLAHE processing in Figure 4.
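For concreteness, a minimal sketch of these two preprocessing steps, masking a frame down to the annotated corneal ROI and then applying CLAHE, is shown below using OpenCV. The clip limit and tile grid size are assumed values; the paper does not report them.

```python
# Sketch of ROI extraction from a binary cornea mask, followed by CLAHE
# on the lightness channel (clipLimit and tileGridSize are assumptions).
import cv2
import numpy as np

def extract_cornea(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Zero out everything outside the 8-bit binary cornea mask and crop to its bounding box."""
    roi = cv2.bitwise_and(image, image, mask=mask)
    x, y, w, h = cv2.boundingRect(mask)
    return roi[y:y + h, x:x + w]

def apply_clahe(image_bgr: np.ndarray) -> np.ndarray:
    """Contrast-limited adaptive histogram equalization on the L channel of LAB."""
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```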
This confirmed that corneal opacity could be detected from anterior segment images. However, to incorporate this step into an automated prediction system, which we call an AI pipeline, the manual annotation used to extract the cornea (the ROI) from the anterior segment image must itself be automated.
The hyperparameters for training were 30 epochs, a batch size of 8, and a learning rate of 0.0001, with EfficientNet-B4 defaults for the rest. Data augmentation during training included resizing (512 × 512), flipping up and down with probability 1/2, and flipping left and right with probability 1/2.
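The sketch below reconstructs this classification training setup under the assumption of a PyTorch/timm implementation; the paper names only EfficientNet-B4, not the framework, so the library choices are ours.

```python
# Classification setup with the stated hyperparameters (30 epochs, batch size 8,
# lr 1e-4, 512x512 resize, random flips); framework choice is an assumption.
import timm
import torch
from torch import nn
from torchvision import transforms

model = timm.create_model("efficientnet_b4", pretrained=True, num_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

train_transform = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.RandomVerticalFlip(p=0.5),    # flip up/down with probability 1/2
    transforms.RandomHorizontalFlip(p=0.5),  # flip left/right with probability 1/2
    transforms.ToTensor(),
])

EPOCHS, BATCH_SIZE = 30, 8
```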
Therefore, we trained a semantic segmentation model to segment the cornea from the anterior segment image, reusing as training data the annotation masks used to extract the cornea together with the original images. U-Net [28] with an EfficientNet-B4 encoder was adopted as the semantic segmentation model.
The hyperparameters for the semantic segmentation training were 40 epochs and a batch size of 10. The learning rate started at 0.001 and decreased to 0.0001 after 25 epochs. All other values were the U-Net/EfficientNet-B4 defaults. Data augmentation during training included resizing (256 × 256), flipping left/right with probability 1/2, affine transformation, edge padding according to image size, random cropping, Gaussian noise with probability 1/5, and perspective transformation with probability 1/2. In addition, one of the following three sets of augmentations was applied with probability 9/10: the first set included CLAHE, brightness adjustment, and gamma transformation; the second set included sharpening, blur, and motion blur; the third set included contrast adjustment and changes in hue, saturation, and luminance.
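A possible reconstruction of this segmentation setup is sketched below using segmentation_models_pytorch and albumentations; only the operations and their probabilities come from the text, so all specific augmentation parameters are assumptions.

```python
# U-Net with an EfficientNet-B4 encoder plus the described augmentation policy,
# rebuilt in albumentations (parameter values are assumptions).
import albumentations as A
import segmentation_models_pytorch as smp

model = smp.Unet(encoder_name="efficientnet-b4", encoder_weights="imagenet",
                 in_channels=3, classes=1)

train_aug = A.Compose([
    A.Resize(256, 256),
    A.HorizontalFlip(p=0.5),                     # flip left/right, probability 1/2
    A.Affine(scale=(0.9, 1.1), rotate=(-15, 15), p=0.5),  # affine transformation
    A.PadIfNeeded(min_height=256, min_width=256),         # pad edges to size
    A.RandomCrop(256, 256),                      # crop at random
    A.GaussNoise(p=0.2),                         # Gaussian noise, probability 1/5
    A.Perspective(p=0.5),                        # perspective, probability 1/2
    A.OneOf([                                    # one of three sets, probability 9/10
        A.Compose([A.CLAHE(), A.RandomBrightnessContrast(), A.RandomGamma()]),
        A.Compose([A.Sharpen(), A.Blur(), A.MotionBlur()]),
        A.Compose([A.RandomBrightnessContrast(), A.HueSaturationValue()]),
    ], p=0.9),
])
```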
This study was conducted on a Windows 11 system with the following specifications: CPU, Intel Core i7-11700KF; memory, 128 GB; GPU, NVIDIA GeForce RTX 4070.
In this way, we completed an AI pipeline that uses semantic segmentation to extract the cornea (the ROI) from anterior segment images and deep learning-based image classification to detect corneal opacity.
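Conceptually, the completed pipeline chains these stages as in the sketch below, which reuses the helper functions from the earlier sketches; segment_cornea and classify_opacity are illustrative placeholders for the trained models, not names from the study.

```python
# High-level pipeline sketch: video -> frames -> cornea mask -> ROI crop ->
# CLAHE -> opacity classification. Reuses extract_cornea/apply_clahe from above.
import cv2
import numpy as np

def segment_cornea(frame: np.ndarray) -> np.ndarray:
    """Placeholder for the trained U-Net/EfficientNet-B4 segmentation model."""
    raise NotImplementedError

def classify_opacity(roi: np.ndarray) -> bool:
    """Placeholder for the trained EfficientNet-B4 opacity classifier."""
    raise NotImplementedError

def predict_video(video_path: str) -> list[bool]:
    """Return a per-frame opacity decision for one anterior segment video."""
    results = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = segment_cornea(frame)                    # binary cornea mask
        roi = apply_clahe(extract_cornea(frame, mask))  # ROI crop + contrast
        results.append(classify_opacity(roi))
    cap.release()
    return results
```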

3. Results

Table 2 shows the confusion matrix obtained after manually annotating the cornea as the region of interest (ROI), extracting only the cornea, adjusting the contrast with CLAHE, and training the CNN (EfficientNet-B4).
Table 3 lists the metrics derived from the outcomes predicted by the model. The evaluation yielded commendable results across several key indicators: sensitivity, specificity, accuracy, and the Area Under the Curve (AUC). The values obtained were as follows: sensitivity of 0.96 (95% Confidence Interval [CI]: 0.97–0.99), specificity of 0.96 (95% CI: 0.97–0.99), accuracy of 0.96 (95% CI: 0.97–0.99), and an AUC of 0.98 (95% CI: 0.98–0.99).
Figure 5 depicts the Receiver Operating Characteristic (ROC) curve, illustrating the diagnostic ability of the model across various threshold settings.
Table 4 shows the outcomes of the corneal semantic segmentation, as predicted by the model. The Dice coefficient, also referred to as the F1 score, had a substantial value of 0.94. Furthermore, Intersection over Union (IoU), another critical metric for segmentation performance, similarly registered a notable value of 0.94.
The Dice coefficient is also called the “Sørensen–Dice index” or the “Sørensen–Dice coefficient.” The Dice coefficient DSC(A, B) for sets A and B is defined by the following equation:
DSC(A, B) = 2|A ∩ B| / (|A| + |B|)
The Dice coefficient is the ratio of the number of elements the two sets have in common to the average number of elements in the two sets, and takes a value between zero and one. The larger the Dice coefficient, the more similar the two sets.
The Intersection over Union (IoU) is an evaluation metric used in object detection and segmentation that represents the degree of overlap between two regions. It takes a maximum value of 1 when the detected and true areas overlap completely and a minimum value of 0 when there is no overlap at all. The IoU for regions A and B is calculated as follows:
IoU = |A ∩ B| / |A ∪ B|
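For binary masks stored as boolean arrays, both metrics reduce to a few lines of NumPy, as in this small illustrative sketch (assuming at least one mask is non-empty):

```python
# Dice and IoU for boolean segmentation masks of the same shape.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())   # 2|A ∩ B| / (|A| + |B|)

def iou(a: np.ndarray, b: np.ndarray) -> float:
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union                        # |A ∩ B| / |A ∪ B|
```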

4. Discussion

We believe that three approaches contributed to the improved prediction accuracy for corneal opacity. First, the cornea, the ROI, was extracted from the anterior segment image. Second, changing the input from the entire anterior segment image to the cornea-only ROI meant that less downscaling was needed, reducing the loss of image features. Third, CLAHE was applied to optimize contrast.
Reusing the anterior segment images from the corneal opacity model training phase, together with the mask images used to extract the cornea, also proved effective for training the corneal semantic segmentation model. The corneal semantic segmentation model eliminates the need to extract the cornea manually and allows this step to be integrated into the AI pipeline. The corneal opacity prediction AI pipeline begins with the selection of anterior segment image frames deemed suitable for diagnosis from the video, followed by extraction of the cornea through semantic segmentation, resulting in an accurate diagnosis of corneal opacity.
Despite the constraints presented by the limited size of the sampling dataset (comprising 5996 frames, with 1617 positive and 4379 negative frames), this study successfully developed a model with high diagnostic accuracy for corneal opacity. It is noteworthy that previous research in the domain of ocular image analysis often utilized datasets that exceeded thousands of annotated cases [29,30]. Conversely, the current study leveraged video data as the primary raw material, capitalizing on the potential to extract multiple image frames from a single video sequence. This methodology aligns with the techniques employed in prior research focused on the development of automated diagnostic AI systems [31,32], wherein methods such as cropping, flipping, and other forms of data augmentation are utilized to effectively expand the dataset from a single image. The implementation of these techniques, particularly the strategic use of video data for frame extraction and image amplification [33], is posited as a pivotal factor contributing to the development of a high-performance model despite the relatively modest size of the dataset.
The current study had several limitations. First, the sample size was small. Despite the retrospective nature of the study, wherein the use of video recordings served to augment the dataset, the scope of the data remained relatively constrained. To develop robust and adaptable AI models, particularly for imaging analysis, far more extensive datasets are needed; the limited sample size of this study may therefore impede the generalizability and comprehensive applicability of the derived models. In addressing this limitation, this study drew inspiration from prior literature that demonstrated enhanced detection of keratitis through augmentation of single anterior segment images, achieving a six-fold increase in data quantity by flipping, rotating, and cropping [34]. Similarly, our approach involved meticulous recording of digital anterior segment videos, thereby amplifying the volume of raw data [31,32]. Second, the dataset was sourced exclusively from a single medical institution, which may limit the external validity and generalizability of the findings. To ensure broader clinical applicability and substantiate the robustness of the conclusions, future research should validate the models against external datasets, ideally in a large-scale cohort comprising data from multiple medical facilities. This comprehensive approach will be instrumental in enhancing the reliability and relevance of AI models in diverse clinical settings.
In the context of developing diagnostic AI programs for medical applications, determining the optimal performance benchmarks, particularly for diagnostic goals, presents a substantial challenge. This is exemplified in the realm of ophthalmology, where certain diseases are leading causes of blindness globally. Previous investigations, including our own, have underscored the potential of AI to achieve diagnostic accuracies comparable to, if not surpassing, those of human specialists. For instance, our prior research indicated that AI-based diagnostics could achieve over 95% accuracy in comparison with evaluations conducted by ophthalmologists in the context of a disease with a significant worldwide blindness burden [29]. Furthermore, Hu et al. reported an impressive diagnostic accuracy of 93.5% with an AUC of 0.9198 [35], indicating a high level of diagnostic precision. Additionally, a cross-sectional study by Son et al. demonstrated AI’s robust diagnostic performance, with an accuracy of 90.26% and an AUC of 0.9465 [36], further evidencing AI’s capability to accurately diagnose medical conditions. Moreover, recent studies provide compelling evidence on the efficacy of AI algorithms in distinguishing between infectious keratitis and immunological keratitis through image analysis. A notable report highlights the exceptional performance of the AI algorithm, as evidenced by AUC values of 0.986 for infectious keratitis and 0.960 for immunological keratitis [37]. These findings underscore the algorithm’s broad applicability not only in the identification of keratitis subtypes, but also in its performance across a range of ocular conditions, including corneal scars, ocular surface tumors, corneal deposits, acute angle-closure glaucoma, cataracts, and bullous keratopathy [37]. The deployment of this technology in ophthalmology clinics for professional use signifies a significant advancement in the field. It enables healthcare providers to more accurately identify the underlying causes of ocular diseases, thereby facilitating the determination of appropriate differential treatment methods. This development represents a pivotal step toward integrating AI into clinical practice, offering a promising tool for enhancing diagnostic accuracy and improving patient outcomes in ophthalmology. These findings collectively suggest that high diagnostic accuracy should be a key consideration in establishing performance benchmarks for AI systems aimed at diagnosing corneal opacity. Such evidence supports the argument for setting ambitious yet achievable accuracy goals in the development and evaluation of AI diagnostics, thereby enhancing their utility and reliability in clinical settings.
In the existing literature, there is a scarcity of studies employing deep learning methodologies for the identification of corneal opacity in images acquired via slit-lamp microscopy; this study is therefore pioneering in its endeavor to develop a highly accurate model for the detection of corneal opacity. Moreover, the application of AI to the diagnosis of ocular pathologies from medical examination videos remains a nascent field. This research therefore holds significance in both respects: the development of a precise model for corneal opacity detection and the exploration of AI-based diagnostic methodologies for examination videos in ophthalmology.

5. Conclusions

The accuracy of the prediction model for the presence or absence of corneal opacity was improved by manually extracting only the cornea, the ROI, from images of the anterior segment of the eye for training.
Furthermore, to incorporate this step into the AI pipeline without manual cornea extraction, we succeeded in training a semantic segmentation model using the original images and mask images that had been used to train the corneal opacity prediction model.
The corneal opacity detection AI pipeline can seamlessly execute the process of extracting still images from videos, extracting the cornea from each still image using semantic segmentation, and classifying whether the cornea is opaque.
Additionally, if the corneal opacity detection module in this AI pipeline is replaced with detection modules for other anterior segment diseases, a general-purpose anterior segment diagnosis AI pipeline could be constructed in the future. If this can be achieved, it will be possible to triage large numbers of anterior segment videos taken during health checkups in a short period of time, which we believe will reduce the burden on ophthalmologists.
To improve the completeness of this corneal opacity detection AI pipeline, we believe that three issues need to be addressed: increasing the amount of training data to improve the classification accuracy of the corneal opacity detection model, selecting images suitable for diagnosis from the frames decomposed from videos, and classifying whether the eye in a diagnostic image is the right eye or the left eye.
We plan to continue this research to complete an anterior segment diagnostic AI pipeline.

References

  1. Resnikoff S, Felch W, Gauthier T, et al. The number of ophthalmologists in practice and training worldwide: a growing gap despite more than 200,000 practitioners. Br J Ophthalmol. 2012;96:783-787. [CrossRef]
  2. Schwab L, Whitfield R Jr. Appropriate ophthalmic surgical technology in developing nations. Ophthalmic Surgery, Lasers and Imaging Retina. 2013;13(12):991-993. [CrossRef]
  3. Singh J, Kabbara S, Conway M, Peyman G, Ross RD. Innovative diagnostic tools for ophthalmology in low-income countries. In: Novel Diagnostic Methods in Ophthalmology. IntechOpen; 2019. [CrossRef]
  4. Chirambo MC. The role of Western ophthalmologists in dealing with cataract blindness in developing countries. Doc Ophthalmol. 1992;81:349-350. [CrossRef]
  5. Smart Eye Camera: Ophthalmic Exams Via Your Smartphone Anywhere, Anytime. https://ouiinc.jp (accessed on 10 January 2024).
  6. Handayani AT, Valentina C, Suryaningrum IGAR, Megasafitri PD, Juliari IGAM, Pramita IAA, Nakayama S, Shimizu E, Triningrat AAMP. Interobserver Reliability of Tear Break-Up Time Examination Using “Smart Eye Camera” in Indonesian Remote Area. Clin Ophthalmol. 2023 Jul 24;17:2097-2107. [CrossRef]
  7. Andhare P, Ramasamy K, Ramesh R, Shimizu E, Nakayama S, Gandhi P. A study establishing sensitivity and accuracy of smartphone photography in ophthalmologic community outreach programs: Review of a smart eye camera. Indian J Ophthalmol. 2023 Jun;71(6):2416-2420. [CrossRef]
  8. Yazu H, Shimizu E, Okuyama S, Katahira T, Aketa N, Yokoiwa R, Sato Y, Ogawa Y, Fujishima H. Evaluation of Nuclear Cataract with Smartphone-Attachable Slit-Lamp Device. Diagnostics (Basel). 2020 Aug 9;10(8):576. [CrossRef]
  9. Shimizu E, Yazu H, Aketa N, Yokoiwa R, Sato S, Yajima J, Katayama T, Sato R, Tanji M, Sato Y, Ogawa Y, Tsubota K. A Study Validating the Estimation of Anterior Chamber Depth and Iridocorneal Angle with Portable and Non-Portable Slit-Lamp Microscopy. Sensors (Basel). 2021 Feb 19;21(4):1436. [CrossRef]
  10. Yazu H, Shimizu E, Sato S, Aketa N, Katayama T, Yokoiwa R, Sato Y, Fukagawa K, Ogawa Y, Tsubota K, Fujishima H. Clinical Observation of Allergic Conjunctival Diseases with Portable and Recordable Slit-Lamp Device. Diagnostics (Basel). 2021 Mar 17;11(3):535. [CrossRef]
  11. Shimizu E, Yazu H, Aketa N, Yokoiwa R, Sato S, Katayama T, Hanyuda A, Sato Y, Ogawa Y, Tsubota K. Smart Eye Camera: A Validation Study for Evaluating the Tear Film Breakup Time in Human Subjects. Transl Vis Sci Technol. 2021 Apr 1;10(4):28. [CrossRef]
  12. Shimizu E, Ogawa Y, Yazu H, Aketa N, Yang F, Yamane M, Sato Y, Kawakami Y, Tsubota K. “Smart Eye Camera”: An innovative technique to evaluate tear film breakup time in a murine dry eye disease model. PLoS One. 2019 May 9;14(5):e0215130. [CrossRef]
  13. Sengupta, S., Singh, A., Leopold, H., Gulati, T., Lakshminarayanan, V. (2020). Ophthalmic diagnosis using deep learning with fundus images - A critical review. Artificial intelligence in medicine, 102, 101758. [CrossRef]
  14. Xu, B., Chiang, M., Chaudhary, S., Kulkarni, S., Pardeshi, A., Varma, R. (2019). Deep Learning Classifiers for Automated Detection of Gonioscopic Angle Closure Based on Anterior Segment OCT Images. American Journal of Ophthalmology. [CrossRef]
  15. Christopher, M., Bowd, C., Belghith, A., Goldbaum, M., Weinreb, R., Fazio, M., Girkin, C., Liebmann, J., Zangwill, L. (2019). Deep Learning Approaches Predict Glaucomatous Visual Field Damage from OCT Optic Nerve Head En Face Images and Retinal Nerve Fiber Layer Thickness Maps. Ophthalmology. [CrossRef]
  16. Wanichwecharungruang, B., Kaothanthong, N., Pattanapongpaiboon, W., Chantangphol, P., Seresirikachorn, K., Srisuwanporn, C., Parivisutt, N., Grzybowski, A., Theeramunkong, T., Ruamviboonsuk, P. (2021). Deep Learning for Anterior Segment Optical Coherence Tomography to Predict the Presence of Plateau Iris. Translational Vision Science & Technology, 10. [CrossRef]
  17. Zheng, C., Xie, X., Wang, Z., Li, W., Chen, J., Qiao, T., Qian, Z., Liu, H., Liang, J., Chen, X. (2021). Development and validation of deep learning algorithms for automated eye laterality detection with anterior segment photography. Scientific Reports, 11. [CrossRef]
  18. Chase, C., Elsawy, A., Eleiwa, T., Ozcan, E., Tolba, M., Shousha, M. (2021). Comparison of Autonomous AS-OCT Deep Learning Algorithm and Clinical Dry Eye Tests in Diagnosis of Dry Eye Disease. Clinical Ophthalmology (Auckland, N.Z.), 15, 4281 - 4289. [CrossRef]
  19. Wainberg, M., Merico, D., Delong, A., Frey, B. (2018). Deep learning in biomedicine. Nature Biotechnology, 36, 829-838. [CrossRef]
  20. Liu, X., Faes, L., Kale, A., Wagner, S., Fu, D., Bruynseels, A., Mahendiran, T., Moraes, G., Shamdas, M., Kern, C., Ledsam, J., Schmid, M., Balaskas, K., Topol, E., Bachmann, L., Keane, P., Denniston, A. (2019). A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. The Lancet Digital Health, 1(6), e271-e297. [CrossRef]
  21. Rahimy, E. (2018). Deep learning applications in ophthalmology. Current Opinion in Ophthalmology, 29, 254–260. [CrossRef]
  22. Thakur, A., Goldbaum, M., Yousefi, S. (2020). Predicting Glaucoma before Onset Using Deep Learning. Ophthalmology Glaucoma, 3(4), 262-268. [CrossRef]
  23. Iao, W., Zhang, W., Wang, X., Wu, Y., Lin, D., Lin, H. (2023). Deep Learning Algorithms for Screening and Diagnosis of Systemic Diseases Based on Ophthalmic Manifestations: A Systematic Review. Diagnostics, 13. [CrossRef]
  24. M, M., Prasad, D., Kulkarni, M., K, S., S, V. (2022). A Systematic Study of Deep Learning Architectures for Analysis of Glaucoma and Hypertensive Retinopathy. International Journal of Artificial Intelligence & Applications. [CrossRef]
  25. Rahimy, E. (2018). Deep learning applications in ophthalmology. Current Opinion in Ophthalmology, 29, 254–260. [CrossRef]
  26. Tan, M., Le, Q. V. (2019). EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv preprint arXiv:1905.11946. [CrossRef]
  27. Zuiderveld, K. (1994). Contrast limited adaptive histogram equalization. In: Graphics Gems IV, Academic Press, pp. 474-485. [CrossRef]
  28. Ronneberger, O., Fischer, P., Brox, T. (2015). U-Net: Convolutional Networks for Biomedical Image Segmentation. In: Navab, N., Hornegger, J., Wells, W., Frangi, A. (eds) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. MICCAI 2015. Lecture Notes in Computer Science(), vol 9351. Springer, Cham. [CrossRef]
  29. Ueno Y, Oda M, Yamaguchi T, Fukuoka H, Nejima R, Kitaguchi Y, Miyake M, Akiyama M,Miyata K, Kashiwagi K, Maeda N, Shimazaki J, Noma H, Mori K, Oshika T. Deep learning model for extensive smartphone-based diagnosis and triage of cataracts and multiple corneal diseases. Br J Ophthalmol. 2024 Jan 19:bjo-2023-324488. [CrossRef]
  30. Wu X, Huang Y, Liu Z, Lai W, Long E, Zhang K, Jiang J, Lin D, Chen K, Yu T, Wu D, Li C, Chen Y, Zou M, Chen C, Zhu Y, Guo C, Zhang X, Wang R, Yang Y, Xiang Y, Chen L, Liu C, Xiong J, Ge Z, Wang D, Xu G, Du S, Xiao C, Wu J, Zhu K, Nie D, Xu F, Lv J, Chen W, Liu Y, Lin H. Universal artificial intelligence platform for collaborative management of cataracts. Br J Ophthalmol. 2019 Nov;103(11):1553-1560. [CrossRef]
  31. Shimizu E, Tanji M, Nakayama S, Ishikawa T, Agata N, Yokoiwa R, Nishimura H, Khemlani RJ, Sato S, Hanyuda A, Sato Y. AI-based diagnosis of nuclear cataract from slit-lamp videos. Sci Rep. 2023 Dec 12;13(1):22046. [CrossRef]
  32. Shimizu E, Ishikawa T, Tanji M, Agata N, Nakayama S, Nakahara Y, Yokoiwa R, Sato S, Hanyuda A, Ogawa Y, Hirayama M, Tsubota K, Sato Y, Shimazaki J, Negishi K. Artificial intelligence to estimate the tear film breakup time and diagnose dry eye disease. Sci Rep. 2023 Apr 10;13(1):5822. [CrossRef]
  33. Li Z, Jiang J, Chen K, Chen Q, Zheng Q, Liu X, Weng H, Wu S, Chen W. Preventing corneal blindness caused by keratitis using artificial intelligence. Nat Commun. 2021 Jun 18;12(1):3738. [CrossRef]
  34. Li Z, Jiang J, Chen K, Chen Q, Zheng Q, Liu X, Weng H, Wu S, Chen W. Preventing corneal blindness caused by keratitis using artificial intelligence. Nat Commun. 2021 Jun 18;12(1):3738. [CrossRef]
  35. S. Hu et al., "Unified Diagnosis Framework for Automated Nuclear Cataract Grading Based on Smartphone Slit-Lamp Images," in IEEE Access, vol. 8, pp. 174169-174178, 2020. [CrossRef]
  36. Son, Ki Young et al. “Deep Learning-Based Cataract Detection and Grading from Slit-Lamp and Retro-Illumination Photographs: Model Development and Validation Study.” Ophthalmology science vol. 2,2 100147. 18 Mar. 2022. [CrossRef]
  37. Ueno, Y., Oda, M., Yamaguchi, T., Fukuoka, H., Nejima, R., Kitaguchi, Y., Miyake, M., Akiyama, M., Miyata, K., Kashiwagi, K., Maeda, N., Shimazaki, J., Noma, H., Mori, K., Oshika, T. (2024). Deep learning model for extensive smartphone-based diagnosis and triage of cataracts and multiple corneal diseases. The British journal of ophthalmology, bjo-2023-324488. Advance online publication. [CrossRef]
Figure 1. Smart Eye Camera.
Figure 2. Ratio of underlying diseases. Bullous keratopathy and senilis account for the majority of the underlying diseases.
Figure 3. The upper left panel shows the original image frame extracted from the movie. The upper right panel shows the annotated mask of the cornea. The lower left corner is the cornea extracted using an annotated mask. The lower right corner is the extracted cornea (ROI only) and is an input image for training. The size of the input image is smaller than that of the original image.
Figure 4. The upper left panel shows the extracted cornea image. The upper right is after CLAHE processing of the left image. The lower left panel shows the other extracted cornea image. The lower right is after CLAHE processing of the left image. It can be observed that the contrast of both images was improved.
Figure 5. Receiver operating characteristic curve for prediction.
Table 1. Data structure.

            negative   positive   total
train/val   188        188        376
test        47         47         94
Table 2. Confusion matrix.

True positive: 45     False negative: 2
False positive: 2     True negative: 45
Table 3. Performance of the model.

Sensitivity   0.96 (95% CI 0.97-0.99)
Specificity   0.96 (95% CI 0.97-0.99)
Accuracy      0.96 (95% CI 0.97-0.99)
AUC           0.98 (95% CI 0.98-0.99)
Table 4. Dice and IoU of semantic segmentation.

Dice   0.94
IoU    0.94