1. Introduction
Diabetic retinopathy (DR) is the leading cause of blindness in working-age American adults [1,2]. While early detection and treatment can prevent over 90% of vision loss among patients with diabetes [3,4], less than half of diabetic patients receive the recommended annual DR screening, with even lower adherence among safety-net patients [5,6]. Barriers to screening compliance include long wait times for an appointment [7] and a lack of proximity to a screening site [6].
These barriers can be addressed by teleretinal DR screenings (TDRS) during primary care visits, wherein a photographer (typically a medical assistant or licensed vocational nurse) captures retinal images on-site and digitally transmits them to a certified image reader for remote evaluation [1,8]. By capturing images at the primary care site, rather than having patients travel to a separate site to be screened by an eye care provider, TDRS can both shorten wait times and eliminate geographic barriers to screening [7].
Yet despite the convenience of TDRS, the high-resolution desktop cameras that are typically used are bulky and costly, which is prohibitive for clinics that lack the space or resources [9]. Additionally, desktop cameras are stationary, which presents challenges for certain patient populations such as those with mobility issues.
Handheld cameras could help ameliorate these issues and increase equitable access to screenings, as they are more economical and nimbler than desktop cameras [10,11]. However, their use in safety-net clinical settings has not been well documented, and little is known about their feasibility in these settings.
To assess the feasibility of using handheld cameras at primary care clinics within a major safety-net system, we conducted a pilot of handheld cameras, integrating their use into an established teleretinal screening program in Los Angeles County. Primary care clinic sites within this program were already using desktop cameras. We comparatively assessed screening metrics (e.g., image quality) from both handheld and desktop cameras, and interviewed TDRS program coordinators and photographers to identify key facilitators and barriers to handheld camera use.
2. Methods
2.1. Setting
The Los Angeles County Department of Public Health (DPH) collaborated with the Los Angeles County Department of Health Services (DHS) to increase access to DHS’s TDRS program by piloting the use of handheld cameras, as part of the Centers for Disease Control and Prevention-supported Solutions for Healthier Communities initiative in Los Angeles County. DHS is the second largest safety-net system in the country, serving approximately 750,000 unique patients annually across 26 health centers and four hospitals. The DHS TDRS program has been described in detail elsewhere [7]; briefly, it has operated for over 10 years and screens approximately 2,000 patients each month, employing 56 certified medical assistant photographers using 17 desktop cameras across 16 primary care clinics. The program uses a single-drop dilation protocol (Mydriacyl 0.5-1%) to minimize ungradable images.
DHS leadership, including the TDRS program’s executive director, selected the DHS sites to participate in this project. They chose a mix of sites that were and were not already participating in the TDRS program with desktop cameras. Other considerations included whether a site served patient populations that might especially benefit from handheld cameras (e.g., patients with mobility issues), whether there was site buy-in and capacity to participate, and whether the sites were geographically distributed across Los Angeles County. Among sites already participating in the TDRS program, those with high screening volumes were prioritized.
Four brick-and-mortar clinics and two mobile van clinics were initially selected. However, the mobile van clinics, which were intended to serve patients experiencing homelessness, encountered administrative delays during the project’s timeframe. Thus, only the four brick-and-mortar clinics participated. Three of these four sites were already participating in the TDRS program.
2.2. Cameras
All four sites received the same handheld camera model: the Horus 200 Handheld Fundus Camera. This model was selected because it was the only camera that met the grant funding requirements: available in the United States at a per-camera cost not exceeding approximately $5,000. Photographers at the three sites already participating in the TDRS program, all of whom were proficient with desktop cameras, were asked to conduct desktop screenings per usual care protocols and to additionally conduct handheld camera screenings if time permitted and the patient agreed to a second set of images being taken. The desktop camera that photographers used was either the Canon CR-2 AF Non-Mydriatic Retinal Camera or the Topcon TRC-NW400 Non-Mydriatic Retinal Camera.
2.3. Training Process
Through self-training and trial and error, the DHS TDRS program coordinator and assistant coordinator familiarized themselves with the handheld camera model. After reading the user manual, they developed a training protocol for use in the field. Because the original purchase agreement did not require the camera vendor to provide training or technical support, this training protocol relied heavily on the experiences of the DHS coordinators and on the inner workings of the existing TDRS program. After developing the protocol, the coordinators trained two photographers on the operation of the handheld camera model; these two photographers were from the four brick-and-mortar clinic sites that subsequently hosted the pilot. At the time of the training, both photographers were already proficient with desktop cameras and were conducting screenings for the TDRS program. The training largely focused on learning the handheld camera features, observing how the two coordinators were using the handheld device in the field, and performing test screenings.
2.4. Inclusion/Exclusion Criteria
Patients were eligible for the TDRS program based on the following criteria: diagnosed with diabetes; able to sit up and remain still for retinal photography; no eye exam within the last year and not actively followed by an ophthalmologist/optometrist; and no acute vision loss or major eye complaints. Patients were included in this analysis if they were at least 18 years old, had agreed to be screened with handheld cameras, and were ultimately screened with these cameras. Patients screened only with desktop cameras were excluded from the analysis.
2.5. Data Collection
Screenings occurred between January and September 2023 and were conducted by one of the two photographers or one of the TDRS program coordinators. Evaluators from DPH, with support from TDRS staff, extracted patient-level data from EyePACS (EyePACS, Inc., Santa Cruz, CA), an online platform that the DHS TDRS program uses to capture, store, and transmit retinal images and associated patient information to readers. Information extracted from this platform included: patient demographics (e.g., race/ethnicity, date of birth), health characteristics based on the photographer’s review of the patient’s chart (e.g., years with diabetes, A1C), the patient’s last eye exam (based on the photographer’s chart review and confirmation by the patient), and the reader’s evaluation of the images (e.g., diagnosis, image quality). To assess interrater reliability, the project’s three readers independently evaluated the handheld and desktop image sets from the same three randomly selected patients.
Key informant interviews were conducted with the coordinators and photographers in September 2023. DPH evaluators initially interviewed the two program coordinators jointly in the same session. The TDRS assistant program coordinator then interviewed the two photographers separately at their regular clinic locations. Interview questions, based on an internally developed guide, explored facilitators of and barriers to using handheld cameras in the field, the coordination and training needed to become familiar with the handheld camera model, and other aspects of general device usage.
The program coordinator met monthly with DPH evaluators to discuss implementation progress, identify and troubleshoot challenges, and monitor data collection.
2.6. Data Analysis
After performing data cleaning to ensure that the data were complete and accurate, descriptive statistics were generated to describe patient characteristics and screening results. Reader ratings for the three randomly selected handheld and desktop image sets were assessed for interrater reliability using the Kappa statistic. Thematic analysis of the three interviews was carried out using a deductive coding process. Protocols and materials for this feasibility assessment and pilot of handheld cameras were reviewed and approved by the Institutional Review Board for the Los Angeles County Departments of Public Health and Health Services.
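To make the Kappa calculation concrete, the sketch below computes pairwise Cohen’s Kappa for three readers in Python. It is a minimal illustration only: the reader names and ratings are hypothetical, and the project’s actual analysis may have used different software or a multi-rater variant of the statistic.

```python
# A minimal sketch of an interrater reliability check, assuming ratings
# are coded as categorical labels. Reader names and ratings below are
# hypothetical; the project's actual data are not shown here.
from itertools import combinations

from sklearn.metrics import cohen_kappa_score

# One rating per image set, one list per reader (illustrative values).
ratings = {
    "reader_1": ["adequate", "good", "insufficient"],
    "reader_2": ["adequate", "adequate", "insufficient"],
    "reader_3": ["good", "good", "insufficient"],
}

# Cohen's Kappa is defined for two raters, so agreement among the three
# readers is summarized here as the set of pairwise Kappa values.
for (name_a, a), (name_b, b) in combinations(ratings.items(), 2):
    print(f"{name_a} vs {name_b}: kappa = {cohen_kappa_score(a, b):.2f}")
```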
3. Results
3.1. Patient Participants of the Feasibility Assessment and Pilot of Handheld Cameras
In total, 69 patients from the four brick-and-mortar clinics participated in the feasibility assessment and pilot of handheld cameras (see Table 1). Their average age was 57 years. Most (84.1%) were Latino, and over half (57.0%) were female. The average A1C was 7.6%. Half (47.8%) were diagnosed with diabetes six or more years ago. Nearly half (43.5%) had excellent control of their diabetes. Nearly three-fourths (71.0%) also had controlled hypertension. A fifth (21.7%) had never had an eye exam previously.
3.2. Image Quality and Diagnosis
All 69 patients in the sample were screened with handheld cameras, yielding 69 handheld image sets for the overall analysis. For comparison, there were also 57 desktop image sets taken of the same patients at the three participating sites that were already part of the DHS TDRS program prior to the pilot.
Overall, handheld cameras produced lower image quality ratings than desktop cameras (Figure 1). Only 1% of handheld image sets received a rating higher than adequate, compared to 43% of desktop image sets. Nearly two-thirds (64%) of handheld image sets were rated insufficient for either full or any interpretation, compared to 6% of desktop image sets.
Sample retinal images taken by the TDRS program coordinators of their own eyes are shown in Figure 2. The difference in image quality is noticeable: the handheld image sets are blurrier, darker, and less detailed than the desktop image sets. However, the second set of handheld images showed improvement, suggesting that photographers must learn how to adjust and position these portable devices before they can capture gradable images comparable in quality to those from desktop cameras.
Image readers were less likely to arrive at a diagnosis based on handheld camera image sets than on desktop camera image sets (Figure 3). A quarter (25%) of handheld image sets were given no diagnosis or were deemed ungradable, compared to 2% of desktop image sets. The percentages of image sets diagnosed with no apparent DR, mild non-proliferative DR (NPDR), severe NPDR, and proliferative DR were all lower for handheld image sets than for desktop image sets. However, the percentage diagnosed with moderate NPDR was slightly higher among handheld image sets.
To determine if screening metrics differed by camera type, image sets from handheld and desktop cameras were compared against one another. Most handheld-desktop image pairs (79%) had different image quality ratings, though only about a third (35%) had a different diagnosis. Among these, there was no discernible pattern in terms of which camera type yielded the more severe diagnosis: handheld image sets yielded a diagnosis that was (a) more severe than the desktop image sets in five of the image pairs; (b) less severe in five other image pairs; and (c) neither more nor less severe in the remaining ten image pairs (e.g., “ungradable,” “no diagnosis,” or “missing”).
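As an illustration of how such a pairwise comparison can be encoded, the sketch below maps the DR diagnoses onto an ordinal severity scale and classifies each handheld-desktop pair. The severity coding and the handling of ungradable or missing results are assumptions made for illustration; this is not the project’s actual analysis code.

```python
# A sketch of the pairwise severity comparison, assuming this ordinal
# coding of the diagnoses; the mapping and example pair are illustrative.
SEVERITY = {
    "no apparent DR": 0,
    "mild NPDR": 1,
    "moderate NPDR": 2,
    "severe NPDR": 3,
    "proliferative DR": 4,
}

def compare_pair(handheld_dx: str, desktop_dx: str) -> str:
    """Classify a handheld-desktop pair by relative diagnostic severity."""
    h = SEVERITY.get(handheld_dx)
    d = SEVERITY.get(desktop_dx)
    # Pairs with "ungradable", "no diagnosis", or "missing" results
    # cannot be ordered on the severity scale.
    if h is None or d is None:
        return "neither more nor less severe"
    if h > d:
        return "handheld more severe"
    if h < d:
        return "handheld less severe"
    return "same severity"

print(compare_pair("moderate NPDR", "mild NPDR"))  # handheld more severe
```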
The Kappa statistic revealed that readers typically had a lower level of agreement regarding diagnoses or image quality for the handheld image sets than for the desktop image sets (Table 2).
3.3. Facilitators
Results from the feasibility assessment and pilot of handheld cameras identified several facilitators that may help DHS expand its TDRS program. For instance, the pilot was integrated into an existing TDRS program infrastructure that already has many operational supports in place, including an established workflow and a data platform. Additionally, the program coordinators and photographers who participated in the pilot were experienced camera users, having already been trained on desktop cameras. This field experience helped to ease the transition from desktop to handheld cameras—the latter, as it turned out, took greater skill to operate. Staff familiarity with the general screening process and workflow likely reduced the training time needed to learn the policies, protocols, and approaches to capturing and handling images taken by the handheld cameras. The pilot also followed an existing DHS TDRS protocol that dilates patients’ eyes before screening. This standard DHS practice, developed to help photographers capture higher quality images of the retina, likely improved the visualization of the vascular structure and vessel features and made it easier for the readers to issue a diagnosis. Without dilation, the image quality from handheld cameras would have been even lower.
3.4. Barriers
Results from the feasibility assessment and pilot of handheld cameras also revealed several barriers that the TDRS program had to overcome, generating lessons learned that should be considered as DHS looks to expand its screening program. First, the handheld camera model lacked a detailed instruction manual and technical support from the vendor, leading to difficulties and delays in gaining familiarity with these cameras. The manual provided by the manufacturer contained limited information and did not include guidelines or techniques for capturing high-quality images in the field.
Second, handheld cameras took longer to set up and use than expected. For example, photographers had to repeatedly take the same image because the camera model frequently did not operate as anticipated—e.g., the screen sometimes remained dark despite the camera being turned on.
Third, while the camera model has an automatic function (i.e., once focused on a fixated target, it can capture the image clearly), photographers had a hard time maneuvering the device accurately or quickly enough for the function to work. Thus, most images from the pilot were captured manually, which came at the cost of lower quality images: even the slightest movement of the handheld model, such as manually pressing the button to capture the image, led to loss of focus and a suboptimal image. Paradoxically, these suboptimal images undermined the intended convenience of the handheld cameras—because of the poor image quality of the handheld image sets, many patients at the site where only handhelds were used were later asked to return for a follow-up screening using desktop cameras.
Fourth, handheld cameras had a short battery life, resulting in photographers needing to charge these devices frequently, inadvertently disrupting the clinic workflow.
Fifth, significant staff time was required to coordinate and implement the pilot, particularly given the challenges with operating handheld cameras. This extra time added to staff’s existing responsibilities. However, this appears to be more of an issue with program start-up than with program maintenance or sustainability, as the amount of staff time required to operate the cameras is expected to diminish as the program matures and evolves.
Finally, the intended flexibility of handheld cameras did not come to fruition during the pilot. Their unpredictable performance ultimately required patients to sit still for extended periods of time so that readable images could be captured. This inconvenience provided no benefit to those with mobility issues, who already struggle to position themselves correctly for desktop cameras.
4. Discussion
Handheld cameras performed less optimally on key screening metrics compared to desktop cameras. Among patients who were screened with both camera types, there was substantial divergence in image quality and, to a lesser extent, diagnosis.
The limited performance of handheld cameras appeared to be associated with the specific model that was selected, rather than with handheld cameras in general. The model lacked a detailed manual and vendor support, required substantial setup time, operated unpredictably, and produced suboptimal image quality. Indeed, recent studies using other handheld camera models reported better results [12]: in one study, image quality was insufficient in only approximately 15% of cases, compared to 64% in the present feasibility assessment and pilot [13]. However, even this 15% is considerably higher than the corresponding figure for desktop cameras in the DHS TDRS program.
The feasibility of using handheld cameras largely depends on selecting the right model, and the results of our pilot highlight several criteria that should inform that selection. For example, there should be adequate funding to purchase high-quality cameras, keeping in mind that handheld camera technology is steadily improving and the cost of these models should decline over time. Another consideration is the staff who are being asked to operate these cameras. A handheld camera program can achieve greater success if the intended users (e.g., program coordinators and/or photographers) are given the opportunity to weigh in on camera maneuverability and model selection, as they are the most familiar with the demands of imaging their patient population(s). Lastly, cameras should be purchased from a vendor (or manufacturer) that can provide guidance on optimal equipment use, i.e., one that offers a detailed manual, provides on-site or virtual training, and makes technical support readily available.
4.1. Lessons Learned
The lessons learned from this feasibility assessment and pilot of handheld cameras underscore several administrative and logistical factors that healthcare organizations interested in using these devices should consider. First, organizations should determine whether there is already a TDRS program in place, with existing infrastructure and staff who are experienced with teleretinal screening, or whether such a program needs to be developed. Second, organizations should ensure there are enough program coordinators with sufficient dedicated time to oversee implementation, lead trainings, and provide ongoing support for handheld camera use, given the different skillset required to use these devices compared to desktop cameras. Third, organizations should give photographers adequate time to train and conduct screenings using handheld cameras. Fourth, organizations should confirm there is buy-in among all those involved – including clinic leadership, program coordinators, and photographers – to run a handheld camera program. Finally, organizations should establish an implementation and quality improvement plan to roll out the cameras, assessing the progress and impact of these devices on teleretinal screenings in the field. Continuous quality monitoring of camera performance could help identify further gaps in program implementation and operation, facilitating corrections or mid-course adjustments when needed.
4.2. Limitations
The sample size of our feasibility assessment and pilot was relatively small and consisted only of safety-net patients in Los Angeles County. Thus, the results may not be generalizable outside of DHS or the county. Also, multiple delays during the pilot—related to camera acquisition, training, and site readiness—led to fewer screenings and participating sites than originally anticipated.
4.3. Conclusion
Handheld cameras may increase equitable access to DR screenings due to their portability, small size, and lower cost compared to desktop cameras. In particular, they may benefit hard-to-reach communities with limited access to TDRS. The key challenges encountered in our pilot, such as suboptimal image quality, appeared to be attributable to the specific camera model we used rather than to all handheld cameras available on the market. Future assessments and program refinements should focus on health system strategies that can mitigate many of these implementation challenges, and on demonstrating the value of investing in both higher-quality handheld cameras and an operational infrastructure that can support the population health goals of the intended TDRS program.
Author Contributions
Conceptualization, G.G., T.K., and L.D.; methodology, G.G., R.F., T.K., and L.D.; formal analysis, G.G. and R.F.; project implementation, E.F.; writing—original draft preparation, G.G. and R.F.; writing—review and editing, G.G., R.F., E.F., T.K., and L.D.; funding acquisition, T.K. All authors have read and agreed to the published version of this article.
Funding
This work was supported in part by a cooperative agreement from the Centers for Disease Control and Prevention, Award No. NU58DP006619. The findings and conclusions presented in this article are those of the authors and do not necessarily represent the views of the Centers for Disease Control and Prevention or other organizations mentioned in the text.
Institutional Review Board Statement
Protocols and materials for this feasibility assessment and pilot of handheld cameras were reviewed and approved by the Institutional Review Board for the Los Angeles County Departments of Public Health and Health Services.
Informed Consent Statement
Photographers asked patients for verbal consent before using the handheld cameras for retinal imaging.
Data Availability Statement
Restricted access to de-identified data may be available upon reasonable request.
Acknowledgments
The authors thank Nancy Ramirez for her assistance in implementing the project and collecting data. The authors also thank the photographers, readers, and patients who participated in this project.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Lee R, Wong TY, Sabanayagam C. Epidemiology of diabetic retinopathy, diabetic macular edema and related vision loss. Eye and Vision 2015, 2, 17. [CrossRef]
- Bastos de Carvalho A, Ware SL, Lei F, et al. Implementation and sustainment of a statewide telemedicine diabetic retinopathy screening network for federally designated safety-net clinics. PLOS ONE 2020, 15, e0241767. [CrossRef]
- Diabetes Control and Complications Trial Research Group. The effect of intensive diabetes treatment on the progression of diabetic retinopathy in insulin-dependent diabetes mellitus: the Diabetes Control and Complications Trial. Arch Ophthalmol 1995, 113, 36–51. [CrossRef]
- Cuadros J, Bresnick G. Can commercially available handheld retinal cameras effectively screen diabetic retinopathy? J Diabetes Sci Technol 2017, 11, 135–137. [CrossRef]
- American Diabetes Association. Standards of medical care in diabetes—2013. Diabetes Care 2013, 36, S11–S66. [CrossRef]
- Lee DJ, Kumar N, Feuer WJ, et al. Dilated eye examination screening guideline compliance among patients with diabetes without a diabetic retinopathy diagnosis: the role of geographic access. BMJ Open Diabetes Res Care 2014, 2, e000031. [CrossRef]
- Daskivich LP, Vasquez C, Martinez Jr C, et al. Implementation and evaluation of a large-scale teleretinal diabetic retinopathy screening program in the Los Angeles County Department of Health Services. JAMA Intern Med 2017, 177, 642–649. [CrossRef]
- Gibson DM. Estimates of the percentage of US adults with diabetes who could be screened for diabetic retinopathy in primary care settings. JAMA Ophthalmol 2019, 137, 440–444. [CrossRef]
- Tan CH, Kyaw BM, Smith H, et al. Use of smartphones to detect diabetic retinopathy: scoping review and meta-analysis of diagnostic test accuracy studies. J Med Internet Res 2020, 22, e16658. [CrossRef]
- Bruce BB, Newman NJ, Pérez MA, et al. Non-mydriatic ocular fundus photography and telemedicine: past, present, and future. J Neuroophthalmol 2013, 37, 51–57. [CrossRef]
- Jin K, Lu H, Su Z, et al. Telemedicine screening of retinal diseases with a handheld portable non-mydriatic fundus camera. BMC Ophthalmol 2017, 17, 89. [CrossRef]
- Queiroz MS, de Carvalho JX, Bortoto SF, et al. Diabetic retinopathy screening in urban primary care setting with a handheld smartphone-based retinal camera. Acta Diabetol 2020, 57, 1493–1499. [CrossRef]
- Kubin A-M, Wirkkala J, Keskitalo A, et al. Handheld fundus camera performance, image quality and outcomes of diabetic retinopathy grading in a pilot screening study. Acta Ophthalmol 2021, 99, e1415–e1420. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).