Preprint
Review

Deep Learning for Point-of-Care Ultrasound Image Quality Enhancement: A Review

Submitted: 31 July 2024; Posted: 1 August 2024

Abstract
The popularity of handheld devices for point-of-care ultrasound (POCUS) has increased in recent years due to their portability and cost-effectiveness. However, POCUS has the drawback of lower imaging quality compared to conventional ultrasound because of hardware limitations. Improving the quality of POCUS through post-processing of the acquired images would therefore be beneficial, and deep learning approaches show promise in this regard. This review investigates the state-of-the-art progress of deep-learning-based image enhancement suitable for POCUS applications. A systematic search was conducted from January 2024 to February 2024 on PubMed and Scopus. Of the 457 articles that were found, the full text was retrieved for 69 articles, and from this selection, 15 articles were identified that addressed multiple quality enhancement aspects. A disparity in the baseline performance of the low-quality input images was seen across these studies, ranging from 8.65 to 29.24 dB for the Peak Signal-to-Noise Ratio (PSNR) and from 0.03 to 0.71 for the Structural Similarity Index Measure (SSIM). In the six studies that reported the PSNR and SSIM metrics for both the baseline and the generated images, a mean difference of 6.60 (SD ± 2.99) for the PSNR and 0.28 (SD ± 0.15) for the SSIM was observed. The reported performances demonstrate the potential of deep-learning-based image enhancement for POCUS. However, the extent of the performance gain varied notably across datasets and articles, and the heterogeneity across articles makes quantifying the exact improvement challenging.
Keywords: 
Subject: Computer Science and Mathematics - Artificial Intelligence and Machine Learning

1. Introduction

The use of handheld devices suitable for point-of-care ultrasound (POCUS) has been on the rise in recent years. This increase in popularity can be attributed to some key characteristics of these devices. Firstly, their portability makes them more convenient than conventional cart-based devices. Moreover, these handheld POCUS devices are more affordable than traditional ultrasound machines [1,2,3,4,5], making ultrasound technology more accessible and expanding its application beyond the radiology department. This is particularly useful in situations where larger, more expensive ultrasound equipment is impractical, such as in bedside emergency settings, general practitioner offices, home care environments, and rural medicine facilities [6,7,8,9,10,11,12].
However, one of the primary drawbacks of ultrasound examination with a handheld device is the reduced imaging quality due to hardware limitations and the absence of sophisticated post-processing algorithms. These limitations can potentially lead to less accurate diagnoses [2,4]. Compared to conventional high-end ultrasound systems, handheld POCUS devices typically exhibit reduced resolution and contrast, less distinct texture or edges of structures, and increased noise levels [6,13,14,15]. Despite the advancements in POCUS technology in recent years, a trade-off remains between imaging quality and the benefits of cost and portability [14,16,17].
Efforts to enhance the quality of POCUS can be categorized into three main approaches. The first approach involves advancements in hardware; however, it is constrained by rising costs or compromised portability. Another option for quality improvement involves refinements of the ultrasound beamforming algorithm [18,19]. Nevertheless, the raw radio frequency (RF) signals required for these improvements are not accessible in most commercial ultrasound systems. Therefore, this systematic review centers on a third alternative: modifications to the image post-processing methods, which eliminates the need for hardware remodeling or operations on the raw RF signal.
Traditional post-processing techniques, such as filtering and deconvolution, have been employed for ultrasound image enhancement for some time, as described in the review by Ortiz et al. [20]. Over the last few years, deep learning has emerged as a powerful tool, achieving state-of-the-art performance in various image processing tasks, including image quality enhancement [21,22,23]. Lepcha et al. recently conducted a systematic survey on existing state-of-the-art image enhancement techniques, including deep learning [24]. However, to the best of the authors’ knowledge, there has been no recent literature review on the current status of deep learning-based image enhancement specifically focusing on ultrasound. This gap in the literature presents a compelling area for investigation, particularly given the affordability and flexibility of POCUS, alongside its inherent challenges related to image quality.
The aim of this systematic review is, therefore, to explore the current state-of-the-art progress in ultrasound image enhancement using deep learning for point-of-care ultrasound applications. In this review, we will categorize the quality enhancement methods used in the selected articles, provide an overview of the improvements in performance achieved by these methods, and assess the practical benefits and limitations of these deep learning algorithms in enhancing ultrasound image quality for clinical practice.

2. Materials and Methods

This systematic review was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [25].

2.1. Literature Search

2.1.1. Search Strategy

A literature search was conducted in PubMed and Scopus on January 29, 2024, covering publications from 2018 onwards. This cut-off was chosen because the substantial growth of deep learning applications in medical contexts, especially in tasks related to image generation, has only been observed in more recent years [21,26]. The search strategy was formulated to encompass three primary concepts: “Quality enhancement”, “Ultrasound”, and “Deep learning”. It should be noted that even though this review focuses on POCUS applications, the search included methods developed for general ultrasound imaging to ensure comprehensive coverage of relevant algorithms. Synonyms and related keywords for each concept were identified and included in the search string, such as specific aspects of quality enhancement (e.g., denoising) and specific types of deep learning networks (e.g., convolutional neural networks). The complete search strings for both databases are reported in Appendix A. Additionally, a snowballing search was conducted to identify related articles, and duplicates were removed during the screening process.

2.1.2. Eligibility Criteria

Studies were included if they met the following criteria: 1) focused on medical image quality enhancement, defined as increased spatial resolution, contrast enhancement, denoising, or enhancement of structure boundaries; 2) applied image post-processing techniques to B-mode ultrasound images, i.e., image-to-image methods; 3) proposed a deep learning algorithm; 4) the proposed algorithm was specifically developed for ultrasound images; 5) the full-text original article was written in English.
Exclusion criteria included: 1) other quality enhancement methods like restoration and inpainting, or different study aims like domain conversion and 3D reconstruction; 2) hardware changes were required; 3) RF ultrasound data was used as input; 4) research on (microbubble) contrast-enhanced ultrasound, ultrasound computed tomography, elastography, color Doppler ultrasound, quantitative ultrasound, and high-intensity focused ultrasound; 5) non-journal publications (e.g. reviews, comments, dissertations, newspapers, and books); 6) non-accessible full-text publications.

2.1.3. Selection Procedure

The title and abstract of all studies were screened. Studies were excluded if they did not meet the eligibility criteria. For the remaining studies, the full text was retrieved and evaluated comprehensively. Each study was classified according to the quality enhancement aspects it addressed, which is explained in more depth in the paragraph below. This classification resulted in a final selection of articles for further assessment and quantification.

2.2. Categorization by Quality Enhancement Aspects

Papers published on quality enhancement in ultrasound were further grouped based on the specific distortions addressed, which are particularly relevant to POCUS imaging, namely: 1) spatial resolution; 2) contrast; 3) texture or detail enhancement; and 4) noise. The definitions of these quality enhancement aspects as implemented in this review are further specified in Table 1.
Given the multifaceted nature of distortions in handheld ultrasound and the necessity for real-time quality enhancement, this review focused on deep learning algorithms simultaneously addressing multiple quality enhancement aspects. However, these quality enhancement aspects can be closely related. For instance, the presence of noise reduces image contrast and resolution, thereby affecting edges and fine details [27,28]. Therefore, improving the quality of the ultrasound image by addressing one or more of the quality enhancement aspects should be specifically described and evaluated through a suitable performance metric.
Furthermore, studies were also included if they reported on the process of mapping low-quality images to high-quality reference images. This had to be achieved by obtaining ultrasound images that naturally showed a disparity in quality as a result of differences in the capture process, such as a different number of piezoelectric elements or plane waves used, and not by artificially inducing quality reduction or improvement. Consequently, this led to the identification of a final category: 5) general quality improvement. The articles addressing either multiple quality enhancement aspects or general quality improvement were selected for further descriptive and quantitative assessment.
Table 1. Definitions of quality enhancement aspects.
1. Spatial resolution: The ability to differentiate two adjacent structures as being distinct from one another, either parallel (axial resolution) or perpendicular (lateral resolution) to the direction of the ultrasound beam [29].
2. Contrast resolution: The ability to distinguish between different echo amplitudes of adjacent structures through image intensity variations [29].
3. Detail enhancement of structures: Enhancement of texture, edges, or boundaries between structures.
4. Noise: Minimization of random variability that is not part of the desired signal.
5. General quality improvement: Mapping low-quality images to high-quality reference images, where the quality disparities are inherent to differences in the capture process and not artificially induced.

2.3. Data Extraction

Two performance metrics were evaluated: the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Measure (SSIM). Both are commonly used metrics for assessing image quality and can quantitatively show the effectiveness of the proposed networks. Both are full-reference metrics, which evaluate the quality of an image by comparing it to a high-quality reference image. Data were extracted if the article reported either the PSNR or the SSIM with standard deviation for both the low-quality input images and the images generated by the proposed algorithm.
The PSNR is defined as the ratio between the maximum power of a signal and the power of the distorting noise [30]. It reflects the pixel-based similarity between the reconstructed image and the corresponding high-quality reference [14]. The ratio is expressed in decibels (dB) and thus follows a $\log_{10}$ scale. A higher value indicates that the reconstructed image contains more detail and provides higher image quality [6,31]; conversely, a small PSNR implies large numerical differences between the images [31]. Given a reference image $f$ and a test image $g$, both of size $M \times N$, with maximum intensity $MAX_I$ and the Mean Squared Error ($\mathrm{MSE}$) between $f$ and $g$, the PSNR is calculated as follows:

$$\mathrm{PSNR} = 10 \cdot \log_{10}\left(\frac{MAX_I^2}{\mathrm{MSE}(f,g)}\right)$$

where the MSE is given by

$$\mathrm{MSE}(f,g) = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(f_{ij} - g_{ij}\right)^2$$
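For concreteness, a minimal NumPy sketch of this computation is given below; the function name and the default peak intensity of 255 (appropriate for 8-bit images) are illustrative assumptions rather than choices taken from the reviewed articles.

    import numpy as np

    def psnr(reference: np.ndarray, test: np.ndarray, max_i: float = 255.0) -> float:
        """Peak Signal-to-Noise Ratio (dB) between a reference and a test image."""
        mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")  # identical images: no distorting noise
        return 10.0 * np.log10(max_i ** 2 / mse)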
The SSIM evaluates perceived quality and assesses the perceptual similarity between paired images [14,30]. It is considered to correlate with the quality perception of the human visual system. Instead of using traditional error-summation methods, the SSIM models any image distortion as a combination of three factors: loss of correlation, luminance distortion, and contrast distortion. The SSIM index ranges between -1 and 1, with a value of 1 indicating perfect correlation, 0 indicating no correlation, and -1 indicating anti-correlation between the images [31].
For a reference image $f$ and a test image $g$, where $\mu$ denotes the mean, $\sigma_f^2$ denotes the variance of $f$, $\sigma_{fg}$ denotes the covariance of $f$ and $g$, and $C_1$ and $C_2$ are two positive constants used to avoid a null denominator, the SSIM is defined as:

$$\mathrm{SSIM}(f,g) = \frac{(2\mu_f\mu_g + C_1)(2\sigma_{fg} + C_2)}{(\mu_f^2 + \mu_g^2 + C_1)(\sigma_f^2 + \sigma_g^2 + C_2)}$$
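A corresponding sketch for the SSIM is given below, evaluated globally over the image pair for simplicity. The constants follow the conventional defaults $C_1 = (0.01 \cdot MAX_I)^2$ and $C_2 = (0.03 \cdot MAX_I)^2$; note that practical implementations (e.g., skimage.metrics.structural_similarity) average the same expression over local sliding windows rather than computing it once globally.

    import numpy as np

    def ssim_global(f: np.ndarray, g: np.ndarray, max_i: float = 255.0) -> float:
        """Single-window SSIM of a reference image f and a test image g."""
        f = f.astype(np.float64)
        g = g.astype(np.float64)
        c1, c2 = (0.01 * max_i) ** 2, (0.03 * max_i) ** 2  # conventional stabilizing constants
        mu_f, mu_g = f.mean(), g.mean()
        var_f, var_g = f.var(), g.var()
        cov_fg = ((f - mu_f) * (g - mu_g)).mean()
        return ((2 * mu_f * mu_g + c1) * (2 * cov_fg + c2)) / (
            (mu_f ** 2 + mu_g ** 2 + c1) * (var_f + var_g + c2)
        )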
Additionally, the extracted data were grouped by dataset type (in vivo, phantom, or simulation).

2.4. Statistical Analysis

Statistical analyses were performed using IBM SPSS Statistics, Version 29.0.2 (released 2023; IBM Corp., Armonk, New York, United States), using a random-effects model. Statistical heterogeneity was evaluated by calculating the I² statistic, with high heterogeneity defined as I² > 75% and statistical significance defined as p < 0.05. Mean differences were calculated by subtracting the quality performance values of the low-quality input images from those of the images generated by the proposed algorithm. The results were summarized in forest plots.
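The analyses were performed in SPSS; as an illustration of the underlying computation, the sketch below implements the standard DerSimonian-Laird random-effects estimator (whether SPSS uses this exact estimator internally is an assumption). The per-study mean differences d and standard errors se are assumed to be derived from the reported means, standard deviations, and numbers of images.

    import numpy as np

    def dersimonian_laird(d, se):
        """Random-effects pooling of per-study mean differences.

        d  : per-study mean differences (generated minus low-quality input)
        se : corresponding standard errors
        Returns the pooled estimate, its 95% CI, and the I^2 statistic (%).
        """
        d, se = np.asarray(d, float), np.asarray(se, float)
        w = 1.0 / se ** 2                          # fixed-effect weights
        d_fe = np.sum(w * d) / np.sum(w)
        q = np.sum(w * (d - d_fe) ** 2)            # Cochran's Q
        k = len(d)
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (k - 1)) / c)         # between-study variance
        w_re = 1.0 / (se ** 2 + tau2)              # random-effects weights
        d_re = np.sum(w_re * d) / np.sum(w_re)
        se_re = np.sqrt(1.0 / np.sum(w_re))
        i2 = 100.0 * max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
        return d_re, (d_re - 1.96 * se_re, d_re + 1.96 * se_re), i2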

3. Results

3.1. Study Selection

The systematic literature search identified 457 articles from the two databases after duplicate removal. Snowballing did not identify any additional relevant articles. The initial screening based on title and abstract resulted in 69 articles being selected for full-text review [6,14,15,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97]. Following an initial analysis focusing on quality enhancement aspects, 15 articles were identified that addressed multiple quality enhancement aspects. These articles were selected for further descriptive and quantitative assessment. Finally, 6 articles were selected for a meta-analysis based on their reported outcomes. The study selection process is illustrated in Figure 1.

3.2. Quality Enhancement Aspects

The distribution of quality enhancement aspects across the 69 included articles was analyzed; the findings are depicted in Figure 2. This figure shows a predominant focus on denoising, followed by resolution enhancement. Of the 69 articles, the majority (n=54) focused on a single quality enhancement aspect, while a smaller subset (n=15) addressed a combination of aspects or quality enhancement in general. Consequently, these 15 articles [6,14,15,32,33,34,35,36,37,38,39,40,41,42,43,46,47,86,87,88,89] were selected for further analysis.

3.3. Study Characteristics

The datasets used in the 15 articles selected for further assessment were categorized into three types: 1) in vivo data; 2) phantom data (including in vitro, ex vivo, and tissue-mimicking datasets, both self-made and commercial); and 3) simulation data. For studies comparing multiple deep learning algorithms or loss functions, the best-performing algorithm or loss function and the corresponding performance were reported. Characteristics of the included articles are shown in detail in Table 2. Most articles [33,37,38,41,42,43] used plane wave imaging (PWI) for data collection, mapping low-quality images from one or a few angles to high-quality compounded images from multiple angles. The next most common setup for data collection involved low-quality input data from handheld POCUS devices and high-quality reference images from high-end ultrasound devices [6,14,15,35]. Additionally, all articles used a CNN, a GAN, or a combination of both as the deep learning network. More variety was observed in the loss functions, although the MSE and SSIM losses were frequently used, often in combination with other loss functions (see the sketch below).
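To illustrate how such a combined objective can be set up, the following sketch pairs a pixel-wise MSE term with a differentiable SSIM term in PyTorch. The single-window SSIM, the assumed [0, 1] intensity scaling, and the weighting factor alpha are simplifying assumptions, not choices taken from any reviewed article; the published networks typically use windowed or multi-scale SSIM variants.

    import torch
    import torch.nn.functional as F

    def ssim_term(x: torch.Tensor, y: torch.Tensor,
                  c1: float = 0.01 ** 2, c2: float = 0.03 ** 2) -> torch.Tensor:
        # Differentiable single-window SSIM, assuming inputs scaled to [0, 1].
        mu_x, mu_y = x.mean(), y.mean()
        var_x, var_y = x.var(), y.var()
        cov_xy = ((x - mu_x) * (y - mu_y)).mean()
        return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
            (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
        )

    def combined_loss(pred: torch.Tensor, target: torch.Tensor,
                      alpha: float = 0.8) -> torch.Tensor:
        # Weighted sum of MSE and (1 - SSIM), so that lower is better for both terms.
        return alpha * F.mse_loss(pred, target) + (1 - alpha) * (1 - ssim_term(pred, target))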

3.4. Study Outcomes

The outcomes of the selected studies are reported in Table 3, including the computation time, source code availability, number of images in the test set, performance metrics for the low-quality input images, and performance metrics for the enhanced generated images. The performance of the proposed algorithms was categorized by dataset type (in vivo, phantom, or simulation). Both full-reference and non-reference metrics are reported.
Additionally, the quantitative outcomes for the most commonly reported performance metrics (PSNR and SSIM) are visualized in Figure 3 and Figure 4. Both the baseline performance metrics for the low-quality input images and the performance metrics obtained for the images generated by the proposed algorithms are shown. In these figures, each bar represents a dataset and the color represents the corresponding study. The baseline PSNR of the low-quality input images ranged from 8.65 to 29.24 dB, while the generated images had PSNR values ranging from 13.99 to 36.59 dB. Similarly, SSIM values ranged from 0.03 to 0.71 for the low-quality input images and from 0.30 to 0.93 for the enhanced images.

3.5. Meta-Analysis Results

Six studies reported PSNR values [6,14,32,34,37,39] and five reported SSIM values [6,14,34,37,39] with standard deviations for both the low-quality input images and the generated images. Consequently, these studies were included in a meta-analysis. The meta-analysis revealed a mean increase in PSNR between the generated and low-quality input images of 6.60 ± 2.99 dB (Figure 5). The mean increase in SSIM was 0.28 ± 0.15 (Figure 6). Both increases were statistically significant (p < 0.001). However, high heterogeneity was observed in both meta-analyses (I² = 100%), indicating substantial variability among the included studies.

4. Discussion

Point-of-care ultrasound (POCUS) is recognized for its affordability and convenience, but it suffers from lower image quality compared to conventional high-end, cart-based ultrasound systems. Recent advances in deep learning have achieved state-of-the-art performance in various image processing problems, including the enhancement of image quality. This systematic review provides an overview of research focused on ultrasound image enhancement using deep learning methods suitable for real-time POCUS applications. A comprehensive description of the methods used, as well as a further analysis of the performance of the proposed algorithms, is given.
It was observed that the majority of studies utilized GANs incorporating CNNs in both the generator and discriminator networks. The emergence of GANs in the medical imaging field, as described by Liu et al. [23], is noteworthy as these models are capable of generating highly realistic medical images, effectively bridging the gap between supervised learning and image generation. Despite this trend, there was considerable variation in GAN and CNN architectures, loss functions, and evaluation methods across studies. Some studies compared their proposed network with existing networks to benchmark quality enhancement, while others compared the generated images to the original low-quality input images and/or paired high-quality reference images. For the methods that quantitatively assessed the networks’ performance, a variety of image quality metrics was reported. In addition to metric-based assessments, some studies incorporated visual assessments or tested the effect of quality enhancement on downstream tasks such as segmentation or diagnosis. Often, a combination of these evaluation methods was utilized to provide a more comprehensive overview of the proposed algorithm’s performance.
Variability in evaluation methods and performance metrics poses a challenge for direct comparisons among all articles. However, the articles reporting the most common performance metrics (PSNR and SSIM) for the low-quality input images or generated images allowed for some comparisons. Figure 3 and Figure 4 reveal substantial disparities in low-quality input image performance, indicating varying baseline qualities across studies. These differences may be explained by heterogeneity in ultrasound devices and dataset types. Nevertheless, consistent improvements in image quality were observed when comparing enhanced images to the original inputs, as shown in Figure 3 and Figure 4. The meta-analysis further supports these findings, showing a statistically significant increase in PSNR and SSIM. This indicates the potential of the proposed deep learning algorithms for enhancing the quality of ultrasound images. However, the variability in the extent of the performance gain across datasets and articles is notable, as reflected by the I² score of both meta-analyses (I² = 100%), indicating high heterogeneity. This variability complicates the determination of the achievable quality gain. Notably, simulated datasets generally exhibited higher performance gains than in vivo and phantom datasets, suggesting that simulation results may not fully represent clinical scenarios.
This review focused on ultrasound enhancement for POCUS applications but included studies for ultrasound in general as well to ensure comprehensive coverage of relevant algorithms. A key selection criterion was the simultaneous addressing of multiple distortion types, which led to the inclusion of 15 articles. Interestingly, despite the importance of computation time for real-time applications, most articles did not report this aspect. Furthermore, the lack of source code availability hinders the reproducibility of the conducted research. Studies focusing specifically on enhancing POCUS images commonly paired low-quality POCUS images with high-quality images from high-end machines. Although these image pairs are expected to contain the same locational information, they often suffer from locational differences due to acquisition challenges, which can only be partially mitigated by registration methods and consequently impact network training. In contrast, studies using Plane Wave Imaging (PWI) did not encounter this issue as they used the same device with different numbers of angles, resulting in nearly identical locational information. Future research could benefit from developing more accurately paired datasets, particularly using ex vivo data, to improve image-to-image translation techniques for POCUS.
Several limitations of this review should be noted. First, the selection of articles that addressed multiple quality enhancement aspects might have excluded relevant studies focusing on single aspects. Second, the heterogeneous nature of the included studies, with varying datasets and ultrasound devices, complicates direct and fair comparisons. Although we attempted to group datasets into in vivo, phantom, and simulation categories, diversity remained within these subgroups. Lastly, performing a meta-analysis for machine learning-based research presents unique challenges, as this method was originally designed for comparing cases and controls in medical treatments. Aspects such as the number of images in "case" and "control" groups, use of cross-validation, and dataset similarities due to augmentation were not consistently accounted for. Therefore, the meta-analysis should be seen primarily as an illustrative tool, and caution is needed when drawing firm conclusions about the precise effects of ultrasound image enhancement in terms of expected PSNR and SSIM gain.

5. Conclusions

This review thoroughly examined the progress in ultrasound image quality enhancement using deep learning, with a focus on applications suitable for POCUS. Ultrasound image enhancement through deep learning is a vibrant research field. However, the majority of the performed studies focus on a single aspect of quality enhancement, which is less effective for POCUS, which suffers from multiple distortion types simultaneously. Studies addressing multiple quality aspects demonstrate the potential for substantial image quality improvements across various ultrasound devices. PSNR values for low-quality input images ranged from 8.65 to 29.24 dB, improving to 13.99 to 36.59 dB for the enhanced images. Similarly, SSIM values ranged from 0.03 to 0.71 and from 0.30 to 0.93 for the low-quality input images and the enhanced images, respectively. However, quantifying the expected performance gain precisely remains challenging due to the heterogeneous nature of the studies. It is also important to note that studies often neglect to report computation times, a factor crucial for enabling real-time applications. Future research should prioritize the development of standardized evaluation metrics, report computational efficiency, and ensure reproducibility by sharing source code. Additionally, creating accurately paired datasets of POCUS and high-end US images is essential for advancing this field and achieving reliable real-time image enhancement.

Author Contributions

Conceptualization, F.G. and B.D.; methodology, H.G.A.P., L.M.K., M.W., F.G. and B.D.; software, H.G.A.P.; validation, H.G.A.P., L.M.K., M.W., F.G. and B.D.; formal analysis, H.G.A.P., F.G. and B.D.; investigation, H.G.A.P. and F.G.; resources, B.D.; data curation, H.G.A.P. and F.G.; writing—original draft preparation, H.G.A.P. and F.G.; writing—review and editing, H.G.A.P., L.M.K., M.W., F.G. and B.D.; visualization, H.G.A.P. and F.G.; supervision, F.G. and B.D.; project administration, B.D. All authors have read and agreed to the published version of the manuscript.

Funding

Research at the Netherlands Cancer Institute is supported by institutional grants of the Dutch Cancer Society and of the Dutch Ministry of Health, Welfare and Sport.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Search Strings

PubMed:
("Ultrasonography"[Mesh] OR "ultraso*"[tiab]) AND ("Deep Learning"[Mesh] OR "deep learning"[tiab] OR "deep-learning"[tiab] OR "neural network*"[tiab] OR "generative adversarial*"[tiab] OR "ANN"[tiab] OR "CNN"[tiab] OR "RNN"[tiab] OR "LSTM"[tiab] OR "DNN"[tiab]) AND ("Image Enhancement"[Mesh] OR "enhancement"[tiab] OR "quality improving"[tiab: 2] OR "quality improvement"[tiab: 2] OR "quality improved"[tiab: 2] OR "quality enhanced"[tiab: 2] OR "quality enhancing"[tiab: 2] OR "resolution"[tiab] OR "Reconstruction"[tiab] OR "denoising"[tiab] OR "noise"[tiab] OR "despeckling"[tiab]) AND (2017:2024[pdat]) NOT "segmentation"[tiab] NOT "classification"[tiab] NOT "detection"[tiab] NOT "quantification"[tiab] NOT "detection"[tiab] NOT "localization microscopy"[tiab] NOT "microvessel"[tiab] NOT "microbubble"[tiab] NOT "tomography"[ti] NOT "ultrasound comp* tomography"[tiab: 0] NOT "raw"[tiab] NOT "radio-frequency"[tiab] NOT "beamforming"[tiab] NOT "sparse"[tiab] NOT "photoacoustic"[tiab] NOT "veloc*"[tiab] NOT "elastograph*"[tiab] NOT "diagnos*"[ti] NOT "review"[ti]
Scopus:
(( TITLE-ABS-KEY ( "deep learning" ) OR TITLE-ABS-KEY ( "deep-learning" ) OR TITLE-ABS-KEY ( "neural network" ) OR TITLE-ABS-KEY ( "generative adversarial" ) OR TITLE-ABS-KEY ( "ANN" ) OR TITLE-ABS-KEY ( "CNN" ) OR TITLE-ABS-KEY ( "RNN" ) OR TITLE-ABS-KEY ( "LSTM" ) OR TITLE-ABS-KEY ( "DNN" ) ) AND ( TITLE-ABS-KEY ( "ultrasound" ) OR TITLE-ABS-KEY ( "ultrasonography" ) ) AND ( TITLE-ABS-KEY ( "enhancement" ) OR ( TITLE-ABS-KEY ( "quality" ) W/2 TITLE-ABS-KEY ( "improv*" ) ) OR ( TITLE-ABS-KEY ( "quality" ) W/2 TITLE-ABS-KEY ( "enhanc*" ) ) OR TITLE-ABS-KEY ( "resolution" ) OR TITLE-ABS-KEY ( "reconstruction" ) OR TITLE-ABS-KEY ( "denoising" ) OR TITLE-ABS-KEY ( "noise" ) OR TITLE-ABS-KEY ( "despeckling" ) ) AND PUBYEAR > 2017 AND PUBYEAR < 2025 AND NOT TITLE-ABS-KEY ( "segmentation" ) AND NOT TITLE-ABS-KEY ( "classification" ) AND NOT TITLE-ABS-KEY ( "detection" ) AND NOT TITLE-ABS-KEY ( "quantification" ) AND NOT TITLE-ABS-KEY ( "localization microscopy" ) AND NOT TITLE-ABS-KEY ( "microvessel*" ) AND NOT TITLE-ABS-KEY ( "microbubble*" ) AND NOT TITLE ( "tomography" ) AND NOT TITLE-ABS-KEY ( "ultrasound comp* tomography" ) AND NOT TITLE-ABS-KEY ( "raw" ) AND NOT TITLE-ABS-KEY ( "radio-frequency" ) AND NOT TITLE-ABS-KEY ( "beamforming" ) AND NOT TITLE-ABS-KEY ( "sparse" ) AND NOT TITLE-ABS-KEY ( "photoacoustic" ) AND NOT TITLE-ABS-KEY ( "veloc*" ) AND NOT TITLE-ABS-KEY ( "elastograph*" ) AND NOT TITLE ( "diagnos*" ) AND NOT TITLE ( "review*" )

References

  1. Hashim, A.; Tahir, M.J.; Ullah, I.; Asghar, M.S.; Siddiqi, H.; Yousaf, Z. The utility of point of care ultrasonography (POCUS). Ann Med Surg (Lond) 2021, 71, 102982. [Google Scholar] [CrossRef] [PubMed]
  2. Riley, A.; Sable, C.; Prasad, A.; Spurney, C.; Harahsheh, A.; Clauss, S.; Colyer, J.; Gierdalski, M.; Johnson, A.; Pearson, G.D.; et al. Utility of hand-held echocardiography in outpatient pediatric cardiology management. Pediatr Cardiol 2014, 35, 1379–86. [Google Scholar] [CrossRef] [PubMed]
  3. Gilbertson, E.A.; Hatton, N.D.; Ryan, J.J. Point of care ultrasound: the next evolution of medical education. Ann Transl Med 2020, 8, 846. [Google Scholar] [CrossRef] [PubMed]
  4. Stock, K.F.; Klein, B.; Steubl, D.; Lersch, C.; Heemann, U.; Wagenpfeil, S.; Eyer, F.; Clevert, D.A. Comparison of a pocket-size ultrasound device with a premium ultrasound machine: diagnostic value and time required in bedside ultrasound examination. Abdom Imaging 2015, 40, 2861–6. [Google Scholar] [CrossRef] [PubMed]
  5. Han, P.J.; Tsai, B.T.; Martin, J.W.; Keen, W.D.; Waalen, J.; Kimura, B.J. Evidence basis for a point-of-care ultrasound examination to refine referral for outpatient echocardiography. The American Journal of Medicine 2019, 132, 227–233. [Google Scholar] [CrossRef] [PubMed]
  6. Zhou, Z.; Wang, Y.; Guo, Y.; Qi, Y.; Yu, J. Image Quality Improvement of Hand-Held Ultrasound Devices With a Two-Stage Generative Adversarial Network. IEEE Trans Biomed Eng 2020, 67, 298–311. [Google Scholar] [CrossRef] [PubMed]
  7. Nelson, B.P.; Sanghvi, A. Out of hospital point of care ultrasound: current use models and future directions. Eur J Trauma Emerg Surg 2016, 42, 139–50. [Google Scholar] [CrossRef] [PubMed]
  8. Kolbe, N.; Killu, K.; Coba, V.; Neri, L.; Garcia, K.M.; McCulloch, M.; Spreafico, A.; Dulchavsky, S. Point of care ultrasound (POCUS) telemedicine project in rural Nicaragua and its impact on patient management. J Ultrasound 2015, 18, 179–85. [Google Scholar] [CrossRef] [PubMed]
  9. Stewart, K.A.; Navarro, S.M.; Kambala, S.; Tan, G.; Poondla, R.; Lederman, S.; Barbour, K.; Lavy, C. Trends in Ultrasound Use in Low and Middle Income Countries: A Systematic Review. Int J MCH AIDS 2020, 9, 103–120. [Google Scholar] [CrossRef]
  10. Becker, D.M.; Tafoya, C.A.; Becker, S.L.; Kruger, G.H.; Tafoya, M.J.; Becker, T.K. The use of portable ultrasound devices in low- and middle-income countries: a systematic review of the literature. Trop Med Int Health 2016, 21, 294–311. [Google Scholar] [CrossRef]
  11. McBeth, P.B.; Hamilton, T.; Kirkpatrick, A.W. Cost-effective remote iPhone-teathered telementored trauma telesonography. Journal of Trauma and Acute Care Surgery 2010, 69, 1597–1599. [Google Scholar] [CrossRef] [PubMed]
  12. Evangelista, A.; Galuppo, V.; Méndez, J.; Evangelista, L.; Arpal, L.; Rubio, C.; Vergara, M.; Liceran, M.; López, F.; Sales, C. Hand-held cardiac ultrasound screening performed by family doctors with remote expert support interpretation. Heart 2016, 102, 376–382. [Google Scholar] [CrossRef] [PubMed]
  13. Salimi, N.; Gonzalez-Fiol, A.; Yanez, N.D.; Fardelmann, K.L.; Harmon, E.; Kohari, K.; Abdel-Razeq, S.; Magriples, U.; Alian, A. Ultrasound Image Quality Comparison Between a Handheld Ultrasound Transducer and Mid-Range Ultrasound Machine. Pocus j 2022, 7, 154–159. [Google Scholar] [CrossRef] [PubMed]
  14. Zhou, Z.; Guo, Y.; Wang, Y. Handheld Ultrasound Video High-Quality Reconstruction Using a Low-Rank Representation Multipathway Generative Adversarial Network. IEEE Transactions on Neural Networks and Learning Systems 2020, 32, 575–588. [Google Scholar] [CrossRef] [PubMed]
  15. Khan, S.; Huh, J.; Ye, J.C. Contrast and resolution improvement of POCUS using self-consistent CycleGAN. In Proceedings of the MICCAI Workshop on Domain Adaptation and Representation Transfer; Springer, pp. 158–167.
  16. Jafari, M.H.; Girgis, H.; Van Woudenberg, N.; Moulson, N.; Luong, C.; Fung, A.; Balthazaar, S.; Jue, J.; Tsang, M.; Nair, P. Cardiac point-of-care to cart-based ultrasound translation using constrained CycleGAN. International journal of computer assisted radiology and surgery 2020, 15, 877–886. [Google Scholar] [CrossRef] [PubMed]
  17. Henderson, R.; Murphy, S. Portability enhancing hardware for a portable ultrasound system. US Patent No. 9,629,606, 2017.
  18. Lockwood, G.R.; Talman, J.R.; Brunke, S.S. Real-time 3-D ultrasound imaging using sparse synthetic aperture beamforming. IEEE transactions on ultrasonics, ferroelectrics, and frequency control 1998, 45, 980–988. [Google Scholar] [CrossRef] [PubMed]
  19. Matrone, G.; Savoia, A.S.; Caliano, G.; Magenes, G. The delay multiply and sum beamforming algorithm in ultrasound B-mode medical imaging. IEEE transactions on medical imaging 2014, 34, 940–949. [Google Scholar] [CrossRef] [PubMed]
  20. Ortiz, S.H.C.; Chiu, T.; Fox, M.D. Ultrasound image enhancement: A review. Biomedical Signal Processing and Control 2012, 7, 419–428. [Google Scholar] [CrossRef]
  21. Anaya-Isaza, A.; Mera-Jiménez, L.; Zequera-Diaz, M. An overview of deep learning in medical imaging. Informatics in medicine unlocked 2021, 26, 100723. [Google Scholar] [CrossRef]
  22. Zhang, H.M.; Dong, B. A review on deep learning in medical image reconstruction. Journal of the Operations Research Society of China 2020, 8, 311–340. [Google Scholar] [CrossRef]
  23. Liu, J.; Li, K.; Dong, H.; Han, Y.; Li, R. Medical Image Processing based on Generative Adversarial Networks: A Systematic Review. Curr Med Imaging 2023. [Google Scholar] [CrossRef] [PubMed]
  24. Lepcha, D.C.; Goyal, B.; Dogra, A.; Sharma, K.P.; Gupta, D.N. A deep journey into image enhancement: A survey of current and emerging trends. Information Fusion 2023, 93, 36–76. [Google Scholar] [CrossRef]
  25. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Bmj 2021, 372, n71. [Google Scholar] [CrossRef] [PubMed]
  26. Chakraborty, C.; Bhattacharya, M.; Pal, S.; Lee, S.S. From machine learning to deep learning: An advances of the recent data-driven paradigm shift in medicine and healthcare. Current Research in Biotechnology 2023, 100164. [Google Scholar] [CrossRef]
  27. Makwana, G.; Yadav, R.N.; Gupta, L. Enhancement. In Internet of Things and Its Applications: Select Proceedings of ICIA 2020; 2022, pp. 303–313.
  28. Michailovich, O.V.; Tannenbaum, A. Despeckling of medical ultrasound images. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2006, 53, 64–78. [Google Scholar] [CrossRef]
  29. Ng, A.; Swanevelder, J. Resolution in ultrasound imaging. Continuing Education in Anaesthesia, Critical Care & Pain 2011, 11, 186–192. [Google Scholar]
  30. Sara, U.; Akter, M.; Uddin, M.S. Image quality assessment through FSIM, SSIM, MSE and PSNR—a comparative study. Journal of Computer and Communications 2019, 7, 8–18. [Google Scholar] [CrossRef]
  31. Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition; IEEE, pp. 2366–2369.
  32. Awasthi, N.; van Anrooij, L.; Jansen, G.; Schwab, H.M.; Pluim, J.P.; Lopata, R.G. Bandwidth Improvement in Ultrasound Image Reconstruction Using Deep Learning Techniques. Healthcare 2023, 11, 123.
  33. Gasse, M.; Millioz, F.; Roux, E.; Garcia, D.; Liebgott, H.; Friboulet, D. High-quality plane wave compounding using convolutional neural networks. IEEE transactions on ultrasonics, ferroelectrics, and frequency control 2017, 64, 1637–1639. [Google Scholar] [CrossRef]
  34. Goudarzi, S.; Asif, A.; Rivaz, H. Fast multi-focus ultrasound image recovery using generative adversarial networks. IEEE Transactions on Computational Imaging 2020, 6, 1272–1284. [Google Scholar] [CrossRef]
  35. Guo, B.; Zhang, B.; Ma, Z.; Li, N.; Bao, Y.; Yu, D. High-quality plane wave compounding using deep learning for hand-held ultrasound devices. In Advanced Data Mining and Applications: 16th International Conference, ADMA 2020, Foshan, China, November 12–14, 2020, Proceedings; Springer, pp. 547–559.
  36. Huang, C.Y.; Chen, O.T.C.; Wu, G.Z.; Chang, C.C.; Hu, C.L. Ultrasound imaging improved by the context encoder reconstruction generative adversarial network. In Proceedings of the 2018 IEEE International Ultrasonics Symposium (IUS). IEEE; pp. 1–4.
  37. Lu, J.; Millioz, F.; Garcia, D.; Salles, S.; Liu, W.; Friboulet, D. Reconstruction for diverging-wave imaging using deep convolutional neural networks. IEEE transactions on ultrasonics, ferroelectrics, and frequency control 2020, 67, 2481–2492. [Google Scholar] [CrossRef]
  38. Lyu, Y.; Jiang, X.; Xu, Y.; Hou, J.; Zhao, X.; Zhu, X. ARU-GAN: U-shaped GAN based on Attention and Residual connection for super-resolution reconstruction. Computers in Biology and Medicine 2023, 164, 107316. [Google Scholar] [CrossRef] [PubMed]
  39. Moinuddin, M.; Khan, S.; Alsaggaf, A.U.; Abdulaal, M.J.; Al-Saggaf, U.M.; Ye, J.C. Medical ultrasound image speckle reduction and resolution enhancement using texture compensated multi-resolution convolution neural network. Frontiers in Physiology 2022, 2326. [Google Scholar] [CrossRef] [PubMed]
  40. Monkam, P.; Lu, W.; Jin, S.; Shan, W.; Wu, J.; Zhou, X.; Tang, B.; Zhao, H.; Zhang, H.; Ding, X. US-Net: A lightweight network for simultaneous speckle suppression and texture enhancement in ultrasound images. Computers in Biology and Medicine 2023, 152, 106385. [Google Scholar] [CrossRef] [PubMed]
  41. Tang, J.; Zou, B.; Li, C.; Feng, S.; Peng, H. Plane-Wave Image Reconstruction via Generative Adversarial Network and Attention Mechanism. IEEE Transactions on Instrumentation and Measurement 2021, 70. [Google Scholar] [CrossRef]
  42. Zhang, X.; Li, J.; He, Q.; Zhang, H.; Luo, J. High-quality reconstruction of plane-wave imaging using generative adversarial network. In Proceedings of the 2018 IEEE International Ultrasonics Symposium (IUS). IEEE; pp. 1–4.
  43. Zhou, Z.; Wang, Y.; Yu, J.; Guo, W.; Fang, Z. Super-resolution reconstruction of plane-wave ultrasound imaging based on the improved CNN method. In Proceedings of VipIMAGE 2017: VI ECCOMAS Thematic Conference on Computational Vision and Medical Image Processing, Porto, Portugal, October 18–20, 2017; pp. 111–120.
  44. Zhou, Z.; Wang, Y.; Yu, J.; Guo, Y.; Guo, W.; Qi, Y. High Spatial-Temporal Resolution Reconstruction of Plane-Wave Ultrasound Images With a Multichannel Multiscale Convolutional Neural Network. IEEE Trans Ultrason Ferroelectr Freq Control 2018, 65, 1983–1996. [Google Scholar] [CrossRef] [PubMed]
  45. Goudarzi, S.; Asif, A.; Rivaz, H. High Frequency Ultrasound Image Recovery Using Tight Frame Generative Adversarial Networks. Annu Int Conf IEEE Eng Med Biol Soc 2020, 2020, 2035–2038. [Google Scholar] [CrossRef] [PubMed]
  46. Goudarzi, S.; Asif, A.; Rivaz, H. Multi-focus ultrasound imaging using generative adversarial networks. In Proceedings of the 2019 IEEE 16th international symposium on biomedical imaging (ISBI 2019). IEEE; pp. 1118–1121.
  47. Lu, J.; Liu, W. Unsupervised super-resolution framework for medical ultrasound images using dilated convolutional neural networks. In Proceedings of the 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC). IEEE; pp. 739–744.
  48. Li, Y.; Lu, W.; Monkam, P.; Wang, Y. IA-Noise2Noise: An Image Alignment Strategy for Echocardiography Despeckling. In Proceedings of the 2023 IEEE International Ultrasonics Symposium (IUS). IEEE; pp. 1–3.
  49. Mansouri, N.J.; Khaissidi, G.; Despaux, G.; Mrabti, M.; Clézio, E.L. Attention gated encoder-decoder for ultrasonic signal denoising. IAES International Journal of Artificial Intelligence 2023, 12, 1695–1703. [Google Scholar] [CrossRef]
  50. Basile, M.; Gibiino, F.; Cavazza, J.; Semplici, P.; Bechini, A.; Vanello, N. Blind Approach Using Convolutional Neural Networks to a New Ultrasound Image Denoising Task. In Proceedings of the 2023 IEEE International Workshop on Biomedical Applications, Technologies and Sensors, BATS 2023 - Proceedings; pp. 68–73. [Google Scholar] [CrossRef]
  51. Shen, Z.; Tang, C.; Xu, M.; Lei, Z. Removal of Speckle Noises from Ultrasound Images Using Parallel Convolutional Neural Network. Circuits, Systems, and Signal Processing 2023, 42, 5041–5064. [Google Scholar] [CrossRef]
  52. Gan, J.; Wang, L.; Liu, Z.; Wang, J. Multi-scale ultrasound image denoising algorithm based on deep learning model for super-resolution reconstruction. In Proceedings of the ACM International Conference Proceeding Series; pp. 6–11. [CrossRef]
  53. Asgariandehkordi, H.; Goudarzi, S.; Basarab, A.; Rivaz, H. Deep Ultrasound Denoising Using Diffusion Probabilistic Models. In Proceedings of the IEEE International Ultrasonics Symposium, IUS. [CrossRef]
  54. Liu, J.; Li, C.; Liu, L.; Chen, H.; Han, H.; Zhang, B.; Zhang, Q. Speckle noise reduction for medical ultrasound images based on cycle-consistent generative adversarial network. Biomedical Signal Processing and Control 2023, 86. [Google Scholar] [CrossRef]
  55. Mahmoudi Mehr, O.; Mohammadi, M.R.; Soryani, M. Deep Learning-Based Ultrasound Image Despeckling by Noise Model Estimation. Iranian Journal of Electrical and Electronic Engineering 2023, 19. [Google Scholar] [CrossRef]
  56. Senthamizh Selvi, R.; Suruthi, S.; Samyuktha Shrruthi, K.R.; Varsha, B.; Saranya, S.; Babu, B. Ultrasound Image Denoising Using Cascaded Median Filter and Autoencoder. In Proceedings of the 4th International Conference on Smart Electronics and Communication, ICOSEC 2023; pp. 296–302. [CrossRef]
  57. Mikaeili, M.; Bilge, H.S. Evaluating Deep Neural Network Models on Ultrasound Single Image Super Resolution. In Proceedings of the TIPTEKNO 2023 - Medical Technologies Congress, Proceedings. [CrossRef]
  58. Liu, H.; Liu, J.; Hou, S.; Tao, T.; Han, J. Perception consistency ultrasound image super-resolution via self-supervised CycleGAN. Neural Computing and Applications 2023, 35, 12331–12341. [Google Scholar] [CrossRef]
  59. Vetriselvi, D.; Thenmozhi, R. Advanced Image Processing Techniques for Ultrasound Images using Multiscale Self Attention CNN. Neural Processing Letters 2023, 55, 11945–11973. [Google Scholar] [CrossRef]
  60. Gomez, Y.Z.O.; Costa, E.T. Ultrasound Speckle Filtering Using Deep Learning. In Proceedings of the IFMBE Proceedings, Vol. 99; pp. 283–289. [CrossRef]
  61. Li, Y.; Zeng, X.; Dong, Q.; Wang, X. RED-MAM: A residual encoder-decoder network based on multi-attention fusion for ultrasound image denoising. Biomedical Signal Processing and Control 2023, 79. [Google Scholar] [CrossRef]
  62. Yang, T.; Wang, W.; Cheng, G.; Wei, M.; Xie, H.; Wang, F.L. FDDL-Net: frequency domain decomposition learning for speckle reduction in ultrasound images. Multimedia Tools and Applications 2022, 81, 42769–42781. [Google Scholar] [CrossRef]
  63. Karaoğlu, O.; Bilge, H.S.; Uluer, I. Removal of speckle noises from ultrasound images using five different deep learning networks. Engineering Science and Technology, an International Journal 2022, 29. [Google Scholar] [CrossRef]
  64. Markco, M.; Kannan, S. Texture-driven super-resolution of ultrasound images using optimized deep learning model. Imaging Science Journal 2023. [Google Scholar] [CrossRef]
  65. Karthiha, G.; Allwin, S. Speckle Noise Suppression in Ultrasound Images Using Modular Neural Networks. Intelligent Automation and Soft Computing 2023, 35, 1753–1765. [Google Scholar] [CrossRef]
  66. Kalaiyarasi, M.; Janaki, R.; Sampath, A.; Ganage, D.; Chincholkar, Y.D.; Budaraju, S. Non-additive noise reduction in medical images using bilateral filtering and modular neural networks. Soft Computing 2023. [Google Scholar] [CrossRef]
  67. Sawant, A.; Kasar, M.; Saha, A.; Gore, S.; Birwadkar, P.; Kulkarni, S. Medical Image De-Speckling Using Fusion of Diffusion-Based Filters And CNN. In Proceedings of the 8th International Conference on Advanced Computing and Communication Systems, ICACCS 2022; pp. 1197–1203. [Google Scholar] [CrossRef]
  68. Dutta, S.; Georgeot, B.; Kouame, D.; Garcia, D.; Basarab, A. Adaptive Contrast Enhancement of Cardiac Ultrasound Images using a Deep Unfolded Many-Body Quantum Algorithm. In Proceedings of the IEEE International Ultrasonics Symposium, IUS, Vol. 2022-October. [Google Scholar] [CrossRef]
  69. Sanjeevi, G.; Krishnan Pathinarupothi, R.; Uma, G.; Madathil, T. Deep Learning Pipeline for Echocardiogram Noise Reduction. In Proceedings of the 2022 IEEE 7th International conference for Convergence in Technology, I2CT 2022. [Google Scholar] [CrossRef]
  70. Makwana, G.; Yadav, R.N.; Gupta, L. Analysis of Various Noise Reduction Techniques for Breast Ultrasound Image Enhancement. In Proceedings of the Lecture Notes in Electrical Engineering, Vol. 825; pp. 303–313. [CrossRef]
  71. Suseela, K.; Kalimuthu, K. An efficient transfer learning-based Super-Resolution model for Medical Ultrasound Image. In Proceedings of the Journal of Physics: Conference Series, Vol. 1964. [Google Scholar] [CrossRef]
  72. Chennakeshava, N.; Luijten, B.; Drori, O.; Mischi, M.; Eldar, Y.C.; Van Sloun, R.J.G. High resolution plane wave compounding through deep proximal learning. In Proceedings of the IEEE International Ultrasonics Symposium, IUS, Vol. 2020-September. [Google Scholar] [CrossRef]
  73. Dong, G.; Ma, Y.; Basu, A. Feature-Guided CNN for Denoising Images from Portable Ultrasound Devices. IEEE Access 2021, 9, 28272–28281. [Google Scholar] [CrossRef]
  74. Jarosik, P.; Lewandowski, M.; Klimonda, Z.; Byra, M. Pixel-Wise Deep Reinforcement Learning Approach for Ultrasound Image Denoising. In Proceedings of the IEEE International Ultrasonics Symposium, IUS. [CrossRef]
  75. Kumar, M.; Mishra, S.K.; Joseph, J.; Jangir, S.K.; Goyal, D. Adaptive comprehensive particle swarm optimisation-based functional-link neural network filtre model for denoising ultrasound images. IET Image Processing 2021, 15, 1232–1246. [Google Scholar] [CrossRef]
  76. Shen, Z.; Li, W.; Han, H. Deep Learning-Based Wavelet Threshold Function Optimization on Noise Reduction in Ultrasound Images. Scientific Programming 2021, 2021. [Google Scholar] [CrossRef]
  77. Kokil, P.; Sudharson, S. Despeckling of clinical ultrasound images using deep residual learning. Computer Methods and Programs in Biomedicine 2020, 194. [Google Scholar] [CrossRef] [PubMed]
  78. Feng, X.; Huang, Q.; Li, X. Ultrasound image de-speckling by a hybrid deep network with transferred filtering and structural prior. Neurocomputing 2020, 414, 346–355. [Google Scholar] [CrossRef]
  79. Ma, Y.; Yang, F.; Basu, A. Edge-guided CNN for denoising images from portable ultrasound devices. In Proceedings of the International Conference on Pattern Recognition; pp. 6826–6833. [CrossRef]
  80. Lan, Y.; Zhang, X. Real-time ultrasound image despeckling using mixed-attention mechanism based residual UNet. IEEE Access 2020, 8, 195327–195340. [Google Scholar] [CrossRef]
  81. Vasavi, G.; Jyothi, S. Noise Reduction Using OBNLM Filter and Deep Learning for Polycystic Ovary Syndrome Ultrasound Images. In Proceedings of the Learning and Analytics in Intelligent Systems, Vol. 16; pp. 203–212. [CrossRef]
  82. Shelgaonkar, S.L.; Nandgaonkar, A.B. Deep Belief Network for the Enhancement of Ultrasound Images with Pelvic Lesions. Journal of Intelligent Systems 2018, 27, 507–522. [Google Scholar] [CrossRef]
  83. Singh, P.; Mukundan, R.; De Ryke, R. Feature Enhancement of Medical Ultrasound Scans Using Multifractal Measures. In Proceedings of the Proceedings - 2019 IEEE International Conference on Signals and Systems, ICSigSys 2019; pp. 85–91. [Google Scholar] [CrossRef]
  84. Choi, W.; Kim, M.; Haklee, J.; Kim, J.; Beomra, J. Deep CNN-Based Ultrasound Super-Resolution for High-Speed High-Resolution B-Mode Imaging. In Proceedings of the IEEE International Ultrasonics Symposium, IUS, Vol. 2018-January. [Google Scholar] [CrossRef]
  85. Ando, K.; Nagaoka, R.; Hasegawa, H. Speckle reduction of medical ultrasound images using deep learning with fully convolutional network. Japanese Journal of Applied Physics 2020, 59. [Google Scholar] [CrossRef]
  86. Temiz, H.; Bilge, H.S. Super Resolution of B-Mode Ultrasound Images with Deep Learning. IEEE Access 2020, 8, 78808–78820. [Google Scholar] [CrossRef]
  87. Liu, J.; Liu, H.; Zheng, X.; Han, J. Exploring multi-scale deep encoder-decoder and patchgan for perceptual ultrasound image super-resolution. In Proceedings of the Communications in Computer and Information Science, Vol. 1265 CCIS; pp. 47–59. [Google Scholar] [CrossRef]
  88. Mishra, D.; Chaudhury, S.; Sarkar, M.; Soin, A.S. Ultrasound image enhancement using structure oriented adversarial network. IEEE Signal Processing Letters 2018, 25, 1349–1353. [Google Scholar] [CrossRef]
  89. Mishra, D.; Tyagi, S.; Chaudhury, S.; Sarkar, M.; Singhsoin, A. Despeckling CNN with Ensembles of Classical Outputs. In Proceedings of the International Conference on Pattern Recognition, Vol. 2018-August; pp. 3802–3807. [Google Scholar] [CrossRef]
  90. Oliveira-Saraiva, D.; Mendes, J.; Leote, J.; Gonzalez, F.A.; Garcia, N.; Ferreira, H.A.; Matela, N. Make It Less Complex: Autoencoder for Speckle Noise Removal-Application to Breast and Lung Ultrasound. J Imaging 2023, 9. [Google Scholar] [CrossRef]
  91. Vimala, B.B.; Srinivasan, S.; Mathivanan, S.K.; Muthukumaran, V.; Babu, J.C.; Herencsar, N.; Vilcekova, L. Image Noise Removal in Ultrasound Breast Images Based on Hybrid Deep Learning Technique. Sensors (Basel) 2023, 23. [Google Scholar] [CrossRef]
  92. Sineesh, A.; Shankar, M.R.; Hareendranathan, A.; Panicker, M.R.; Palanisamy, P. Single Image based Super Resolution Ultrasound Imaging Using Residual Learning of Wavelet Features. Annu Int Conf IEEE Eng Med Biol Soc 2023, 2023, 1–4. [Google Scholar] [CrossRef] [PubMed]
  93. Li, X.; Wang, Y.; Zhao, Y.; Wei, Y. Fast Speckle Noise Suppression Algorithm in Breast Ultrasound Image Using Three-Dimensional Deep Learning. Front Physiol 2022, 13, 880966. [Google Scholar] [CrossRef] [PubMed]
  94. Tamang, L.D.; Kim, B.W. Super-Resolution Ultrasound Imaging Scheme Based on a Symmetric Series Convolutional Neural Network. Sensors (Basel) 2022, 22. [Google Scholar] [CrossRef] [PubMed]
  95. Balamurugan, M.; Chung, K.; Kuppoor, V.; Mahapatra, S.; Pustavoitau, A.; Manbachi, A. USDL: Inexpensive Medical Imaging Using Deep Learning Techniques and Ultrasound Technology. Proc Des Med Devices Conf 2020, 2020. [Google Scholar] [CrossRef]
  96. Yu, H.; Ding, M.; Zhang, X.; Wu, J. PCANet based nonlocal means method for speckle noise removal in ultrasound images. PLoS One 2018, 13, e0205390. [Google Scholar] [CrossRef] [PubMed]
  97. S, L.S.; M, S. Bayesian Framework-Based Adaptive Hybrid Filtering for Speckle Noise Reduction in Ultrasound Images Via Lion Plus FireFly Algorithm. J Digit Imaging 2021, 34, 1463–1477. [Google Scholar] [CrossRef]
  98. Liebgott, H.; Rodriguez-Molares, A.; Cervenansky, F.; Jensen, J.A.; Bernard, O. Plane-wave imaging challenge in medical ultrasound. In Proceedings of the 2016 IEEE International ultrasonics symposium (IUS). IEEE; pp. 1–4.
  99. Yap, M.H.; Pons, G.; Marti, J.; Ganau, S.; Sentis, M.; Zwiggelaar, R.; Davison, A.K.; Marti, R. Automated breast ultrasound lesions detection using convolutional neural networks. IEEE journal of biomedical and health informatics 2017, 22, 1218–1226. [Google Scholar] [CrossRef]
  100. Xia, C.; Li, J.; Chen, X.; Zheng, A.; Zhang, Y. What is and what is not a salient object? Learning salient object detector by ensembling linear exemplar regressors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; pp. 4142–4150.
  101. van den Heuvel, T.L.; de Bruijn, D.; de Korte, C.L.; Ginneken, B.v. Automated measurement of fetal head circumference using 2D ultrasound images. PloS one 2018, 13, e0200412. [Google Scholar] [CrossRef]
Figure 1. Flowchart visualizing the results of the PRISMA-based article selection process.
Figure 2. Overview of distribution of quality enhancement aspects addressed in included articles.
Figure 3. Visualization of the obtained PSNR for each dataset in the included studies, for both the low-quality input images and the generated images by the proposed algorithm. The color represents the dataset type (in vivo, phantom, or simulation data). Note that some studies are represented by multiple bars since they evaluated multiple datasets.
Figure 4. Visualization of the obtained SSIM for each dataset in the included studies, for both the low-quality input images and the generated images by the proposed algorithm. The color represents the dataset type (in vivo, phantom, or simulation data). Note that some studies are represented by multiple bars since they evaluated multiple datasets.
Figure 5. Forest plot of the mean PSNR difference (95% CI).
Figure 6. Forest plot of the mean SSIM difference (95% CI).
Table 2. Characteristics of selected studies.
| Study | Aim | Dataset (availability) | Ultrasound specifications | Deep learning algorithm | Loss function |
|---|---|---|---|---|---|
| Awasthi et al., 2022 [32] | Reconstruction of high-quality, high-bandwidth images from low-bandwidth images | Phantom: five separate datasets; tissue-mimicking, commercial, and in vitro porcine carotid artery (private) | Verasonics, L11-5v transducer with PWs at range -25 to 25. LQ: bandwidth limited to 20%; HQ: full bandwidth | Residual encoder-decoder network | Scaled MSE |
| Gasse et al., 2017 [33] | Reconstruct high-quality US images from a small number of PW acquisitions | In vivo: carotid, thyroid, and liver regions of healthy subjects; Phantom: Gammex (private) | Verasonics, ATL L7-4 probe (5.2 MHz, 128 elements) with range ±15. LQ: 3 PWs; HQ: 31 PWs | CNN | L2 loss |
| Goudarzi et al., 2020 [34] | Achieve the quality of multi-focus US images by applying a mapping function to a single-focus US image | Phantom: CIRS phantom and ex vivo lamb liver; Simulation: Field II software (private) | E-CUBE 12 Alpinion machine, L3-12H transducer (8.5 MHz). LQ: image with a single focal point; HQ: multi-focus image with 3 focal points | Boundary-seeking GAN | Binary cross-entropy (discriminator); MSE + boundary-seeking loss (generator) |
| Guo et al., 2020 [35] | Improve the quality of handheld US devices using a small number of plane waves | In vivo: dataset provided by Zhang et al. [42] (carotid artery and brachioradialis images of healthy volunteers); Phantom: PICMUS dataset [98], CIRS phantom; Simulation: US images generated from natural images using Field II software (only for pre-training LG-Unet) (private and public) | (Derived from dataset sources) In vivo: Verasonics, L10-5 probe (7.5 MHz). LQ: 3 PWs; HQ: compounded image of 31 PWs with range -15 to 15. Phantom: Verasonics, L11 probe (5.2 MHz, 128 elements) | Local-global U-Net (LG-Unet) + simplified residual network (S_ResNet) | MSE + SSIM (LG-Unet); L1 (S_ResNet) |
| Huang et al., 2018 [36] | Improve the quality of ultrasonic B-mode images from 32 channels to that of 128 channels | Simulation: Field II software (private) | Simulated dataset at 5 MHz center frequency, 0.308 mm pitch, 71% bandwidth. LQ: 32-channel image; HQ: 128-channel image | Context encoder reconstruction GAN | Not reported |
| Khan et al., 2021 [15] | Contrast and resolution enhancement of handheld POCUS images | In vivo: carotid and thyroid regions; Phantom: ATS-539 phantom; Simulation: intermediate-domain images generated by downgrading the in vivo and phantom images acquired with the high-end system (private) | LQ: NPUS050 portable US system; HQ: E-CUBE 12R US, L3-12 transducer | Cascaded application of an unsupervised self-consistent CycleGAN + a supervised super-resolution network | Cycle-consistency + adversarial loss (CycleGAN); MAE + SSIM (super-resolution network) |
| Lu et al., 2020 [37] | High-quality reconstruction for DW imaging using a small number (3) of DW transmissions, competing with images obtained by compounding 31 DWs | In vivo: thigh muscle, finger phalanx, and liver regions; Phantom: CIRS and Gammex (private) | Verasonics, ATL P4-2 transducer. LQ: 3 DWs; HQ: compounded image of 31 DWs | CNN with inception module | MSE |
| Lyu et al., 2023 [38] | Reconstruct super-resolution, high-quality images from single-beam plane-wave images | PICMUS 2016 dataset [98], modulated following the CUBDL guidelines: Simulation: generated with Field II software; Phantom: CIRS; In vivo: carotid artery of a healthy volunteer (public) | (Derived from dataset source) Verasonics, L11 probe with range -16 to 16. LQ: single-PW image; HQ: PW images synthesized from 75 different angles using CPWC | U-shaped GAN based on attention and residual connections (ARU-GAN) | Combination of MS-SSIM, classical adversarial, and perceptual loss |
| Moinuddin et al., 2022 [39] | Enhance US images using a network in which noise suppression and resolution enhancement are carried out simultaneously | In vivo: breast US (BUS) dataset [99], for which high-resolution, low-noise label images were generated using NLLR filtration; Simulation: salient object detection (SOD) dataset [100], augmented using image-formation physics information and divided into two datasets (public) | (Derived from dataset source) Siemens ACUSON Sequoia C512, 17L5 HD transducer (8.5 MHz) | Deep CNN | MSE |
| Monkam et al., 2023 [40] | Suppress speckle noise and enhance texture and fine details | Simulation: original low-quality US images of the HC18 Challenge fetal dataset [101], from which high-quality target images and additional low-quality images were generated (for training and testing); In vivo (publicly available datasets): HC18 Challenge (fetal) [101], BUSI (breast), and CCA (common carotid artery) (for testing) (public) | (Derived from HC18 dataset source) Voluson E8 or Voluson 730 US device | U-Net with added feature-refinement attention block (US-Net) | L1 loss |
| Tang et al., 2021 [41] | Reconstruct high-resolution, high-quality plane-wave images from low-quality plane-wave images from different angles | PICMUS 2016 dataset [98], modulated following the CUBDL guidelines: Simulation: generated with Field II software; Phantom: CIRS; In vivo: carotid artery of a healthy volunteer (public) | (Derived from dataset source) Verasonics, L11 probe with range -16 to 16. LQ: PW image using 3 angles; HQ: PW images synthesized from 75 different angles using CPWC | Attention-mechanism, U-Net-based GAN | Cross-entropy + MSE + perceptual loss |
| Zhang et al., 2018 [42] | Reconstruct high-quality US images from a small number of PWs (3) | In vivo: carotid artery and brachioradialis of a healthy volunteer; Phantom: CIRS phantom, ex vivo swine muscle (private) | Verasonics, L10-5 (7.5 MHz) with range -15 to 15. LQ: 3 PWs; HQ: coherent compounding of 31 PWs | GAN with a feed-forward CNN as both generator and discriminator | MSE + adversarial loss (generator); binary cross-entropy (discriminator) |
| Zhou et al., 2018 [43] | Improve the image quality of a single-angle PW image to that of a PW image synthesized from 75 different angles | PICMUS 2016 dataset [98], synthesized by three different beamforming methods: In vivo: (1) thyroid gland and (2) carotid artery of human volunteers; Phantom: CIRS phantom; Simulation: (1) point images and (2) cyst images generated using Field II software (public) | (Derived from dataset sources) Verasonics, L11 probe with range -16 to 16. LQ: single-PW image; HQ: PW images synthesized from 75 different angles | Multi-scale CNN | MSE |
| Zhou et al., 2020 [6] | Improve the quality of portable US by mapping low-quality images to corresponding high-quality images | Single-/multi-angle PWI simulation, phantom, and in vivo data (only used for transfer learning). For training and testing: In vivo: carotid and thyroid images of healthy volunteers; Phantom: CIRS and self-made gelatin and raw pork; Simulation: Field II software (private) | LQ: mSonics MU1, L10-5v transducer; HQ: Verasonics, L11-4v transducer (phantom data) and Toshiba Aplio 500, 7.5 MHz (clinical data) | Two-stage GAN with U-Net and a gradual learning strategy | MSE + SSIM + Conv loss |
| Zhou et al., 2021 [14] | Enhance video quality of handheld US devices | In vivo: single- and multi-angle PW videos (only for training); handheld and high-end images and videos of different body parts of healthy volunteers (for training and testing) (private) | PW videos: Verasonics, L11-4v transducer (6.25 MHz, 128 elements) with range -16 to 16; high-end US (HQ): Toshiba Aplio 500 device; handheld US (LQ): mSonics MU1, L10-5 transducer | Low-rank representation multi-pathway GAN | Adversarial + MSE + ultrasound-specific perceptual loss |
US: ultrasound, LQ: low-quality, HQ: high-quality, MSE: Mean Squared Error, CNN: convolutional neural network, GAN: Generative Adversarial Network, DW: diverging wave, PICMUS: Plane-wave Imaging Challenge in Medical UltraSound, CUBDL: Challenge on Ultrasound Beamforming with Deep Learning, NLLR: non-local low-rank, PW: plane wave.
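Several of the included studies combine a pixel-wise loss with an SSIM term (e.g., MSE + SSIM for LG-Unet in Guo et al. [35] and in the two-stage GAN of Zhou et al. [6]). Below is a rough PyTorch sketch of the general idea, using a simplified global (non-windowed) SSIM term and an illustrative weight `alpha`; the actual studies use their own windowed SSIM implementations and weightings, which are not all reported:

```python
import torch
import torch.nn.functional as F

def global_ssim(x: torch.Tensor, y: torch.Tensor, data_range: float = 1.0) -> torch.Tensor:
    """Simplified SSIM from global image statistics (no sliding window)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x = ((x - mu_x) ** 2).mean()
    var_y = ((y - mu_y) ** 2).mean()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )

def mse_ssim_loss(pred: torch.Tensor, target: torch.Tensor, alpha: float = 0.8) -> torch.Tensor:
    """Weighted sum of MSE and an SSIM penalty; alpha is an illustrative weight."""
    return alpha * F.mse_loss(pred, target) + (1.0 - alpha) * (1.0 - global_ssim(pred, target))
```

In practice a windowed SSIM (as in the metric itself) would normally be used during training; the global variant here only keeps the sketch short.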
Table 3. Outcomes of selected studies.
| Study | Computation time (source code availability) | Number of images in test set | Performance (±SD) of low-quality input image | Performance (±SD) of generated image |
|---|---|---|---|---|
| Awasthi et al., 2022 [32] | "Light weight" (available) | Phantom: dataset 1: n=134; dataset 2: n=90; dataset 3: n=31; dataset 4: n=70; dataset 5: n=239 | Phantom: dataset 1: PSNR=17.049±1.107, RMSE=0.141±0.016, PC=0.788; dataset 2: PSNR=15.768±1.376, RMSE=0.165±0.026; dataset 3: PSNR=13.885±1.276, RMSE=0.204±0.032; dataset 4: PSNR=16.297±1.212, RMSE=0.155±0.021; dataset 5: PSNR=15.487±1.876, RMSE=0.172±0.040 | Phantom: dataset 1: PSNR=20.903±1.189, RMSE=0.091±0.012, PC=0.86; dataset 2: PSNR=20.523±1.242, RMSE=0.095±0.013; dataset 3: PSNR=13.985±1.120, RMSE=0.201±0.025; dataset 4: PSNR=21.457±1.238, RMSE=0.085±0.012; dataset 5: PSNR=17.654±1.536, RMSE=0.133±0.022 |
| Gasse et al., 2017 [33] | Not reported (not available) | Mixed test set of in vivo and phantom data: n=1000 | Only graphs given, showing the CR and LR reached by the proposed model with 3 PWs compared to standard compounding of an increasingly larger number of PWs | - |
| Goudarzi et al., 2020 [34] | Not reported (available) | Phantom (CIRS): n not reported; Simulation: n=360 | Phantom: FWHM=1.52, CNR=9.6; Simulation: SSIM=0.622±0.02, PSNR=23.27±1, FWHM=1.3, CNR=7.2 | Phantom: FWHM=1.44, CNR=11.1; Simulation: SSIM=0.769±0.017, PSNR=25.32±0.919, FWHM=1.09, CNR=8.02 |
| Guo et al., 2020 [35] | Not reported (not available) | 225 (out of 9225) patch images from the in vivo, phantom, and simulation datasets (distribution between datasets not reported) | In vivo: PSNR=16.04; Phantom: FWHM=1.8 mm, CR=0.36, CNR=24.93 | In vivo: PSNR=18.94; Phantom: FWHM=1.3 mm, CR=0.79, CNR=32.81 |
| Huang et al., 2018 [36] | Not reported (not available) | Simulation: n=1 | Simulation: CNR=0.939, PICMUS CNR=2.381, FWHM=13.34 | Simulation: CNR=1.508, PICMUS CNR=6.502, FWHM=11.15 |
| Khan et al., 2021 [15] | 13.18 ms (not available) | In vivo: n=43; Phantom: n=32 | Not reported | Gain compared to simulated intermediate-quality images of in vivo and phantom data (measuring only the fitness of the super-resolution network): PSNR=13.58, SSIM=0.63. Non-reference metrics for the entire proposed method on in vivo and phantom data: CR=14.96, CNR=2.38, GCNR=0.8604 (21.77%, 30.06%, and 44.42% higher, respectively, than those of the low-quality input images) |
| Lu et al., 2020 [37] | 0.75±0.03 ms (not available) | Mixed in vivo and phantom data: n=1000 | Mixed in vivo and phantom data: PSNR=29.24±1.57, SSIM=0.83±0.15, MI=0.51±0.16. Non-reference metrics are shown only in graph form for low-quality images | Mixed in vivo and phantom data: PSNR=31.13±1.47, SSIM=0.93±0.06, MI=0.82±0.20; CR (near field)=19.54, CR (far field)=14.95; CNR (near field)=7.63, CNR (far field)=5.21; LR (near field)=0.90, LR (middle field)=1.64, LR (far field)=2.35 |
| Lyu et al., 2023 [38] | Not reported (not available) | In vivo: n=150; Phantom: n=150; Simulation: n=150 | No performance metrics available for low-quality images; metrics reported only for other traditional deep learning methods for comparison | In vivo: PSNR=26.508, CW-SSIM=0.876, NCC=0.943; Phantom: FWHM=0.424, CR=26.900, CNR=3.693; Simulation: FWHM=0.277, CR=39.472, CNR=5.141 |
| Moinuddin et al., 2022 [39] | Not reported | In vivo: n=33; Simulation: SOD-1: n=200, SOD-2: n=200. Evaluated with a 5-fold cross-validation approach | In vivo: PSNR=26.0071±2.3083, SSIM=0.7098±0.0761; Simulation: SOD-1: PSNR=12.1587±0.7839, SSIM=0.5570±0.1205; SOD-2: PSNR=12.5272±0.8243, SSIM=0.1556±0.1451, GCNR=0.9936±0.0039 | In vivo: PSNR=26.9112±2.3025, SSIM=0.7522±0.0635; Simulation: SOD-1: PSNR=25.5275±2.9712, SSIM=0.6946±0.1267; SOD-2: PSNR=32.4719±2.6179, SSIM=0.8785±0.0766, GCNR=0.9966±0.0026 |
| Monkam et al., 2023 [40] | 52.16 ms (not available) | In vivo: HC18: n=30; BUSI: n=30; CCA: n=30; Simulation: HC18: n=335 | No performance metrics available for low-quality images; metrics reported only for other enhancement methods for comparison | In vivo: HC18: SNR=39.32, CNR=1.10, AGM=27.46, ENL=15.71; BUSI: SNR=34.54, CNR=4.20, AGM=39.88, ENL=17.04; CCA: SNR=40.87, CNR=2.59, AGM=35.92, ENL=23.03; Simulation: HC18: SSIM=0.9155, PSNR=32.87, EPI=0.6371 |
| Tang et al., 2021 [41] | Not reported (not available) | n=360 (total number of images in the test set across the in vivo, phantom, and simulation datasets; distribution not reported) | Phantom: FWHM=0.5635, CR=8.718, CNR=1.109, GCNR=0.609; Simulation: FWHM=0.2808, CR=13.769, CNR=1.576, GCNR=0.735 | In vivo: PSNR=28.278, SSIM=0.659, MI=0.9980, NCC=0.963; Phantom: FWHM=0.3556, CR=24.571, CNR=2.495, GCNR=0.915; Simulation: FWHM=0.2695, CR=39.484, CNR=5.617, GCNR=0.998 |
| Zhang et al., 2018 [42] | Not reported (not available) | In vivo: n=500; Phantom: n=30 | Mixed in vivo and phantom test set: FWHM=0.50, CR=10.23, CNR=1.30 | Mixed in vivo and phantom test set: FWHM=0.53, CR=19.46, CNR=2.25 |
| Zhou et al., 2018 [43] | Not reported (not available) | In vivo: thyroid dataset: n=30; Simulation: point dataset: n=30, cyst dataset: n=30. Evaluated with a 5-fold cross-validation approach | In vivo: thyroid dataset: PSNR=14.9235, SSIM=0.0291, MI=0.3474; Simulation: point dataset: PSNR=24.1708, SSIM=0.1962, MI=0.4124, FWHM=0.49; cyst dataset: PSNR=15.8860, SSIM=0.5537, MI=1.1976, CR=137.0473 | In vivo: thyroid dataset: PSNR=21.7248, SSIM=0.3034, MI=0.8856; Simulation: point dataset: PSNR=36.5884, SSIM=0.9216, MI=0.4483, FWHM=0.196; cyst dataset: PSNR=24.0167, SSIM=0.6135, MI=1.5622, CR=184.0432 |
| Zhou et al., 2020 [6] | Not reported (not available) | In vivo: n=94; Phantom: n=40; Simulation: n=56. Evaluated with a 5-fold cross-validation approach | In vivo: PSNR=8.65±1.32, SSIM=0.18±0.04, MI=0.22±0.13, BRISQUE=38.91±4.99; Phantom: PSNR=15.26±2.91, SSIM=0.12±0.03, MI=0.20±0.11, BRISQUE=24.61±4.50; Simulation: PSNR=16.38±2.35, SSIM=0.19±0.06, MI=0.22±0.16, BRISQUE=29.08±3.45 | In vivo: PSNR=18.08±1.57, SSIM=0.41±0.05, MI=0.68±0.18, BRISQUE=35.25±4.13; Phantom: PSNR=24.70±1.11, SSIM=0.64±0.07, MI=0.26±0.09, BRISQUE=21.68±3.36; Simulation: PSNR=28.50±2.01, SSIM=0.59±0.02, MI=0.42±0.04, BRISQUE=23.30±3.09 |
| Zhou et al., 2021 [14] | Not reported (not available) | In vivo: n=40 videos. For full-reference metrics, a single frame from each handheld video was used and the most similar frame in the high-end video was selected | In vivo: PSNR=12.68±3.45, SSIM=0.24±0.06, MI=0.71±0.09, NIQE=19.48±4.66, ultrasound quality score=0.06±0.03 | In vivo: PSNR=19.95±3.24, SSIM=0.45±0.06, MI=1.05±0.07, NIQE=6.95±1.97, ultrasound quality score=0.89±0.16 |
AGM: Average gradient magnitudes, BRISQUE: Blind referenceless image spatial quality evaluator, CNR: Contrast-to-noise ratio, CR: Contrast ratio, ENL: Equivalent number of looks, EPI: Edge preservation index, FWHM: Full width at half maximum, GCNR: Generalized contrast-to-noise ratio, LR: Likelihood ratio, MI: Mutual information, MSE: Mean squared error, MS-SSIM: Multi scale structural similarity index measurement, NCC: Normalized cross-correlation, NIQE: Natural image quality evaluator, PC: Pearson correlation, PSNR: Peak signal-to-noise ratio, RMSE: Root mean squared error, SNR: Signal-to-noise ratio, SSIM: Structural similarity index measurement
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.