Preprint
Review

Artificial Intelligence in Cancer Imaging


This version is not peer-reviewed

Submitted: 03 January 2024
Posted: 04 January 2024

Abstract
This manuscript examines the role of artificial intelligence (AI) in cancer imaging. It highlights how AI can significantly cut the time patients and their clinicians wait for a cancer diagnosis, and how it can assist health care providers by drawing the interpreter's attention to the areas of an image that deserve closer scrutiny, improving the quality of result analysis. It also describes how AI attempts to standardize imaging results across providers by reducing inter- and intra-observer variation. Finally, it discusses the numerous limitations associated with AI use in cancer imaging, such as the need to digitize pathology laboratories and workflows and to adapt our century-old tissue processing methods to modern technological standards. In radiology, curating the enormous amount of data generated by MRI, CT, and PET scans poses a significant challenge to researchers' ability to train AI models to high levels of accuracy and reliability. In addition, over-reliance on AI can deprive clinicians of a common-sense approach to their patients' health care issues and negatively impact doctor-patient relationships and confidentiality.
Keywords: 
Subject: Medicine and Pharmacology - Oncology and Oncogenics

1. Introduction

With the recent deployment of generative artificial intelligence (AI) chatbots like ChatGPT in the public sphere, there is significant public interest in AI and how it impacts our daily lives. One area where this interest is most strongly felt is the medical community, especially medical imaging analysis, given its potential to transform the landscape of medical practice. [1] Artificial intelligence is the art of perceiving, synthesizing, and inferring information using machines, as opposed to the natural intelligence displayed by humans and other animals. [2] AI applications in medicine can be divided into virtual and physical components. Machine learning (ML) and deep learning (DL, a subset of ML) comprise the virtual element of AI. [11] ML algorithms can be classified into supervised, unsupervised, and reinforcement learning. In addition, a convolutional neural network (CNN) is a DL scheme employing a multilayer artificial neural network that is highly efficient for image classification (Figure 6). [12]
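A CNN's efficiency at image classification comes from composing simple local operations into many layers. The following minimal numpy sketch, with an illustrative toy image and edge-detector kernel of our own choosing, shows the three building blocks a CNN stacks: convolution, ReLU activation, and max pooling.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear activation: negative responses are zeroed."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling, discarding any ragged border."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# Toy 6x6 "image" with a dark-to-bright vertical edge at column 3,
# and a 3x2 kernel that responds to exactly that transition.
image = np.zeros((6, 6))
image[:, 3:] = 1.0
edge_kernel = np.array([[-1.0, 1.0]] * 3)

feature_map = relu(conv2d(image, edge_kernel))  # strong response along the edge
pooled = max_pool(feature_map)                  # downsampled feature map
print(pooled)
```

In a trained CNN, the kernel weights are learned from labeled images rather than hand-designed, and many such filters are stacked in depth; this sketch only makes the mechanics concrete.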
The physical branch of AI includes medical devices, robots, and nanorobots for targeted drug delivery. [13] Cancer imaging is a tool that can help physicians detect cancer, track its spread, guide treatment delivery, and monitor treatment outcomes. [3] The potential of AI to cut down the time involved in getting a cancer diagnosis, to standardize cancer image reporting (thereby decreasing inter- and intra-observer bias), and to find complex patterns and relationships between very different kinds of data sets is significant in cancer care. [4] This article aims to examine the role of AI in enhancing cancer care delivery through cancer imaging, along with its associated challenges and limitations.

2. Applications of AI in Cancer Imaging

Healthcare provision is increasing in complexity and volume across the globe, and large data volumes are generated daily at healthcare facilities. AI has proved an essential tool for making efficient use of these data in diagnosis and healthcare delivery. AI can elucidate complex patterns in cancer imaging and transform qualitative data into quantitative forms that are easy to replicate and grade. AI can also integrate complex data streams, such as radiology images, pathology images, and electronic health records, into one operational diagnostic system. This remarkable feat of AI technology sometimes transcends human abilities and makes it a valuable tool for diagnosticians. [29]
In this section, we examine AI applications in cancer imaging under three headings: 1) cancer screening and detection; 2) tumor characterization using whole slide imaging in pathology; and 3) cancer imaging and tumor treatment monitoring. [3] See Figure 1 below.
Figure 1. Artificial Intelligence Applications in Medical Imaging as Applied to Common Cancers. Artificial intelligence tools can be conceptualized to apply to 3 broad categories of image-based clinical tasks in oncology: 1) detection of abnormalities; 2) characterization of a suspected lesion by defining its shape or volume, histopathologic diagnosis, stage of disease, or molecular profile; and 3) determination of prognosis or response to treatment over time during monitoring. 2D indicates 2-dimensional; 3D, 3-dimensional; CNS, central nervous system. Note: From “Artificial intelligence in cancer imaging: Clinical challenges and applications” by Bi et al., 2019, American Cancer Society (ACS) Journals, (https://acsjournals.onlinelibrary.wiley.com/doi/full/10.3322/caac.21552). CC BY-NC.

2.1. AI and Radiologic Cancer Screening and Detection

Cancer screening is aimed at detecting cancer before symptoms emerge. [5] Several studies have successfully demonstrated AI's usefulness in improving the diagnostic value of imaging for certain cancers. Using computers to localize areas of interest in radiographs is called computer-aided detection (CADe). AI-powered CADe can quickly screen radiographs to help users avoid errors of omission. [30] CADe can highlight suspicious lesions on radiographs using pattern recognition, thus enhancing the ability of image readers to identify lesions they might have overlooked at first. Cancers initially missed on low-dose CT screening have subsequently been identified using CADe. [31] Other uses of CADe include reducing image interpretation time in magnetic resonance imaging (MRI) of brain metastases, [32] identifying microcalcifications in mammograms of early breast cancers, [33] and significantly improving radiologists' sensitivity to anomalies on radiographs. [34]
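A clinical CADe system relies on trained pattern recognition, but the core idea of highlighting regions for the reader can be illustrated with a simple statistical stand-in. The synthetic "radiograph", lesion placement, and outlier threshold below are assumptions for illustration only.

```python
import numpy as np

def flag_suspicious_pixels(img, z_thresh=4.0):
    """Flag pixels far brighter than the overall image statistics.

    This stand-in marks statistical outliers so the idea of 'highlighting
    areas for the reader to review' is concrete; it is not a trained model.
    """
    z = (img - img.mean()) / (img.std() + 1e-12)
    return z > z_thresh

# Synthetic 64x64 "radiograph": noisy background plus one small bright lesion.
rng = np.random.default_rng(0)
img = rng.normal(100.0, 5.0, size=(64, 64))
img[30:34, 40:44] += 60.0  # simulated lesion

mask = flag_suspicious_pixels(img)
rows, cols = np.nonzero(mask)
print("flagged region bounds:", rows.min(), rows.max(), cols.min(), cols.max())
```

A real system would draw the flagged region over the image for the radiologist, who then accepts or dismisses it, which is the concurrent-reading workflow described in the studies cited above.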
1)
Breast Cancer Imaging: The most diagnosed cancer among women in the United States is breast cancer. It also accounts for the second-highest number of cancer-related deaths. [54] The introduction of mammography for breast cancer screening has significantly improved early cancer detection and decreased overall morbidity and mortality. However, therapeutic response to breast cancer is highly variable and depends on the presence or absence of specific receptors on the tumor: the estrogen (ER), progesterone (PR), and human epidermal growth factor receptor 2 (HER2) receptors. Triple receptor-negative breast cancers are more difficult to identify on mammography because they lack the tumor's typical characteristics. [55] Consequently, triple-negative tumors are more likely to be detected later and carry a worse prognosis.
Conventional mammography uses X-rays to look for tumors or suspicious areas in the breasts. Digital mammography also uses X-rays, but the data is stored on a computer instead of a piece of film, enabling computer enhancement of digital mammograms. These computers can further screen digital mammograms and theoretically detect suspicious areas that human error might miss. Digital mammography comes in either 2D or 3D versions. [21] AI has improved radiologists' performance in reading breast cancer screening mammograms. Studies have shown that up to 30% to 40% of breast cancers can be missed during screening, and on average, only 10% of women recalled from screening for diagnostic workup are ultimately found to have cancer. [6,7] AI-based algorithms hold promise for improving the accuracy of digital mammography. Scientists can train AI on existing mammogram images, enabling it to identify cancerous abnormalities and distinguish them from benign findings. An example is MammoScreen, an AI tool that improves cancer detection in mammograms. This AI system can identify regions suspicious for breast cancer on 2-D digital mammograms and estimate their likelihood of malignancy. A study involving fourteen radiologists used a dataset of 240 2-D digital mammography images acquired between 2013 and 2016 containing a variety of abnormalities. (Figure 1)
Figure 1. Dataset selection flowchart [8]. Note: From “Improving Breast Cancer Detection Accuracy of Mammography with the Concurrent Use of an Artificial Intelligence Tool” by S. Pacile et al., 2020, Radiology: Artificial Intelligence, (https://pubs.rsna.org/doi/10.1148/ryai.2020190208#pane-pcw-references). CC BY-NC-ND.
Half of the dataset was read without AI and the other half with the help of AI during the first reading session, and vice versa during a second reading session separated from the first by a washout period. Results revealed that for a low likelihood of malignancy (< 2.5%), reading time was about the same in the first session and slightly decreased in the second. However, for a higher likelihood of malignancy, reading time on average increased with the use of AI. The AI system also significantly improved the cancer detection rate and reduced the false-positive rate for each reader, as shown below (Figure 2). In one instance, nine of the 14 radiologists detected an invasive ductal carcinoma when reading the case with the AI tool, in contrast to only three radiologists who saw the cancer in the unaided reading condition. This study demonstrated that concurrent use of this AI tool improved the diagnostic performance of radiologists in detecting breast cancer without prolonging their workflow. [8] The U.S. Food and Drug Administration cleared MammoScreen for clinical use in 2020. [9]
Figure 2. A, Cancer detection rate and percentage improvement brought by the use of the artificial intelligence (AI) system and, B, false-positive rate and percentage decrease as a result of the use of AI. Green bars indicate the percentage improvement brought by the help of AI, thus an increase in cancer detection rate and a decrease in the false-positive rate. Similarly, red bars indicate a deterioration of performances, thus a, A, decrease in cancer detection rate, B, and an increase in false-positive rate. Note: From “Improving Breast Cancer Detection Accuracy of Mammography with the Concurrent Use of an Artificial Intelligence Tool” by S. Pacile et al., 2020, Radiology: Artificial Intelligence, (https://pubs.rsna.org/doi/10.1148/ryai.2020190208#pane-pcw-references). CC BY-NC-ND.
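The two per-reader measures used in such reader studies, cancer detection rate (sensitivity) and false-positive rate, can be computed directly from reading decisions against ground truth. The sketch below uses hypothetical decisions for a single reader; the counts are illustrative and are not the study's data.

```python
def reader_metrics(decisions, truth):
    """Cancer detection rate (sensitivity) and false-positive rate for one reader.

    decisions/truth are parallel lists of 0/1: 1 = 'recall / cancer', 0 = 'normal'.
    """
    tp = sum(1 for d, t in zip(decisions, truth) if d and t)
    fp = sum(1 for d, t in zip(decisions, truth) if d and not t)
    pos = sum(truth)
    neg = len(truth) - pos
    return tp / pos, fp / neg

# Hypothetical reader on 10 cases (4 cancers), unaided vs. AI-aided.
truth   = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
unaided = [1, 0, 1, 0, 1, 0, 1, 0, 0, 0]  # finds 2/4 cancers, 2 false positives
with_ai = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]  # finds 3/4 cancers, 1 false positive

print(reader_metrics(unaided, truth))
print(reader_metrics(with_ai, truth))
```

Comparing these two pairs per reader, before and after AI assistance, is exactly what the green and red bars in Figure 2 visualize.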
Further advances in AI for use in risk assessment, detection, diagnosis, prognosis and therapeutic response in breast cancer imaging are detailed in Table 2 below.
Table 2. Summary of Key Studies on Imaging Characterization of Breast Lesions, Including Detection, Diagnosis, Biologic Characterization, and Predicting Prognosis and Treatment Response.
Abbreviations: 2D, 2-dimensional; 3D, 3-dimensional; ACC, accuracy; ACRIN, American College of Radiology Imaging Network; AUC, area under the curve; CNN, convolutional neural networks; DCE-MRI, dynamic contrast-enhanced magnetic resonance imaging; DCIS, ductal carcinoma in situ; FFDM, full-field digital mammography; HR, hazard ratio; IDC, invasive ductal carcinoma; Sn, sensitivity; Sp, specificity; SVM, support vector machine; TCGA, The Cancer Genome Atlas; t-SNE, t-distributed stochastic neighbor embedding. Note: From “Artificial intelligence in cancer imaging: Clinical challenges and applications” by Bi et al., 2019, American Cancer Society (ACS) Journals (https://acsjournals.onlinelibrary.wiley.com/doi/full/10.3322/caac.21552). CC BY-NC.
2)
Cervical cancer screening: Researchers at the Karolinska Institute in Sweden detected precursors to cervical cancer in women in resource-limited settings using artificial intelligence and mobile digital microscopy. [10] In this diagnostic study, cervical smears were collected from 740 HIV-positive women aged between 18 and 64. The smears were digitized with a portable slide scanner, uploaded to a cloud server over mobile networks, and used to train and validate a deep learning system (DLS) to detect atypical cervical cells. (Figure 3) Sensitivity for detection of atypia was high (96%-100%), with higher specificity for high-grade lesions (93%-99%) than for low-grade lesions (82%-86%), and no slide manually classified as high grade was incorrectly classified as negative.
3)
Colorectal cancer (CRC) Screening: Colorectal cancer is the third most common malignancy in men and women. [14] Early-stage detection of CRC may improve patients’ clinical outcomes by avoiding treatment delays and reducing morbidity and mortality. [15]
i. Virtual colonoscopy or computed tomographic colonography (CTC) is a modified computed tomography (CT) examination that presently serves as an alternative screening tool to conventional colonoscopy for CRC patients, especially moderate-risk patients. It was first described in 1994 by Vining et al. [16] Computer-aided AI-based algorithms can achieve optimal diagnostic performance and image quality in CTC. Song et al. [17] conducted a study to differentiate colon lesions according to underlying pathology, e.g., neoplastic and non-neoplastic lesions. Employing the Haralick texture analysis method and a virtual pathological model, they explored the utility of texture features from high-order differentiations, such as the gradient and curvature, of the image intensity distribution. Results revealed that the AUC of classification improved from 0.74 (using image intensity alone) to 0.85 in differentiating neoplastic from non-neoplastic lesions, demonstrating that texture features from higher-order images can significantly improve classification accuracy in the pathological differentiation of colorectal lesions. [18] AI may also assist in automatically detecting flat neoplastic lesions, thus reducing interval cancer risk. These flat colorectal adenomas may demonstrate aggressive tumorigenesis and can be a determining factor in increased adenoma miss rates (AMRs). [20] A computer-aided detection (CADe) model was developed by Taylor et al. to examine the diagnostic capability of CTC for flat early-stage CRC (T1). [19] The CADe system, applied at three settings of sphericity, showed an inverse correlation between adenoma detection sensitivity and sphericity (83.3%, 70.8%, and 54.1% at sphericity of 0, 0.75, and 1, respectively) while also revealing a direct correlation between accuracy and sphericity, indicating that novel applications of computer-aided systems through CTC may effectively detect even flat CRC.
ii. Endocytoscopy is an emerging endoscopic imaging modality that allows in vivo microscopic imaging and real-time diagnosis of cellular structures at exceptionally high magnification: up to 400-fold in endoscope-based and up to 1400-fold in probe-based endocytoscopy during ongoing colonoscopy. [22] Takeda et al. [23] evaluated a computer-aided diagnosis system using ultra-high (approximately ×400) magnification endocytoscopy (EC-CAD) to distinguish invasive colorectal cancers from less-aggressive lesions. Their model achieved high-confidence diagnosis with sensitivity, specificity, accuracy, PPV, and NPV of 98.1%, 100%, 99.3%, 100%, and 98.8%, respectively. These results suggest that EC-CAD may help diagnose invasive colorectal cancer.
iii. Confocal laser endomicroscopy (CLE) is a microscopic imaging modality that enables in vivo observation of cellular and subcellular structures (up to 250 μm in depth) at 1000-fold magnification. [24] Using fractal analysis and neural network models of CLE-generated colon mucosa images, Ştefănescu et al. developed an automatic diagnosis algorithm for colorectal cancer with an accuracy of 84.5% in differentiating advanced colorectal adenocarcinomas from normal intestinal mucosa. However, they recommended further assessment of their method in randomized controlled trials. (Figure 5) [25]
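The five measures quoted for studies like Takeda et al.'s (sensitivity, specificity, accuracy, PPV, NPV) all derive from a 2x2 confusion table. The sketch below uses hypothetical counts chosen only to approximate figures of that magnitude; they are not the study's data.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic accuracy measures from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),          # true positives among diseased
        "specificity": tn / (tn + fp),          # true negatives among non-diseased
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "ppv":         tp / (tp + fp),          # positive predictive value
        "npv":         tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts: 53 invasive cancers (52 called positive),
# 100 less-aggressive lesions (all called negative).
m = diagnostic_metrics(tp=52, fp=0, tn=100, fn=1)
for name, value in m.items():
    print(f"{name}: {value:.3f}")
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on disease prevalence in the tested population, which is why diagnostic studies report all five.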
4)
Lung cancer screening and detection
The leading cause of cancer-related mortality among men and women in the United States is lung cancer. [35] In addition, the 5-year survival rate for lung cancer is low due to late detection: about 70% of lung cancers are discovered in their late stages, when they have become difficult to treat. [26]
Current AI systems integrated into CT scanning have enabled improvements in cancer detection. These AI systems use deep learning (DL) to learn what a tumor is from real-world examples. They are typically trained on thousands of CT scans of the lungs of patients with and without cancer, enabling the machines to learn what a cancerous nodule looks like. So far, these AI systems have proven accurate compared with non-AI systems and have improved physicians' clinical decision-making capabilities. [26] If discovered early, lung cancer is curable, overtreatment can be prevented, patients' quality of life can be significantly improved, and more lives can be saved. [29]
AI can also enhance the staging and treatment selection for lung cancer, as detailed in Table 1 below.
Table 1.  
Abbreviations: ACC, accuracy; ACRIN, American College of Radiology Imaging Network; ALK+, anaplastic lymphoma kinase positive; AUC, area under curve; CANARY, Computer-Aided Nodule Assessment and Risk Yield; CI, concordance index; CT, computed tomography; EGFR+/EGFR−, epidermal growth factor receptor positive/negative; HR, hazard ratio; KRAS, KRAS proto-oncogene, guanosine-triphosphatase; NSCLC, non-small cell lung cancer; Sn, sensitivity; Sp, specificity; SVM, support vector machine. Note: From “Artificial intelligence in cancer imaging: Clinical challenges and applications” by Bi et al., 2019, American Cancer Society (ACS) Journals (https://acsjournals.onlinelibrary.wiley.com/doi/full/10.3322/caac.21552). CC BY-NC.
i) AI and Lung cancer screening:
For a long time, effective lung cancer screening was not feasible; however, significant advancements have been made in recent times. In 2021, the United States Preventive Services Task Force (USPSTF) recommended annual screening for lung cancer using low-dose computed tomography (LDCT) in adults aged 50 to 80 years with a significant smoking history. [28] LDCT screening for lung cancer reduced mortality in high-risk patients by about 20%, according to the National Lung Screening Trial (NLST). [36] In subsequent sections, we analyze the benefits of LDCT screening for lung cancer, its numerous limitations, and possible future improvements. [36]
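The "significant smoking history" in the 2021 USPSTF recommendation is quantified as at least 20 pack-years in a person who currently smokes or quit within the past 15 years. A minimal sketch of that eligibility logic follows; the function and parameter names are our own, not part of any guideline.

```python
def pack_years(cigs_per_day, years_smoked):
    """One pack-year = one pack (20 cigarettes) per day for one year."""
    return (cigs_per_day / 20.0) * years_smoked

def ldct_eligible(age, pack_yrs, years_since_quit=0):
    """LDCT screening eligibility per the 2021 USPSTF criteria: age 50-80,
    at least 20 pack-years, and currently smoking (years_since_quit=0)
    or having quit within the past 15 years."""
    return 50 <= age <= 80 and pack_yrs >= 20 and years_since_quit <= 15

# A 62-year-old with a 30 pack-year history who still smokes is eligible;
# the same history with a 20-year-old quit date is not.
print(ldct_eligible(62, pack_years(20, 30)))
print(ldct_eligible(62, pack_years(20, 30), years_since_quit=20))
```

Encoding the rule this way makes clear that eligibility is a coarse risk filter; the AI work discussed below aims at individualized risk prediction beyond such thresholds.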
LDCT screening for lung cancer regularly identifies numerous pulmonary nodules, some of which are subsequently diagnosed as cancer. See Figure 6 below. According to the NLST, most of the pulmonary nodules discovered in LDCT screens, up to 96.4%, were benign. A systematic algorithm that can classify these nodules into benign and malignant lesions is yet to be developed. LDCT screens can also pick up incidental indolent cancers that would not become life-threatening if left untreated, thus exposing patients to the risks of cancer chemotherapy and its accompanying toxicity. As such, physicians must be aware of such potential overdiagnosis and make conscious efforts to reduce it. [36] They can best avoid these scenarios by following clinical guidelines for pulmonary nodule assessment. [37] These clinical guidelines, however, cannot discriminate between benign and malignant lesions, nor can they successfully predict the future cancer risk of affected patients. Research in AI is currently geared towards identifying biomarkers that can accurately differentiate benign from malignant lesions to mitigate false-positive imaging results. Such advances would allow a more quantitative prediction of the risk and incidence of lung cancer and improve clinicians' decision-making guidelines.
Figure 6. Clinical Applications of Artificial Intelligence in Lung Cancer Screening on Detection of Incidental Pulmonary Nodules. Imaging analysis shows promise in predicting the risk of developing lung cancer on initial detection of an incidental lung nodule and in distinguishing indolent from aggressive lung neoplasms. PFS indicates progression-free survival; ROC, receiver operating characteristic. Note: From “Artificial intelligence in cancer imaging: Clinical challenges and applications” by Bi et al., 2019, American Cancer Society (ACS) Journals (https://acsjournals.onlinelibrary.wiley.com/doi/full/10.3322/caac.21552). CC BY-NC.
The American College of Radiology Lung CT Screening Reporting and Data System (Lung-RADS) and the Fleischner Society recommend that physicians follow up with their patients for 3 to 13 months following the incidental detection of pulmonary nodules before resorting to more invasive tests like biopsy. [38,39]
The National Cancer Institute (NCI), in 2017, provided a community of AI teams with thousands of annotated CT images from its cancer imaging archive. These teams then used convolutional neural networks (CNNs) to diagnose lesions. The competition's winning team recorded high performance (a log loss of 0.3999; a perfect model would report a log loss of 0). This achievement was not without flaws, however: the input data had a cancer prevalence of 50%, compared with the roughly 4% prevalence found in the screening population with lung nodules. [40] Thus, the model still needs to be trained, evaluated, and validated in a clinical setting.
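Log loss, and the reason the 50% versus 4% prevalence gap matters, can be reproduced in a few lines. The labels below are synthetic, chosen only to mirror the two prevalences mentioned above.

```python
import math

def log_loss(y_true, y_prob, eps=1e-15):
    """Mean binary cross-entropy; 0 only when every case is assigned
    probability 1 for its true class."""
    total = 0.0
    for y, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

# A perfect model scores 0, while a model that always predicts the base rate
# scores the entropy of the label distribution, which differs sharply between
# a 50%-prevalence challenge set and a 4%-prevalence screening population.
labels_50 = [1] * 50 + [0] * 50
labels_4  = [1] * 4  + [0] * 96

perfect  = log_loss(labels_50, [float(y) for y in labels_50])
naive_50 = log_loss(labels_50, [0.5] * 100)
naive_4  = log_loss(labels_4,  [0.04] * 100)
print(perfect, naive_50, naive_4)
```

The naive baseline on the 4%-prevalence set already scores far below the baseline on the balanced set, so a log loss achieved on a balanced challenge dataset cannot be compared directly with performance in a screening population.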
Ardila et al. proposed a deep learning algorithm that uses a patient's current and prior CT volumes to predict the risk of lung cancer. Their model achieved an area under the curve (AUC) of 94.4% on 6,716 National Lung Screening Trial cases and performed similarly on an independent clinical validation set of over a thousand cases. They further carried out two reader studies. In the first, their model outperformed six radiologists when prior CT scans were excluded, with absolute reductions of 11% in false positives and 5% in false negatives; however, the model was on par with the same radiologists when prior CT scans were included. Overall, their study demonstrated the potential for deep learning models to improve the accuracy, consistency, and adoption of lung cancer screening across the globe. [27]
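The AUC reported in such studies has a useful rank interpretation: the probability that a randomly chosen cancer case receives a higher model score than a randomly chosen non-cancer case. The sketch below computes it directly from that definition; the scores are hypothetical, not data from any study.

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U formulation:
    the fraction of (positive, negative) pairs the model ranks correctly,
    with ties counted as one half."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical malignancy scores for 4 cancer and 5 benign nodules.
pos = [0.9, 0.8, 0.7, 0.4]
neg = [0.6, 0.5, 0.3, 0.2, 0.1]
print(auc(pos, neg))
```

This O(n*m) pairwise form is fine for illustration; production code ranks the pooled scores once instead.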
Future AI research aims to use machine learning and deep learning to develop clinical approaches to indeterminate lung nodules that can predict future cancer incidence, differentiate benign from malignant nodules, and distinguish indolent tumors from biologically aggressive ones. [29]
ii) Lung cancer characterization using imaging
Lung cancer is sometimes regarded as a “moving target” due to its dynamic nature. It is constantly evolving, modifying its genomic and phenotypic properties, and diversifying, making therapy difficult for the oncologist, who must chase a constantly changing disease. [41]
Multiple attempts have been made to identify image-based biomarkers that can noninvasively capture the radiographic features underlying a tumor's pathophysiology. Measured properties include tumor size, e.g., the longest diameter of the tumor, which can be used to stage the tumor and assess its response to treatment. This method, however, has limitations, one of which is its marked variation across clinical outcomes and treatment responses. Nevertheless, researchers have successfully predicted lung cancer outcomes in patients using tumors' semantic and automatic radiomic features. [42,43,44,45,46] Preliminary studies in this regard include using the Computer-Aided Nodule Assessment and Risk Yield (CANARY) tool to carry out semantic-based risk stratification of specific subsets of lung adenocarcinoma. Using this method, AI can automatically quantify the radiographic characteristics of a tumor and even offer prognostic insights into various cancers, such as lung cancer (P < 3.53 × 10⁻⁶). [47] Information generated by this tool can also be used to determine distant metastasis in lung adenocarcinoma, [48] tumor histologic subtypes, [46] biological patterns of tumors such as somatic mutations, [49] and gene expression profiles. [50]
In addition, cancer imaging can quantify the intratumor characteristics of lung cancer, otherwise known as intratumor heterogeneity (ITH). [51] Unlike tissue-based biomarkers, AI-guided cancer imaging is available to the clinician in real time, does not require time-consuming laboratory assays, and is non-invasive. Imaging can also present tumors in three dimensions (3D), not just the portion of the tumor biopsied for further testing. [52]
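The simplest of the size features mentioned above, the longest diameter, can be sketched for a binary tumor mask as the maximum distance between any two foreground pixels. The mask and pixel spacing below are illustrative, and the brute-force pairwise computation suits only small masks.

```python
import numpy as np

def longest_diameter(mask, spacing=1.0):
    """Longest in-plane diameter of a binary tumor mask: the maximum distance
    between any two foreground pixel centers, scaled by pixel spacing (mm)."""
    pts = np.argwhere(mask).astype(float)
    # Pairwise distances via broadcasting; O(n^2) memory, fine for small masks.
    diff = pts[:, None, :] - pts[None, :, :]
    return float(np.sqrt((diff ** 2).sum(-1)).max()) * spacing

# A 3x5 rectangular "tumor" on a 0.5 mm pixel grid.
mask = np.zeros((10, 10), dtype=bool)
mask[2:5, 1:6] = True
d = longest_diameter(mask, spacing=0.5)
print(f"longest diameter: {d:.2f} mm")
```

Radiomic pipelines compute dozens of such shape, intensity, and texture features per lesion and feed them to the kinds of models discussed above; this single feature only illustrates the idea.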
5)
AI and Prostate cancer screening and detection:
Aside from skin cancers, the most common cancer in men in the United States (US) is prostate cancer. It is also the second-leading cause of cancer-related deaths in US men. [54] Fortunately, mortality from prostate cancer is low relative to other cancers. [56] Several challenges confront the diagnosis and management of prostate cancer, including: A) the inability to predict whether a cancer detected on screening will become aggressive, which leads to overdiagnosis and overtreatment; and B) poor sampling of prostate tissue biopsies, which leads to misdiagnosis and progression of missed prostate cancers. In some studies, overdiagnosis of indolent prostate cancers reaches 67%, with an increased risk of treatment-related morbidity in these patients. [57] This has led to the development of specific classification systems for prostate cancer, such as the Gleason grading system, aimed at differentiating indolent from aggressive cancers. Prostate cancers with a Gleason score of 7 or above or a pathologic volume greater than 0.5 ml are treated as aggressive, while lower-grade, smaller-volume cancers are considered indolent. Patients with aggressive cancers undergo active treatment, while patients with indolent cancers are selected for active surveillance.
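The Gleason/volume rule stated above can be expressed as a simple triage function. This sketch is illustrative only; real treatment decisions weigh PSA, stage, comorbidity, and patient preference, none of which appear here.

```python
def triage(gleason_score, tumor_volume_ml):
    """Classify a prostate cancer per the rule stated above: aggressive when
    the Gleason score is 7 or above or the pathologic volume exceeds 0.5 ml;
    otherwise a candidate for active surveillance."""
    if gleason_score >= 7 or tumor_volume_ml > 0.5:
        return "aggressive"
    return "indolent"

print(triage(6, 0.3))  # low grade, small volume -> active surveillance
print(triage(7, 0.3))  # grade alone triggers the aggressive pathway
print(triage(6, 0.8))  # volume alone triggers it as well
```

Writing the rule out makes its limitation obvious: it is a hard threshold on two numbers, which is precisely the coarseness the AI-based risk models discussed below try to improve on.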
Prostate cancer is biologically heterogeneous, complicating diagnosis, treatment, and prognosis. As a result, genomic profiling and multiple guidelines have been developed to address this issue. In addition to genomic profiling, AI research in prostate cancer aims to help clinicians detect, localize, characterize, stage, and monitor the disease. MRI and ultrasound techniques are increasingly being utilized to detect aggressive prostate cancers with promising results. These techniques rely on supervised machine learning, deep learning, and computational methods. [29]
Using multiparametric magnetic resonance imaging (mpMRI), soft-tissue contrast can help detect and localize clinically suspicious prostate lesions. Data derived from mpMRI includes tissue anatomy, characteristics, and function. Consequently, this technology is well-equipped to detect potentially aggressive prostate cancers. MpMRI has also shown improvements in targeted biopsy sampling of prostate cancers. According to a study in the United Kingdom, mpMRI significantly decreased the overdiagnosis of prostate cancer. In addition, the study also demonstrated a decrease in unnecessary prostate biopsies by a quarter when mpMRI was employed as a triaging tool in prostate cancers. [58] In another randomized trial of 500 patients, the use of mpMRI prostate screening before biopsy demonstrated a significant increase in the detection of aggressive prostate tumors compared to the current standard of care (38% vs. 26%). [59]
In recent years, AI models in the form of CADe and CADx systems, when integrated with the Prostate Imaging Reporting and Data System (PI-RADS), have increased prostate cancer diagnostic accuracy and reduced interobserver disagreement among radiologists. [60,61,62,63] In addition, recent advances in deep learning networks such as CNNs have revolutionized investigative research into prostate cancer imaging. For instance, CNN architectures used to train deep networks for prostate cancer have achieved significant results. CNNs have been used to classify MRI findings by utilizing auto-windowing mechanisms; some researchers used this method to overcome difficulties in MRI interpretation, such as the high dynamic range and low-contrast edges associated with high-contrast imagery. [64] Other investigators have stacked mpMRI images as the 2D channels of red-green-blue (RGB) images for training purposes. [65,66] Prostate cancers can also be simultaneously localized and classified using deep learning systems. [67] Anatomic features added to the last layers of CNNs significantly improved their performance. [64] Additionally, integrating radiofrequency ultrasound data with AI techniques for prostate cancer classification has yielded promising results. [68,69,70]
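The channel-stacking approach mentioned above, treating three co-registered mpMRI sequences as the channels of one RGB image, can be sketched in a few lines. The sequence names (T2-weighted, ADC, DWI) and the min-max scaling are assumptions for illustration; published pipelines vary in which sequences and normalizations they use.

```python
import numpy as np

def stack_mpmri(t2w, adc, dwi):
    """Stack three co-registered mpMRI sequences into one HxWx3 'RGB' array,
    min-max scaling each channel to [0, 1] so a 2-D CNN built for color
    photographs can consume them unchanged."""
    def scale(x):
        x = x.astype(float)
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)
    return np.stack([scale(t2w), scale(adc), scale(dwi)], axis=-1)

# Hypothetical 128x128 sequences with scanner-style 12-bit intensity ranges.
rng = np.random.default_rng(1)
t2w, adc, dwi = (rng.integers(0, 4096, size=(128, 128)) for _ in range(3))
rgb = stack_mpmri(t2w, adc, dwi)
print(rgb.shape)
```

The appeal of this trick is practical: it lets pretrained three-channel image classifiers be fine-tuned on mpMRI without architectural changes, at the cost of fixing the number of sequences at three.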
Imaging of CNS Tumors:
Central nervous system (CNS) pathologies are diverse. CNS parenchymal cancers consist mainly of gliomas and metastases from systemic cancers. Tumors of non-neural tissue, such as pituitary adenomas, schwannomas, and meningiomas, also make up part of the CNS tumor spectrum. These varied tumors can pose diagnostic challenges to clinicians during imaging. Significant challenges encountered during CNS imaging include the following:
A) Ensuring that tumor diagnosis is accurate enough to optimize clinical decisions.
B) Distinguishing signal characteristics of surrounding neural tissue from those of the primary tumor throughout the clinical surveillance period.
C) Mapping the genotypes of tumors based on their phenotypic manifestations during imaging.
Traditionally, CNS tumors are diagnosed via imaging, and the decision whether to treat a patient is made using a set of clinical and radiologic criteria. A definitive histopathologic diagnosis is made after biopsy, and prognosis is determined over time through scheduled follow-up visits. AI aims to extrapolate pathologic and genomic data from imaging, using computational image analysis and shared network algorithms to provide clinicians with enough data upfront to make the best clinical decisions for their patients. [29]
Table 3 below summarizes the role of artificial intelligence in the imaging of CNS tumors:
Table 3.  
Abbreviations: ACC, accuracy; ACRIN, American College of Radiology Imaging Network; ALK+, anaplastic lymphoma kinase positive; AUC, area under curve; CANARY, Computer-Aided Nodule Assessment and Risk Yield; CI, concordance index; CT, computed tomography; EGFR+/EGFR−, epidermal growth factor receptor positive/negative; HR, hazard ratio; KRAS, KRAS proto-oncogene, guanosine-triphosphatase; NSCLC, non-small cell lung cancer; Sn, sensitivity; Sp, specificity; SVM, support vector machine. Note: From “Artificial intelligence in cancer imaging: Clinical challenges and applications” by Bi et al., 2019, American Cancer Society (ACS) Journals (https://acsjournals.onlinelibrary.wiley.com/doi/full/10.3322/caac.21552). CC BY-NC.
B) AI and Histopathologic Diagnosis and Staging:
Pathology involves the diagnosis of diseases. Anatomic pathologists examine body tissues that have been processed and fixed on glass slides, usually through a microscope. [71] Pathologists across the globe have historically relied on glass slides to render a diagnosis. However, preparing tissues for glass slide examination and diagnostic reporting by a pathologist is time-consuming. Even more time-consuming is transporting the slides for second opinions from subspecialty pathologists, which impedes excellent and timely health care delivery to patients. [72] Fortunately, recent advancements in whole slide imaging (WSI) technology seek to reverse this trend. [73] WSI involves scanning entire pathology glass slides to produce digital images of diagnostic quality. [74,75] Whole slide imaging scanners produce these images using specialized high-resolution cameras combined with optics and relevant computer software. [76] The digitized images can then be viewed remotely and easily shared among institutions for second opinions. [77] AI can integrate these digitized images into deep learning algorithms capable of identifying patterns, features, and shapes on WSI slides, all geared toward improving the pathologist's diagnostic workflow and accuracy. [74] Multiple studies have demonstrated that diagnoses rendered from digital images are not significantly different from those made with conventional microscopes and glass slides. [73,78,79,80,81]
In a 2011 study by Beck et al., computer algorithms trained on standard anatomic pathology glass slides of breast cancer correctly predicted the likelihood of certain patients’ breast cancer progressing to more severe disease. The study also generated an image-based risk score for use in breast cancer prognosis, decreasing the need for expensive and time-consuming molecular assays. [82] In 2019, Nagpal et al. developed a two-stage deep learning system (DLS) to carry out Gleason scoring and quantitation on prostatectomy specimens. The first stage was a CNN-based regional Gleason pattern classifier; the second stage was trained on 1,159 slide-level pathologist classifications. On a validation set, the DLS achieved a significantly higher diagnostic accuracy (0.70) than the 0.61 achieved by 29 select pathologists (p = 0.002), and it provided better patient risk stratification in correlation with clinical follow-up data. [83] For the Gleason grading system, the DLS achieved an AUC of 0.95 - 0.96. In addition, for Gleason grades ≥ 4, the DLS showed greater sensitivity and specificity than 9 out of 10 pathologists.
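The two-stage idea, region-level pattern classification followed by slide-level aggregation, can be sketched for the aggregation step. This is a deliberately simplified illustration of how per-patch Gleason pattern predictions might be combined into a slide-level score; the actual DLS of Nagpal et al. uses a trained second-stage model, not this rule:

```python
from collections import Counter

def slide_gleason_score(patch_patterns):
    # patch_patterns: list of per-patch Gleason pattern predictions
    # (3, 4, or 5), e.g. output by a stage-one CNN classifier.
    # Simplified aggregation: the primary pattern is the most prevalent,
    # the secondary is the next most prevalent (or the primary again if
    # only one pattern is present). Real grading has additional rules
    # (e.g. handling of minor high-grade components) that this ignores.
    counts = Counter(patch_patterns)
    ranked = [pattern for pattern, _ in counts.most_common()]
    primary = ranked[0]
    secondary = ranked[1] if len(ranked) > 1 else primary
    return primary + secondary

print(slide_gleason_score([3, 3, 4, 3, 4]))  # primary 3 + secondary 4 = 7
```

Replacing such hand-written aggregation rules with a learned second stage is precisely what allowed the DLS to calibrate itself against slide-level pathologist consensus.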
Digital images and AI technologies enable the storage of large image data sets and the retrieval of images that are not annotated or indexed. Hegde et al. (2019) described SMILY (similar image search for histopathology), an AI algorithm developed by Google that searches a database of unlabeled images to find similar ones. [84]
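At its core, a similarity-search system of this kind maps each image to an embedding vector and ranks the database by similarity to the query's embedding. SMILY's actual embedding network is trained on large image collections; the sketch below substitutes random vectors and plain cosine similarity purely to illustrate the retrieval step, and every name and dimension in it is illustrative:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar(query, database, k=3):
    # database: dict mapping image id -> embedding vector.
    # Returns the k image ids whose embeddings are most similar
    # to the query embedding.
    scores = {img_id: cosine_similarity(query, emb)
              for img_id, emb in database.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy example: five random "patch embeddings"; the query is a slightly
# perturbed copy of one of them, so that patch should rank first.
rng = np.random.default_rng(1)
db = {f"patch_{i}": rng.normal(size=64) for i in range(5)}
query = db["patch_2"] + 0.01 * rng.normal(size=64)
print(most_similar(query, db, k=2))
```

Because no labels are involved at query time, this design is what lets such a system search archives of unannotated, unindexed slides.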
C) Limitations and Future Prospects of AI in Cancer Imaging
Despite the numerous successes recorded thus far, several limitations remain for AI in cancer imaging. Regarding AI applications in the histopathologic diagnosis of cancers, much remains to be done: pathology laboratories need to be digitized, pathologists’ workflows changed, and WSI scanners acquired. Fundamental changes in how tissues are processed are needed to implement a digitized laboratory workflow, computer-assisted diagnosis, and automated image analysis. [71,85,86]
In radiology, the difficulty of curating the large volumes of data generated from CT scans and MRI remains an obstacle to researchers’ hopes of developing automated clinical solutions. Deep neural networks and other AI mechanisms are data-hungry and rely heavily on training with curated data sets. Curating such enormous data requires highly trained specialists, which drives up the overall cost of the process significantly. [29] Alternatives to curated data sets include unsupervised [87] and self-supervised [88] AI methods and synthetic data [89]. Yet another limitation is the lack of consensus on specific data sets that can be used for standardized benchmarking in cancer imaging. [90]
In addition, the withholding of relevant data sets from AI scientists by institutional, professional, and government groups due to technical, legal, or ethical concerns remains a challenge to overcome. [91] Nevertheless, some progress has been recorded in this regard, one example being the National Institutes of Health’s (NIH) recent sharing of chest x-ray and CT repositories with AI scientists for research purposes. [92]
In the world of ethics, some algorithmic designs may be unethical [93] and, as such, compromise holistic healthcare delivery in favor of profit-making. Heavy reliance on AI-based solutions may encourage physicians to abandon common sense in medicine and negatively impact patient-doctor relationships and confidentiality. [94] Imaging may also detect clinically insignificant incidental findings that are poorly interpreted, leading to numerous unwarranted tests and treatments that increase morbidity and decrease a patient’s quality of life. As a result, much work remains to be done to allow AI-based classification of incidental findings along a spectrum from indolent to potentially aggressive lesions. Looking ahead, the ability of AI to supplant parts of clinicians’ workflows will depend on significant improvements in AI methodologies that achieve comparable or superior efficacy relative to human experts. [29]
Finally, in a world with limited access to expert clinicians, AI may serve as the chief consultant to physicians in interpreting disease from imaging. This would improve healthcare delivery and efficiency, reduce the overall cost of healthcare, and open possibilities in disease detection and management not previously conceived. [29]
Conflicts of Interest

The authors declare no conflicts of interest.

Funding

This research received no external funding.

References

  1. What is generative AI? (n.d.). McKinsey & Company. Available online: https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-generative-ai.
  2. Wikipedia contributors. Artificial intelligence. Wikipedia. Available online: https://en.wikipedia.org/wiki/Artificial_intelligence (accessed on 20 February 2023).
  3. Cancer Imaging Program (CIP). (n.d.). Available online: https://imaging.cancer.gov/imaging_basics/cancer_imaging/uses_of_imaging.htm.
  4. Can Artificial Intelligence Help See Cancer in New Ways? National Cancer Institute. Available online: https://www.cancer.gov/news-events/cancer-currents-blog/2022/artificial-intelligence-cancer-imaging (accessed on 22 March 2022).
  5. Cancer Screening Overview (PDQ®)–Patient Version. National Cancer Institute. Available online: https://www.cancer.gov/about-cancer/screening/patient-screening-overview-pdq (accessed on 19 August 2020).
  6. X, S. AI tool improves breast cancer detection on mammography. Available online: https://medicalxpress.com/news/2020-11-ai-tool-breast-cancer-mammography.html (accessed on 4 November 2020).
  7. Rawashdeh, M.A.; Lee, W.B.; Bourne, R.M.; Ryan, E.A.; Pietrzyk, M.W.; Reed, W.M.; Heard, R.C.; Black, D.A.; Brennan, P.C. Markers of Good Performance in Mammography Depend on Number of Annual Readings. Radiology 2013, 269, 61–67. [Google Scholar] [CrossRef] [PubMed]
  8. Pacilè, S.; Lopez, J.; Chone, P.; Bertinotti, T.; Grouin, J.M.; Fillard, P. Improving Breast Cancer Detection Accuracy of Mammography with the Concurrent Use of an Artificial Intelligence Tool. Radiol. Artif. Intell. 2020, 2, e190208. [Google Scholar] [CrossRef] [PubMed]
  9. Sinichkina, E.; MammoScreen is FDA-cleared and available for sale in the US - MammoScreen®. MammoScreen®. Available online: https://www.mammoscreen.com/mammoscreen-fda-cleared-available-for-sale-us (accessed on 17 November 2020).
  10. Holmström, O.; Linder, N.; Kaingu, H.; Mbuuko, N.; Mbete, J.; Kinyua, F.; Törnquist, S.; Muinde, M.; Krogerus, L.; Lundin, M.; et al. Point-of-Care Digital Cytology With Artificial Intelligence for Cervical Cancer Screening in a Resource-Limited Setting. JAMA Netw. Open 2021, 4, e211740–e211740. [Google Scholar] [CrossRef]
  11. Shalev-Shwartz, S.; Ben-David, S. Understanding Machine Learning: From Theory to Algorithms; Cambridge University Press: Cambridge, UK, 2014. [Google Scholar]
  12. Ruffle, J.K.; Farmer, A.D.; Aziz, Q. Artificial Intelligence-Assisted Gastroenterology— Promises and Pitfalls. Am. J. Gastroenterol. 2018, 114, 422–428. [Google Scholar] [CrossRef] [PubMed]
  13. Hamet, P.; Tremblay, J. Artificial intelligence in medicine. Metabolism 2017, 69, S36–S40. [Google Scholar] [CrossRef]
  14. Bray, F.; Ferlay, J.; Soerjomataram, I.; Siegel, R.L.; Torre, L.A.; Jemal, A. Global Cancer Statistics 2018: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries. CA Cancer J. Clin. 2018, 68, 394–424. [Google Scholar] [CrossRef]
  15. Maida, M.; Macaluso, F.S.; Ianiro, G.; Mangiola, F.; Sinagra, E.; Hold, G.; Maida, C.; Cammarota, G.; Gasbarrini, A.; Scarpulla, G. Screening of colorectal cancer: present and future. Expert Rev. Anticancer. Ther. 2017, 17, 1131–1146. [Google Scholar] [CrossRef] [PubMed]
  16. Vining, D.J.; Gelfand, D.W.; Bechtold, R.E.; Scharling, E.S.; Grishaw, E.K.; Shifrin, R.Y. Technical Feasibility of Colon Imaging with Helical CT and Virtual Reality. AJR Am. J. Roentgenol. 1994, 162, 104. [Google Scholar]
  17. Song, B.; Zhang, G.; Lu, H.; Wang, H.; Zhu, W.; J Pickhardt, P.; Liang, Z. Volumetric texture features from higher-order images for diagnosis of colon lesions via CT colonography. Int. J. Comput. Assist. Radiol. Surg. 2014, 9, 1021–1031. [Google Scholar] [CrossRef]
  18. Mitsala, A.; Tsalikidis, C.; Pitiakoudis, M.; Simopoulos, C.; Tsaroucha, A.K. Artificial Intelligence in Colorectal Cancer Screening, Diagnosis and Treatment. A New Era. Curr. Oncol. 2021, 28, 1581–1607. [Google Scholar] [CrossRef]
  19. Taylor, S.A.; Iinuma, G.; Saito, Y.; Zhang, J.; Halligan, S. CT colonography: computer-aided detection of morphologically flat T1 colonic carcinoma. Eur. Radiol. 2008, 18, 1666–1673. [Google Scholar] [CrossRef] [PubMed]
  20. O’Brien, M. J., Winawer, S. J., Zauber, A. G., Bushey, M. T., Sternberg, S. S., Gottlieb, L. S., Bond, J. H., Waye, J. D., & Schapiro, M. Flat adenomas in the National Polyp Study: Is there an increased risk for high-grade dysplasia initially or during surveillance? Clinical Gastroenterology and Hepatology 2004, 2, 905–911.
  21. Cancer Imaging Program (CIP). (n.d.). Available online: https://imaging.cancer.gov/imaging_basics/cancer_imaging/digital_mammography.htm.
  22. Neumann, H.; Kudo, S.; Vieth, M.; Neurath, M.F. Real-time in vivo histologic examination using a probe-based endocytoscopy system for differentiating duodenal polyps. Endoscopy 2013, 45, E53–E54. [Google Scholar] [CrossRef] [PubMed]
  23. Takeda, K.; Kudo, S.-E.; Mori, Y.; Misawa, M.; Kudo, T.; Wakamura, K.; Katagiri, A.; Baba, T.; Hidaka, E.; Ishida, F.; et al. Accuracy of diagnosing invasive colorectal cancer using computer-aided endocytoscopy. Endoscopy 2017, 49, 798–802. [Google Scholar] [CrossRef] [PubMed]
  24. Neumann, H.; Kiesslich, R.; Wallace, M.B.; Neurath, M.F. Confocal Laser Endomicroscopy: Technical Advances and Clinical Applications. Gastroenterology 2010, 139, 388–392. [Google Scholar] [CrossRef]
  25. Ştefănescu, D.; Streba, C.; Cârţână, E.T.; Săftoiu, A.; Gruionu, G.; Gruionu, L.G. Computer Aided Diagnosis for Confocal Laser Endomicroscopy in Advanced Colorectal Adenocarcinoma. PLOS ONE 2016, 11, e0154863–e0154863. [Google Scholar] [CrossRef]
  26. Svoboda, E. Artificial intelligence is improving the detection of lung cancer. Nature 2020, 587, S20–S22. [Google Scholar] [CrossRef]
  27. Ardila, D.; Kiraly, A.P.; Bharadwaj, S.; Choi, B.; Reicher, J.J.; Peng, L.; Tse, D.; Etemadi, M.; Ye, W.; Corrado, G.; et al. End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nat. Med. 2019, 25, 954–961. [Google Scholar] [CrossRef]
  28. Lung Cancer: Screening. Available online: https://uspreventiveservicestaskforce.org/uspstf/recommendation/lung-cancer-screening (accessed on 9 March 2021).
  29. Bi, W.L.; Hosny, A.; Schabath, M.B.; Giger, M.L.; Birkbak, N.J.; Mehrtash, A.; Allison, T.; Arnaout, O.; Abbosh, C.; Dunn, I.F.; et al. Artificial intelligence in cancer imaging: Clinical challenges and applications. CA: A Cancer J. Clin. 2019, 69, 127–157. [Google Scholar] [CrossRef]
  30. Castellino, R.A. Computer aided detection (CAD): an overview. Cancer Imaging 2005, 5, 17–19. [Google Scholar] [CrossRef]
  31. Liang, M.; Tang, W.; Xu, D.M.; Jirapatnakul, A.C.; Reeves, A.P.; Henschke, C.I.; Yankelevitz, D. Low-Dose CT Screening for Lung Cancer: Computer-aided Detection of Missed Lung Cancers. Radiology 2016, 281, 279–288. [Google Scholar] [CrossRef] [PubMed]
  32. Ambrosini R, Wang P, Kolar B, O’Dell W. Computer-aided detection of metastatic brain tumors using automated 3-D template matching [abstract]. Proc Intl Soc Mag Reson Med. 2008, 16, 3414.
  33. Cheng, H.; Cai, X.; Chen, X.; Hu, L.; Lou, X. Computer-aided detection and classification of microcalcifications in mammograms: a survey. Pattern Recognit. 2003, 36, 2967–2991. [Google Scholar] [CrossRef]
  34. Nishikawa, RM. Computer-aided detection and diagnosis. In: Bick U, Diekmann F, eds. Digital Mammography. Berlin, Germany: Springer, 2010, 85-106. Available online: https://link.springer.com/chapter/10.1007/978-3-540-78450-0_6.
  35. Siegel, R.L.; Miller, K.D.; Wagle, N.S.; Jemal, A. Cancer statistics, 2023. CA: A Cancer J. Clin. 2023, 73, 17–48. [Google Scholar] [CrossRef] [PubMed]
  36. National Lung Screening Trial Research Team; Aberle, D. R.; Adams, A.M.; Berg, C.D.; Black, W.C.; Clapp, J.D.; Fagerstrom, R.M.; Gareen, I.F.; Gatsonis, C.; Marcus, P.M.; et al. Reduced Lung-Cancer Mortality with Low-Dose Computed Tomographic Screening. N. Engl. J. Med. 2011, 365, 395–409. [Google Scholar] [CrossRef]
  37. McKee, B.J.; Regis, S.; Borondy-Kitts, A.K.; Hashim, J.A.; French, R.J.; Wald, C.; McKee, A.B. NCCN Guidelines as a Model of Extended Criteria for Lung Cancer Screening. J. Natl. Compr. Cancer Netw. 2018, 16, 444–449. [Google Scholar] [CrossRef]
  38. American College of Radiology. Lung-RADS. Reston, VA: American College of Radiology; 2018. acr.org/Clinical-Resources/Reporting-and-Data-Systems/Lung-Rads. Accessed May 15, 2018.
  39. MacMahon, H.; Naidich, D.P.; Goo, J.M.; Lee, K.S.; Leung, A.N.C.; Mayo, J.R.; Mehta, A.C.; Ohno, Y.; Powell, C.A.; Prokop, M.; et al. Guidelines for Management of Incidental Pulmonary Nodules Detected on CT Images: From the Fleischner Society 2017. Radiology 2017, 284, 228–243. [Google Scholar] [CrossRef] [PubMed]
  40. Kaggle Inc. Data Science Bowl 2017. San Francisco, CA: Kaggle Inc; 2017. kaggle.com/c/data-science-bowl-2017. Accessed May 14, 2018.
  41. Jamal-Hanjani, M.; Wilson, G.A.; McGranahan, N.; Birkbak, N.J.; Watkins, T.B.K.; Veeriah, S.; Shafi, S.; Johnson, D.H.; Mitter, R.; Rosenthal, R.; et al. Tracking the Evolution of Non–Small-Cell Lung Cancer. N. Engl. J. Med. 2017, 376, 2109–2121. [Google Scholar] [CrossRef]
  42. Aerts HJ, Velazquez ER, Leijenaar RT, et al. Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach [serial online]. Nat Commun. 2014, 5, 4006. [Google Scholar] [CrossRef]
  43. Coroller, T.P.; Grossmann, P.; Hou, Y.; Rios Velazquez, E.; Leijenaar, R.T.H.; Hermann, G.; Lambin, P.; Haibe-Kains, B.; Mak, R.H.; Aerts, H.J.W.L. CT-based radiomic signature predicts distant metastasis in lung adenocarcinoma. Radiother. Oncol. 2015, 114, 345–350. [Google Scholar] [CrossRef]
  44. Tunali, I.; Stringfield, O.; Guvenis, A.; Wang, H.; Liu, Y.; Balagurunathan, Y.; Lambin, P.; Gillies, R.J.; Schabath, M.B. Radial gradient and radial deviation radiomic features from pre-surgical CT scans are associated with survival among lung adenocarcinoma patients. Oncotarget 2017, 8, 96013–96026. [Google Scholar] [CrossRef] [PubMed]
  45. Li, Q.; Kim, J.; Balagurunathan, Y.; Qi, J.; Liu, Y.; Latifi, K.; Moros, E.G.; Schabath, M.B.; Ye, Z.; Gillies, R.J.; et al. CT imaging features associated with recurrence in non-small cell lung cancer patients after stereotactic body radiotherapy. Radiat. Oncol. 2017, 12, 158–158. [Google Scholar] [CrossRef] [PubMed]
  46. Wu W, Parmar C, Grossmann P, et al. Exploratory study to identify radiomics classifiers for lung cancer histology [serial online]. Front Oncol. 2016, 6, 71.
  47. Parmar C, Grossmann P, Bussink J. Lambin P Aerts HJ. Machine learning methods for quantitative radiomic biomarkers [serial online]. Sci Rep. 2015, 5, 13087.
  48. Coroller, T.P.; Grossmann, P.; Hou, Y.; Rios Velazquez, E.; Leijenaar, R.T.H.; Hermann, G.; Lambin, P.; Haibe-Kains, B.; Mak, R.H.; Aerts, H.J.W.L. CT-based radiomic signature predicts distant metastasis in lung adenocarcinoma. Radiother. Oncol. 2015, 114, 345–350. [Google Scholar] [CrossRef] [PubMed]
  49. Rios Velazquez E, Parmar C, Liu Y, et al. Somatic mutations drive distinct imaging phenotypes in lung cancer. Cancer Res. 2017, 77, 3922–3930. [Google Scholar] [CrossRef]
  50. Grossmann P, Stringfield O, El-Hachem N, et al. Defining the biological basis of radiomic phenotypes in lung cancer [serial online]. Elife. 2017, 6, 323421. [Google Scholar]
  51. Gatenby, R.A.; Grove, O.; Gillies, R.J. Quantitative Imaging in Cancer Evolution and Ecology. Radiology 2013, 269, 8–14. [Google Scholar] [CrossRef]
  52. Gillies, R.J.; Kinahan, P.E.; Hricak, H. Radiomics: Images Are More than Pictures, They Are Data. Radiology 2016, 278, 563–577. [Google Scholar] [CrossRef]
  53. National Lung Screening Trial Research Team, Aberle DR, Adams AM, et al. Reduced lung-cancer mortality with low-dose computed tomographic screening. N Engl J Med. 2011, 365, 395–409. [Google Scholar] [CrossRef]
  54. Siegel, R.L.; Miller, K.D.; Jemal, A. Cancer statistics, 2018. CA Cancer J. Clin. 2018, 68, 7–30. [Google Scholar] [CrossRef] [PubMed]
  55. Gao, B.; Zhang, H.; Zhang, S.-D.; Cheng, X.-Y.; Zheng, S.-M.; Sun, Y.-H.; Zhang, D.-W.; Jiang, Y.; Tian, J.-W. Mammographic and clinicopathological features of triple-negative breast cancer. Br. J. Radiol. 2014, 87, 20130496. [Google Scholar] [CrossRef] [PubMed]
  56. Hamdy, F.C.; Donovan, J.L.; Lane, J.A.; Mason, M.; Metcalfe, C.; Holding, P.; Davis, M.; Peters, T.J.; Turner, E.L.; Martin, R.M.; et al. 10-Year Outcomes after Monitoring, Surgery, or Radiotherapy for Localized Prostate Cancer. N. Engl. J. Med. 2016, 375, 1415–1424. [Google Scholar] [CrossRef] [PubMed]
  57. Loeb, S.; Bjurlin, M.A.; Nicholson, J.; Tammela, T.L.; Penson, D.F.; Carter, H.B.; Carroll, P.; Etzioni, R. Overdiagnosis and Overtreatment of Prostate Cancer. Eur. Urol. 2014, 65, 1046–1055. [Google Scholar] [CrossRef]
  58. Ahmed, H.U.; El-Shater Bosaily, A.; Brown, L.C.; Gabe, R.; Kaplan, R.; Parmar, M.K.; Collaco-Moraes, Y.; Ward, K.; Hindley, R.G.; Freeman, A.; et al. Diagnostic accuracy of multi-parametric MRI and TRUS biopsy in prostate cancer (PROMIS): A paired validating confirmatory study. Lancet 2017, 389, 815–822. [Google Scholar] [CrossRef] [PubMed]
  59. Kasivisvanathan, V.; Rannikko, A.S.; Borghi, M.; Panebianco, V.; Mynderse, L.A.; Vaarala, M.H.; Briganti, A.; Budäus, L.; Hellawell, G.; Hindley, R.G.; et al.; PRECISION Study Group Collaborators. MRI-Targeted or Standard Biopsy for Prostate-Cancer Diagnosis. New Engl. J. Med. 2018, 378, 1767–1777. [Google Scholar] [CrossRef] [PubMed]
  60. Liu L, Tian Z, Zhang Z, Fei B. Computer-aided detection of prostate cancer with MRI: technology and applications. Acad Radiol. 2016, 23, 1024–1046. [Google Scholar] [CrossRef] [PubMed]
  61. Giannini, V.; Mazzetti, S.; Armando, E.; Carabalona, S.; Russo, F.; Giacobbe, A.; Muto, G.; Regge, D. Multiparametric magnetic resonance imaging of the prostate with computer-aided detection: experienced observer performance study. Eur. Radiol. 2017, 27, 4200–4208. [Google Scholar] [CrossRef]
  62. Hambrock, T.; Vos, P.C.; de Kaa, C.A.H.; Barentsz, J.O.; Huisman, H.J. Prostate Cancer: Computer-aided Diagnosis with Multiparametric 3-T MR Imaging—Effect on Observer Performance. Radiology 2013, 266, 521–530. [Google Scholar] [CrossRef]
  63. E Seltzer, S.; Getty, D.J.; Tempany, C.M.; Pickett, R.M.; Schnall, M.D.; McNeil, B.J.; A Swets, J. Staging prostate cancer with MR imaging: a combined radiologist-computer system. Radiology 1997, 202, 219–226. [Google Scholar] [CrossRef]
  64. Seah, J., Tang, J. H., & Kitchen, A. (2017). Detection of prostate cancer on multiparametric MRI. Proceedings of SPIE. Available online: https://www.spiedigitallibrary.org/conference-proceedings-of-spie/10134/1/Detection-of-prostate-cancer-on-multiparametric-MRI/10.1117/12.2277122.short.
  65. Liu, S., Zheng, H., Feng, Y., & Li, W. (2017). Prostate cancer diagnosis using deep learning with 3D multiparametric MRI. Proceedings of SPIE.
  66. Chen, Q., Xu, X., Hu, S., Li, X., Zou, Q., & Li, Y. (2017). A transfer learning approach for classification of clinical significant prostate cancers from mpMRI scans. Proceedings of SPIE. Available online: https://www.spiedigitallibrary.org/conference-proceedings-of-spie/10134/1/A-transfer-learning-approach-for-classification-of-clinical-significant-prostate/10.1117/12.2279021.short.
  67. Kiraly AP, Nader CA, Tyusuzoglu A, et al.Deep convolutional encoder-decoders for prostate cancer detection and classification. In: Descoteaux M, Maier-Hein L, Franz AM, Jannin P, Collins L, Duchesne S, eds. Medical Image Computing and Computer-Assisted Intervention-MICCAI 2017. 20th International Conference, Quebec City, QC, Canada, September 11–13, 2017. Proceedings, pt III. Cham, Switzerland: Springer International Publishing AG, 2017, 489-497. Available online: https://link.springer.com/chapter/10.1007/978-3-319-66179-7_56?utm_source=getftr&utm_medium=getftr&utm_campaign=getftr_pilot.
  68. Moradi, M.; Abolmaesumi, P.; Siemens, D.R.; Sauerbrei, E.E.; Boag, A.H.; Mousavi, P. Augmenting Detection of Prostate Cancer in Transrectal Ultrasound Images Using SVM and RF Time Series. IEEE Trans. Biomed. Eng. 2008, 56, 2214–2224. [Google Scholar] [CrossRef] [PubMed]
  69. Imani, F.; Abolmaesumi, P.; Gibson, E.; Khojaste, A.; Gaed, M.; Moussa, M.; Gomez, J.A.; Romagnoli, C.; Leveridge, M.; Chang, S.; et al. Computer-Aided Prostate Cancer Detection Using Ultrasound RF Time Series: In Vivo Feasibility Study. IEEE Trans. Med Imaging 2015, 34, 2248–2257. [Google Scholar] [CrossRef]
  70. Azizi, S.; Bayat, S.; Yan, P.; Tahmasebi, A.; Nir, G.; Kwak, J.T.; Xu, S.; Wilson, S.; Iczkowski, K.A.; Lucia, M.S.; et al. Detection and grading of prostate cancer using temporal enhanced ultrasound: combining deep neural networks and tissue mimicking simulations. Int. J. Comput. Assist. Radiol. Surg. 2017, 12, 1293–1305. [Google Scholar] [CrossRef]
  71. Parwani, A.V. Next generation diagnostic pathology: use of digital pathology and artificial intelligence tools to augment a pathological diagnosis. Diagn. Pathol. 2019, 14, 1–3. [Google Scholar] [CrossRef]
  72. Mandong, B.M. Diagnostic oncology: role of the pathologist in surgical oncology--a review article. 2009, 81–88. [Google Scholar]
  73. Amin, W.; Srintrapun, S.J.; Parwani, A.V.; Md; D, P. Automated whole slide imaging. Expert Opin. Med Diagn. 2008, 2, 1173–1181. [Google Scholar] [CrossRef] [PubMed]
  74. Abels, E.; Pantanowitz, L.; Aeffner, F.; Zarella, M.D.; van der Laak, J.; Bui, M.M.; Vemuri, V.N.; Parwani, A.V.; Gibbs, J.; Agosto-Arroyo, E.; et al. Computational pathology definitions, best practices, and recommendations for regulatory guidance: a white paper from the Digital Pathology Association. J. Pathol. 2019, 249, 286–294. [Google Scholar] [CrossRef]
  75. Abels, E.; Pantanowitz, L.; Aeffner, F.; Zarella, M.D.; van der Laak, J.; Bui, M.M.; Vemuri, V.N.; Parwani, A.V.; Gibbs, J.; Agosto-Arroyo, E.; et al. Computational pathology definitions, best practices, and recommendations for regulatory guidance: a white paper from the Digital Pathology Association. J. Pathol. 2019, 249, 286–294. [Google Scholar] [CrossRef] [PubMed]
  76. Zarella, M.D.; Bowman, D.; Aeffner, F.; Farahani, N.; Xthona, A.; Absar, S.F.; Parwani, A.; Bui, M.; Hartman, D.J. A Practical Guide to Whole Slide Imaging: A White Paper From the Digital Pathology Association. Arch. Pathol. Lab. Med. 2018, 143, 222–234. [Google Scholar] [CrossRef]
  77. Zhao, C.; Wu, T.; Ding, X.; Parwani, A.V.; Chen, H.; McHugh, J.; Piccoli, A.; Xie, Q.; Lauro, G.R.; Feng, X.; et al. International telepathology consultation: Three years of experience between the University of Pittsburgh Medical Center and KingMed Diagnostics in China. J. Pathol. Informatics 2015, 6, 63. [Google Scholar] [CrossRef]
  78. Amin, S.; Mori, T.; Itoh, T. A validation study of whole slide imaging for primary diagnosis of lymphoma. Pathol. Int. 2019, 69, 341–349. [Google Scholar] [CrossRef] [PubMed]
  79. Azizi, S.; Bayat, S.; Yan, P.; Tahmasebi, A.; Nir, G.; Kwak, J.T.; Xu, S.; Wilson, S.; Iczkowski, K.A.; Lucia, M.S.; et al. Detection and grading of prostate cancer using temporal enhanced ultrasound: combining deep neural networks and tissue mimicking simulations. Int. J. Comput. Assist. Radiol. Surg. 2017, 12, 1293–1305. [Google Scholar] [CrossRef] [PubMed]
  80. Bauer TW, et al. Validation of whole slide imaging for primary diagnosis in surgical pathology. Arch Pathol Lab Med. 2013, 137, 518–524.
  81. Buck, T.P.; Dilorio, R.; Havrilla, L.; O’neill, D.G. Validation of a whole slide imaging system for primary diagnosis in surgical pathology: A community hospital experience. J. Pathol. Informatics 2014, 5, 43. [Google Scholar] [CrossRef]
  82. Beck, A.H.; Sangoi, A.R.; Leung, S.; Marinelli, R.J.; Nielsen, T.O.; van de Vijver, M.J.; West, R.B.; van de Rijn, M.; Koller, D. Systematic Analysis of Breast Cancer Morphology Uncovers Stromal Features Associated with Survival. Sci. Transl. Med. 2011, 3, 108ra113–108ra113. [Google Scholar] [CrossRef] [PubMed]
  83. Nagpal K, et al. Development and validation of a deep learning algorithm for improving Gleason scoring of prostate cancer. NPJ Digit Med. 2019, 2, 48.
  84. Hegde, N.; Hipp, J.D.; Liu, Y.; Emmert-Buck, M.; Reif, E.; Smilkov, D.; Terry, M.; Cai, C.J.; Amin, M.B.; Mermel, C.H.; et al. Similar image search for histopathology: SMILY. npj Digit. Med. 2019, 2, 1–9. [Google Scholar] [CrossRef] [PubMed]
  85. Fraggetta, F.; Yagi, Y.; Garcia-Rojo, M.; Evans, A.J.; Tuthill, J.M.; Baidoshvili, A.; Hartman, D.J.; Fukuoka, J.; Pantanowitz, L. The Importance of eSlide Macro Images for Primary Diagnosis with Whole Slide Imaging. J. Pathol. Informatics 2018, 9, 46. [Google Scholar] [CrossRef]
  86. Vodovnik A, Aghdam MRF. Complete routine remote digital pathology services. J Pathol Inform. 2018, 9, 36. [Google Scholar] [CrossRef]
  87. Aganj I, Harisinghani MG, Weissleder R, Fischl B. Unsupervised medical image segmentation based on the local center of mass [serial online]. Sci Rep. 2018, 8, 13012. [Google Scholar] [CrossRef]
  88. Jamaludin A, Kadir T,Zisserman A. Self-supervised learning for spinal MRIs. In: Cardoso J, Argbel T, Carniero G, et al. eds. Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Third International Workshop, DLMIA 2017, and 7th International Workshop, ML-CDS 2017; Held in Conjunction with MICCAI 2017, Quebec City, QC, Canada, September 14, Proceedings. Cham, Switzerland: Springer International Publishing AG, 2017, 294-302.
  89. Shin, H.-C.; Tenenholtz, N.A.; Rogers, J.K.; Schwarz, C.G.; Senjem, M.L.; Gunter, J.L.; Andriole, K.P.; Michalski, M. Medical Image Synthesis for Data Augmentation and Anonymization Using Generative Adversarial Networks. In International Workshop on Simulation and Synthesis in Medical Imaging; Springer: Cham, Switzerland, 2018; pp. 1–11.
  90. Purushotham, S.; Meng, C.; Che, Z.; Liu, Y. Benchmarking Deep Learning Models on Large Healthcare Datasets. J. Biomed. Inform. 2018, 83, 112–134.
  91. Chang, K.; Balachandar, N.; Lam, C.; Yi, D.; Brown, J.; Beers, A.; Rosen, B.; Rubin, D.L.; Kalpathy-Cramer, J. Distributed Deep Learning Networks among Institutions for Medical Imaging. J. Am. Med. Inform. Assoc. 2018, 25, 945–954.
  92. Wang, X.; Peng, Y.; Lu, L.; Lu, Z.; Bagheri, M.; Summers, R.M. ChestX-ray8: Hospital-Scale Chest X-ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); IEEE Computer Society: Los Alamitos, CA, USA, 2017. Available online: https://ieeexplore.ieee.org/document/8099852/keywords.
  93. Barrett, S.R.H.; Speth, R.L.; Eastham, S.D.; Dedoussi, I.C.; Ashok, A.; Malina, R.; Keith, D.W. Impact of the Volkswagen Emissions Control Defeat Device on US Public Health. Environ. Res. Lett. 2015, 10, 114005.
  94. Char, D.S.; Shah, N.H.; Magnus, D. Implementing Machine Learning in Health Care — Addressing Ethical Challenges. N. Engl. J. Med. 2018, 378, 981–983.
Figure 3. Overview of Sample Processing and Algorithm Training and Validation. A) Flowchart illustrating the sample-processing workflow, showing stages from the collection of samples to the analysis of digital images and physical slides. B) Schematic view of the annotation process used for creation of the digital-slide data for training of the deep learning system (DLS). C) Validation analysis of a digitized image of a whole slide (Papanicolaou test) with the DLS, showing calculations of areas of atypia, with locations of atypia in a heatmap of the digital slide, and identification of individual cells, with color overlays (red for high-grade atypia and green for low-grade atypia). HSIL indicates high-grade squamous intraepithelial lesions; LSIL, low-grade squamous intraepithelial lesions. Note: From “Point-of-Care Digital Cytology with Artificial Intelligence for Cervical Cancer Screening in a Resource-Limited Setting” by O. Holmstrom et al., 2021, JAMA Network Open (https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2777600). CC BY-NC.
Figure 5. Diagram of the medical imaging system, NAVICAD diagnosis application. Note: From “Computer Aided Diagnosis for Confocal Laser Endomicroscopy in Advanced Colorectal Adenocarcinoma” by D. Ştefănescu et al., 2016, PLoS ONE 11(5), (https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0154863). CC BY-NC.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.