2.1. AI and Radiologic Cancer Screening and Detection
Cancer screening is aimed at detecting cancer before symptoms emerge. [5] Several studies have demonstrated AI's usefulness in improving the diagnostic value of cancer imaging tools. Using computers to localize areas of interest in radiographs is called computer-aided detection (CADe). AI-powered CADe can quickly screen radiographs to help users avoid errors of omission. [30] CADe can highlight suspicious lesions on radiographs using pattern recognition, enhancing the ability of image readers to identify lesions they might have overlooked at first. Missed cancers in low-dose CT screening have subsequently been identified using CADe. [31] Other uses of CADe include reducing image interpretation time in magnetic resonance imaging (MRI) of brain metastases, [32] identifying microcalcifications in mammograms of early breast cancers, [33] and significantly improving radiologists' sensitivity to anomalies on radiographs. [34]
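To make the candidate-marking idea behind CADe concrete, here is a deliberately simplified Python sketch that flags bright connected regions of an image as candidate lesions. Real CADe systems rely on trained pattern-recognition models; the function name, intensity threshold, and minimum-area parameter below are illustrative assumptions only.

```python
import numpy as np
from scipy import ndimage

def detect_candidates(image, threshold=0.8, min_area=4):
    """Toy CADe sketch: flag bright connected regions as candidate lesions.

    This only illustrates the candidate-marking step; production CADe
    uses trained models, not a fixed intensity threshold.
    """
    mask = image >= threshold              # keep only bright pixels
    labels, n_regions = ndimage.label(mask)  # connected-component labeling
    candidates = []
    for region in range(1, n_regions + 1):
        ys, xs = np.nonzero(labels == region)
        if ys.size >= min_area:            # suppress tiny speckle
            candidates.append((ys.mean(), xs.mean(), ys.size))
    return candidates  # (row, col, area) per suspicious region
```

On a synthetic image containing one bright 4x4 blob, the function returns a single candidate centered on the blob, which a reader could then review.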
- 1)
Breast Cancer Imaging: Breast cancer is the most commonly diagnosed cancer among women in the United States and accounts for the second-highest number of cancer-related deaths. [54] The introduction of mammography for breast cancer screening has significantly improved early detection and decreased overall morbidity and mortality. However, therapeutic response in breast cancer is highly variable and depends on the presence or absence of specific receptors on the tumor: estrogen (ER), progesterone (PR), and human epidermal growth factor receptor 2 (HER2). Triple receptor-negative breast cancers are more difficult to identify on mammography because they lack the tumor's typical characteristics. [55] Consequently, triple-negative tumors are more likely to be detected later and carry a worse prognosis.
Conventional mammography uses X-rays to look for tumors or suspicious areas in the breasts. Digital mammography also uses X-rays, but the data is stored on a computer rather than on film, enabling computer enhancement of digital mammograms. These computers can further screen digital mammograms and theoretically detect suspicious areas that human error might miss. Digital mammography comes in 2-D and 3-D versions. [21] AI has improved radiologists' performance in reading breast cancer screening mammograms. Studies have shown that up to 30% to 40% of breast cancers can be missed during screening, and on average, only 10% of women recalled from screening for diagnostic workup are ultimately found to have cancer. [6,7] AI-based algorithms hold promise for improving the accuracy of digital mammography. Scientists can train AI on existing mammogram images, enabling it to identify cancerous abnormalities and distinguish them from benign findings. One example is MammoScreen, an AI tool that improves cancer detection in mammograms by identifying regions suspicious for breast cancer on 2-D digital mammograms and estimating their likelihood of malignancy. In one study, fourteen radiologists read a dataset of 240 2-D digital mammography images acquired between 2013 and 2016 containing a variety of abnormalities (Figure 1). Half of the dataset was read without AI and the other half with the help of AI during the first reading session, and vice versa during a second reading session separated from the first by a washout period. For cases with a low likelihood of malignancy (< 2.5%), reading time was about the same in the first session and slightly decreased in the second; for cases with a higher likelihood of malignancy, reading time on average increased with the use of AI. The AI system also improved the cancer detection rate and the false-positive rate for each reader (Figure 2). In one instance, nine of the fourteen radiologists detected an invasive ductal carcinoma when reading the case with the AI tool, in contrast to only three radiologists in the unaided condition. The study demonstrated that concurrent use of this AI tool improved radiologists' diagnostic performance in detecting breast cancer without prolonging their workflow. [8] The U.S. Food and Drug Administration cleared MammoScreen for clinical use in 2020. [9]
Figure 2.
A, Cancer detection rate and the percentage improvement brought by the artificial intelligence (AI) system and, B, false-positive rate and the percentage decrease resulting from the use of AI. Green bars indicate the percentage improvement brought by AI, i.e., an increase in cancer detection rate and a decrease in false-positive rate. Red bars indicate a deterioration in performance, i.e., A, a decrease in cancer detection rate and, B, an increase in false-positive rate.
Note: From “Improving Breast Cancer Detection Accuracy of Mammography with the Concurrent Use of an Artificial Intelligence Tool” by S. Pacile et al., 2020, Radiology: Artificial Intelligence, (https://pubs.rsna.org/doi/10.1148/ryai.2020190208#pane-pcw-references). CC BY-NC-ND.
Further advances in AI for use in risk assessment, detection, diagnosis, prognosis and therapeutic response in breast cancer imaging are detailed in
Table 2 below.
Table 2.
Summary of Key Studies on Imaging Characterization of Breast Lesions, Including Detection, Diagnosis, Biologic Characterization, and Predicting Prognosis and Treatment Response.
- 2)
Cervical cancer screening: Researchers at the Karolinska Institute in Sweden detected precursors to cervical cancer in women in resource-limited settings using artificial intelligence and mobile digital microscopy. [10] In this diagnostic study, cervical smears were collected from 740 HIV-positive women aged 18 to 64. The smears were digitized with a portable slide scanner, uploaded to a cloud server over mobile networks, and used to train and validate a deep learning system (DLS) to detect atypical cervical cells (Figure 3). Sensitivity for detection of atypia was high (96%-100%), with higher specificity for high-grade lesions (93%-99%) than for low-grade lesions (82%-86%), and no slides manually classified as high grade were incorrectly classified as negative.
- 3)
Colorectal cancer (CRC) Screening: Colorectal cancer is the third most common malignancy in men and women. [14] Early-stage detection of CRC may improve patients' clinical outcomes by avoiding treatment delays and reducing morbidity and mortality. [15]
i. Virtual colonoscopy, or computed tomographic colonography (CTC), is a modified computed tomography (CT) examination that presently serves as an alternative screening tool to conventional colonoscopy for CRC, especially in moderate-risk patients. It was first described in 1994 by Vining et al. [16] Computer-aided, AI-based algorithms can achieve optimal diagnostic performance and image quality in CTC. Song et al. [17] conducted a study to differentiate colon lesions according to underlying pathology, i.e., neoplastic versus non-neoplastic lesions. Employing the Haralick texture analysis method and a virtual pathological model, they explored the utility of texture features from high-order differentiations, such as the gradient and curvature, of the image intensity distribution. The AUC of classification improved from 0.74 (using image intensity alone) to 0.85 in differentiating neoplastic from non-neoplastic lesions, demonstrating that texture features from higher-order images can significantly improve classification accuracy in the pathological differentiation of colorectal lesions. [18] AI may also assist in automatically detecting flat neoplastic lesions, thus reducing interval cancer risk. These flat colorectal adenomas may demonstrate aggressive tumorigenesis and can be a determining factor in increased adenoma miss rates (AMRs). [20] Taylor et al. developed a computer-aided detection (CADe) model to examine the diagnostic capability of CTC for flat early-stage (T1) CRC. [19] The CADe system, applied at three settings of sphericity, showed an inverse correlation between adenoma detection sensitivity and sphericity (83.3%, 70.8%, and 54.1% at sphericity of 0, 0.75, and 1, respectively) and a direct correlation between accuracy and sphericity, indicating that novel applications of computer-aided systems through CTC may effectively detect even flat CRC.
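Texture analysis of the kind Song et al. employed starts from a gray-level co-occurrence matrix (GLCM), from which Haralick features such as contrast are computed. The sketch below is a minimal, illustrative GLCM construction in plain NumPy, not the authors' implementation; their method additionally applied such features to gradient and curvature images.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for a single pixel offset (dx, dy).

    Haralick texture features are summary statistics of this matrix;
    this sketch covers only the construction step, for small integer
    images with values in [0, levels).
    """
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1  # count co-occurring pair
    return P / P.sum()                               # normalize to probabilities

def contrast(P):
    """Haralick contrast: sum over i, j of (i - j)^2 * P[i, j]."""
    i, j = np.indices(P.shape)
    return float(((i - j) ** 2 * P).sum())
```

A uniform image yields zero contrast, while a checkerboard, whose horizontal neighbors always differ, yields high contrast, which is the intuition behind separating lesion textures.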
ii. Endocytoscopy is an emerging endoscopic imaging modality that allows in vivo microscopic imaging and real-time diagnosis of cellular structures during ongoing colonoscopy at exceptionally high magnification: up to 400-fold in endoscope-based and up to 1400-fold in probe-based endocytoscopy. [22] Takeda et al. [23] evaluated a computer-aided diagnosis system using ultra-high (approximately ×400) magnification endocytoscopy (EC-CAD) to distinguish invasive colorectal cancers from less-aggressive lesions. For high-confidence diagnoses, their model achieved sensitivity, specificity, accuracy, PPV, and NPV of 98.1%, 100%, 99.3%, 100%, and 98.8%, respectively, suggesting that EC-CAD may help diagnose invasive colorectal cancer.
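The five figures reported by Takeda et al. are all simple functions of a 2×2 confusion matrix. As a reference, a small helper (illustrative, not part of EC-CAD) computes them from true/false positive and negative counts:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening metrics from a 2x2 confusion matrix.

    tp/fp/tn/fn are the true-positive, false-positive, true-negative,
    and false-negative counts of a binary diagnostic test.
    """
    return {
        "sensitivity": tp / (tp + fn),             # true-positive rate
        "specificity": tn / (tn + fp),             # true-negative rate
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "ppv":         tp / (tp + fp),             # positive predictive value
        "npv":         tn / (tn + fn),             # negative predictive value
    }
```

For example, hypothetical counts with two missed cancers and no false alarms (98 TP, 0 FP, 99 TN, 2 FN) give sensitivity 0.98 and specificity and PPV of 1.0, the same shape of result profile the study reports.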
iii. Confocal laser endomicroscopy (CLE) is a microscopic imaging modality that enables in vivo observation of cellular and subcellular structures (up to 250 μm in depth) at 1000-fold magnification. [24] Using fractal analysis and neural network models of CLE-generated colon mucosa images, Ştefănescu et al. developed an automatic diagnosis algorithm for CRC with an accuracy of 84.5% in differentiating advanced colorectal adenocarcinomas from normal intestinal mucosa (Figure 5). However, they recommended further assessment of their method with randomized controlled trials. [25]
- 4)
Lung cancer screening and detection
Lung cancer is the leading cause of cancer-related mortality among men and women in the United States. [35] In addition, the 5-year survival rate for lung cancer is low due to late detection: about 70% of lung cancers are discovered in their late stages, when they have become difficult to treat. [26]
Current AI systems integrated into CT scanning have enabled improvements in cancer detection. These systems use deep learning (DL) to learn what a tumor is from real-world examples: they are typically trained on thousands of CT scans of the lungs of patients with and without cancer, enabling the machines to learn what a cancerous nodule looks like. So far, these AI systems have proven accurate compared with non-AI systems and have improved physicians' clinical decision-making. [26] If discovered early, lung cancer is curable, overtreatment can be prevented, patients' quality of life can be significantly improved, and more lives can be saved. [29]
AI can also enhance the staging and treatment selection for lung cancer, as detailed in
Table 1 below.
i) AI and Lung cancer screening:
For a long time, lung cancer screening was not feasible; however, significant advances have been made in recent years. In 2021, the United States Preventive Services Task Force (USPSTF) recommended annual screening for lung cancer with low-dose computed tomography (LDCT) in adults aged 50 to 80 years with a significant smoking history. [28] According to the National Lung Screening Trial (NLST), LDCT screening for lung cancer reduced mortality in high-risk patients by about 20%. [36] In the following sections, we analyze the benefits of LDCT screening for lung cancer, its limitations, and possible future improvements. [36]
LDCT screening for lung cancer regularly identifies numerous pulmonary nodules, some of which are subsequently diagnosed as cancer (see Figure 6 below). According to the NLST, most of the pulmonary nodules discovered on LDCT screening, up to 96.4%, were benign. A systematic algorithm that can classify these nodules into benign and malignant lesions is yet to be developed. LDCT screening can also pick up incidental indolent cancers that would not be life-threatening if left untreated, exposing patients to the risks of cancer treatment and its accompanying toxicity. As such, physicians must be aware of such potential overdiagnosis and make conscious efforts to reduce it. [36] These scenarios are best avoided by following clinical guidelines for pulmonary nodule assessment. [37] These clinical guidelines, however, cannot discriminate between benign and malignant lesions, nor can they successfully predict patients' future cancer risk. Research in AI is currently geared toward identifying biomarkers that can accurately differentiate benign from malignant lesions to mitigate false-positive imaging results. Such advances will allow a more quantitative prediction of the risk and incidence of lung cancer and improve clinicians' decision-making guidelines.
Figure 6.
Clinical Applications of Artificial Intelligence in Lung Cancer Screening on Detection of Incidental Pulmonary Nodules. Imaging analysis shows promise in predicting the risk of developing lung cancer on initial detection of an incidental lung nodule and in distinguishing indolent from aggressive lung neoplasms. PFS indicates progression-free survival; ROC, receiver operating characteristic.
Note: From “Artificial intelligence in cancer imaging: Clinical challenges and applications” by Bi et al., 2019,
American Cancer Society (ACS) Journals (
https://acsjournals.onlinelibrary.wiley.com/doi/full/10.3322/caac.21552). CC BY-NC.
The American College of Radiology Lung CT Screening Reporting and Data System (Lung-RADS) and the Fleischner Society recommend that physicians follow up with their patients for 3 to 13 months following the incidental detection of pulmonary nodules before resorting to more invasive tests like biopsy. [38,39]
In 2017, the National Cancer Institute (NCI) provided a community of AI teams with thousands of annotated CT images from its cancer imaging archive. These teams then used convolutional neural networks (CNNs) to diagnose lesions. The competition's winning team recorded high performance (a log loss of 0.3999; a perfect model would report a log loss of 0). However, this achievement was not without flaws: the input data had a cancer prevalence of 50%, compared with the roughly 4% prevalence found in the screening population with lung nodules. [40] Thus, the model still needs to be trained, evaluated, and validated in a clinical setting.
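The competition's metric, binary log loss, can be computed directly, which also makes the prevalence caveat concrete: a model developed on 50%-prevalence data faces a very different baseline than a 4%-prevalence screening population. A minimal sketch (illustrative code, not the competition's scoring script):

```python
import math

def log_loss(y_true, p_pred, eps=1e-15):
    """Mean binary cross-entropy; a perfect model would score 0.

    y_true: 0/1 labels; p_pred: predicted cancer probabilities.
    """
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)
```

For instance, a model that always outputs 0.5 scores ln 2 ≈ 0.693 on balanced data, while on a 4%-prevalence set a model that always outputs 0.04 already scores far lower, so a fixed log-loss number is hard to interpret across populations with different prevalence.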
In one study, Ardila et al. proposed a deep learning algorithm that uses a patient's current and prior CT volumes to predict lung cancer risk. Their model achieved an area under the curve (AUC) of 94.4% on 6,716 National Lung Screening Trial cases and performed similarly on an independent clinical validation set of over a thousand cases. They also carried out two reader studies. In the first, their model outperformed six radiologists when prior CT scans were excluded, with absolute reductions of 11% in false positives and 5% in false negatives; however, when prior CT scans were included, the model was on par with the same radiologists. Overall, their study demonstrated the potential for deep learning models to improve the accuracy, consistency, and adoption of lung cancer screening across the globe. [27]
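The AUC figures quoted throughout this section have a simple probabilistic reading: the chance that a randomly chosen cancer case receives a higher model score than a randomly chosen non-cancer case. A small illustrative implementation of that rank-based (Mann-Whitney) definition, not the cited authors' code:

```python
def auc(scores_pos, scores_neg):
    """ROC AUC as the probability that a positive case outscores a
    negative one, with ties counted as half a win (Mann-Whitney U)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0        # positive ranked above negative
            elif p == n:
                wins += 0.5        # tie counts half
    return wins / (len(scores_pos) * len(scores_neg))
```

Perfect separation of the two groups gives AUC 1.0, chance-level scoring gives 0.5, and systematically inverted scores give 0.0, which is why an AUC of 0.944 indicates strong discrimination.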
Future AI research aims to use machine learning and deep learning to develop clinical approaches to indeterminate lung nodules that can predict future cancer incidence, differentiate benign from malignant nodules, and distinguish indolent tumors from biologically aggressive ones. [29]
ii) Lung cancer characterization using imaging
Lung cancer is sometimes regarded as a "moving target" because of its dynamic nature. It is constantly evolving, modifying its genomic and phenotypic properties, and diversifying, making therapy difficult for the oncologist, who ends up chasing a constantly changing disease. [41]
Multiple attempts have been made in the past to identify image-based biomarkers that can noninvasively capture the radiographic features underlying a tumor's pathophysiology. Measured properties include tumor size, e.g., the tumor's longest diameter, which can be used to stage the tumor and assess its response to treatment. This method, however, has limitations, one of which is the marked variation in clinical outcomes and treatment responses among tumors of similar size. Nevertheless, researchers have successfully predicted lung cancer outcomes using tumors' semantic and automated radiomic features. [42,43,44,45,46] Preliminary studies in this regard include using the Computer-Aided Nodule Assessment and Risk Yield (CANARY) tool to carry out semantic-based risk stratification of specific subsets of lung adenocarcinoma. Using this method, AI can automatically quantify a tumor's radiographic characteristics and even offer prognostic insights into various cancers, such as lung cancer (P < 3.53 × 10⁻⁶). [47] Information generated by this tool can also be used to determine distant metastasis in lung adenocarcinoma, [48] tumor histologic subtypes, [46] biological patterns of tumors such as somatic mutations, [49] and gene expression profiles. [50]
In addition, cancer imaging can quantify the intratumor characteristics of lung cancer, otherwise known as intratumor heterogeneity (ITH). [51] Unlike laboratory biomarkers, AI-guided cancer imaging is available to the clinician in real time, does not require time-consuming laboratory assays, and is non-invasive. Imaging can also capture the tumor in three dimensions (3D), not just the portion of the tumor biopsied for further testing. [52]
- 5)
AI and Prostate cancer screening and detection:
Aside from skin cancers, prostate cancer is the most common cancer in men in the United States (US). It also accounts for the second-highest number of cancer-related deaths in US men. [54] Fortunately, mortality from prostate cancer is low relative to other cancers. [56] Several challenges confront the diagnosis and management of prostate cancer, including: A) the inability to predict whether a cancer detected on screening will become aggressive, which leads to overdiagnosis and overtreatment; and B) poor sampling of prostate tissue biopsies, which leads to misdiagnosis and progression of missed cancers. In some studies, overdiagnosis of indolent prostate cancers reaches 67%, with an increased risk of treatment-related morbidity in these patients. [57] This has led to the development of specific classification systems for prostate cancer, such as the Gleason grading system, aimed at differentiating indolent from aggressive cancers. Prostate cancers with a Gleason grade of 7 or above, or pathologic volumes greater than 0.5 ml, are treated as aggressive cancers, while the rest are considered indolent. Patients with aggressive cancers undergo active treatment, while patients with indolent cancers are selected for active surveillance.
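The triage logic just described reduces to a simple threshold rule. The sketch below encodes it only to illustrate the decision boundary stated in the text; the function name is ours, and this is an illustration, not a clinical decision tool.

```python
def risk_category(gleason_score, tumor_volume_ml):
    """Toy triage rule mirroring the thresholds described in the text:
    Gleason >= 7 or pathologic volume > 0.5 ml is managed as aggressive
    disease; otherwise the cancer is considered indolent.
    """
    if gleason_score >= 7 or tumor_volume_ml > 0.5:
        return "aggressive: active treatment"
    return "indolent: active surveillance"
```

Note that either criterion alone is enough to move a patient into the aggressive pathway, which is part of why overdiagnosis of borderline lesions is a concern.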
Prostate cancer is biologically heterogeneous, complicating diagnosis, treatment, and prognosis. As a result, genomic profiling and multiple guidelines have been developed to address this issue. In addition to genomic profiling, AI research in prostate cancer aims to help clinicians detect, localize, characterize, stage, and monitor the disease. MRI and ultrasound techniques relying on supervised machine learning, deep learning, and other computational methods are increasingly being utilized to detect aggressive prostate cancers, with promising results. [29]
The soft-tissue contrast of multiparametric magnetic resonance imaging (mpMRI) can help detect and localize clinically suspicious prostate lesions. Data derived from mpMRI include tissue anatomy, characteristics, and function, so the technology is well suited to detecting potentially aggressive prostate cancers. mpMRI has also improved targeted biopsy sampling of prostate cancers. According to a study in the United Kingdom, mpMRI significantly decreased the overdiagnosis of prostate cancer and reduced unnecessary prostate biopsies by a quarter when employed as a triaging tool. [58] In another randomized trial of 500 patients, the use of mpMRI before biopsy significantly increased the detection of aggressive prostate tumors compared with the current standard of care (38% vs. 26%). [59]
In recent years, AI models in the form of CADe and CADx systems, integrated with the Prostate Imaging Reporting and Data System (PI-RADS), have increased prostate cancer diagnostic accuracy and reduced interobserver disagreement among radiologists. [60,61,62,63] In addition, recent advances in deep learning networks such as CNNs have revolutionized investigative research into prostate cancer imaging, and CNN architectures used to train deep networks for prostate cancer have achieved significant results. For instance, CNNs with automatic windowing mechanisms have been used to classify MRI findings, overcoming difficulties in MRI interpretation such as high dynamic ranges and low-contrast edges. [64] Other investigators have stacked mpMRI images as the channels of 2-D red-green-blue (RGB) images for training purposes. [65,66] Prostate cancers can also be localized and classified simultaneously using deep learning systems. [67] Anatomic features added to the last layers of CNNs significantly improved their performance. [64] Additionally, integrating radiofrequency ultrasound data with AI techniques for prostate cancer classification has yielded promising results. [68,69,70]
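The channel-stacking idea mentioned above, feeding co-registered mpMRI sequences to a 2-D CNN the way RGB channels are fed, can be sketched as follows. The sequence names and the per-channel min-max normalization are illustrative assumptions, not the cited authors' exact pipeline.

```python
import numpy as np

def stack_mpmri(t2w, adc, dwi):
    """Stack three co-registered mpMRI maps into one 3-channel array,
    analogous to the R, G, B channels of a natural image.

    Each input is a 2-D array of the same shape; each channel is
    min-max scaled to [0, 1] so the sequences share a common range.
    """
    channels = []
    for seq in (t2w, adc, dwi):
        seq = seq.astype(float)
        lo, hi = seq.min(), seq.max()
        channels.append((seq - lo) / (hi - lo + 1e-8))  # per-channel scaling
    return np.stack(channels, axis=-1)  # shape (H, W, 3), CNN-ready
```

The resulting (H, W, 3) array can be passed to any image CNN expecting RGB input, which is what makes transfer learning from natural-image networks attractive in this setting.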
- 6)
Imaging of CNS Tumors:
Central nervous system (CNS) pathologies come in many varieties. CNS parenchymal cancers arise mainly from systemic metastases and gliomas, while tumors of non-neural tissue, such as pituitary adenomas, schwannomas, and meningiomas, make up much of the remainder. This variety of CNS tumors can pose diagnostic challenges to clinicians during imaging. Significant challenges encountered during CNS imaging include the following:
- A)
Ensuring that tumor diagnosis is accurate enough to optimize clinical decisions.
- B)
Ability to distinguish signal characteristics of surrounding neural tissue from those of the primary tumor throughout the clinical surveillance period of the tumor.
- C)
Ability to map the genotypes of tumors based on their phenotypic manifestations during imaging.
Traditionally, CNS tumors are diagnosed via imaging. The decision whether to treat a patient is made using a set of clinical and radiologic criteria. A definitive histopathologic diagnosis is made after biopsy, and prognosis is eventually determined over time through the patient's scheduled follow-up visits. AI aims to extrapolate pathologic and genomic data from imaging and, through computational image analysis with shared network algorithms, provide clinicians with enough data upfront to make the best clinical decisions for their patients. [29]
Table 3 below summarizes the role of artificial intelligence in the imaging of CNS tumors:
B) AI and Histopathologic Diagnosis and staging:
Pathology involves the diagnosis of diseases. Anatomic pathologists usually examine body tissues that have been processed and fixed on glass slides, typically through a microscope. [71] Pathologists across the globe have historically relied on glass slides to render a diagnosis. However, preparing tissues for glass slide examination and diagnostic reporting by a pathologist is time-consuming. Even more time-consuming is transporting the slides for second opinions by subspecialty pathologists, which impedes excellent and timely health care delivery to patients. [72] Fortunately, recent advancements in whole slide imaging (WSI) technology seek to reverse this trend. [73] WSI involves scanning entire pathology glass slides to produce digital images of diagnostic quality. [74,75] Whole slide imaging scanners usually produce diagnostic-quality images through specialized high-resolution cameras combined with optics and relevant computer software. [76] These digitized images are then converted to pixel pipelines that can be viewed remotely and easily shared among institutions for second opinions. [77] AI can be used to integrate these digitized image pixels into deep learning algorithms capable of identifying patterns, features, and shapes on WSI slides, all geared toward improving the pathologist's diagnostic workflow and accuracy. [74] Multiple studies have demonstrated that diagnoses rendered from digital images are not significantly different from those made with conventional microscopes and glass slides. [73,78,79,80,81]
According to a 2011 study by Beck et al., computer algorithms trained on standard anatomic pathology glass slides of breast cancer could correctly predict the likelihood of certain patients' breast cancer progressing to more severe disease. The study also generated an image-based risk score for use in breast cancer prognosis, decreasing the need for expensive and time-consuming molecular assays. [82] In 2019, Nagpal et al. developed a two-stage deep learning system (DLS) to carry out Gleason scoring and quantitation on prostatectomy specimens. The first stage of training used a CNN-based regional Gleason pattern classification; the second used 1,159 slide-level pathologist classifications. On analysis, the DLS showed a significantly higher diagnostic accuracy (0.70; p = 0.002) than the 0.61 achieved by 29 pathologists on a validation set, thus providing better patient risk stratification in correlation with clinical follow-up data. [83] For Gleason grading, the DLS achieved an AUC of 0.95-0.96, and for Gleason grades ≥ 4, it showed greater sensitivity and specificity than 9 of 10 pathologists.
Digital imaging and AI technologies enable the storage of large image datasets and the retrieval of images that are not annotated or indexed. Hegde et al. (2019) described SMILY (similar image search for histopathology), an AI algorithm developed by Google that uses a database of unlabeled images to find similar images. [84]
C) Limitations and Future Prospects of AI and Cancer Imaging
Despite the numerous successes recorded thus far, several limitations still confront AI in cancer imaging. Much remains to be done before AI can be widely applied to the histopathologic diagnosis of cancers: pathology laboratories must be digitized, pathologists' workflows changed, and WSI scanners acquired. Fundamental changes in how tissues are processed are needed to implement a digitized laboratory workflow, computer-assisted diagnosis, and automated image analysis. [71,85,86]
In radiology, the inability to curate the large volumes of data generated by CT and MRI remains an obstacle to researchers' hopes of developing automated clinical solutions. Deep neural networks and other AI methods are data-hungry and rely heavily on training with curated datasets. Curating such enormous volumes of data requires highly trained specialists, which significantly increases the overall cost of the process. [29] Alternatives to curated datasets include unsupervised [87] and self-supervised [88] AI methods and synthetic data. [89] Yet another limitation is the lack of consensus on specific datasets that can be used for standardized benchmarking in cancer imaging. [90]
In addition, the withholding of relevant datasets from AI scientists by institutional, professional, and governmental groups due to technical, legal, or ethical concerns remains a challenge to overcome. [91] Nevertheless, some progress has been recorded in this regard, an example being the National Institutes of Health's (NIH) recent sharing of chest X-ray and CT repositories with AI scientists for research purposes. [92]
On the ethical front, some algorithmic designs may be unethical [93] and, as such, compromise holistic healthcare delivery in favor of profit-making. Heavy reliance on AI-based solutions may encourage physicians to abandon common sense in medicine and may negatively impact patient-doctor relationships and confidentiality. [94] Imaging may also detect clinically insignificant incidental findings that are poorly interpreted, leading to numerous unwarranted tests and treatments that increase morbidity and decrease a patient's quality of life. Much work therefore remains to be done to allow AI-based classification of incidental findings from indolent to potentially aggressive lesions. As for the future, the ability of AI to supplant parts of clinicians' workflows will depend on significant improvements in AI methodologies, with efficacy comparable or superior to that of human experts. [29]
Finally, in a world with limited access to expert clinicians, AI may serve as the chief consultant to physicians in interpreting disease from imaging. This achievement would improve healthcare delivery and efficiency, reduce the overall cost of healthcare, and open possibilities in disease detection and management not hitherto conceived. [29]