Our key findings are summarized in Table 1. Publication metrics for AI over time show an exponential increase in recent years [5]. There are now over 60,000 scientific articles dealing with AI overall (including ML, DL, and classic AI techniques), and around 3,000 specifically dealing with the application of AI in biomedical imaging. The percentage of such articles that explicitly mention the term “AI” (as opposed to, e.g., “multivariate regression”) in the title has also risen to nearly 60%. When not explicitly specified, most modern usage of the term AI refers to ML/DL.
The current approach to AI is amenable to further advancement and exponential returns thanks to novel computing paradigms and technologies. Neural network programs have been around since the 1980s but required massive computational power; most of the recent explosive advances in DL are due not to more sophisticated models but to hardware bottlenecks finally being removed through GPU acceleration, massively parallel distributed computing, and related technologies. Embodying LLMs in robots (embodied cognition) can ground and improve their internal “world simulations” through perception [6]. Genetic algorithms (GAs), a type of metaheuristic optimization algorithm inspired by natural selection, “evolve”, in silico, a population of candidate solutions to a problem; in AI research they are frequently used to improve the performance of existing algorithms by optimizing their parameters, such as the number of layers, the learning rate, and the activation functions. They are likely to play an increasingly significant role in AI research going forward, especially as computational power improves to the point that billions of parallel simulations of evolutionary processes can be run at timescales orders of magnitude faster than biological evolution.
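To make the selection-crossover-mutation loop concrete, here is a minimal sketch of a GA tuning three hyperparameters; the search space and fitness function are invented placeholders (a real pipeline would score each candidate by training and validating a network):

```python
import random

# Hypothetical search space: number of layers, learning rate, activation.
LAYERS = list(range(1, 9))
LEARNING_RATES = [1e-4, 3e-4, 1e-3, 3e-3, 1e-2]
ACTIVATIONS = ["relu", "tanh", "sigmoid"]
SPACE = [LAYERS, LEARNING_RATES, ACTIVATIONS]

def random_genome():
    return [random.choice(options) for options in SPACE]

def fitness(genome):
    # Placeholder: a real GA would train a network with these
    # hyperparameters and return its validation accuracy.
    layers, lr, act = genome
    return -abs(layers - 4) - 100 * abs(lr - 1e-3) - (0 if act == "relu" else 1)

def evolve(pop_size=20, generations=30, mutation_rate=0.1):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            # Crossover: each gene comes from one of two parents.
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]
            # Mutation: occasionally re-randomize one gene.
            if random.random() < mutation_rate:
                i = random.randrange(len(SPACE))
                child[i] = random.choice(SPACE[i])
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

print(evolve())  # e.g., [4, 0.001, 'relu']
```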
Despite its name, Deep Learning’s understanding of the problems it is solving has often been criticized as superficial. One of the most promising avenues for future advancements in AI is integration with knowledge graphs (structured common-sense knowledge databases or ontologies) and computational engines like IBM Watson and Wolfram Alpha [7,8,9,10,11,12,13,14]. Such integration can cover blind spots and provide a deeper semantic and contextual understanding and situational awareness. For instance, it can help AI understand the different usage of the word “like” in the phrase “time flies like an arrow” vs. “fruit flies like a banana.” This is important so that AI can know what scale or scope to focus on (e.g., the gestalt object-as-a-whole vs. the parts of the object) so as not to make category mistakes. The longest-running and one of the most ambitious examples of such a common-sense knowledge database is the Cyc AI project, which started in 1984 with the goal of creating a system that could codify human knowledge and reasoning abilities [7]. It is now one of the largest repositories of human knowledge in the world; its architecture consists of common-sense statements about the world that were manually written and curated by humans and codified in predicate logic. Modern knowledge graphs, by contrast, are updated automatically rather than manually. The Wolfram Alpha computational engine works through generalized grammar and linguistic understanding, symbolic mathematical representation, real-time curated structured data from databases, and computational algorithms, ultimately producing a structured report [8,9]. The salient features of a knowledge graph are accuracy, trustworthiness, consistency, relevancy, completeness, timeliness, ease of understanding, interoperability, accessibility, and licensing, each of which needs to be assigned a confidence score to find the best-fit knowledge graph for solving a given problem.
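To illustrate the idea at toy scale, a knowledge graph can be reduced to subject-predicate-object triples over which simple queries run; the sketch below (with invented facts and a toy query function, not Cyc’s or Wolfram Alpha’s actual interfaces) shows how structured background knowledge resolves the “flies like” ambiguity:

```python
# Toy knowledge graph as subject-predicate-object triples
# (invented facts for illustration; real systems such as Cyc
# encode millions of curated assertions in predicate logic).
TRIPLES = {
    ("time", "is_a", "abstract_concept"),
    ("fruit_fly", "is_a", "insect"),
    ("insect", "can", "fly"),
    ("insect", "can", "like_food"),
}

def holds(subject, predicate, obj):
    """Check a fact, following one level of is_a inheritance."""
    if (subject, predicate, obj) in TRIPLES:
        return True
    for s, p, o in TRIPLES:
        if s == subject and p == "is_a" and (o, predicate, obj) in TRIPLES:
            return True
    return False

# Disambiguating "X flies like Y": can "like" be a verb here?
print(holds("fruit_fly", "can", "like_food"))  # True  -> "like" is a verb
print(holds("time", "can", "like_food"))       # False -> "like" is a comparator
```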
AI has been transforming biomedical research and healthcare practice in recent years. What is considered a problem of “information overload” in the medical field is really a filter problem: more data is in principle better, but it needs to be structured, organized, and prioritized with the appropriate signal-to-noise ratio and confidence scores. With its ability to process and analyze vast amounts of data, AI is providing doctors and researchers with new tools to identify diseases, discover new treatments, and improve patient outcomes. These skills equip AI to solve the filter problem. The remainder of this review will explore the use of AI in biomedical research and healthcare practice, highlighting the benefits and challenges associated with these applications.
In biomedical research, AI is being used to automate image analysis of 2D and reconstructed 3D microscopy images, segmenting the boundaries of anything from whole cells and tissue slices down to organelles and other subcellular structures, which can then be tracked and counted. AI is also being used to analyze large datasets and identify new research directions: AI algorithms can analyze massive amounts of data from research studies and clinical trials to identify patterns and trends, leading to new discoveries and a better understanding of complex diseases. AI can even perform meta-analyses, updating in real time in the cloud as new research is published. These applications are revolutionizing biomedical research and are already improving the speed and accuracy of scientific discoveries. AI is also being used in rationally developing new drugs and therapies. By studying large datasets of biological information (e.g., genomic data and simulated protein folding), AI algorithms are helping identify novel potential drug targets and predict the potential effectiveness and specificity of different chemical compounds, thereby accelerating drug development. Moreover, AI can help map out the complex web of interactions between genotypic variation and the environment to predict drug responsiveness, further improving patient outcomes.
One of the most significant applications of AI in healthcare practice is medical diagnosis. AI algorithms can analyze large datasets of medical records, lab tests, and imaging scans to provide doctors with an accurate disease diagnosis. By detecting patterns and anomalies in the data that human doctors may miss, AI can inform clinical decision-making and suggest potential treatments. Additionally, AI can predict the likelihood of a patient developing a particular disease or condition, allowing doctors to take preventive measures to reduce the risk of future complications [15,16,17,18,19,20,21,22,23,24,25,26,27,28,29].
3.1. Biomedical Research
AI has emerged as a powerful tool for advancing research and development in optics and biomedicine. With its ability to process and analyze complex data and identify essential patterns, AI is transforming the way researchers understand disease processes, develop medical devices and treatments, and improve overall patient outcomes. AI has been applied toward automated high-resolution whole-cell and tissue segmentation, for instance whole kidney cell segmentation in Focused Ion Beam Scanning Electron Microscope (FIB-SEM) imaging data [15]. FIB-SEM works by adding a second beam (the ion beam) to a conventional scanning electron microscope and, thanks to advances in microscope technology, can generate data at nanoscale (4 nm) resolution, enabling the capture of unprecedented amounts of data. Researchers are generating 3D images in which all the organelles in the cell, and their respective volumes, are predicted by a trained AI model; this task would be impossible without deep-learning pipelines due to the sheer volume of data generated at this resolution. Once organelles are segmented with the help of AI, scientists use AI to help segment multicellular 3D structures. For example, FIB-SEM-based ML was used in the freshwater sponge Spongilla lacustris to render a 3D volume of the choanocyte chamber [16]. AI can automate the tracking and counting of whole cells [17,18] and of cilia and other tubular structures [19,20] in (e.g., confocal) microscopy image slices and Z-stacks. In the field of optics, AI is being used to improve and develop novel medical imaging technologies with enhanced capabilities to diagnose and treat diseases. For example, Optical Coherence Tomography (OCT) is an imaging technique that uses light waves to produce images of internal body structures. AI algorithms can be leveraged to analyze the large datasets produced by OCT to identify patterns indicative of disease or other pathological conditions that could be missed by traditional methods. DL has been used in OCT imaging of diabetic retinopathy, where it can segment and detect vasculature (Figure 2A), shadowing artifacts, and perfused areas, and even diagnose the severity of the disease state [21,22,23]. In biomedical engineering, AI is also being used to enhance the performance of optical and other research instruments. By studying enormous datasets, AI algorithms can identify new imaging targets and qualities, accelerating the development of new optical technologies with enhanced sensitivity and specificity. AI is providing researchers and clinicians with new ways to understand disease mechanisms and develop treatments.
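As a minimal sketch of the segment-then-count workflow described above, the following uses classical Otsu thresholding in scikit-image as a stand-in for a trained DL segmentation model (the filename and size threshold are hypothetical):

```python
from skimage import io, filters, measure, morphology

# Load one slice of a (hypothetical) confocal Z-stack.
image = io.imread("confocal_slice.tif", as_gray=True)

# Stand-in for a DL model: threshold into foreground/background,
# then discard tiny specks of noise.
mask = image > filters.threshold_otsu(image)
mask = morphology.remove_small_objects(mask, min_size=50)

# Label connected components, then count and measure the cells.
labels = measure.label(mask)
print(f"{labels.max()} cells detected")
for region in measure.regionprops(labels):
    print(f"cell {region.label}: area={region.area} px, centroid={region.centroid}")
```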
ML algorithms provide insights into new areas of research, exposing previously unknown relationships between datasets and identifying novel drug targets. AI algorithms can also be used for the design of novel drugs and the optimization of molecular structures to increase potency and selectivity and reduce toxicity. AI is increasingly being used in projects such as AlphaFold [24] to simulate protein folding, the process by which proteins adopt their functional, three-dimensional structures. Understanding the dynamics of protein folding is critical to understanding how proteins function in the body and how they can be targeted by drugs. AI algorithms are particularly well suited to this task because they can rapidly explore thousands of possible conformations virtually and identify the most energetically favorable structures. By using AI to simulate protein folding, researchers can gain insights into how proteins work and how they participate in disease pathology. This leads to the identification of more specific small-molecule libraries and thereby the development of more effective drugs that target specific proteins or protein-protein interactions [25,26,27,28,29].
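A toy sketch of the sample-and-score idea behind such conformational searches follows; the energy function is an invented placeholder (real force fields sum bonded and non-bonded terms, and AlphaFold in fact predicts structures directly with a deep neural network rather than by random search):

```python
import math
import random

def toy_energy(torsion_angles):
    # Invented placeholder energy; lower is more favorable.
    return sum(1 - math.cos(a) for a in torsion_angles)

def sample_conformations(n_residues=10, n_samples=10_000):
    """Randomly sample conformations and keep the lowest-energy one."""
    best, best_e = None, float("inf")
    for _ in range(n_samples):
        angles = [random.uniform(-math.pi, math.pi) for _ in range(n_residues)]
        e = toy_energy(angles)
        if e < best_e:
            best, best_e = angles, e
    return best, best_e

conf, energy = sample_conformations()
print(f"lowest-energy conformation found: E = {energy:.3f}")
```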
Moravec’s paradox [2] predicts that robot technicians are farther off on the hype cycle than automated grant- and paper-writing assistants, automated image and data analyzers, and automated literature reviewers. AI is fully capable of reviewing the literature, together with new data collected from new experiments, to form novel conclusions. Similarly, AI is just as equipped to take the literature, find the gaps, and identify experiments that remain to be done. AI is also able to troubleshoot thousands of methods all at once, but human troubleshooting currently remains more time- and cost-efficient. However, the highest-level cognitive skills that necessitate both advanced vertical thinking (logic and deductive reasoning) and lateral thinking (creativity and inductive reasoning) likely remain the farthest out of reach. Completely replacing a human research team, including the principal investigator, would require “strong”, i.e., human-level, AI [30].
Figure 2.
A) DL can apply object and pattern recognition towards automatically segmenting both microscopy and clinical (2D and reconstructed 3D) images. From left to right: predicted organelle boundaries; cell and cilia tracking and counting; detection of the vasculature in diabetic retinopathy.
B) An example of the traveling salesperson problem and its solution in computational complexity theory.
C) A medical diagnostic decision tree is isomorphic to an algorithm running on a nondeterministic computer (adapted from Arle et al., 2021 [31], used with permission).
3.2. Medical Practice
In computational complexity theory, NP-complete problems are decision problems that belong to both the NP complexity class and the class of NP-hard problems. NP refers to “nondeterministic polynomial time,” a complexity class that includes decision problems that can be solved by a non-deterministic Turing machine in polynomial time. NP-hard problems are decision problems that are at least as hard as the hardest problems in NP. NP-complete problems are considered the “hardest” problems in NP and are used as benchmarks for measuring the difficulty of other problems in the class. To date, no efficient algorithm has been found for solving NP-complete problems, and it is widely believed that none exists. The “traveling salesperson problem” (Figure 2B) is an example of a problem that on the surface seems simple but requires tremendous computational resources: finding the shortest route between cities that visits each city exactly once. The estimated timeframe for solving such problems exactly through traditional deterministic computational approaches is impractical due to the sheer number of possible combinations and configurations; however, good approximate solutions can be found in a practical timeframe by employing neural networks and heuristics that mimic the way humans solve such problems. Medical decision trees can be thought of as medical diagnosis and treatment (MDT) algorithms, isomorphic to an algorithm running on a non-deterministic computer (Figure 2C). Although MDT is NP-complete, it is nevertheless amenable to neural network approaches [31].
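To illustrate how a simple heuristic sidesteps the combinatorial explosion, the sketch below applies the greedy nearest-neighbor rule, one of many possible heuristics, to randomly placed cities; it returns a good-enough tour in O(n²) time rather than a provably optimal one in O(n!) time:

```python
import math
import random

# Randomly place cities on a plane (coordinates are arbitrary).
cities = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(20)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbor_tour(cities):
    """Greedy heuristic: always visit the closest unvisited city."""
    unvisited = cities[1:]
    tour = [cities[0]]
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(tour[-1], c))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

tour = nearest_neighbor_tour(cities)
length = sum(dist(tour[i], tour[i + 1]) for i in range(len(tour) - 1))
length += dist(tour[-1], tour[0])  # return to the starting city
print(f"heuristic tour length: {length:.1f}")
```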
The application of AI in the healthcare industry is revolutionizing medical imaging, allowing medical professionals to diagnose and treat medical illnesses more efficiently and accurately [32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69]. With advances in AI technology, medical imaging is becoming more sophisticated and offers more accurate diagnoses, which can lead to improved patient outcomes. AI applications in medical imaging are being applied in several ways across various specialties, including nuclear medicine and radiology [32,33,34,35,36,37,38,39], oncology [40], and cardiology [37]. One of the most common applications of AI in medical imaging is in radiology, where deep learning algorithms are used to recognize potentially cancerous lesions in radiology images. DL algorithms can recognize subtle anomalies that are not easily detectable by the human eye, which can lead to earlier and more accurate medical diagnoses. Another application of AI in medical imaging is in oncology, where AI algorithms are used to detect cancerous cells in medical imaging scans, such as MRI and PET scans. These algorithms can recognize patterns in imaging data and detect cancerous cells much earlier than traditional methods, increasing the survival rates of cancer patients. Cardiology is another area where AI is being used in medical imaging: AI algorithms can recognize changes in the heart’s anatomy and physiology, enabling cardiologists to diagnose and treat cardiovascular diseases more accurately, efficiently, and effectively. AI can also be useful in neurology and neurosurgery [41,42]. It can help decode neural signals in amputees who use bionic limbs, reducing the need for neurorehabilitation and reliance on neuroplasticity; likewise, it can help better interpret EEGs [42], which run into the “inverse problem”: multiple brain states can generate the same output, rendering them indistinguishable. In the surgical specialties, it can inform image-guided operations. In summary, AI is transforming medical imaging, providing healthcare professionals with innovative solutions to medical problems and the opportunity to diagnose and treat illnesses with greater accuracy, efficiency, and speed, leading to better patient care and higher survival rates. While challenges remain, the promise of AI in medical imaging is too great to ignore, and the healthcare industry should continue to invest in its development to realize its full potential in improving patient outcomes.
Bayesian reasoning, based on probabilistic calculation, is the ideal approach for science and evidence-based clinical decision-making, so it should serve as a framework for any medical AI (Equation (1)). In clinical decision-making, Type I reasoning tends to be used far more often than Type II reasoning due to time and other constraints. Type I reasoning relies predominantly on pattern recognition based on data collection from the history and physical, labs, and imaging. This is followed by problem presentation to make sense of the data (e.g., identifying key elements, classifying, using semantic qualifiers, and developing context or framing), then accessing numerous memorized illness scripts (epidemiology, typical disease time course, clinical features and clinical pearls, pathophysiology) to optimize the search for a potential match, which is the diagnosis. Sometimes the reaction to treatment is used as part of the diagnosis. By contrast, Type II reasoning, the more scientific approach, is based on hypothesis generation and refinement, diagnostic testing, and causal reasoning, followed by diagnostic verification. Type I is fast and unconscious but requires experience and is less effective for rare diseases; Type II has a low error rate even for a less experienced physician or a rare disease but is slow and takes deliberate conscious effort. Medical AI can leverage both types of reasoning, since computers are inherently adept at rapidly performing the logical calculations required for Type II reasoning and the memorization needed for Type I, and DL can provide the pattern-recognition horsepower needed for Type I. Integration with structured knowledge graphs can aid in the abstract thinking and critical reasoning needed for Type II, where DL falls short. AI also needs to understand thresholds to test and treat, pre-test and post-test probability, likelihood ratios, sensitivity and specificity, and false positives and negatives to generate a valid differential diagnosis and treatment plan.
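As a worked sketch of this Bayesian updating, the snippet below converts a pre-test probability to a post-test probability via likelihood ratios and odds; the prevalence, sensitivity, and specificity figures are invented for illustration:

```python
def post_test_probability(pre_test_prob, sensitivity, specificity, positive=True):
    """Update disease probability after a test result using Bayes' theorem,
    expressed with likelihood ratios and odds."""
    lr = sensitivity / (1 - specificity) if positive else (1 - sensitivity) / specificity
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Invented example: 2% pre-test probability, test with 90% sensitivity
# and 95% specificity. A positive result (LR+ = 0.90/0.05 = 18) raises
# the probability only to about 27%, not 90%: base rates matter.
print(f"{post_test_probability(0.02, 0.90, 0.95, positive=True):.2%}")
```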
Moravec’s paradox [2] foresees that nurse robots and truly autonomous robot surgeons are far off, because skills that require manual dexterity and object recognition are hard tasks for machines, while analyzing a CT scan or financial transactions is easier for computers and more difficult for humans. Thus, the non-surgical specialties, particularly radiology, are more likely to be automated sooner. A sufficiently sophisticated medical AI could in theory manage simple, common diagnoses if properly trained and provided with all the necessary data derived from clinical, imaging, and laboratory tests, and then recommend standard treatments based on algorithms that follow the latest evidence-based clinical guidelines. AI can be used in “precision” medical and science education that adapts to each student’s personal learning style and needs [32,43,44], and surgical (and pipetting) robots can be used as teaching tools for budding physician-scientists (yet this can also backfire, as some students will inevitably use AI to cheat). Furthermore, a webcam-equipped robot that follows medical students during clinical rotations (and new graduate students in the lab), guiding them and answering basic questions, could take some of the teaching or training burden off others. However, the truly complex medical cases and rare diagnoses that lie outside “textbook medicine” and require “outside-the-box” thinking and deep knowledge and insight, not just brute-force memorization and simple pattern recognition, will prove extremely challenging to compute. While we may one day have passable AI radiologists, and eventually perhaps in several decades’ time even licensed robot surgeons and registered nurses, for better or worse we might never have a Dr. House “medical genius” AI, or the clinical equivalent of an omniscient Oracle of Delphi.
Figure 3.
Venn diagram depicting the concept of the triad of modern warfare. A. Cyberwarfare = attacks on computer networks themselves to take down servers and websites, and/or attacks on internet-connected infrastructure like financial, transportation, and communication systems or the energy grid. B. Biowarfare = introducing either naturally occurring biological agents or synthetic biological weapons (genetically engineered viruses, bacteria, etc.) that can cause harm to a target population and spread by contagion. C. Infowarfare (alt: netwar) = disinformation attacks conducted against an adversary population via any network, not necessarily the internet, intended to deceive by disseminating propaganda and conspiracy theories. Note the various combinations of overlapping regions D, E, F, and G. The most effective and untraceable attacks (and thus the least likely to receive retribution) may lie at the intersections of these three approaches, e.g., automated and weaponized anti-vaccine health communication.