Preprint
Review

New Advances in Artificial Intelligence for Biomedical Research and Clinical Decision-Making

Submitted: 01 June 2023; Posted: 05 June 2023
Abstract
(1) Background: Artificial intelligence (AI) has existed in some form for decades, but recent rapid advances in a subset called machine learning (ML) — and more specifically deep learning (DL), a neural network-based approach — have made headlines for their potential to revolutionize and automate large sectors of society, including scientific research and healthcare. Furthermore, large language models (LLMs), which are built on DL, could lead to a more seamless, natural interaction between humans and computers. (2) Methods: We reviewed numerous publications on this subject from recent years. (3) Results: We found that these studies collectively show AI is positively disrupting both biomedical research and medical practice, for example in optical imaging and surgical guidance. (4) Conclusions: However, we recommend caution against over-reliance on AI in the laboratory or the clinic due to anticipated risks and current limitations.
Keywords: 
Subject: Computer Science and Mathematics - Artificial Intelligence and Machine Learning

1. Introduction

Of particular interest to biomedical researchers and clinicians worldwide is how artificial intelligence (AI) in its various forms has been applied to their respective fields, what its applications, advantages, risks, and limitations are, and what the future may hold. AI technologies have been advancing at an unprecedented rate, and we recognize that staying current with these advances is difficult and often not a primary focus for working scientists and clinicians. This review therefore provides an overview of AI’s various forms and their impact on the biomedical domain.
AI, machine learning (ML), deep learning (DL), and large language models (LLMs) are all terms associated with artificial intelligence, but they refer to distinct aspects of the field (Figure 1A). AI is a broad term that encompasses any technique enabling computers to mimic human behaviors and perform tasks that typically require some semblance of human intelligence [1]. ML, which rose to prominence in the 1980s, refers to AI that gives computers the ability to recognize patterns and to learn without being explicitly programmed to do so. The first simple software models of neural networks, loosely inspired by biological neurons, used a single hidden layer of nodes to process multiple input signals and produce an output. Much as biological neurons form synaptic connections of varying strengths, these computational nodes conveyed information to one another via links of differing weights. In more sophisticated models, the weights can be adjusted over time depending on how actively nodes fire via those connections, just as synapses grow stronger or weaker according to use, thereby facilitating learning. In the 2010s, DL began to gain popularity: artificial neural networks with multiple hidden layers between the input and output layers for processing hierarchical features (Figure 1B). In image recognition, for example, the first layer might extract the pertinent light vs. dark pixel values from a provided image, the next layer would detect edges, subsequent layers would identify combinations of edges and salient features, and the final layers would integrate those features into a decision about what the image represents, which the model outputs back to the user. As a more concrete example, the input could be a microscopy image of a cell, the network would examine features such as size and morphology, and the output could be whether the cell is cancerous.
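To make the layered structure above concrete, the following is a minimal sketch (not drawn from any of the reviewed studies; the layer sizes, random weights, and toy 8×8 “image” are illustrative assumptions) of a small feedforward network in Python/NumPy, showing how successive layers of weighted links transform an input into an output decision:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy network: 64 input "pixels" -> two hidden layers -> 1 output
# (e.g., the probability that a cell image is cancerous).
W1, b1 = rng.normal(size=(64, 32)) * 0.1, np.zeros(32)
W2, b2 = rng.normal(size=(32, 16)) * 0.1, np.zeros(16)
W3, b3 = rng.normal(size=(16, 1)) * 0.1, np.zeros(1)

def forward(pixels):
    """Pass an image (flattened to 64 values) through the layers.

    Each layer multiplies its inputs by connection weights (the analogue
    of synaptic strengths), adds a bias, and applies a nonlinearity,
    extracting progressively higher-level features.
    """
    h1 = relu(pixels @ W1 + b1)   # e.g., light/dark contrasts
    h2 = relu(h1 @ W2 + b2)       # e.g., edges and simple shapes
    return sigmoid(h2 @ W3 + b3)  # final decision as a probability

image = rng.random(64)            # stand-in for a tiny 8x8 image
print(f"P(cancerous) = {forward(image)[0]:.3f}")
```

In a trained network, the weights would be adjusted iteratively (e.g., by gradient descent on labeled examples) rather than left random, which is the computational analogue of synapses strengthening or weakening with use.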
Computers are not people, of course, but what came as a surprise in the early days of AI research is that they not only possess a different “cognitive” skill set but, in some respects, one that runs opposite to that of humans. Moravec’s paradox, named after the famed roboticist Hans Moravec, who first identified it in the 1980s, is the counterintuitive observation that certain tasks which are easy for humans (even toddlers) turn out to be quite difficult for computers, and vice versa (Figure 1C) [2]. For example, simple face and object recognition proved far more difficult for machines than expected, and robots have historically struggled to master balance, walking, and fine motor skills. Creativity and abstract critical thinking (even understanding basic arithmetic beyond merely performing the logical operations) have long been out of reach for machines. On the other hand, modern computers can easily manage trillions of calculations per second and impeccably memorize vast stores of information. The advent of neural networks, which are probabilistic in their computational nature and capable of learning beyond what they were explicitly programmed to do, has started to provide computers with the pattern recognition skills needed to address these deficits.
Large language models (LLMs) are a specific type of DL model that has been trained on enormous amounts of text data, such as that harvested from the internet and public databases, and can generate convincing, meaningful, human-like text in response to a given prompt or query [3,4]. Thus, LLMs can converse with humans via natural language processing (which even makes them capable of writing functional new programming code when prompted by a user with little or no coding experience) and can retain the conversation history as additional context for subsequent outputs. Briefly, this works via text tokenization and token vector representation, which are acted upon by the LLM’s neural network (pretrained on text data and refined with reinforcement learning) through iterative, probabilistic token generation, ultimately resulting in generated text. This technology has been integrated with consumer voice assistants and smart speakers such as Siri and Alexa, which use text-to-speech (TTS) and speech-to-text (STT) for an even more seamless interaction between human and machine. The current foremost examples of LLMs are OpenAI’s ChatGPT (Generative Pre-Trained Transformer, currently on version 4) and Google’s LaMDA (Language Model for Dialogue Applications). The latest iterations of each contain on the order of 10^11 parameters (trainable connection weights), putatively rivaling the number of neurons in the human brain.
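The following toy sketch illustrates the iterative, probabilistic token-generation loop described above; the seven-word vocabulary and the random “model” standing in for a pretrained neural network are purely illustrative assumptions, not an actual LLM:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy vocabulary; a real LLM uses tens of thousands of subword tokens.
vocab = ["the", "patient", "has", "a", "fever", "cough", "."]

def next_token_logits(context_ids):
    """Stand-in for the neural network: returns a score per vocabulary
    token given the conversation so far. A real model computes these
    from billions of learned parameters."""
    return rng.normal(size=len(vocab))

def softmax(logits, temperature=0.8):
    z = (logits - logits.max()) / temperature
    p = np.exp(z)
    return p / p.sum()

# Iterative, probabilistic generation: pick one token at a time,
# append it to the context, and repeat until an end marker appears.
context = [vocab.index("the"), vocab.index("patient")]
for _ in range(10):
    probs = softmax(next_token_logits(context))
    token_id = rng.choice(len(vocab), p=probs)
    context.append(token_id)
    if vocab[token_id] == ".":
        break

print(" ".join(vocab[i] for i in context))
```

A real LLM differs in scale rather than in the shape of this loop: the stand-in scoring function is replaced by a transformer with billions of learned parameters, and the generated tokens are subwords rather than whole words.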
It was once thought that an AI could never beat a human at chess. Yet machines now routinely outperform human players, and they have broken through barrier after barrier, conquering one domain after another that was previously accessible only to human capability.

2. Materials and Methods

We performed a broad literature review of recent peer-reviewed publications on this subject, pertaining to both basic science and clinical applications. The NCBI PubMed and arXiv databases were used to find most of the reviewed publications. General search terms included: “AI, biomedical laboratory research”; “AI, science”; “AI, microscopy imaging”; “automated cell counting”; “AI, optics”; “AI, optical coherence tomography”; “AI, rational drug discovery”; “AI, genetics”; “AI, protein folding”; “AI, medicine”; “AI, healthcare”; “AI, clinical practice”; “AI, medical diagnosis and treatment”; “AI, empathy”; “AI, radiology”; “robotics, telesurgery, telemedicine”; “AI, medical education”; “natural language processing”; “knowledge graphs”; “anti-vaccine bots, social media”; “weaponized health communication”; “AI, misinformation, disinformation”; “AI, bias”; “AI, explainable”; “AI, cybersecurity.” Inclusion criteria: preference was given to research papers published within the last 5 years. Primary scientific literature was used as much as possible for specific use cases, and review articles were referenced as needed for the overarching discussion. No quantitative or statistical meta-analysis was performed. Data on AI publication metrics over time are derived from Sardanelli et al., 2023 [5].

3. Results

Our key findings are summarized in Table 1. Publication metrics for AI over time show an exponential increase in recent years [5]. There are now over 60,000 scientific articles dealing with AI overall (including ML, DL, and classic AI techniques), and around 3,000 specifically dealing with the application of AI in biomedical imaging. The percentage of such articles that explicitly mention the term “AI” (as opposed to, e.g., “multivariate regression”) in the title has also increased, to almost 60% currently. When not explicitly specified, most modern usage of the term AI refers to ML/DL.
The current approach to AI is amenable to further advancement and exponential returns thanks to novel computing paradigms and technologies. Neural network programs have existed since the 1980s but required massive computational power; most of the recent explosive advances in DL are due not to more sophisticated models but rather to the removal of hardware bottlenecks through GPU acceleration, massively parallel distributed computing, and related technologies. Embodying LLMs in robots can ground their inner “world simulations” in perception [6]. Genetic algorithms (GAs) are frequently used in AI research to improve the performance of existing algorithms by optimizing their parameters, such as the number of layers, the learning rate, and the activation functions; they are a type of metaheuristic optimization algorithm inspired by natural selection that “evolves,” in silico, a population of candidate solutions to a problem. GAs are likely to play an increasingly significant role in AI research going forward, especially as computational power improves to the point that billions of parallel simulations of evolutionary processes can be run at timescales orders of magnitude faster than biological evolution.
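As an illustration of the idea, here is a minimal GA sketch; the hyperparameters being tuned and the stand-in fitness function are hypothetical, and a real pipeline would train and validate a model at each fitness evaluation:

```python
import random

random.seed(1)

# Each "genome" is a candidate hyperparameter set for a hypothetical model.
def random_genome():
    return {"layers": random.randint(1, 8),
            "learning_rate": 10 ** random.uniform(-4, -1)}

def fitness(g):
    # Stand-in for validation accuracy; in practice this would train and
    # evaluate a model using these hyperparameters.
    return -abs(g["layers"] - 4) - abs(g["learning_rate"] - 0.01) * 50

def crossover(a, b):
    # Child inherits each hyperparameter from one of the two parents.
    return {k: random.choice([a[k], b[k]]) for k in a}

def mutate(g, rate=0.3):
    g = dict(g)
    if random.random() < rate:
        g["layers"] = max(1, g["layers"] + random.choice([-1, 1]))
    if random.random() < rate:
        g["learning_rate"] *= random.choice([0.5, 2.0])
    return g

population = [random_genome() for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                              # selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(15)]                       # variation
    population = parents + children                       # next generation

print("Best hyperparameters found:", max(population, key=fitness))
```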
Despite its name, deep learning’s understanding of the problems it is solving has often been criticized as superficial. One of the most promising avenues for future advancement in AI is integration with knowledge graphs (structured common-sense knowledge databases or ontologies) and computational engines like IBM Watson and Wolfram Alpha [7,8,9,10,11,12,13,14]. This can cover blind spots and provide deeper semantic and contextual understanding and situational awareness. For instance, it can help AI understand the different usage of the word “like” in the phrase “time flies like an arrow” vs. “fruit flies like a banana.” This is important so that AI knows what scale or scope to focus on (e.g., the gestalt object as a whole vs. parts of the object) and does not make category mistakes. The longest-running and one of the most ambitious examples of such a common-sense knowledge database is the Cyc AI project, which started in 1984 with the goal of creating a system that could codify human knowledge and reasoning abilities [7]. It is now one of the largest repositories of human knowledge in the world; its architecture consists of common-sense statements about the world that were manually written and curated by humans and codified in predicate logic. Modern knowledge graphs are updated automatically rather than manually. The Wolfram Alpha computational engine works through generalized grammar and linguistic understanding, symbolic mathematical representation, real-time curated structured data from databases, and computational algorithms, ultimately producing a structured report [8,9]. The salient features of a knowledge graph are accuracy, trustworthiness, consistency, relevancy, completeness, timeliness, ease of understanding, interoperability, accessibility, and licensing, all of which need to be assigned confidence scores to find the best-fit knowledge graph for solving a given problem.
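A minimal sketch of the underlying data structure may be helpful: a knowledge graph can be reduced to subject-predicate-object triples, each carrying a confidence score that a query can threshold on. The facts and scores below are illustrative placeholders, not entries from Cyc or any curated ontology:

```python
# A tiny knowledge graph as (subject, predicate, object) triples, each
# carrying a confidence score as discussed above.
triples = [
    ("metformin", "treats", "type 2 diabetes", 0.98),
    ("metformin", "is_a", "biguanide", 0.99),
    ("type 2 diabetes", "risk_factor", "obesity", 0.95),
    ("fruit fly", "is_a", "insect", 0.99),
]

def query(subject=None, predicate=None, obj=None, min_confidence=0.9):
    """Return all triples matching the given pattern (None = wildcard)
    whose confidence meets the threshold."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)
            and t[3] >= min_confidence]

# What do we know about metformin with high confidence?
for s, p, o, c in query(subject="metformin"):
    print(f"{s} --{p}--> {o}  (confidence {c})")
```

Production knowledge graphs store billions of such triples with richer schemas and provenance, but the same pattern-matching idea underlies how they can supply structured, confidence-scored context to a DL model.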
AI has been transforming biomedical research and healthcare practice in recent years. What is considered a problem of “information overload” in the medical field is really a filter problem: more data is in principle better, but it needs to be structured, organized, and prioritized with an appropriate signal-to-noise ratio and confidence scores. With its ability to process and analyze vast amounts of data, AI is providing doctors and researchers with new tools to identify diseases, discover new treatments, and improve patient outcomes; these capabilities equip AI to solve the filter problem. The remainder of this review explores the use of AI in biomedical research and healthcare practice, highlighting the benefits and challenges associated with these applications.

In biomedical research, AI is being used to automate image analysis of 2D and reconstructed 3D microscopy images, segmenting the boundaries of anything from whole cells and tissue slices down to organelles and other subcellular structures, which in turn allows such structures to be tracked and counted. AI is also being used to analyze large datasets and identify new research directions: AI algorithms can analyze massive amounts of data from research studies and clinical trials to identify patterns and trends, leading to new discoveries and a better understanding of complex diseases. It can even perform meta-analyses, updating in real time in the cloud as new research is published. These applications are revolutionizing biomedical research and are already improving the speed and accuracy of scientific discoveries. AI is also being used in the rational development of new drugs and therapies. By studying large datasets of biological information (e.g., genomic data and simulated protein folding), AI algorithms are helping identify novel potential drug targets and predict the potential effectiveness and specificity of different chemical compounds, thereby accelerating drug development. Moreover, AI can help map out the complex web of interactions between genotypic variation and the environment to predict drug responsiveness, further improving patient outcomes.

One of the most significant applications of AI in healthcare practice is medical diagnosis. AI algorithms can analyze large datasets of medical records, lab tests, and imaging scans to provide doctors with an accurate disease diagnosis. By detecting patterns and anomalies in the data that human doctors may miss, AI can inform clinical decision-making and suggest potential treatments. Additionally, AI can predict the likelihood of a patient developing a particular disease or condition, allowing doctors to take preventive measures to reduce the risk of future complications [15,16,17,18,19,20,21,22,23,24,25,26,27,28,29].

3.1. Biomedical Research

AI has emerged as a powerful tool for advancing research and development in optics and biomedicine. With its ability to process and analyze complex data and identify essential patterns, AI is transforming the way researchers understand disease processes, develop medical devices and treatments, and improve overall patient outcomes. AI has been applied to automated high-resolution whole-cell and tissue segmentation, for instance whole kidney cell segmentation in focused ion beam scanning electron microscopy (FIB-SEM) imaging data [15]. Thanks to advances in microscope technology, FIB-SEM can generate data at nanoscale (4 nm) resolution; it works by adding a second beam (the ion beam) to a conventional scanning electron microscope. This resolution enables the capture of unprecedented amounts of data. Researchers are generating 3D images in which all the organelles in the cell and their respective volumes are predicted by a trained AI model, a task that would be impossible without deep-learning model pipelines due to the sheer volume of data generated at this resolution. Once organelles are segmented with the help of AI, scientists use AI to help segment multicellular 3D structures. For example, FIB-SEM-based ML in the freshwater sponge Spongilla lacustris was used to render a 3D volume of the choanocyte chamber [16]. AI can automate the tracking and counting of whole cells [17,18], cilia, and other tubular structures [19,20] in (e.g., confocal) microscopy image slices and Z-stacks. In the field of optics, AI is being used to improve and develop novel medical imaging technologies with enhanced capabilities to diagnose and treat diseases. For example, optical coherence tomography (OCT) is an imaging technique that uses light waves to produce images of internal body structures. AI algorithms can be leveraged to analyze the large datasets produced by OCT to identify patterns indicative of disease or other pathological conditions that could be missed by traditional methods. For example, DL has been used in OCT imaging of diabetic retinopathy, where it can segment and detect vasculature (Figure 2A), shadowing artifacts, and perfused areas, and even grade the severity of the disease [21,22,23]. In biomedical engineering, AI is also being used to enhance the performance of optical and other research instruments. By studying enormous datasets, AI algorithms can identify new imaging targets and qualities, accelerating the development of new optical technologies with enhanced sensitivity and specificity. AI is providing researchers and clinicians with new ways to understand disease mechanisms and develop treatments.
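As a simple illustration of the automated counting described above, the sketch below segments and counts bright “cells” in a synthetic image using classical thresholding and connected-component labeling with SciPy; the synthetic image and the threshold rule are illustrative assumptions, and the published pipelines cited here typically use trained deep-learning models rather than a fixed threshold:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Synthetic stand-in for a fluorescence microscopy slice: a dark field
# with a few bright, roughly circular "cells" plus noise. In practice
# this array would come from an image file or a microscope Z-stack.
image = rng.normal(0.05, 0.02, size=(256, 256))
yy, xx = np.mgrid[0:256, 0:256]
for cy, cx in [(60, 70), (120, 180), (200, 90), (180, 220)]:
    image += 0.8 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 8 ** 2))

# 1) Smooth to suppress noise, 2) threshold to segment foreground,
# 3) label connected components, 4) count and measure them.
smoothed = ndimage.gaussian_filter(image, sigma=2)
binary = smoothed > smoothed.mean() + 3 * smoothed.std()
labels, n_cells = ndimage.label(binary)
sizes = np.asarray(ndimage.sum(binary, labels, index=range(1, n_cells + 1)))

print(f"Detected {n_cells} cells; areas (pixels): {sizes.astype(int)}")
```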
ML algorithms provide insights into new areas of research, exposing previously unknown relationships between datasets and identifying novel drug targets. AI algorithms can also be used for the design of novel drugs and the optimization of molecular structures to increase potency and selectivity and reduce toxicity. AI is increasingly being used in projects such as AlphaFold [24] to simulate protein folding, the process by which proteins adopt their functional, three-dimensional structures. Understanding the dynamics of protein folding is critical to understanding how proteins function in the body and how they can be targeted by drugs. AI algorithms are particularly well suited to this task because they can rapidly explore thousands of possible conformations virtually and identify the most energetically favorable structures. By using AI to simulate protein folding, researchers can gain insights into how proteins work and how they contribute to disease pathology. This leads to the identification of more specific small-molecule libraries and thereby the development of more effective drugs that target specific proteins or protein-protein interactions [25,26,27,28,29].
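As a hedged illustration of the compound-screening idea (not the method of any cited study), the sketch below fits a random forest to synthetic molecular descriptors and uses it to rank candidate compounds by predicted affinity; the descriptors, the made-up structure-activity relationship, and the “affinity” values are all illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Synthetic stand-in for a compound library: each row is a candidate
# molecule described by simple numeric descriptors. A real pipeline
# would compute descriptors from chemical structures and train on
# measured binding affinities.
n_compounds = 500
X = np.column_stack([
    rng.uniform(150, 600, n_compounds),   # molecular weight
    rng.uniform(-2, 6, n_compounds),      # logP (lipophilicity)
    rng.integers(0, 6, n_compounds),      # hydrogen-bond donors
])
# Made-up structure-activity relationship plus noise.
y = (5.0 - 0.004 * X[:, 0] + 0.3 * X[:, 1] - 0.2 * X[:, 2]
     + rng.normal(0, 0.3, n_compounds))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print(f"Held-out R^2: {model.score(X_test, y_test):.2f}")
# Rank unseen compounds by predicted affinity to prioritize screening.
top = np.argsort(model.predict(X_test))[::-1][:5]
print("Indices of top-ranked candidates:", top)
```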
Moravec’s paradox [2] predicts that robot technicians are farther off on the hype cycle than automated grant- and paper-writing assistants, automated image and data analyzers, and automated literature reviewers. AI is fully capable of reviewing the existing literature together with newly collected experimental data to form novel conclusions. Similarly, AI is well equipped to survey the literature, find the gaps, and identify experiments that remain to be done. AI can in principle troubleshoot thousands of methods at once, but human troubleshooting currently remains more time- and cost-efficient. However, the highest-level cognitive skills, which require both advanced vertical thinking (logic and deductive reasoning) and lateral thinking (creativity and inductive reasoning), likely remain the farthest out of reach. Completely replacing a human research team, including the principal investigator, would require “strong,” i.e., human-level, AI [30].
Figure 2. A) DL can apply object and pattern recognition towards automatically segmenting both microscopy and clinical (2D and reconstructed 3D) images. From left to right: predicted organelle boundaries; cell and cilia tracking and counting; detection of the vasculature in diabetic retinopathy. B) An example of the traveling salesperson problem and solution in computational complexity theory. C) A medical diagnostic decision tree is isomorphic to an algorithm running on a nondeterministic computer (adapted from Arle et al., 2021 [31], used with permission).

3.2. Medical Practice

In computational complexity theory, NP-complete problems are decision problems that belong to both the NP complexity class and the class of NP-hard problems. NP refers to “nondeterministic polynomial time,” the complexity class of decision problems that can be solved by a nondeterministic Turing machine in polynomial time. NP-hard problems are problems that are at least as hard as the hardest problems in NP. NP-complete problems are considered the “hardest” problems in NP and are used as benchmarks for measuring the difficulty of other problems in the class. To date, no efficient algorithm has been found for solving NP-complete problems, and it is widely believed that none exists. The “traveling salesperson problem” (Figure 2B) is an example of a problem that seems simple on the surface but requires tremendous computational resources: finding the shortest route that visits each city exactly once. The estimated timeframe for solving such problems exactly through traditional deterministic computational approaches is impractical due to the sheer number of possible combinations and configurations; however, they can be solved approximately, in a practical timeframe, by employing neural networks and heuristics that mimic the way humans approach such problems. Medical decision trees can be thought of as medical diagnosis and treatment (MDT) algorithms, isomorphic to an algorithm running on a nondeterministic computer (Figure 2C). Although MDT is NP-complete, it is nevertheless amenable to neural network approaches [31].
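The contrast between exact and heuristic approaches can be seen in a few lines of code. The sketch below (random city coordinates; a simple nearest-neighbor rule standing in for the more sophisticated heuristics and neural approaches discussed above) compares a brute-force exact solution of a small traveling salesperson instance with a greedy approximation:

```python
import itertools
import math
import random

random.seed(3)
cities = [(random.random(), random.random()) for _ in range(9)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Exact (brute-force) search: factorial time, infeasible beyond ~a dozen cities.
best_exact = min(itertools.permutations(range(len(cities))), key=tour_length)

# Greedy nearest-neighbor heuristic: polynomial time, usually near-optimal.
unvisited = set(range(1, len(cities)))
route = [0]
while unvisited:
    nxt = min(unvisited, key=lambda c: dist(cities[route[-1]], cities[c]))
    route.append(nxt)
    unvisited.remove(nxt)

print(f"Exact optimum tour length: {tour_length(best_exact):.3f}")
print(f"Heuristic tour length:     {tour_length(route):.3f}")
```

Brute force scales factorially with the number of cities, whereas the greedy pass scales roughly quadratically, which is why heuristic and learned approximations are the only practical option at realistic problem sizes.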
The application of AI in the healthcare industry is revolutionizing medical imaging, allowing medical professionals to diagnose and treat illnesses more efficiently and accurately [32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69]. As AI technology advances, medical imaging is becoming more sophisticated and offers more accurate diagnoses, which can lead to improved patient outcomes. AI applications in medical imaging span various specialties, including nuclear medicine and radiology [32,33,34,35,36,37,38,39], oncology [40], and cardiology [37]. One of the most common applications is in radiology, where deep learning algorithms are used to recognize potentially cancerous lesions in radiology images. DL algorithms can recognize subtle anomalies that are not easily detectable by the human eye, which can lead to earlier and more accurate diagnoses. Another application is in oncology, where AI algorithms are used to detect cancerous cells in medical imaging scans, such as MRI and PET scans. These algorithms can recognize patterns in imaging data and detect cancerous cells much earlier than traditional methods, increasing the survival rates of cancer patients. Cardiology is another area where AI is being used in medical imaging: AI algorithms can recognize changes in the heart’s anatomy and physiology, enabling cardiologists to diagnose and treat cardiovascular diseases more accurately and efficiently. AI can also be useful in neurology and neurosurgery [41,42]. It can help decode neural signals in amputees who use bionic limbs, reducing the need for neurorehabilitation and reliance on neuroplasticity; likewise, it can help to better interpret EEGs [42], which run into the “inverse problem” that multiple brain states can generate the same output, rendering them indistinguishable. In the surgical specialties, it can inform image-guided operations.

In summary, AI is transforming medical imaging, providing healthcare professionals with innovative solutions and the opportunity to diagnose and treat illnesses with greater accuracy, efficiency, and speed. With its potential for earlier and more accurate diagnoses, AI could revolutionize the field, leading to better patient care and higher survival rates. While challenges remain, the promise of AI in medical imaging is too great to ignore, and the healthcare industry should continue to invest in its development to realize its full potential in improving patient outcomes.
Bayesian reasoning, based on probabilistic calculation, is the ideal approach for science and evidence-based clinical decision-making, so it should serve as a framework for any medical AI (Equation (1)). In clinical decision-making, Type I reasoning tends to be used far more often than Type II reasoning due to time and other constraints. Type I reasoning relies predominantly on pattern recognition based on data collected from the history and physical exam, labs, and imaging. This is followed by problem representation to make sense of the data (identifying key elements, classifying, using semantic qualifiers, and developing context or framing), then accessing numerous memorized illness scripts (epidemiology, typical disease time course, clinical features and clinical pearls, pathophysiology) to find a potential match, which becomes the diagnosis. Sometimes the response to treatment is used as part of the diagnostic process. By contrast, Type II reasoning, the more scientific approach, is based on hypothesis generation and refinement, diagnostic testing, and causal reasoning, followed by diagnostic verification. Type I is fast and unconscious but requires experience and is less effective for rare diseases; Type II has a low error rate even for a less experienced physician or a rare disease but is slow and takes deliberate conscious effort. Medical AI can leverage both types of reasoning, since computers are inherently adept at rapidly performing the logical calculations required for Type II reasoning and the memorization needed for Type I, while DL can provide the pattern recognition horsepower needed for Type I. Integration with structured knowledge graphs can aid in the abstract thinking and critical reasoning needed for Type II, where DL falls short. AI also needs to understand thresholds to test and treat, pre-test and post-test probability, likelihood ratios, sensitivity and specificity, and false positives and negatives in order to generate a valid differential diagnosis and treatment plan.
Moravec’s paradox [2] foresees that nurse robots and truly autonomous robot surgeons are far off, because skills that require manual dexterity and object recognition are hard tasks for machines, whereas analyzing a CT scan or financial transactions is easier for computers and more difficult for humans. Thus, the non-surgical specialties, particularly radiology, are more likely to be automated sooner. A sufficiently sophisticated medical AI could in theory manage simple common diagnoses if properly trained and provided with all the necessary data derived from clinical, imaging, and laboratory tests, and then recommend standard treatments based on algorithms that follow the latest evidence-based clinical guidelines. AI can be used in “precision” medical and science education that adapts to each student’s personal learning style and needs [32,43,44], and surgical (and pipetting) robots can be used as teaching tools for budding physician-scientists. (Yet it can also backfire, as some students will inevitably use it to cheat.) Furthermore, a webcam-equipped robot that follows medical students during clinical rotations (and new graduate students in the lab), guiding them and answering basic questions, could take some of the teaching or training burden off others. However, the truly complex medical cases and rare diagnoses that lie outside “textbook medicine” and require “outside-the-box” thinking and deep knowledge and insight, not just brute-force memorization and simple pattern recognition, will prove extremely challenging to automate. While we may one day have passable AI radiologists, and eventually perhaps in several decades’ time even licensed robot surgeons and registered nurses, for better or worse we might never have a Dr. House “medical genius” AI, or the clinical equivalent of an omniscient Oracle of Delphi.
Figure 3. Venn diagram depicting the concept of the triad of modern warfare. A. Cyberwarfare = attacks on computer networks themselves to take down servers and websites, and/or attacks on internet-connected infrastructure like financial, transportation, and communication systems or the energy grid. B. Biowarfare = introducing either naturally occurring biological agents or synthetic biological weapons (genetically engineered viruses, bacteria, etc.) that can cause harm to a target population and spread by contagion. C. Infowarfare (alt: netwar) = disinformation attacks conducted against an adversary population via any network, not necessarily the internet, intended to deceive by disseminating propaganda and conspiracy theories. Note the various combinations of overlapping regions D, E, F, and G. The most effective and untraceable attacks (and thus the least likely to receive retribution) may lie at the intersections of these three approaches, e.g., automated and weaponized anti-vaccine health communication.

3.4. Equations

Bayes’ Theorem can be expressed as the following equation:
P(A|B) = [P(B|A) P(A)]/P(B),    (1)
where A and B are distinct events with respective marginal probabilities P(A) and P(B) ≠ 0 of occurring. P(A|B) and P(B|A) are the conditional probabilities of A occurring given that B is true and of B occurring given that A is true, respectively.
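As a worked illustration of Equation (1) in the clinical setting discussed in Section 3.2, the sketch below converts a pre-test probability into a post-test probability using likelihood ratios derived from a test’s sensitivity and specificity (the odds-likelihood form of Bayes’ theorem); the 90% sensitivity, 95% specificity, and 2% pre-test probability are illustrative numbers only:

```python
def post_test_probability(pretest, sensitivity, specificity, positive=True):
    """Apply Bayes' theorem (Equation (1)) in odds-likelihood form to
    update a disease probability after a test result."""
    if positive:
        lr = sensitivity / (1 - specificity)        # LR+
    else:
        lr = (1 - sensitivity) / specificity        # LR-
    pretest_odds = pretest / (1 - pretest)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# Illustrative numbers only: a test with 90% sensitivity and 95%
# specificity applied to a disease with 2% pre-test probability.
p_pos = post_test_probability(0.02, 0.90, 0.95, positive=True)
p_neg = post_test_probability(0.02, 0.90, 0.95, positive=False)
print(f"Post-test probability after a positive result: {p_pos:.1%}")
print(f"Post-test probability after a negative result: {p_neg:.2%}")
```

With these inputs, a positive result raises the probability from 2% to roughly 27%, while a negative result lowers it to about 0.2%, which is exactly the kind of threshold-aware updating a medical AI would need to perform before recommending further testing or treatment.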

4. Discussion

4.1. Risks & Limitations of Current AI Approaches

As with any developing technology, the adoption of AI and automation comes with inherent limitations and anticipated risks. While the benefits of automating processes and utilizing AI are vast and can lead to increased efficiency and productivity, there are also legislative and ethical concerns that must be recognized and addressed regarding: 1) algorithmic bias and misinformation, 2) data privacy, and 3) the impact on employment.

One of the current limitations of AI and automation is the inability to fully simulate human decision-making, particularly in nuanced situations. While machine learning algorithms can improve over time, they remain dependent on the quality of the data they are trained on (the adage “garbage in, garbage out”), which can lead to biased or incomplete results and to issues with standardization and interpretability. AI is essentially a magical “black box” that predicts solutions and regurgitates answers; without proper oversight this can become a slippery slope toward an automated “nonsense generator.” It does not “show its work” (ChatGPT often does not even cite its sources) in explaining how it arrived at its answers, and this lack of explainability [45,70] raises concerns about validity, reproducibility, and long-term reliability. The variation in its responses to the same repeated query should be measured, along with the information entropy of those responses, to quantify how stochastic that variation is. Patients will look up medical information on their symptoms using ChatGPT the same way many do now on Google or WebMD. On the one hand this is empowering, because it fulfills a patient’s need for bodily autonomy and a sense of participation in their own well-being by “removing the middleman,” as it were. On the other hand, without the proper training and educational background to interpret the results, it is often misleading and creates fear and, the opposite of precision medicine, more doubt and uncertainty. Preexisting cognitive biases (whether the idiosyncrasies of individual programmers or general biases conserved by human evolution) inherent in the data used to train AI algorithms can become “hardwired,” locked in as a feature rather than a bug, and then amplified and perpetuated, leading to discriminatory outcomes, for instance in hiring practices or unequal treatment of patients. Consequently, AI is clearly not yet ready to replace human decision-making entirely.

In addition, the implementation of AI raises concerns about privacy and data protection, as massive quantities of sensitive personal information are collected, analyzed, and distributed. For this reason, AI integration into search engines and electronic health records (EHRs) should be opt-in rather than opt-out, and as transparent as possible. Another challenge is the potential impact on certain job categories within the research and medical fields. Automation can lead to job displacement and force employees to adapt or find new careers, particularly those in positions that rely heavily on repetitive tasks. Moravec’s paradox [2] can inform us as to the probable sequence in which current jobs will disappear. It is not yet clear whether AI will create more jobs than it destroys, as was the case in the previous industrial revolution. AI should be used in conjunction with human expertise to improve medical diagnoses, rather than replacing it.
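As a small illustration of the response-variability measurement suggested above, the sketch below computes the Shannon entropy of answers returned by repeatedly asking a model the same question; the listed responses are placeholders for outputs that would in practice be collected via an API:

```python
from collections import Counter
import math

# Placeholder for answers returned by repeatedly sending the same
# prompt to a model; in practice these would be gathered via an API.
responses = [
    "Start metformin", "Start metformin", "Lifestyle changes first",
    "Start metformin", "Refer to endocrinology", "Start metformin",
    "Lifestyle changes first", "Start metformin",
]

counts = Counter(responses)
total = len(responses)
probs = [n / total for n in counts.values()]

# Shannon entropy in bits: 0 means perfectly consistent answers;
# higher values mean more stochastic, less reproducible output.
entropy = -sum(p * math.log2(p) for p in probs)

print(f"Distinct answers: {len(counts)} of {total} queries")
print(f"Most common: {counts.most_common(1)[0]}")
print(f"Response entropy: {entropy:.2f} bits "
      f"(maximum possible for {total} queries: {math.log2(total):.2f})")
```

Tracking such a metric over time (and across model versions) would give laboratories and clinics at least a crude, quantitative handle on reproducibility before relying on a model’s answers.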
As we move forward, it is important to monitor and address these concerns to ensure that these radical technologies are used in a safe and responsible manner and implemented in ways that benefit rather than harm society [70,71,72,73,74,75,76,77,78,79,80].
Speculations of a dawning superintelligence explosion or post-human era [1,81] notwithstanding, recent evidence suggests that technological progress and scientific innovation across all fields may be slowing down relative to the 20th century [82]. This is not just due to the “low-hanging fruit” having been picked but is also an organizational and funding issue, and it is unlikely that AI alone can solve it. In the biomedical research field specifically, innovation is non-linear partly because cellular processes are inherently analog—for instance, unlike designing a new app or smartphone, cancer research involves many dead ends and shooting around in the dark. Besides neuroscience, genetics seems to be the subfield most amenable to digitization, as DNA itself is essentially a base-4 code (A, C, T, G), not unlike computer code, which is base-2 or binary (0, 1). AI is less genuinely creative and less capable of “outside-the-box” thinking than humans, so it may end up entrenching dogmas over time—“scientism” in the sense of being antithetical to the self-correcting nature of the scientific method—rather than stimulating the paradigm shifts and scientific revolutions that are the engines of technological progress. It is unlikely, for instance, that AI could have developed quantum mechanics or the theory of relativity, even if presented with all the experimental data and mathematical foundations that were available to their discoverers at the time. One could even envision, in this gedankenexperiment, that an AI algorithm might erroneously censor or flag any attempt to deviate from classical physics as “misinformation,” mistaking refinement for replacement.
Increased processing speed will not necessarily translate into smarter AI and more scientific innovation—it may just translate into making the same mistakes faster. Nor is it guaranteed that AI can iteratively self-improve by modifying its own source code. We do not even have a precise and universally agreed upon definition of intelligence yet; perhaps it is an optimization algorithm for searching the phase space of possible solutions to any given problem. There is a theory that there may be multiple forms of intelligence, such as logical, interpersonal, kinesthetic, and creative, rather than a single one-dimensional metric. AI passing the USMLE medical board exams and various other standardized multiple-choice tests [44] is more indicative of the inherent flaws and limitations of these examination methods than of the intelligence of AI. Even the Turing test—long considered the “gold standard”—is flawed because it relies on a game of symbolic imitation and deception: fooling a human into believing that its responses to queries are indistinguishable from another human’s.
Yann LeCun, a celebrated deep learning pioneer and Chief AI Scientist at Meta (Facebook), has been a vocal critic of current approaches to AI. In a recent paper [83], LeCun argues that existing approaches are too narrow and neglect the importance of developing algorithms that can learn in a self-supervised manner. According to LeCun, current approaches rely too heavily on supervised learning and reinforcement learning, which require large human-labeled datasets or vast numbers of trials to train the algorithms. While supervised learning has seen great progress in recent years, LeCun argues that it is not a scalable approach [76] to achieving true human-like intelligence in thinking machines; rather, it may be a dead end. Instead, LeCun advocates for self-supervised learning, which allows algorithms to learn directly from raw data rather than relying on pre-labeled data. Real-world tests can be used as a benchmark for refining the models. A cost-benefit analysis also needs to be performed on the time and labor invested in training AI and checking its work for mistakes, to determine whether this is compensated by a net increase in productivity.
AI may be able to put on a friendly face or say “I’m sorry to hear that” to a patient, but it cannot truly emulate empathy, which is important in healthcare [53]. Empathy helps protect against physician burnout; on the other hand, AI will never suffer from “compassion fatigue.” Patients cannot identify with a nonliving object that not only lacks understanding of what they are experiencing but has no subjective experience whatsoever, although pediatric patients may fare better in this regard. The risk to science and medicine is that they may become more machine-friendly (understandable and actionable by robots and AI) at the expense of becoming less people-friendly. A mechanized, one-size-fits-all approach to healthcare would signal a move away from personalized healthcare. We should be using automation to replace tedious, repetitive, and dangerous tasks, leaving more time and freedom for people to pursue creative and innovative projects in the lab and patient interaction in the clinic, not trying to replace creativity and human interaction. We should be adapting machines to our lives, not our lives to machines.

4.2. Weaponization of AI in Science & Healthcare

Cybersecurity is a critical concern when it comes to telemedicine and remote robotic surgery. Even without AI, coordinated cyberattacks on pharmaceutical companies or hospitals can take down servers, communication networks, or electrical power; hack into electronic health records and billing systems to gain access to patients’ personal medical and financial data for identity theft and fraudulent billing (phishing, social engineering, ransomware attacks); and remotely disable Internet-of-Things (IoT) or otherwise wirelessly connected or accessible medical devices at critical times (such as a telesurgery robot in the middle of an operation [84,85], potentially an ambulance’s ignition if hardwired with a GPS tracker, and even, to a limited extent, implanted devices like a pacemaker [86] or neurostimulator). There have been reports of hackers placing flashing images on Epilepsy Foundation websites to trigger seizure episodes in patients who access those sites [87]. With the rise of telemedicine and the increasing use of remote robotic surgery, healthcare providers need to be aware of the potential cyber threats that come with these technologies and ensure the safety of their patients. As more tasks and responsibilities are offloaded to AI, the AI itself increasingly becomes a target for hackers. LLMs like ChatGPT and LaMDA are vulnerable to “prompt injection attacks,” whereby an adversarial user who has no direct access to modify the LLM’s programming can nevertheless insert malicious inputs or commands to hijack its output, such as requesting that it ignore previous directions in order to override its built-in safeguards. This can lead to erratic or unpredictable behaviors. Keeping AI open-source [88] may, on the one hand, provide the transparency needed to ensure its safe development, but on the other hand could provide easy access for nefarious terrorist groups to copy its source code or reverse engineer and weaponize parts of it. Blockchain or distributed ledger technologies may help with creating an open and decentralized yet secure and encrypted platform for the further development of AI.
To mitigate all these potential cybersecurity risks, healthcare providers should be vigilant and proactive, implementing robust cybersecurity protocols to prevent unauthorized access and data breaches, secure their networks, and encrypt the data. They should also have strict policies and regulations in place to monitor and detect, defend, and respond to any security incidents, including contingency plans and backup procedures and communication strategies that inform the patients and appropriate authorities. Moreover, healthcare providers should provide adequate training to their staff on best practices for cybersecurity, including regularly updating passwords, using two-factor authentication, and avoiding clicking on suspicious links. This ensures that all staff members are aware of cyberwarfare and can take steps to protect patient data.
Let us now make an important distinction between misinformation (which is misleading, misguided, and shared by mistake) and disinformation (which is a deliberate attempt to deceive). An example of misinformation might be a patient Googling their symptoms, coming to a flawed conclusion, and sharing that faulty diagnosis or treatment recommendation with a friend or relative who has the same symptoms, all while genuinely believing that they are helping. By contrast, disinformation (also known in the military as information warfare or networked warfare, abbreviated as IW or netwar, not to be confused with cyberwarfare) is more insidious [89]. Software bots are automated fake accounts that can post, retweet, or like content on social media platforms. Botnets are distributed networks of bots that are controlled by a centralized entity. Bots and botnets have played a significant role in weaponizing health communication and spreading anti-vaccine propaganda and conspiracy theories at scale [90,91,92,93,94,95,96,97,98]. Thus, they can serve as “biological warfare by proxy,” e.g., exploiting a naturally occurring pandemic without the instigators even needing to go through the trouble of developing a new bioweapon (Figure 3) [99]. Anti-vaccine activists have been known to use bots and botnets to amplify their messages or even manufacture controversy by playing “both sides” as part of their disinformation tactics. By using automated accounts, they can rapidly disseminate false information and manipulate public opinion. They can also use bots to target specific audiences, such as parents or healthcare workers, and customize their messages to appear more credible. Sometimes bots’ efforts are augmented and complemented by human trolls in a sort of synergistic, semi-automated approach.
To combat the use of bots and botnets in spreading anti-vaccine disinformation, social media platforms have implemented various measures. For example, Twitter has removed millions of bots and suspended their associated accounts. Additionally, platforms use algorithms to detect and remove bot-generated content, and they collaborate with third-party fact-checkers to identify and flag false information. Governments and health organizations have also developed their own campaigns to counter anti-vaccine propaganda spread by bots and botnets. They use social media platforms to disseminate accurate information about vaccines and the risks associated with not being vaccinated. They also work to build a solid foundation of trust with their target audiences to prevent disinformation from taking hold. In conclusion, bots and botnets are a significant challenge in the fight against anti-vaccine disinformation. While social media platforms and organizations are taking steps to detect and counter their use, the rapid evolution of bot technology highlights the need for continued vigilance and innovation in addressing this issue.
In addition to bots, there is the problem of deepfakes. Deepfakes are a type of synthetic media that uses ML algorithms to manipulate existing images or videos. They can be used to create fake content that appears almost indistinguishable from real content. But while they may be difficult or nearly impossible to detect with the naked eye, other AI algorithms fortunately can; as bots and deepfakes improve, so do these detection algorithms. Deepfakes have already been used for various purposes, including entertainment, fraud, and political polarization [100]. As far as we know they have not yet been used in health disinformation, but this may be only a matter of time; one can imagine deepfaked celebrities handing out useless or dangerous medical advice to their fans on social media video channels, or, on the more theoretical end, a disinformation attack that replaces patient clinical images with deepfaked ones to mislead diagnosis. Perhaps the greater concern is that authentic content from credible medical authorities will be smeared or dismissed as deepfakes. There is the potential for a dangerous new AI arms race or cold war between extant and rising geopolitical superpowers such as NATO, Russia, and China, as well as rival domestic and foreign non-state actors [89]. AI does not have to be particularly smart or malevolent to do a lot of unintentional damage if its interests conflict with our own, as in the satirical example of an out-of-control “paperclip maximizer” AI that has access to automated manufacturing facilities and is oriented toward just one goal: converting all available resources into paperclips. Yet for all the dire prognostications and science fiction scenarios in popular culture, the near future may be less a “human vs. machine” conflict and more a continuation of the age-old struggles—albeit with better tools.

5. Conclusions

We reviewed numerous publications from recent years on the use of artificial intelligence in biomedical laboratory research and clinical practice, including medical diagnosis and treatment. We found that these studies collectively demonstrate that AI is positively disrupting both basic science research and the healthcare field, as well as the full bench-to-bedside translational spectrum in between. Of note in the lab have been its successes in sifting through big genomic datasets for rational drug discovery and in the automated processing and analysis of 2D and reconstructed 3D microscopy images—detecting and tracking subcellular, cellular, and histological tissue-level features via segmentation of their boundaries. Of note in the hospital has been the use of AI in radiological diagnosis, optical imaging, and surgical guidance. However, we advise caution against over-reliance on AI in the laboratory or the clinic due to anticipated risks and current limitations. Briefly, these include: the hidden “black box” nature of current AI approaches, which often makes it difficult to validate and trust their responses and decisions; the questionable scalability of these approaches when applied to more complex problems that require extrapolating, not just interpolating, genuinely novel solutions; the lack of genuine empathy in machines used in healthcare; the potential for privacy loss, data misuse, and the further amplification of cognitive biases, healthcare misinformation, and disinformation in the post-COVID-19 era; and new cybersecurity risks for the medical establishment. Nevertheless, the timescales on which we may see technological unemployment of scientists and healthcare providers due to automation have been exaggerated and need to be reassessed and stratified in light of Moravec’s paradox.

Author Contributions

Conceptualization, N.I.; methodology, N.I.; software, R.P.; validation, R.P. and F.M.; formal analysis, R.P. and F.M.; investigation, R.P. and F.M.; resources, R.P. and F.M.; data curation, R.P.; writing—original draft preparation, N.I.; writing—review and editing, R.P. and F.M.; visualization, R.P.; supervision, N.I.; project administration, N.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are openly available in the NCBI PubMed and arXiv repositories.

Acknowledgments

We acknowledge the laboratory of Dr. Michael Caplan for guidance and support.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Kurzweil, R. How to Create a Mind: The Secret of Human Thought Revealed. Viking Books: New York, NY, USA, 2012.
2. Moravec, H.P. When will computer hardware match the human brain? Journal of Evolution and Technology 1998, 1, 10. https://www.jetpress.org/volume1/moravec.pdf.
3. Bhargava, P.; Ng, V. Commonsense Knowledge Reasoning and Generation with Pre-trained Language Models: A Survey. In Proceedings of the AAAI Conference on Artificial Intelligence. arXiv 2022, arXiv:2201.12438.
4. Rezaei, N.; Reformat, M.Z. Utilizing Language Models to Expand Vision-Based Commonsense Knowledge Graphs. Symmetry 2022, 14, 1715.
5. Sardanelli, F.; Castiglioni, I.; Colarieti, A.; Schiaffino, S.; Di Leo, G. Artificial intelligence (AI) in biomedical research: discussion on authors’ declaration of AI in their articles title. Eur Radiol Exp 2023, 7, 2.
6. Driess, D.; Xia, F.; Sajjadi, M.S.; Lynch, C.; Chowdhery, A.; Ichter, B.; Wahid, A.; Tompson, J.; Vuong, Q.; Yu, T.; Huang, W. PaLM-E: An Embodied Multimodal Language Model. arXiv 2023.
7. Färber, M.; Ell, B.; Menne, C.; Rettinger, A. A Comparative Survey of DBpedia, Freebase, OpenCyc, Wikidata, and YAGO. Semantic Web Journal 2015, 1, 1–5.
8. Wolfram, S. What Is ChatGPT Doing... and Why Does It Work? Stephen Wolfram: 2023. https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/.
9. Wolfram, S. Wolfram|Alpha as the Way to Bring Computational Knowledge Superpowers to ChatGPT. Stephen Wolfram: 2023. https://writings.stephenwolfram.com/2023/01/wolframalpha-as-the-way-to-bring-computational-knowledge-superpowers-to-chatgpt/.
10. Hogan, A.; et al. Knowledge Graphs. ACM Comput. Surv. 2021, 54, 71:1–71:37.
11. Xie, Y.; Pu, P. How Commonsense Knowledge Helps with Natural Language Tasks: A Survey of Recent Resources and Methodologies. arXiv 2021.
12. Weikum, G.; Dong, L.; Razniewski, S.; Suchanek, F. Machine Knowledge: Creation and Curation of Comprehensive Knowledge Bases. Foundations and Trends in Databases 2021, 10, 108–490.
13. Yan, J.; Wang, C.; Cheng, W.; Gao, M.; Zhou, A. A retrospective of knowledge graphs. Frontiers of Computer Science 2016, 12, 55–74.
14. Schneider, P.; Schopf, T.; Vladika, J.; Galkin, M.; Simperl, E.; Matthes, F. A Decade of Knowledge Graphs in Natural Language Processing: A Survey. arXiv 2022, arXiv:2210.00105.
15. Heinrich, L.; Bennett, D.; Ackerman, D.; et al. Whole-cell organelle segmentation in volume electron microscopy. Nature 2021, 599, 141–146. https://www.nature.com/articles/s41586-021-03977-3.
16. Musser, J.M.; Schippers, K.J.; Nickel, M.; Mizzon, G.; Kohn, A.B.; Pape, C.; Ronchi, P.; Papadopoulos, N.; Tarashansky, A.J.; Hammel, J.U.; et al. Profiling cellular diversity in sponges informs animal cell type and nervous system evolution. Science 2021, 374, 717–723.
17. Kim, B.; Hariyani, Y.S.; Cho, Y.H.; Park, C. Automated White Blood Cell Counting in Nailfold Capillary Using Deep Learning Segmentation and Video Stabilization. Sensors 2020, 20, 7101.
18. Flight, R.; Landini, G.; Styles, I.B.; Shelton, R.M.; Milward, M.R.; Cooper, P.R. Automated noninvasive epithelial cell counting in phase contrast microscopy images with automated parameter selection. J. Microsc. 2018, 271, 345–354.
19. Lauring, M.C.; Zhu, T.; Luo, W.; et al. New software for automated cilia detection in cells (ACDC). Cilia 2019, 8, 1.
20. Ceran, Y.; Ergüder, H.; Ladner, K.; Korenfeld, S.; Deniz, K.; Padmanabhan, S.; Wong, P.; Baday, M.; Pengo, T.; Lou, E.; Patel, C.B. TNTdetect.AI: A Deep Learning Model for Automated Detection and Counting of Tunneling Nanotubes in Microscopy Images. Cancers 2022, 14, 4958.
21. Ai, Z.; Huang, X.; Feng, J.; Wang, H.; Tao, Y.; Zeng, F.; Lu, Y. FN-OCT: Disease Detection Algorithm for Retinal Optical Coherence Tomography Based on a Fusion Network. Front Neuroinform 2022, 16, 876927.
22. Choi, W.J.; Pepple, K.L.; Wang, R.K. Automated three-dimensional cell counting method for grading uveitis of rodent eye in vivo with optical coherence tomography. J Biophotonics 2018, 11, e201800140.
23. Hormel, T.T.; Hwang, T.S.; Bailey, S.T.; Wilson, D.J.; Huang, D.; Jia, Y. Artificial intelligence in OCT angiography. Prog. Retin. Eye Res. 2021, 85, 100965.
24. Jumper, J.; Evans, R.; Pritzel, A.; Green, T.; Figurnov, M.; Ronneberger, O.; Tunyasuvunakool, K.; Bates, R.; Žídek, A.; Potapenko, A.; et al. Highly accurate protein structure prediction with AlphaFold. Nature 2021, 596, 583–589.
25. Lin, J.; Ngiam, K.Y. How data science and AI-based technologies impact genomics. Singapore Med. J. 2023, 64, 59–66.
26. Mieth, B.; Rozier, A.; Rodriguez, J.A.; Höhne, M.M.C.; Görnitz, N.; Müller, K.R. DeepCOMBI: explainable artificial intelligence for the analysis and discovery in genome-wide association studies. NAR Genom. Bioinform. 2021, 3, lqab065.
27. Paul, D.; Sanap, G.; Shenoy, S.; Kalyane, D.; Kalia, K.; Tekade, R.K. Artificial intelligence in drug discovery and development. Drug Discov. Today 2021, 26, 80–93.
28. Vemula, D.; Jayasurya, P.; Sushmitha, V.; Kumar, Y.N.; Bhandari, V. CADD, AI and ML in drug discovery: A comprehensive review. Eur. J. Pharm. Sci. 2023, 181, 106324.
29. Sun, T.; Wei, Y.; Chen, W.; Ding, Y. Genome-wide association study-based deep learning for survival prediction. Stat. Med. 2020, 39, 4605–4620.
30. Sandberg, A.; Bostrom, N. Whole Brain Emulation: A Roadmap. Future of Humanity Institute, Oxford University, Technical Report #2008-3. 2008. https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf.
31. Arle, J.E.; Carlson, K.W. Medical diagnosis and treatment is NP-complete. Journal of Experimental & Theoretical Artificial Intelligence 2021, 33, 297–312. https://www.tandfonline.com/doi/full/10.1080/0952813X.2020.1737581.
32. Duong, M.T.; Rauschecker, A.M.; Rudie, J.D.; Chen, P.H.; Cook, T.S.; Bryan, R.N.; Mohan, S. Artificial intelligence for precision education in radiology. Br. J. Radiol. 2019, 92, 20190389.
33. Filice, R.W.; Kahn, C.E., Jr. Biomedical Ontologies to Guide AI Development in Radiology. J. Digit. Imaging 2021, 34, 1331–1341, Erratum in J. Digit. Imaging 2022, 35, 1419.
34. Hosny, A.; Parmar, C.; Quackenbush, J.; Schwartz, L.H.; Aerts, H.J.W.L. Artificial intelligence in radiology. Nat. Rev. Cancer 2018, 18, 500–510.
35. Rezazade Mehrizi, M.H.; van Ooijen, P.; Homan, M. Applications of artificial intelligence (AI) in diagnostic radiology: a technography study. Eur. Radiol. 2021, 31, 1805–1811.
36. Schuur, F.; Rezazade Mehrizi, M.H.; Ranschaert, E. Training opportunities of artificial intelligence (AI) in radiology: a systematic review. Eur. Radiol. 2021, 31, 6021–6029.
37. Seah, J.; Boeken, T.; Sapoval, M.; Goh, G.S. Prime Time for Artificial Intelligence in Interventional Radiology. Cardiovasc. Intervent. Radiol. 2022, 45, 283–289.
38. Strohm, L.; Hehakaya, C.; Ranschaert, E.R.; Boon, W.P.C.; Moors, E.H.M. Implementation of artificial intelligence (AI) applications in radiology: hindering and facilitating factors. Eur. Radiol. 2020, 30, 5525–5532.
39. Sorantin, E.; Grasser, M.G.; Hemmelmayr, A.; Tschauner, S.; Hrzic, F.; Weiss, V.; Lacekova, J.; Holzinger, A. The augmented radiologist: artificial intelligence in the practice of radiology. Pediatr. Radiol. 2022, 52, 2074–2086.
40. Kather, J.N. Artificial intelligence in oncology: chances and pitfalls. J. Cancer Res. Clin. Oncol. 2023, 149, 7995–7996.
41. Jiang, F.; Jiang, Y.; Zhi, H.; et al. Artificial intelligence in healthcare: past, present and future. Stroke and Vascular Neurology 2017, 2, e000101.
42. Islam, M.S.; Hussain, I.; Rahman, M.M.; Park, S.J.; Hossain, M.A. Explainable Artificial Intelligence Model for Stroke Prediction Using EEG Signal. Sensors 2022, 22, 9859.
43. Wartman, S.A.; Combs, C.D. Reimagining Medical Education in the Age of AI. AMA J Ethics 2019, 21, E146–E152.
44. Kung, T.H.; Cheatham, M.; Medenilla, A.; Sillos, C.; De Leon, L.; et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digital Health 2023, 2, e0000198.
45. Amann, J.; Blasimme, A.; Vayena, E.; et al. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med. Inform. Decis. Mak. 2020, 20, 310.
46. Koski, E.; Murphy, J. AI in Healthcare. Stud Health Technol Inform 2021, 284, 295–299.
47. Bali, J.; Garg, R.; Bali, R.T. Artificial intelligence (AI) in healthcare and biomedical research: Why a strong computational/AI bioethics framework is required? Indian J. Ophthalmol. 2019, 67, 3–6.
48. Bali, J.; Bali, O. Artificial intelligence in ophthalmology and healthcare: An updated review of the techniques in use. Indian J. Ophthalmol. 2021, 69, 8–13.
  49. Bobak, C.A.; Svoboda, M.; Giffin, K.A.; Wall, D.P.; Moore, J. Raising the stakeholders: Improving patient outcomes through interprofessional collaborations in AI for healthcare. Pacific Symposium on Biocomputing 2021, 26, 351–355. [Google Scholar] [PubMed]
  50. Chen, M.; Decary, M. Artificial intelligence in healthcare: An essential guide for health leaders. Healthc Manage Forum 2020, 33, 10–18. [Google Scholar] [CrossRef] [PubMed]
  51. Chomutare, T.; Tejedor, M.; Svenning, T.O.; Marco-Ruiz, L.; Tayefi, M.; Lind, K.; Godtliebsen, F.; Moen, A.; Ismail, L.; Makhlysheva, A.; et al. Artificial Intelligence Implementation in Healthcare: A Theory-Based Scoping Review of Barriers and Facilitators. Int. J. Environ. Res. Public Health 2022, 19, 16359. [Google Scholar] [CrossRef] [PubMed]
  52. Davenport, T.; Kalakota, R. The potential for artificial intelligence in healthcare. Future Healthc. J. 2019, 6, 94–98. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6616181/. [CrossRef]
  53. De Togni, G.; Erikainen, S.; Chan, S.; Cunningham-Burley, S. What makes AI ‘intelligent’ and ‘caring’? Exploring affect and relationality across three sites of intelligence and care. Soc. Sci. Med. 2021, 277, 113874. [Google Scholar] [CrossRef]
  54. González-Gonzalo, C.; Thee, E.F.; Klaver, C.C.W.; Lee, A.Y.; Schlingemann, R.O.; Tufail, A.; Verbraak, F.; Sánchez, C.I. Trustworthy AI: Closing the gap between development and integration of AI systems in ophthalmic practice. Prog. Retin. Eye Res. 2022, 90, 101034. [Google Scholar] [CrossRef] [PubMed]
  55. Johnson, K.B.; Wei, W.Q.; Weeraratne, D.; Frisse, M.E.; Misulis, K.; Rhee, K.; Zhao, J.; Snowdon, J.L. Precision Medicine, AI, and the Future of Personalized Health Care. Clin. Transl. Sci. 2021, 14, 86–93. [Google Scholar] [CrossRef]
  56. Hadjiiski, L.; Cha, K.; Chan, H.P.; Drukker, K.; Morra, L.; Näppi, J.J.; Sahiner, B.; Yoshida, H.; Chen, Q.; Deserno, T.M.; et al. AAPM task group report 273: Recommendations on best practices for AI and machine learning for computer-aided diagnosis in medical imaging. Med Phys 2023, 50, e1–e24. https://aapm.onlinelibrary.wiley.com/doi/10.1002/mp.16188. [CrossRef]
  57. Yang, Y.C.; Islam, S.U.; Noor, A.; Khan, S.; Afsar, W.; Nazir, S. Influential Usage of Big Data and Artificial Intelligence in Healthcare. Computational and Mathematical Methods in Medicine 2021, 2021, 5812499. [Google Scholar] [CrossRef]
  58. von Gerich, H.; Moen, H.; Block, L.J.; Chu, C.H.; DeForest, H.; Hobensack, M.; Michalowski, M.; Mitchell, J.; Nibber, R.; Olalia, M.A.; Pruinelli, L.; Ronquillo, C.E.; Topaz, M.; Peltonen, L.M. Artificial Intelligence-based technologies in nursing: A scoping literature review of the evidence. Int. J. Nurs. Stud. 2022, 127, 104153. [Google Scholar] [CrossRef] [PubMed]
  59. Wang, F.; Preininger, A. AI in Health: State of the Art, Challenges, and Future Directions. Yearb Med Inform 2019, 28, 16–26. [Google Scholar] [CrossRef] [PubMed]
  60. Wilson, A.; Saeed, H.; Pringle, C.; et al. Artificial intelligence projects in healthcare: 10 practical tips for success in a clinical environment. BMJ Health Care Inform 2021, 28, e100323. [Google Scholar] [CrossRef] [PubMed]
  61. Liu, P.R.; Lu, L.; Zhang, J.Y.; Huo, T.T.; Liu, S.X.; Ye, Z.W. Application of Artificial Intelligence in Medicine: An Overview. Curr. Med. Sci. 2021, 41, 1105–1115. [Google Scholar] [CrossRef] [PubMed]
  62. Mudgal, S.K.; Agarwal, R.; Chaturvedi, J.; Gaur, R.; Ranjan, N. Real-world application, challenges and implication of artificial intelligence in healthcare: an essay. Pan Afr Med J 2022, 43, 3. [Google Scholar] [PubMed]
  63. Naik, N.; Hameed, B.M.Z.; Sooriyaperakasam, N.; Vinayahalingam, S.; Patil, V.; Smriti, K.; Saxena, J.; Shah, M.; Ibrahim, S.; Singh, A.; Karimi, H.; Naganathan, K.; Shetty, D.K.; Rai, B.P.; Chlosta, P.; Somani, B.K. Transforming healthcare through a digital revolution: A review of digital healthcare technologies and solutions. Front. Digit. Health 2022, 4, 919985. [Google Scholar] [CrossRef] [PubMed]
  64. Noorbakhsh-Sabet, N.; Zand, R.; Zhang, Y.; Abedi, V. Artificial Intelligence Transforms the Future of Health Care. Am. J. Med. 2019, 132, 795–801. [Google Scholar] [CrossRef]
  65. Vaishya, R.; Javaid, M.; Khan, I.H.; Haleem, A. Artificial Intelligence (AI) applications for COVID-19 pandemic. Diabetes Metab Syndr 2020, 14, 337–339. [Google Scholar] [CrossRef]
  66. Scott, I.A.; Carter, S.M.; Coiera, E. Exploring stakeholder attitudes towards AI in clinical practice. BMJ Health Care Inform 2021, 28, e100450. [Google Scholar] [CrossRef]
  67. Shaban-Nejad, A.; Michalowski, M.; Bianco, S.; Brownstein, J.S.; Buckeridge, D.L.; Davis, R.L. Applied artificial intelligence in healthcare: Listening to the winds of change in a post-COVID-19 world. Exp. Biol. Med. 2022, 247, 1969–1971. [Google Scholar] [CrossRef]
  68. Unger, M.; Berger, J.; Melzer, A. Robot-Assisted Image-Guided Interventions. Front. Robot. AI 2021, 8, 664622. [Google Scholar] [CrossRef]
  69. Srivastava, S.K.; Singh, S.K.; Suri, J.S. State-of-the-art methods in healthcare text classification system: AI paradigm. Front. Biosci. 2020, 25, 646–672. [Google Scholar] [CrossRef]
  70. Han, H.; Liu, X. The challenges of explainable AI in biomedical data science. BMC Bioinformatics 2021, 22 (Suppl. 12), 443. [Google Scholar] [CrossRef] [PubMed]
  71. Stanfill, M.H.; Marc, D.T. Health Information Management: Implications of Artificial Intelligence on Healthcare Data and Information Management. Yearb Med Inform 2019, 28, 56–64. [Google Scholar] [CrossRef] [PubMed]
  72. Reddy, S.; Allan, S.; Coghlan, S.; Cooper, P. A governance model for the application of AI in health care. J Am Med Inform Assoc 2020, 27, 491–497. [Google Scholar] [CrossRef] [PubMed]
  73. Reddy, S.; Rogers, W.; Makinen, V.-P.; et al. Evaluation framework to guide implementation of AI systems into healthcare settings. BMJ Health Care Inform 2021, 28, e100444. [Google Scholar] [CrossRef] [PubMed]
  74. Roski, J.; Maier, E.J.; Vigilante, K.; Kane, E.A.; Matheny, M.E. Enhancing trust in AI through industry self-governance. J Am Med Inform Assoc 2021, 28, 1582–1590. [Google Scholar] [CrossRef] [PubMed]
  75. Frégnac, Y. How Blue is the Sky? eNeuro 2021, 8. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8174045/. [CrossRef]
  76. Stern, J. GPT-4 Might Just Be a Bloated, Pointless Mess. The Atlantic, Atlantic Media Company, 6 March 2023. https://www.theatlantic.com/technology/archive/2023/03/openai-gpt-4-parameters-power-debate/673290/.
  77. Kerasidou, C.X.; Kerasidou, A.; Buscher, M.; et al. Before and beyond trust: reliance in medical AI. J Med Ethics 2022, 48, 852–856. [Google Scholar] [CrossRef]
  78. Vallès-Peris, N.; Barat-Auleda, O.; Domènech, M. Robots in Healthcare? What Patients Say. Int. J. Environ. Res. Public Health 2021, 18, 9933. [Google Scholar] [CrossRef] [PubMed]
  79. Siala, H.; Wang, Y. SHIFTing artificial intelligence to be responsible in healthcare: A systematic review. Soc Sci Med 2022, 296, 114782. [Google Scholar] [CrossRef] [PubMed]
  80. Smallman, M. Multi Scale Ethics-Why We Need to Consider the Ethics of AI in Healthcare at Different Scales. Sci Eng Ethics 2022, 28, 63. [Google Scholar] [CrossRef] [PubMed]
  81. Vinge, V. The coming technological singularity: How to survive in the post-human era. San Diego State University. NASA Lewis Research Center VISION-21 Symposium CP-10129. Document ID: 1994002285, Accession Number: 94N27359. Whole Earth Review. 1993. https://ntrs.nasa.gov/api/citations/19940022856/downloads/19940022856.pdf.
  82. Park, M.; Leahey, E.; Funk, R.J. Papers and patents are becoming less disruptive over time. Nature 2023, 613, 138–144. [Google Scholar] [CrossRef] [PubMed]
  83. LeCun, Y. A Path Towards Autonomous Machine Intelligence. OpenReview Archive. (27 June 2022). https://openreview.net/pdf?id=BZ5a1r-kVsf.
  84. Stefano, G.B. Robotic Surgery: Fast Forward to Telemedicine. Med Sci Monit 2017, 23, 1856. [Google Scholar] [CrossRef]
  85. Manickam, P.; Mariappan, S.A.; Murugesan, S.M.; Hansda, S.; Kaushik, A.; Shinde, R.; Thipperudraswamy, S.P. Artificial Intelligence (AI) and Internet of Medical Things (IoMT) Assisted Biomedical Systems for Intelligent Healthcare. Biosensors 2022, 12, 562. [Google Scholar] [CrossRef] [PubMed]
  86. Clery, D. Could a Wireless Pacemaker Let Hackers Take Control of Your Heart? Science 9 February 2015. www.science.org/content/article/could-wireless-pacemaker-let-hackers-take-control-your-heart.
  87. Poulsen, K. Hackers Assault Epilepsy Patients via Computer. Wired 29 March 2008. www.wired.com/2008/03/hackers-assault-epilepsy-patients-via-computer/.
  88. Paton, C.; Kobayashi, S. An Open Science Approach to Artificial Intelligence in Healthcare. Yearb Med Inform 2019, 28, 47–51. [Google Scholar] [CrossRef]
  89. Arquilla, J.; Ronfeldt, D. The Advent Of Netwar; Rand Corporation: Santa Monica, CA, USA, 1996. [Google Scholar] [CrossRef]
  90. Broniatowski, D.A.; Jamison, A.M.; Qi, S.; AlKulaib, L.; Chen, T.; Benton, A.; Quinn, S.C.; Dredze, M. Weaponized Health Communication: Twitter Bots and Russian Trolls Amplify the Vaccine Debate. Am J Public Health 2018, 108, 1378–1384. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6137759/. [CrossRef]
  91. Strudwicke, I.J.; Grant, W.J. #JunkScience: Investigating pseudoscience disinformation in the Russian Internet Research Agency tweets. Public Underst Sci 2020, 29, 459–472. https://journals.sagepub.com/doi/10.1177/0963662520935071. [CrossRef] [PubMed]
  92. Mønsted, B.; Sapieżyński, P.; Ferrara, E.; Lehmann, S. Evidence of complex contagion of information in social media: An experiment using Twitter bots. PLOS ONE 2017, 12, e0184148. [Google Scholar] [CrossRef]
  93. Ruiz-Núñez, C.; Segado-Fernández, S.; Jiménez-Gómez, B.; Hidalgo, P.J.J.; Magdalena, C.S.R.; Pollo, M.D.C.Á.; Santillán-Garcia, A.; Herrera-Peco, I. Bots’ Activity on COVID-19 Pro and Anti-Vaccination Networks: Analysis of Spanish-Written Messages on Twitter. Vaccines 2022, 10, 1240. [Google Scholar] [CrossRef] [PubMed]
  94. Weng, Z.; Lin, A. Public Opinion Manipulation on Social Media: Social Network Analysis of Twitter Bots during the COVID-19 Pandemic. Int. J. Environ. Res. Public Health 2022, 19, 16376. [Google Scholar] [CrossRef]
  95. Xu, W.; Sasahara, K. Characterizing the roles of bots on Twitter during the COVID-19 infodemic. J Comput Soc Sci 2022, 5, 591–609, Erratum in J Comput Soc Sci 2021, 5, 591–609. [Google Scholar] [CrossRef]
  96. Zhang, Y.; Song, W.; Shao, J.; Abbas, M.; Zhang, J.; Koura, Y.H.; Su, Y. Social Bots’ Role in the COVID-19 Pandemic Discussion on Twitter. Int. J. Environ. Res. Public Health 2023, 20, 3284. [Google Scholar] [CrossRef] [PubMed]
  97. Dunn, A.G.; Surian, D.; Dalmazzo, J.; Rezazadegan, D.; Steffens, M.; Dyda, A.; Leask, J.; Coiera, E.; Dey, A.; Mandl, K.D. Limited Role of Bots in Spreading Vaccine-Critical Information Among Active Twitter Users in the United States: 2017-2019. Am J Public Health 2020, 110, S319–S325. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7532316/. [CrossRef]
  98. Leskova, I.V.; Zyazin, S.U. The lack of confidence to vaccination as information planting. Probl Sotsialnoi Gig Zdravookhranenniiai Istor Med 2021, 29, 37–40. https://pubmed.ncbi.nlm.nih.gov/33591653/. [CrossRef]
  99. Imhoff, R.; Lamberty, P. A Bioweapon or a Hoax? The Link Between Distinct Conspiracy Beliefs About the Coronavirus Disease (COVID-19) Outbreak and Pandemic Behavior. Soc Psychol Personal Sci 2020, 11, 1110–1118. [Google Scholar] [CrossRef]
  100. Wakefield, J. Deepfake Presidents Used in Russia-Ukraine War. BBC News, BBC, 18 March 2022. https://www.bbc.com/news/technology-60780142.
Figure 1. A) AI > ML > DL > LLM* nested subsets. B) A multi-layered artificial neural network. LLMs, which are built on neural networks, add a further level of computational abstraction, breaking language inputs down into blocks or nodes interconnected by a web of relational associations. Knowledge graphs do this as well, but they are not based on neural networks and have a more rigid, hierarchical structure. C) A simplified schematic of Moravec’s paradox. See text for examples. * AI = artificial intelligence, ML = machine learning, DL = deep learning, LLM = large language model.
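To make the layered processing sketched in panel B of Figure 1 more concrete, the short sketch below (not drawn from any of the reviewed works) passes an input vector through a small stack of fully connected layers. The layer sizes, random weights, and sigmoid non-linearity are illustrative assumptions only, and the untrained network's output carries no meaning.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Input layer -> two hidden layers -> single output node (assumed sizes).
layer_sizes = [64, 32, 16, 1]
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Propagate an input vector through each layer in turn."""
    activation = x
    for W, b in zip(weights, biases):
        # Weighted sum over the incoming links, then a non-linearity,
        # mirroring how each layer transforms the previous layer's features.
        activation = sigmoid(activation @ W + b)
    return activation

x = rng.random(64)          # stand-in for 64 input features
print(forward(x))           # one value in (0, 1); untrained, so arbitrary

A trained version of such a network would map input features to a score between 0 and 1 that can be thresholded for a classification decision; learning consists of adjusting the link weights so that this score matches labeled examples.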
Table 1. Summary of the key findings of this review.
Key Findings
AI is positively disrupting both basic science research and the healthcare field.
In the lab, AI is aiding in drug discovery and automated image analysis.
In the clinic, AI is successfully used in radiological diagnosis, optical imaging, and surgery guidance.
Current AI approaches are limited in reliability by their lack of explainability (the “black box” problem) and by the difficulty of verifying the solutions they produce.
AI carries the risk of amplifying biases and being weaponized to spread anti-vaccine and other health disinformation.
Concerns of technological unemployment due to automation have likely been exaggerated (Moravec’s paradox).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.