Key points
AI Integration in Radiology: ChatGPT versions 3.5 and 4.0, developed by OpenAI, are advancing radiology by enhancing diagnostic accuracy and educational processes, and show promise for reducing diagnosis time and healthcare costs.
Enhanced Diagnostic Accuracy: ChatGPT 4.0 shows promise in diagnostic quizzes, suggesting that it could support radiologists in clinical decision-making.
ChatGPT in Communication: Studies indicate ChatGPT's potential in patient communication, offering largely accurate answers while highlighting the need for simplification and human oversight in clinical settings.
Introduction and Background
Artificial Intelligence (AI) is transforming radiology, and ChatGPT, a large language model developed by OpenAI (San Francisco, California, USA), is a prominent part of that shift. ChatGPT versions 3.5 and 4.0 are among the latest AI innovations applied to medical diagnosis and education, trained on large text corpora that include medical and radiological literature. ChatGPT can help interpret and explain imaging findings and can generate reports and summaries tailored to different audiences [1,2,3,4]. It has shown impressive results in medical tests and quizzes, indicating its potential to enhance medical diagnosis, education, and care [5,6]. ChatGPT can provide real-time support for image interpretation and reporting, which can benefit patients, doctors, and healthcare systems, and it can make radiological texts more accessible and understandable to different readers [7,8]. It may also reduce diagnostic errors and streamline radiological workflows. However, its application to radiology has limitations, including data quality, ethics, and the need for human oversight, and further research and development are needed to improve its performance and user experience [9,10,11].
This review article explores how ChatGPT can assist radiologists in interpreting medical images and communicating their findings to other healthcare professionals and patients. It also highlights the potential benefits of ChatGPT for radiology and patient care, as well as future directions of AI in medical imaging.
Methodology
We searched PubMed, Scopus, and Web of Science for articles published between 2010 and 2023 on ChatGPT in radiology. The search terms were "ChatGPT," "artificial intelligence," "radiology," and "medical imaging diagnosis," and results were limited to articles published in English. The included articles studied the medical imaging applications and effects of ChatGPT. Duplicate studies and articles that did not specifically address ChatGPT in the field of radiology were excluded from the final review. Evidence from case studies, letters, observational studies, and randomized controlled trials was reviewed (Table 1).
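As an illustration only, the sketch below shows how the PubMed portion of such a search could be reproduced programmatically using Biopython's Entrez interface; the exact Boolean query, date handling, and contact e-mail are assumptions rather than the authors' actual protocol.

```python
from Bio import Entrez

# NCBI requires a contact e-mail with every E-utilities request (placeholder).
Entrez.email = "reviewer@example.org"

# Assumed combination of the review's search terms.
query = (
    '("ChatGPT" OR "artificial intelligence") AND '
    '("radiology" OR "medical imaging diagnosis") AND english[lang]'
)

handle = Entrez.esearch(
    db="pubmed",
    term=query,
    mindate="2010",
    maxdate="2023",
    datetype="pdat",  # restrict by publication date
    retmax=500,
)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records found")
print(record["IdList"][:10])  # first ten PubMed IDs
```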
ChatGPT in Patient Communication
ChatGPT is a natural language processing tool that can understand and respond to complex questions and commands. Recent studies have focused on how ChatGPT can answer patients' questions and translate radiology reports into plain language.
Gordon et al. evaluated how ChatGPT answers patients' questions about imaging. The authors posed 22 common questions, with and without a prompt instructing the model to provide accurate and simple answers. Four radiologists and two patient advocates rated the answers for accuracy, consistency, relevance, and readability. The results showed that ChatGPT was accurate and relevant but not very readable. The prompt improved the consistency and relevance of the answers, but not their accuracy or readability. The authors concluded that ChatGPT has the potential to help patients but requires more supervision and improvement [12].
According to a study by Lyu et al. [13], ChatGPT can translate radiology reports into plain language for patients and healthcare providers. The authors used ChatGPT to translate reports from chest and brain scans and had the translations evaluated by radiologists. The results showed that ChatGPT can translate reports with high accuracy and provide relevant suggestions, although it sometimes oversimplifies or omits information. The authors also compared ChatGPT with GPT-4, a newer model, and found that GPT-4 improved translation quality. They concluded that large language models can be useful for clinical education but need further improvement.
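As a rough illustration of this kind of plain-language translation, the sketch below sends a report to a chat model with a simplification prompt. It assumes the openai Python package (v1.x interface); the model name, prompt wording, and example report are placeholders, not the prompts actually used by Lyu et al.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

report = (
    "CT chest: 8 mm solid nodule in the right upper lobe. "
    "No mediastinal lymphadenopathy. No pleural effusion."
)

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are a radiologist explaining reports to patients. "
                "Rewrite the report in plain language at roughly an 8th-grade "
                "reading level without omitting any finding."
            ),
        },
        {"role": "user", "content": report},
    ],
    temperature=0.2,  # keep the wording conservative
)

print(response.choices[0].message.content)
```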
Li et al. evaluated the ability of ChatGPT to simplify diagnostic radiology reports to an 8th-grade reading level. After analyzing 400 reports, the authors found that the original reports had a mean Flesch-Kincaid reading level (FKRL) above the targeted grade level, especially for CT and MRI. ChatGPT significantly reduced the FKRL to 5.8, increased the Flesch reading-ease score, and shortened the reports, successfully simplifying all outputs to the desired reading level. This indicates the potential of ChatGPT as a tool for making radiology reports more accessible to patients [14].
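For reference, both readability metrics mentioned above are simple functions of sentence, word, and syllable counts. The sketch below implements the standard published formulas; the example counts are invented for illustration.

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Higher scores indicate easier text (60-70 is roughly plain English)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)


def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Approximate US school grade level needed to understand the text."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59


# Hypothetical counts for an original report versus a simplified version.
print(flesch_kincaid_grade(words=220, sentences=10, syllables=420))  # ~grade 15.5
print(flesch_kincaid_grade(words=180, sentences=15, syllables=260))  # ~grade 6.1
```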
Jeblick et al. used ChatGPT to simplify radiology reports, with 15 radiologists evaluating their quality. The results were positive, with the simplified reports generally considered correct and harmless. However, errors and omissions were noted, indicating a potential risk to patients. The study concluded that while ChatGPT shows promise for enhancing patient-centered care, it requires medical oversight and further refinement for safe use in healthcare. The increasing use of ChatGPT by patients to obtain medical explanations has significant implications for patient-doctor interactions, presenting both opportunities and challenges in clinical practice [15].
ChatGPT Enhancing Reporting Quality and Workflow Efficiency
ChatGPT-4 may improve the quality of radiology referrals. Barash et al. evaluated ChatGPT-4 as a tool for choosing imaging examinations and writing radiology referrals in the emergency department (ED). Five clinical notes were collected from the ED for each of the following conditions: pulmonary embolism, kidney stones, appendicitis, diverticulitis, bowel obstruction, cholecystitis, hip fracture, and testicular torsion, for a total of 40 patients. The authors entered these notes into ChatGPT-4, asking for the most appropriate imaging examinations and protocols, and also asked the chatbot to write the radiology referrals. Two radiologists rated the referrals on a scale of 1 to 5 for clarity, relevance, and diagnosis. The chatbot's imaging suggestions were compared with the ACR Appropriateness Criteria (AC) and with the examinations actually performed in the ED, and inter-reader agreement was measured using the linear weighted Cohen's κ coefficient. The imaging examinations suggested by ChatGPT-4 matched the ACR AC and the ED examinations in all cases, although two patients (5%) showed protocol differences between ChatGPT-4 and the ACR AC. The ChatGPT-4-generated referrals received average scores of 4.6 and 4.8 for clarity, 4.5 and 4.4 for relevance, and 4.9 from both readers for diagnosis. Inter-reader agreement was moderate for relevance and clarity and substantial for diagnosis grading. Overall, ChatGPT-4 showed potential for the selection of imaging examinations in some clinical cases [16].
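For context, the inter-reader agreement statistic used in that study can be computed directly from the two readers' ratings. Below is a minimal sketch using scikit-learn; the rating vectors are invented for illustration and are not data from Barash et al.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical 1-5 clarity ratings from two readers for ten referrals.
reader1 = [5, 4, 5, 3, 4, 5, 4, 5, 4, 3]
reader2 = [5, 4, 4, 3, 5, 5, 4, 4, 4, 3]

# Linear weights penalize disagreements in proportion to their distance on the scale.
kappa = cohen_kappa_score(reader1, reader2, weights="linear")
print(f"Linear weighted Cohen's kappa: {kappa:.2f}")
```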
Mese et al. explained how artificial intelligence, especially ChatGPT, can significantly enhance radiological services by automating tasks such as patient registration, scheduling, and image analysis, leading to improved efficiency, accuracy, and patient care. These advances can transform radiology workflows, making complex processes more streamlined and reliable. However, the implementation of AI in healthcare faces challenges, including algorithmic bias and ethical considerations, which must be carefully managed. Despite these hurdles, the continued progression of AI technologies such as ChatGPT promises to make them indispensable assets in radiology and the wider healthcare sector [17].
As the volume of radiology imaging studies increases globally, new IT tools promise to enhance both the quality and the quantity of radiological reports. In a study by Bosbach et al. involving nine cases of distal radius fractures, ChatGPT was used to draft radiology reports based on RSNA template items and the AO classifier, with five iterations per case. ChatGPT received high scores in quality assessments and adapted well to changes in the input commands. Nonetheless, challenges remain in the use of technical language and in medical interpretation. Such tools have the potential to assist radiologists with routine reporting tasks, allowing them to concentrate on more detailed image analysis and patient-specific pathology. ChatGPT represents a significant step towards this goal, indicating a promising future for AI support in radiology [18].
Structured reporting, facilitated by large language models (LLMs), is becoming an increasingly important tool in radiology, potentially improving workflow and inter-physician communication. The rapid advancement of AI in medicine has led to testing of LLMs for their efficacy in creating structured radiological reports. Mallio et al. compared four different LLMs to assess their understanding of structured reporting and their ability to propose accurate reporting templates. While the results indicate that LLMs have substantial potential to generate reliable structured reports, this study highlights the need for further formal validation before these tools can be widely implemented in clinical settings. This validation is crucial for ensuring the accuracy and utility of LLM-generated reports in actual medical practice [19].
Bajaj et al. highlight that AI's current role spans administrative tasks such as scheduling as well as clinical functions such as disease detection on scans. The potential of natural language processing and large language models such as ChatGPT is vast, promising to improve patient outcomes, streamline radiology interpretations, and optimize radiologists' workflows [20].
ChatGPT Accuracy on Board Examinations and Radiology Quizzes
Toyama et al. evaluated the accuracy of large language models (LLMs), namely ChatGPT, GPT-4, and Google Bard, in answering questions from the Japan Radiology Board Examination (JRBE). Across 103 JRBE questions, GPT-4 was significantly more accurate, with a 65% correct response rate, compared with ChatGPT (40.8%) and Google Bard (38.8%). GPT-4 performed particularly well on lower-order thinking and single-answer questions, with notably better performance in nuclear medicine than in diagnostic radiology. These results suggest GPT-4's superior capability in handling clinical radiology queries and indicate the potential of LLMs to assist in advanced clinical decision-making in radiology, especially in Japan [21].
Suthar et al. highlighted ChatGPT 4.0's capabilities in radiological diagnostics by analyzing its performance on the American Journal of Neuroradiology's (AJNR) "Case of the Month" quizzes. Tested on 140 neuroradiology cases, the model achieved an overall diagnostic accuracy of 57.86%. Performance varied by category, with accuracy rates of 54.65% for the brain, 67.65% for the head and neck, and 55% for the spine. These results indicate that LLMs such as ChatGPT 4.0 could become valuable aids in radiology, potentially improving patient outcomes and transforming medical education through augmented diagnostic processes [3].
Bhayana et al. tested ChatGPT on 150 radiology board-style multiple-choice questions mirroring the Canadian Royal College and American Board of Radiology examinations. Without image aids, ChatGPT correctly answered 69% of the questions, performing better on lower-order (84%) than on higher-order thinking questions (60%). It showed competence in clinical management but was less effective in higher-order tasks such as describing imaging findings, performing calculations, and applying concepts, especially in physics (40% accuracy) compared with clinical topics (73% accuracy). Despite its strong overall performance and confident language, ChatGPT's limitations in certain advanced cognitive tasks point to areas for improvement. The model's proficiency in clinical management suggests it could be a valuable educational aid, even though it lacks radiology-specific pre-training [22].
Patil et al. compared ChatGPT-4 and Google's Bard using American College of Radiology Diagnostic Radiology In-Training examination questions. ChatGPT-4 significantly outperformed Bard, achieving 87.11% accuracy versus Bard's 70.44%. Additionally, ChatGPT-4's responses were shorter on average but took longer to generate than Bard's. In the subspecialty analysis, ChatGPT-4 showed superior performance in neuroradiology, general and physics, nuclear medicine, pediatric radiology, and ultrasound, while no significant differences were noted in the other subspecialties. Both chatbots sometimes provided incorrect or illogical explanations, underscoring the importance of recognizing their limitations in educational settings [23].
Mago et al. assessed ChatGPT-3's ability to assist in oral and maxillofacial radiology, particularly in report writing and in identifying anatomical landmarks and pathologies. Using a questionnaire of 80 questions, rated on a 4-point Likert scale across three categories, ChatGPT-3 demonstrated 100% accuracy in identifying radiographic landmarks but was limited in detailing pathologies. Although ChatGPT-3 provides accurate information on pathology and radiographic features, it is less reliable for comprehensive pathology descriptions covering causes, symptoms, and treatments. The study concluded that while ChatGPT-3 is a valuable adjunct for oral radiologists, it should not be relied upon alone because of potential inaccuracies and the risk of medical errors. Nonetheless, it is useful for educating the community and reducing patient anxiety while professionals craft treatment plans [24].
ChatGPT in Education and Training
Sethi et al. surveyed 286 radiology residents in India and revealed the widespread use of online resources during on-call duties, with Radiopaedia and Radiology Assistant being preferred. The IMAIOS e-anatomy was the top anatomical resource. Although 61.8% had used ChatGPT, many found it inaccurate or insufficient without images. There is a need for future versions that include images and references to improve reliability. Currently, ChatGPT is less favored for on-call education but may be useful for explaining radiology to non-medical individuals and aiding in report writing and research [25].
Russe et al. tested the ability of chatbots to translate fractures into the Arbeitsgemeinschaft Osteosynthesefragen (AO) classification system and revealed that while chatbots process codes faster than humans, they are less accurate. The GPT-4-based chatbots outperformed the GPT-3.5-Turbo version. Importantly, chatbots with access to specific AO knowledge were more consistent and accurate than generic ones, with a context-aware GPT-4 chatbot achieving 71% accuracy in providing correct full AO codes versus 2% for the generic GPT-4. This underscores the importance of customizing ChatGPT with specialized knowledge to maximize its utility in clinical settings [26].
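The "context-aware" configuration described above essentially grounds the model in domain reference material before it answers. The sketch below illustrates that pattern, again assuming the openai Python package (v1.x interface); the AO reference excerpt, prompt, and model name are placeholders and do not reproduce the chatbot built by Russe et al.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder excerpt of domain knowledge; a real system would retrieve the
# relevant AO/OTA classification rules from a curated source.
ao_reference = (
    "Distal radius (2R3): type A = extra-articular fracture, "
    "type B = partial articular fracture, type C = complete articular fracture."
)

report = "Dorsally displaced extra-articular fracture of the distal radius."

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name
    messages=[
        {
            "role": "system",
            "content": (
                "Assign the AO code for the fracture described, using only the "
                "reference material provided.\n\nReference:\n" + ao_reference
            ),
        },
        {"role": "user", "content": report},
    ],
    temperature=0,  # keep the classification output deterministic
)

print(response.choices[0].message.content)
```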
Limitations of ChatGPT
ChatGPT and AI can improve radiology diagnostic accuracy and workflow, but they also have several limitations. The need for high-quality training data limits the accuracy and reliability of AI-generated reports, and the confident language of ChatGPT responses can mislead users, emphasizing the need to interpret AI-generated information cautiously.
ChatGPT cannot process visual content, which limits its use in imaging-intensive fields such as radiology. Further research is needed to improve AI tools and to ensure that they complement human expertise. Establishing clear guidelines and integrating human oversight can maximize the benefits of AI in radiology while minimizing risks.
Ethical concerns include data privacy and biases in AI algorithms, which may affect decision-making. Teixeira da Silva stresses the importance of accuracy and ethics in the biomedical literature, pointing to a high rate of retraction in radiology, mainly due to duplication and plagiarism, especially in China. The author criticizes inadequate screening by journals, which allows unclear and incorrect terminology, possibly caused by unethical AI use or editing services, and highlights the need to disclose the use of AI, such as ChatGPT, in research, urging strict review processes to ensure the quality of the radiological literature [27].
Further research is needed to assess the accuracy and safety of ChatGPT in clinical practice and to develop comprehensive guidelines for its use. Additionally, advanced imaging technology transfer and collaborative efforts will enhance efficiency and innovation. Developing national databases will pave the way for integrating artificial intelligence into patient care [28].
Conclusion
ChatGPT and AI in radiology promise to improve diagnostic precision, clinical workflow, and image interpretation. These advances point toward radiological services that are more efficient, patient-centered, and educational. Integrating these technologies requires high-quality training data and careful ethical consideration. AI tools such as ChatGPT can process language and generate reports quickly; however, their limitations, such as the inability to process visual content and the possibility of incorrect assertions, demand careful supervision. As we approach this technological shift in healthcare, we must be enthusiastic about the potential benefits while remaining cautious about the drawbacks. Ongoing research, refinement of the user experience, and ethical governance are required to ensure that AI in radiology meets and exceeds patient care standards.
Abbreviations
ACR | American College of Radiology
AI | Artificial Intelligence
AJNR | American Journal of Neuroradiology
ChatGPT | Chat Generative Pre-trained Transformer
LLM | Large Language Model
NLP | Natural Language Processing
References
- H. Grewal et al., “Radiology Gets Chatty: The ChatGPT Saga Unfolds,” Cureus, Jun. 2023. [CrossRef]
- T. F. Tan et al., “Generative Artificial Intelligence Through ChatGPT and Other Large Language Models in Ophthalmology,” Ophthalmology Science, vol. 3, no. 4, p. 100394, Dec. 2023. [CrossRef]
- P. P. Suthar, A. Kounsal, L. Chhetri, D. Saini, and S. G. Dua, “Artificial Intelligence (AI) in Radiology: A Deep Dive Into ChatGPT 4.0’s Accuracy with the American Journal of Neuroradiology’s (AJNR) ‘Case of the Month,’” Cureus, Aug. 2023. [CrossRef]
- F. Crimì and E. Quaia, “GPT-4 versus Radiologists in Chest Radiography: Is It Time to Further Improve Radiological Reporting?,” Radiology, vol. 308, no. 2, Aug. 2023. [CrossRef]
- T. H. Kung et al., “Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models,” PLOS Digital Health, vol. 2, no. 2, p. e0000198, Feb. 2023. [CrossRef]
- H. Bagde, A. Dhopte, M. K. Alam, and R. Basri, “A systematic review and meta-analysis on ChatGPT and its utilization in medical and dental research,” Heliyon, p. e23050, Nov. 2023. [CrossRef]
- S. Srivastav et al., “ChatGPT in Radiology: The Advantages and Limitations of Artificial Intelligence for Medical Imaging Diagnosis,” Cureus, Jul. 2023. [CrossRef]
- M. Javaid, A. Haleem, and R. P. Singh, “ChatGPT for healthcare services: An emerging stage for an innovative perspective,” BenchCouncil Transactions on Benchmarks, Standards and Evaluations, vol. 3, no. 1, p. 100105, Feb. 2023. [CrossRef]
- S. Perera Molligoda Arachchige, “Empowering radiology: the transformative role of ChatGPT,” Clin Radiol, vol. 78, no. 11, pp. 851–855, Nov. 2023. [CrossRef]
- Y. K. Dwivedi et al., “Opinion Paper: ‘So what if ChatGPT wrote it?’ Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy,” Int J Inf Manage, vol. 71, p. 102642, Aug. 2023. [CrossRef]
- T. Dave, S. A. Athaluri, and S. Singh, “ChatGPT in medicine: an overview of its applications, advantages, limitations, future prospects, and ethical considerations,” Front Artif Intell, vol. 6, May 2023. [CrossRef]
- E. B. Gordon et al., “Enhancing patient communication with Chat-GPT in radiology: evaluating the efficacy and readability of answers to common imaging-related questions.,” J Am Coll Radiol, Oct. 2023. [CrossRef]
- Q. Lyu et al., “Translating radiology reports into plain language using ChatGPT and GPT-4 with prompt learning: results, limitations, and potential.,” Vis Comput Ind Biomed Art, vol. 6, no. 1, p. 9, May 2023. [CrossRef]
- H. Li et al., “Decoding radiology reports: Potential application of OpenAI ChatGPT to enhance patient understanding of diagnostic reports,” Clin Imaging, vol. 101, pp. 137–141, Sep. 2023. [CrossRef]
- K. Jeblick et al., “ChatGPT makes medicine easy to swallow: an exploratory case study on simplified radiology reports.,” Eur Radiol, Oct. 2023. [CrossRef]
- Y. Barash, E. Klang, E. Konen, and V. Sorin, “ChatGPT-4 Assistance in Optimizing Emergency Department Radiology Referrals and Imaging Selection.,” J Am Coll Radiol, vol. 20, no. 10, pp. 998–1003, Oct. 2023. [CrossRef]
- Mese, C. A. Taslicay, and A. K. Sivrioglu, “Improving radiology workflow using ChatGPT and artificial intelligence.,” Clin Imaging, vol. 103, p. 109993, Nov. 2023. [CrossRef]
- W. A. Bosbach et al., “Ability of ChatGPT to generate competent radiology reports for distal radius fracture by use of RSNA template items and integrated AO classifier.,” Curr Probl Diagn Radiol, Apr. 2023. [CrossRef]
- C. A. Mallio, A. C. Sertorio, C. Bernetti, and B. Beomonte Zobel, “Large language models for structured reporting in radiology: performance of GPT-4, ChatGPT-3.5, Perplexity and Bing.,” Radiol Med, vol. 128, no. 7, pp. 808–812, Jul. 2023. [CrossRef]
- S. Bajaj, D. Gandhi, and D. Nayar, “Potential Applications and Impact of ChatGPT in Radiology.,” Acad Radiol, Oct. 2023. [CrossRef]
- Y. Toyama et al., “Performance evaluation of ChatGPT, GPT-4, and Bard on the official board examination of the Japan Radiology Society.,” Jpn J Radiol, Oct. 2023. [CrossRef]
- R. Bhayana, S. Krishna, and R. R. Bleakney, “Performance of ChatGPT on a Radiology Board-style Examination: Insights into Current Strengths and Limitations.,” Radiology, vol. 307, no. 5, p. e230582, Jun. 2023. [CrossRef]
- N. S. Patil, R. S. Huang, C. B. van der Pol, and N. Larocque, “Comparative Performance of ChatGPT and Bard in a Text-Based Radiology Knowledge Assessment.,” Can Assoc Radiol J, p. 8465371231193716, Aug. 2023. [CrossRef]
- Mago and M. Sharma, “The Potential Usefulness of ChatGPT in Oral and Maxillofacial Radiology.,” Cureus, vol. 15, no. 7, p. e42133, Jul. 2023. [CrossRef]
- H. S. Sethi, S. Mohapatra, C. Mali, and R. Dubey, “Online for On Call: A Study Assessing the Use of Internet Resources Including ChatGPT among On-Call Radiology Residents in India,” Indian Journal of Radiology and Imaging, vol. 33, no. 04, pp. 440–449, Oct. 2023. [CrossRef]
- M. F. Russe et al., “Performance of ChatGPT, human radiologists, and context-aware ChatGPT in identifying AO codes from radiology reports,” Sci Rep, vol. 13, no. 1, p. 14215, Aug. 2023. [CrossRef]
- A. Teixeira da Silva, “Linguistic precision, and declared use of ChatGPT, needed for radiology literature.,” Eur J Radiol, vol. 170, p. 111212, Nov. 2023. [CrossRef]
- Pepe et al., “Medical Radiology: Current Progress,” Diagnostics, vol. 13, no. 14, p. 2439, Jul. 2023. [CrossRef]
Table 1.
ChatGPT Studies.
ChatGPT in Patient Communication |
Gordon et al. J Am Coll Radiol 2023 [12] | ChatGPT provides mostly accurate and relevant responses to patient questions on imaging, with improved consistency and relevance when prompted, though readability remains low. |
Lyu et al. Vis Comput Ind Biomed Art 2023 [13] | ChatGPT effectively translates radiology reports into layman's terms, with radiologists affirming high accuracy yet noting occasional oversimplifications or omissions. |
Li et al. Clin Imaging 2023 [14] | ChatGPT effectively simplifies radiology reports to an 8th-grade level, improving readability, especially for complex CT and MRI reports. |
Jeblick et al. Eur Radiol 2023 [15] | ChatGPT's simplification of radiology reports was rated generally positively, but noted errors pose risks, highlighting the need for medical oversight. |
ChatGPT Enhancing Reporting Quality and Workflow Efficiency |
Barash et al. J Am Coll Radiol 2023 [16] | ChatGPT-4 aligned with the ACR AC in 95% of cases for imaging protocols and scored highly (average 4.5+) for clarity and diagnosis in ED radiology referrals; it demonstrated potential in aiding examination selection, with moderate to substantial inter-rater agreement, suggesting utility in clinical decision-making. |
Mese et al. Clin Imaging 2023 [17] | AI and NLP technologies such as ChatGPT are set to transform radiology by automating tasks and increasing efficiency, though challenges such as bias and ethics remain. |
Bosbach et al. Curr Probl Diagn Radiol 2023 [18] | The growing volume of radiology studies drives demand for IT tools like ChatGPT, which shows promise in drafting high-quality radiology reports but faces challenges in technical accuracy. |
Mallio et al. Radiol Med 2023 [19] | LLMs could improve workflows and communication but require further validation for clinical use. |
Bajaj et al. Acad Radiol 2023 [20] | AI in radiology, from scheduling to disease detection, is expanding with tools like ChatGPT to improve outcomes and workflow efficiency. |
ChatGPT Accuracy on Board Examinations and Radiology Quizzes |
Toyama et al. Jpn J Radiol 2023 [21] | GPT-4 excelled in the Japan Radiology Board Examination, especially in nuclear medicine, showcasing potential in medical education and clinical decisions. |
Suthar et al. Cureus 2023 [3] | ChatGPT 4.0's diagnostic accuracy on 140 AJNR quizzes was 57.86%, suggesting potential utility in radiological diagnostics and education. |
Bhayana et al. Radiology 2023 [22] | ChatGPT answered 69% of radiology exam-style questions correctly, with better performance on lower-order (84%) than on higher-order (60%) questions. |
Patil et al. Can Assoc Radiol J 2023 [23] | ChatGPT-4 surpassed Google's Bard in accuracy (87.11% vs. 70.44%) on radiology exam questions, with more concise but slower responses. |
Mago et al. Cureus 2023 [24] | ChatGPT-3 was adept at identifying radiographic landmarks in oral radiology, yet its pathology descriptions were less comprehensive. |
ChatGPT in Education and Training |
Sethi et al. Indian J Radiol Imaging 2023 [25] | Indian radiology residents often use online resources; ChatGPT was used by 61.8% but was seen as lacking because it provides no images or references. |
Russe et al. Sci Rep 2023 [26] | ChatGPT assigns AO fracture codes faster than radiologists but with less accuracy. |