Version 1: Received: 11 October 2024 / Approved: 14 October 2024 / Online: 15 October 2024 (11:49:05 CEST)
How to cite:
Dai, C.-P.; Nuder, A. Extended Reality and Multimodal Artificial Intelligence for Human Performance: A Review of Current Status and Future Outlook. Preprints 2024, 2024101023. https://doi.org/10.20944/preprints202410.1023.v1
APA Style
Dai, C. P., & Nuder, A. (2024). Extended Reality and Multimodal Artificial Intelligence for Human Performance: A Review of Current Status and Future Outlook. Preprints. https://doi.org/10.20944/preprints202410.1023.v1
Chicago/Turabian Style
Dai, C.-P., and Azibun Nuder. 2024. "Extended Reality and Multimodal Artificial Intelligence for Human Performance: A Review of Current Status and Future Outlook." Preprints. https://doi.org/10.20944/preprints202410.1023.v1
Abstract
Advanced technologies have had a transformative impact on education. In this paper, we explored the current status and future outlook of AI-supported multimodal extended reality for human performance. Using a systematic scoping review design and a machine learning-based semi-automatic approach supplemented by pattern review, we derived several insights into AI-supported multimodal extended reality for human performance. Text mining and topic modeling revealed an optimal set of twenty-six topics across the included studies. These classifications center on the extended reality technologies used (i.e., virtual and augmented reality), the multimodal techniques involved (i.e., haptic, eye, and brain tracking), and the AI leveraged (i.e., machine learning accuracy). Through pattern review, we distilled topical patterns on 1) Goals and Outcomes of AI-supported Multimodal Extended Reality for Human Performance; 2) Disentangling the Dynamics of User Interactions in Virtual Environments with Multimodal Strategies; 3) Synergistic Multimodality with Emerging AI Technologies Using Machine Learning, LLMs, and VLMs; and 4) Fostering Engaging, Interactive, and Immersive Human Experiences through Ambient Intelligence. These nuanced dimensions of AI-supported multimodal extended reality are emerging but not yet established enough to be classified through text mining and topic modeling. We discussed the implications of these findings for future research and practice on AI-supported multimodal extended reality for human performance.
Keywords
artificial intelligence; education; extended reality; human learning; immersive technologies; multimodality
Subject
Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright:
This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.