Preprint
Review

Enhancing Robot Behavior with EEG, Reinforcement Learning and Beyond: A Review of Techniques in Collaborative Robotics

A peer-reviewed article of this preprint also exists.

Submitted: 14 February 2024
Posted: 19 February 2024

Abstract
Collaborative robotics is a major topic in current robotics research, posing new challenges, especially in Human-Robot Interaction. A crucial aspect of this area of research is understanding the behavior of robots when engaging with humans, where reinforcement learning is a key discipline that allows us to explore sophisticated emergent behaviors. This review delves into the relevance of different sensors and techniques, with special attention to EEG (electroencephalography data on brain activity) and its influence on the behavior of robots interacting with humans. In addition, mechanisms available to mitigate potential risks during the experimentation process, such as virtual reality, will also be addressed. In the final part of the paper, future lines of research combining the areas of collaborative robotics, reinforcement learning, virtual reality and human factors will be explored, as this last aspect is vital to ensure safe and effective Human-Robot Interaction.
Keywords: 
Subject: Computer Science and Mathematics - Robotics

1. Introduction

Today’s industry is advancing at a very fast pace, which is why there is a growing demand to integrate collaborative robots, often called cobots, into work environments. These robots are specifically designed to work alongside humans, sharing their workspace and even collaborating on various tasks [1]. However, the introduction of collaborative robots brings with it new paradigms and problems that traditional industrial robots do not generate. That is why it becomes important to take into account aspects such as the work environment, human safety and people’s emotional well-being [2,3].
In collaborative robotics, it is of great importance to consider the different levels of cooperation between a human worker and a robot. Figure 1 [4] shows a detailed representation of the different collaborative scenarios. The first collaborative model is Coexistence, where humans and cage-free robots work alongside each other but do not share a workspace, maintaining separate areas of operation. The second model is Synchronized interaction, where human workers and robots share a workspace, but only one party is present in the workspace at any given time, ensuring a coordinated but separate workflow. The third model, Cooperation, takes this a step further by allowing both human and robot to perform tasks simultaneously in the same workspace, yet without working concurrently on the same product or component. Finally, the most integrated model is Collaboration, where the human worker and robot work in unison, simultaneously on the same product or component, exemplifying the highest level of partnership and synchronization in shared tasks.
All these emerging uncertainties discussed in the previous paragraph have increased research efforts in the area of collaborative robotics, with the goal of understanding the impact of humans sharing most, if not all, tasks with robots. It is important to take into account aspects such as efficiency, trust, ergonomics (Strain Index [5]) and the emotional experiences of humans in shared environments with collaborative robots [6]. These challenges being addressed these days are key for developing a safe workspace [7].
In the field of collaborative robotics, a leading idea revolves around the application of Reinforcement Learning (RL) to solve some of these challenges [8]. In the context of machine learning, reinforcement learning stands out as an approach in which an agent iteratively improves its behavior by interacting with its environment, receiving feedback through rewards and penalties to fine-tune its decision-making process. Through these techniques, cobots can exhibit emergent behavior and learn to handle simple tasks, which can be as basic as lifting a box or assembling a series of components. While it is true that through reinforcement learning cobots can perform more difficult actions and exhibit more complex behaviors, this kind of training tends to consume a great deal of time and computational resources [9,10].
As previously mentioned, the purpose of applying reinforcement learning to collaborative robotics is to optimize specific routines and potentially discover new behaviors that are otherwise unobtainable. Furthermore, it allows the direct handling of complex procedures, with pre-trained behaviors allowing the robot to overcome new, unseen situations and make real-time decisions while interacting with the environment. The ability of a reinforcement learning agent to face unforeseen situations and respond to them is crucial in collaborative robotics, where adaptability to evolving scenarios, including unexpected factors such as human reactions, is critical. These key factors highlight why reinforcement learning is one of the most appealing areas for shaping the behavior of robots in collaborative environments [11,12].
In classical reinforcement learning, the learning process is conceptualized as a loop in which an agent interacts with its environment in discrete time steps. At every step, the agent observes the current state of the environment and selects an action based on its policy, a strategy defining its behavior. The action, when executed by the agent in the environment, leads to a new state and provides a reward signal to the agent. The agent’s goal is to learn a policy that maximizes the cumulative reward over time. This classical loop of observation, action, reward, and learning, popularized by Richard S. Sutton and shown in Figure 2, continues until the agent achieves satisfactory performance or the environment changes significantly [13].
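The loop described above can be sketched in a few lines of Python. The environment here is a deliberately trivial one-dimensional grid, a stand-in for illustration only, not a setup from any of the reviewed studies: the agent observes its state, selects an action from its policy, receives a reward, and transitions to a new state.

```python
def run_episode(policy, n_states=5, goal=4, max_steps=50):
    """One pass through the classical agent-environment loop:
    observe state, select action via the policy, receive reward, transition."""
    state, total_reward = 0, 0.0
    for _ in range(max_steps):
        action = policy(state)                    # -1 (left) or +1 (right)
        state = max(0, min(n_states - 1, state + action))
        reward = 1.0 if state == goal else -0.1   # sparse goal reward
        total_reward += reward
        if state == goal:
            break
    return total_reward

# A trivial hand-written policy that always moves toward the goal.
always_right = lambda s: +1
print(run_episode(always_right))
```

A learning agent would replace the hand-written `always_right` policy with one improved from the observed rewards, which is exactly what algorithms such as Q-learning do inside this same loop.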
Considering the two fields outlined in the opening of this article, collaborative robotics and reinforcement learning, it is easy to see that most approaches ignore the human aspect when training reinforcement learning agents. As mentioned earlier, factors like human well-being and the workspace are significant variables. The approach of integrating reinforcement learning training with the human element via EEG sensors to positively influence the collaborative robot’s behavior has a lot of potential. This review seeks to explore the different approaches to merging collaborative robotics, reinforcement learning, and the human element. It is important to note that quantifying human interaction, especially in factors like trust in the robot or task satisfaction, can be challenging [14]. Numerous studies develop new techniques and methods for quantifying these variables with measurement devices such as EEG sensors.
EEG sensors are devices used to measure the electrical activity of the brain. They are important tools for monitoring neural signals and gaining insights into the cognitive processes of human reasoning. The use of these sensors has increased recently in human-robot monitoring for measurements such as error detection or human comfort [15]. In addition to EEG sensors, other methods are used in other research areas to assess human confidence and related factors, including computer vision, eye tracking, and post-experiment human evaluations. Throughout this review article, the sensors and techniques applied in collaborative robotics that have been or can be applied to reinforcement learning to improve human well-being will be explored [16].
It is also worth highlighting the growing trend of using virtual reality in collaborative robotics. Human safety must always be guaranteed, and thanks to controlled or virtual environments, experiments can be carried out without putting any person at risk. Throughout this paper, different possibilities for carrying out studies using collaborative robots and human cognitive signals will be analyzed [17].
Connecting the exploration of EEG sensors, virtual reality, and collaborative robotics, this article aims to bridge the gap between cutting-edge technologies and key aspects essential for understanding the human aspect in reinforcement learning. The following definitions for these relevant terms will be used:
  • Reinforcement Learning: A machine learning approach where an agent refines decision-making skills by interacting with the environment, guided by rewards or penalties for actions, progressively improving decisions [18].
  • Emergent Behavior: For Reinforcement Learning agents, emergent behaviors relate to unpredictable actions arising from the combination of simpler actions; in collaborative robotics, it’s the robot’s ability to exhibit unplanned responses from interactions [9].
  • EEG Sensors: Devices measuring brain electrical activity, valuable for monitoring neural signals and cognitive processes, particularly in assessing the mental state of human operators in collaborative robotics [19].
  • Human-Robot Interaction (HRI): Dynamic interactions between humans and robots in collaborative settings, encompassing communication, cooperation, and the study of their collaboration in shared workspaces [20]. In construction, innovative categorizations have been developed, leading to better approaches to Human-Robot Interaction problems [21].
  • Virtual Reality (VR): An immersive technology that transports users to computer-generated environments, offering a multisensory experience that can simulate real-world scenarios, impacting fields such as education, robotics, gaming, and therapy [22].

2. Materials and Methods

To conduct this research, a comprehensive literature survey was undertaken, utilizing the most relevant academic platforms. Key resources were meticulously sourced from esteemed databases such as Google Scholar [23] and Scopus [24]. The adopted approach ensures a diverse selection of articles and books, focusing on the latest and most significant research in the field. The integrity of the analysis is supported by the selection of sources from reliable platforms.
Before undertaking all the research, it is relevant to analyze the number of articles and the annual growth in the number of publications, among other factors, since this usually reflects the growing relevance of the selected topics in society. The main purpose is to support the review and future research proposed at the end of the article. It is important to keep in mind that the number of articles is indicative, since the mention of certain technologies does not guarantee a direct relationship with the topic in question.
In Figure 3, the number of documents per year related to the topics of reinforcement learning and collaborative robotics is shown. It is evident from the graph that there is a clear upward trend in the number of documents, meaning an increasing significance of reinforcement learning and collaborative robotics in the research field.
Despite the relatively modest annual document count in the previous graph, an upward trend is clear. Moreover, when combining the keywords reinforcement learning and "EEG," the number of documents increases even further. This can be observed in Figure 4, and the increasing trend in the annual document count is similar to the one presented in the preceding paragraph.
Incorporating an additional search concept into the analysis, encompassing topics like virtual reality, reinforcement learning, and collaborative robotics, leads to a noticeable reduction in the number of papers per year. Despite this reduction, the robust upward trend in annual paper counts continues. This observation underscores the enduring relevance and growing interest in the intersection of these fields, even as the search criteria are narrowed to encompass multiple, highly specialized subjects. The corresponding graph can be observed in Figure 5.
Similarly, when maintaining three search criteria but substituting EEG for virtual reality, the results exhibit a comparable pattern, presented in Figure 6. The reduction in the number of papers is even more pronounced in this configuration, yet the noticeable upward trend in annual paper counts persists. This consistent trend again underscores the enduring significance of and growing attention to these research domains.
In summary, it becomes evident that as the selected technologies are combined, the number of papers decreases while the annual upward trend remains consistent. Particularly noteworthy is the scenario where the four primary topics introduced at the outset of this article (reinforcement learning, virtual reality, collaborative robotics, and human-factor sensors such as EEG) are considered together. In this case, the number of articles is minimal, with the majority concentrated in the year 2023. This highlights the significance of exploring this emerging research path where these four diverse technologies intersect. The convergence of these areas signifies a promising direction for the future investigations and projects discussed in the future directions section.
It should be noted that during this review of the state of the art, papers directly related to the proposed topic have been read and contrasted. Even though the number of papers in the graphs above is higher than the number referenced in this article, only those directly related to the scope of the review have been mentioned.

3. State of the Art

The most relevant research studies in the field of reinforcement learning applied to collaborative robotics will be highlighted, with a permanent focus on human factors and brain activity signals. Only research at the intersection of the aforementioned subjects will be analyzed in this section.

3.1. EEG based Brain-Computer Interface Approaches in Collaborative Robot Control

This subsection analyzes different research and technological advances in the area of brain signal capture through EEG applied to collaborative robotics. The purpose is to address the possibility of improving the understanding and execution of robotic systems to make them more intuitive and adaptive to a human being.
One of the most recent studies in the field of collaborative robotics, combining brain activity measurement through EEG and machine learning, is the research published by the University of Pennsylvania in 2022. The paper discusses the use of EEG signals to measure people’s trust levels in collaborative construction robots. EEG signals provide valuable information about human brain activity and cognitive states, including trust, during human-robot collaboration [29]. EEG signals are also used for determining mental states: electroencephalography sensors are able to measure brainwave frequencies such as delta, theta, alpha, beta, and gamma waves. Delta waves (0.5–4 Hz) signify dreamless sleep, whereas theta waves (4–8 Hz) appear during REM sleep, deep meditation, or flow states. Alpha waves (8–12 Hz) are associated with relaxed concentration. Beta waves (12–38 Hz) include low beta (12–15 Hz) for idle states, mid beta (15–23 Hz) for attention, and high beta (23–38 Hz) for stress and difficult activities. Moreover, gamma waves (25–45 Hz) provide important insights related to intentional activities, multitasking, and creativity [29]. (See Figure 7)
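The frequency bands above can be estimated from a raw EEG window with a simple spectral computation. The sketch below is an illustration assuming NumPy and a synthetic signal, not data from the cited study; it classifies a one-second window by its dominant band, with band edges following the figures above except that gamma is started at 38 Hz to keep the bands disjoint.

```python
import numpy as np

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 38), "gamma": (38, 45)}

def band_power(signal, fs, band):
    """Mean spectral power of `signal` inside the frequency `band` (Hz)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    lo, hi = band
    mask = (freqs >= lo) & (freqs < hi)
    return power[mask].mean()

# Synthetic 1-second window at 256 Hz dominated by a 10 Hz (alpha) rhythm,
# with a weaker 30 Hz (beta) component mixed in.
fs = 256
t = np.arange(fs) / fs
window = np.sin(2 * np.pi * 10 * t) + 0.1 * np.sin(2 * np.pi * 30 * t)
dominant = max(BANDS, key=lambda b: band_power(window, fs, BANDS[b]))
print(dominant)  # alpha
```

Real trust-classification pipelines would compute such band powers (among other features) over sliding windows of multi-channel EEG rather than a single synthetic trace.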
However, EEG signals can be contaminated by other frequency signals, both intrinsic and extrinsic, resulting in a reduction in the original trust signal quality. In order to address this, some studies used a fixed-gain filtering method to reduce extrinsic components and utilized independent component analysis (ICA) to remove intrinsic components from EEG signals. Once the filtering was done, the results in the EEG measurements were significantly cleaner [29,31].
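As a minimal illustration of the extrinsic-filtering step, the sketch below applies a crude zero-phase frequency-domain band-pass to remove 50 Hz mains contamination from a synthetic signal. This stands in for the fixed-gain filtering described; a real pipeline would additionally run ICA (e.g. FastICA) to separate intrinsic artifacts such as eye blinks, which is not reproduced here.

```python
import numpy as np

def bandpass(signal, fs, lo, hi):
    """Crude zero-phase band-pass: zero out FFT bins outside [lo, hi) Hz."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < lo) | (freqs >= hi)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# A 10 Hz EEG-like rhythm contaminated by strong 50 Hz mains interference.
fs = 256
t = np.arange(fs) / fs
raw = np.sin(2 * np.pi * 10 * t) + 2.0 * np.sin(2 * np.pi * 50 * t)
clean = bandpass(raw, fs, 1, 40)

# After filtering, what remains is essentially the original 10 Hz rhythm.
residual = np.max(np.abs(clean - np.sin(2 * np.pi * 10 * t)))
print(residual < 0.01)
```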
After filtering, 12 trust-related features spanning the temporal and frequency domains were extracted from the EEG signals. These features were calculated from segmented EEG data, and this information was then used to train machine learning models.
To evaluate trust levels in collaborative robots, several supervised learning algorithms [32] are used, including k-nearest neighbors (k-NN), support vector machines (SVM) [33], and random forests. Among them, k-NN outperforms the others, showing the highest accuracy at approximately 88%.
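A k-NN classifier of the kind reported is straightforward to sketch. The toy features below are hypothetical two-dimensional points standing in for the study's 12 EEG features, with labels 0/1 standing in for low/high trust; none of the values come from the cited data.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Majority vote among the k nearest training samples (Euclidean)."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(dists)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]

# Toy feature vectors: two clusters standing in for "low trust" (label 0)
# and "high trust" (label 1).
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.0, 0.3],
              [0.9, 0.8], [1.0, 1.0], [0.8, 0.9]])
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X, y, np.array([0.15, 0.15])))  # 0
print(knn_predict(X, y, np.array([0.95, 0.90])))  # 1
```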
Once the machine learning algorithms are applied, a test with human participants involving building tasks can be conducted to determine trust levels in different robot collaboration scenarios. The results show that higher levels of trust are achieved while working with semi-autonomous robots. However, working with a fully autonomous robot leads to lower levels of trust due to the sense of not having any control over the robot. These research findings highlight the potential of EEG-based trust measurement in human-robot collaboration [29,31].
The conclusion is that using EEG brainwaves it is possible to determine a person’s trust in a robot while it completes a task. It is important to highlight that these experiments are carried out in a controlled virtual reality environment. It is also relevant to note that even if no reinforcement learning strategies are used, it is still significant to bring attention to the potential use of these kinds of signals in reinforcement learning training for collaborative robot environments [34].
Reinforcement learning in the context of collaborative robots and brain signals was not specifically addressed in that work. However, a very recent study explores the use of reinforcement learning to enhance human-robot collaboration in assembly tasks, focusing on dynamic task allocation and effectively balancing the workload between humans and robots [35]. Building upon this foundation, research in the field continued to evolve, modeling discomfort in human-robot collaboration and making the robot meet individual preferences [36]. However, a different investigation secured its status as an innovator in the discipline, taking the initial steps toward the assimilation of EEG signals into reinforcement learning and robotics [37].
Expanding further on its findings, the study [37] explores the application of reinforcement learning algorithms in robotics, specifically in the context of robots learning to solve tasks based on reward signals obtained during task execution. In many other studies, these reward signals are either modeled by programmers or provided through human supervision. However, there are situations where encoding these rewards can be challenging, which leads to the suggestion of using EEG-based brain activity as reward signals.
The core idea of this article is to extract rewards from brain activity while a human observes a robot performing a task, which eliminates the need for an explicit reward model. The paper introduces a new idea for using brain activity signals captured through EEG sensors to provide correctness feedback to a collaborative robot about a specific task, and demonstrates the ability to identify and classify different error levels based on the brain signals [37].
Brain-computer interfaces (BCI) [38] in robotics have been identified as a hot topic [14]. EEG is highlighted as the recording method of choice due to its portability and high temporal resolution. The research also highlights the use of event-related potentials (ERPs) [39] in error detection and shows how these ERPs may be automatically categorized using machine learning and signal processing methods.
The study also offers a reinforcement learning framework for learning tasks based on reward signals obtained from monitored brain activity. Q-learning [40], a reinforcement learning method that uses a Q-function to optimize sequential decision-making, was the algorithm of choice to demonstrate learning in real-time tasks in collaborative robotics scenarios. The results of the research suggest that EEG-based reward signals hold great potential in robot learning and task adaptation. However, the study was carried out in 2010, and several lines of improvement for future research are presented in the article.
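Tabular Q-learning applies the update Q(s,a) ← Q(s,a) + α[r + γ·max Q(s′,·) − Q(s,a)]. The sketch below runs it on a hypothetical five-state chain in which the terminal reward stands in for an EEG-decoded "correct execution" signal; the environment, hyperparameters, and reward values are illustrative assumptions, not details from the cited study.

```python
import random

def q_learning(n_states=5, n_actions=2, episodes=200,
               alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a tiny chain: action 1 moves right, action 0
    moves left; reaching the last state yields +1 (standing in for an
    EEG-decoded "no error" reward signal)."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection (random on ties as well).
            if rng.random() < eps or Q[s][0] == Q[s][1]:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: Q[s][i])
            s2 = min(n_states - 1, s + 1) if a == 1 else max(0, s - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
greedy = [max(range(2), key=lambda i: Q[s][i]) for s in range(4)]
print(greedy)  # once learned, the greedy policy moves right in every state
```

In an EEG-driven setup, the scalar `r` would come from a classifier over brain signals rather than from the environment itself, which is the substitution the cited framework proposes.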
Following the line of research presented where robots learn to adapt their behavior based on error signals generated by brain waves measured by EEG sensors, there is an investigation carried out in 2021 [41] that improves the previous one by proposing an approach in which a robot arm is trained to play a game and then uses the learning to teach different children. The training process involves automatic detection of rewards and penalties based on EEG signals, probability-based action planning, and imitation of human actions for training children.
In this research, a specific reinforcement learning scheme is presented. In the case of the research carried out to teach the robot, planning is not done as in traditional RL. Normally in reinforcement learning, actions are planned based on partial learning of the environment, which means that agents make decisions based on the partial knowledge they have acquired so far. In the proposed research, action planning takes place after the RL algorithm has converged (convergence happens when the algorithm has reached a state of knowledge where it has learned enough about the environment and the actions). This approach can be very beneficial in situations where fast and accurate decisions are required, such as the one the article describes: the use of RL for training a robot to play a specific game [41].
Regarding the learning approach based on error signals, and continuing with the review proposed above, it is necessary to highlight that error-related potential (ErrP) [42] signals represent the subjective errors perceived when a subject observes an error either in a robot or even in itself [43]. In the proposed learning case, if no error is detected, a small positive reward is given; however, if an error is detected, a negative reward is applied. The rewards are used to update a table of probabilities over states and actions. Once the entire learning phase is completed, the agent will have acquired a behavior based on the probabilities in that table.
In the training phase, the objective is to update the State-Action Probability Matrix (SAPM) to optimize actions for given states. This requires error signal detection and management using classifiers [44]. Unlike traditional BCI systems, training occurs both online and offline. On one hand, offline training involves subjects performing sessions to gather data, with a portion used to train classifiers, using around 12,000 instances of brain signals. On the other hand, online training then adapts the SAPM using reinforcement learning. After training, the agent’s behavior, particularly the robot arm’s action planning, is tested. This process involves data acquisition, offline classifier training, and SAPM adaptation [41].
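The reward-driven adaptation of the SAPM can be illustrated with a toy update rule. The actual update of [41] is not reproduced here; the reward magnitudes, matrix size, and function name below are assumptions for illustration. The probability of the executed state-action pair is nudged up when no ErrP is detected and down when one is, and the row is then renormalized.

```python
import numpy as np

def update_sapm(sapm, state, action, errp_detected,
                pos_reward=0.1, neg_reward=-0.2):
    """Nudge the probability of (state, action) up when no error-related
    potential is detected, down when one is, then renormalize the row."""
    reward = neg_reward if errp_detected else pos_reward
    sapm[state, action] = max(sapm[state, action] + reward, 1e-6)
    sapm[state] /= sapm[state].sum()
    return sapm

# Hypothetical 2-state, 3-action matrix, initially uniform over actions.
sapm = np.full((2, 3), 1.0 / 3.0)
sapm = update_sapm(sapm, state=0, action=1, errp_detected=False)
sapm = update_sapm(sapm, state=0, action=2, errp_detected=True)
print(sapm[0].round(3))  # action 1 is now preferred, action 2 suppressed
```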
The study conducts a two-stage training with the Jaco robot arm. The first phase is offline, using EEG data for classifier training with 18,000 instances, including ERD/ERS and ErrP signals. The second phase is online, adapting the SAPM with visual and audio stimuli for learning and correction. The test phase compares the performance of children trained by the robot to those trained by humans [41].
In general, the study is quite innovative, since it allows detecting when a user performs an experiment in the wrong way; the robot’s behavior is then modified in order to teach the user to perform the experiment correctly. One aspect to highlight is that the behavior of the robot is not directly influenced by the EEG signals; rather, the agent determines which action to take, either replicating one of the movements or throwing the ball again. One future line to point out could be training the agent in the task of throwing the ball and then, depending on the degree of error from the user, modifying its behavior in a more direct way [41].
Although there are points that could be improved in the previous article, there are other very notable studies that aim to modify the total behavior of a robot based on the measured EEG signals. In addition, the authors only explored the possibility of detecting errors in certain tasks to carry out training of different agents. However, the possibility of detecting different feelings and emotions is something that can be distinctive to modify the behavior of a robot [45].
Another interesting approach was published in 2021 under the name “Emotion-Driven Analysis and Control of Human-Robot Interactions in Collaborative Applications” [29]. The authors focus on the behavior of a robot and its ability to adapt to different situations depending on the brain signals received from an EEG sensor. The research is based on the application of fuzzy logic rules to modify certain critical variables of the robot’s motion, such as speed or motion delay. The rules are created from the beginning with a focus on stress. Nevertheless, a trial-and-error process is carried out to establish representative relationships between the robot’s speed and the user’s emotions.
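The flavor of such a rule base can be sketched with triangular membership functions mapping a normalized stress estimate to a speed factor. This is a hypothetical illustration, not the rule set of the cited study; the function names, membership shapes, and thresholds are all assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership: 0 outside [a, c], peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def robot_speed(stress):
    """Map a stress estimate in [0, 1] to a speed factor in [0, 1] with
    three rules: low stress -> fast, medium -> moderate, high -> slow."""
    memberships = {
        "low": tri(stress, -0.5, 0.0, 0.5),
        "medium": tri(stress, 0.0, 0.5, 1.0),
        "high": tri(stress, 0.5, 1.0, 1.5),
    }
    speeds = {"low": 1.0, "medium": 0.5, "high": 0.1}  # rule consequents
    total = sum(memberships.values())
    # Weighted average of the consequents (a simple defuzzification step).
    return sum(memberships[k] * speeds[k] for k in speeds) / total

print(round(robot_speed(0.2), 3))  # calm operator -> relatively fast motion
print(round(robot_speed(0.9), 3))  # stressed operator -> slowed-down motion
```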
The interesting thing about this study is that it is possible to modify the robot’s behavior in a relatively simple way. While it is true that only motion-related variables are modified through the applied rules, the robot can modify its behavior based on measured EEG signals. The experimentation process, including the EEG sensor, the collaborative robot and the human, can be appreciated in Figure 8. Although no reinforcement learning is used during this research, the fact that it is possible to use brain signals and relate them directly to emotions such as stress, anxiety, and depression is a very important aspect of this work that could lead to the use of these signals in reinforcement learning investigations.
Once the possibility of modifying the behavior of a robot based on the emotions the user is feeling is established, new paradigms and new research areas appear. A study carried out in June 2022 [45] confirms that it is possible to modify the way a cobot behaves based on the feelings of a human being. The intention is to modify the behavior of the robot to achieve a level of empathy; in this case, the robot acts in a way closely mirroring the emotions felt by the human. The experiment is carried out in a simple and controlled environment, but it is very useful for demonstrating that it is indeed possible to modify the behavior of a robot based on the brain measurements of a human.
The most recent studies show that brainwave measurement is a reliable method for modifying a robot’s behavior. However, reinforcement learning is not an area where this approach is routinely applied, even though it allows for emergent behaviors and greater adaptability. Although most of the studies that apply RL focus only on the detection of errors in executions to reward or penalize an agent, it could be very useful to use similar techniques to modify the behavior of a collaborative robot depending on the user’s emotions.
Throughout this review article, the filtering of brain signals has been covered on several occasions. However, it is important to note that other relevant studies propose different techniques to filter the signals and achieve the desired emotion or potential-error detection. The use of convolutional neural networks (CNNs) is proposed as a valid method to filter and classify EEG signals. However, as the proposed classification was binary, considering only whether the experiment was done correctly or incorrectly, it is difficult to say whether this could be useful for more complex experiments where different emotions need to be taken into account [46].
The filtering of EEG signals and their subsequent classification is one of the biggest challenges in this technology [37]. In addition to that, everything related to brain signals is a relatively recent topic that has emerged during the last few years. That is why several techniques have been used in the quantification of signals that can enable Human-Robot Interaction.
In this section it became clear that the brain signals measured by EEG sensors are valid inputs for different investigations around collaborative robotics and reinforcement learning. In the following section, other relevant techniques related to the capture of different valuable signals for Human-Robot Interaction will be discussed.

3.2. Additional Human State Measuring Techniques for Collaborative Robotics

In this subsection several techniques related to the measurement of human state (apart from EEG) will be described in order to apply them to a human-robot environment. As a general overview, Figure 9 provides a detailed diagram of the human body, indicating the placement of various bio-sensors. Some bio-sensor devices, as highlighted in [47], will be explored to provide an overview of their various applications within collaborative robotics and reinforcement learning.
One of the techniques that has been useful in Human-Robot Interaction is eye tracking. An eye tracker is a device that records and follows a person’s eye movements during Human-Robot Interaction, allowing researchers to understand how a human focuses attention and responds to visual stimuli. Numerous studies have revealed that the eyes can play a significant role in communication and in anticipating the robot’s actions [48,49].
The eye tracker has been used in different investigations to determine the actions of a collaborative robot and also in stress detection. The idea behind using an eye tracker is the ease of reading the signals with the right device. The main drawback of this type of technology is the delay between reading and interpreting the signal, which can mean that the human is already in a completely different state. Stress detection is one of the most interesting areas for eye trackers, as it could allow the training of an RL agent along the lines of the investigations discussed in the previous paragraphs. For the detection of stress-related variables, it is important to take into account pupil diameter and the number of gaze fixations [50].
Another technology that allows the detection of different emotions and movements is computer vision. Computer vision refers to the application of different algorithms in combination with image and video processing techniques to identify gestures, postures and facial expressions in real time. The aim with this kind of techniques is to understand and analyze human behaviors and associated emotions [51].
One of the most interesting disciplines in computer vision is human activity recognition (HAR). These types of disciplines have wide applications in fields such as human-machine interaction, but also in robotics and video games, where HAR improves understanding of human intentions and emotions [51,52].
Thanks to human activity recognition, computer vision may have a place in robotics and reinforcement learning applications, as shown in an interesting recent study [53]. However, computer vision research normally targets the use of computer vision in anomaly detection, with few examples of its integration with reinforcement learning. Anomaly detection involves using computer vision techniques to identify unusual behaviors and events in videos or images, which plays a significant role in security surveillance systems [52,54].
Although computer vision is a widely used research technique, it has many drawbacks. Reliability under variable lighting conditions or with blurred images is usually a problem, as these systems often fail in harsh conditions. In addition, interpreting complex images or detecting objects in unusual situations still represents a significant challenge for computer vision. Finally, there is a risk of bias and discrimination in computer vision systems when training datasets are not properly representative [51].
Both eye trackers and cameras adapted for computer vision can be useful to detect different human-factors-related variables; however, they depend on many conditions, such as lighting, to function properly. Another useful technique, first used years ago to measure stress and fatigue, is cardiac rhythm monitoring. A cardiological study [55] reports a direct correlation between arrhythmias and tachycardias and the level of stress and fatigue a person is experiencing. Correct measurement with the right sensors can determine the level of stress, and therefore the emotions, that a person feels when interacting with a robot [55].
The variety of devices capable of measuring heart rate is quite wide, ranging from wearables such as smartwatches to bands or stickers placed on the chest [56]. Among all of them, it is necessary to take into account different limitations such as battery life, connectivity problems, signal quality, accuracy, security when handling data, and device pricing [56].
The studies mentioned above claim that heartbeat sensors are valuable tools for monitoring stress and fatigue [55]. That is because variability in heart rate can provide indications of a person’s emotional and physical state. However, when it comes to determining ErrPs or evaluating complex emotions and cognitive responses, such as those related to EEG, these sensors may not be the best choice due to their limited ability to capture detailed information about specific brain processes.
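The heart-rate variability mentioned above is typically quantified with standard metrics such as RMSSD, computed from the intervals between successive beats. The following sketch is illustrative only; the metric is standard, but its use as a stress proxy in a given experiment would require validation:

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences (RMSSD) between consecutive
    RR (inter-beat) intervals in milliseconds. Lower RMSSD is commonly
    associated with higher stress or sympathetic activation."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

For example, intervals of 800, 810, 790 and 805 ms yield successive differences of +10, -20 and +15 ms, giving an RMSSD of roughly 15.5 ms.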
In the context of exploring methodologies relevant to medical research, several technologies have been identified as effective in evaluating an individual’s stress levels or physiological state in real-time. Notably, the measurement of cortisol, a hormone intricately associated with stress response, emerges as a significant area of interest [57]. The precision offered by cortisol as a metric is high, yet the challenges lie in the actual process of measurement, particularly in achieving real-time data acquisition. Recent advancements have been made in real-time cortisol measurement, signaling a promising yet still emerging area of scientific exploration.
As research in the field advances, there is a notable increase in the variety of methods being developed to measure human interactions and stress-related emotions. Techniques such as monitoring breathing rate [58] and observing changes in facial complexion, including blushing [59], are indicators of stress levels. These technologies are essential in the realm of Human-Robot Interaction, providing a deeper understanding of human emotional states. Their integration into robotic systems holds the potential to significantly enhance the way robots interpret and respond to human emotions, leading to more empathetic and effective interactions.
Questionnaires and scales are self-reporting tools in which people answer questions about their emotional states and stress level after performing specific tasks. Two of the best-known examples are the Beck Depression Inventory (BDI) to assess depression and the Perceived Stress Scale (PSS) to measure perceived stress. The BDI is a broadly used tool for measuring the severity of depression in individuals; it was developed by psychologist Aaron T. Beck in 1961 [60,61]. The Perceived Stress Scale is used to assess how stressed a person feels in relation to his or her life experiences, circumstances and tasks [62]. Although numerous validated questionnaires already exist, it is always possible to design new ones to reveal the particular variables of interest in a given experiment.
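Scoring such scales is mechanical once the responses are collected. As an illustration, the 10-item PSS rates each item from 0 to 4 and reverse-scores its positively worded items (conventionally items 4, 5, 7 and 8), giving a total between 0 and 40; the sketch below assumes that standard layout:

```python
def score_pss10(responses):
    """Score the 10-item Perceived Stress Scale (PSS-10). Each item is rated
    0-4; the positively worded items (4, 5, 7, 8; 0-indexed: 3, 4, 6, 7)
    are reverse-scored. Totals range from 0 to 40."""
    assert len(responses) == 10 and all(0 <= r <= 4 for r in responses)
    reverse = {3, 4, 6, 7}
    return sum(4 - r if i in reverse else r for i, r in enumerate(responses))
```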
In conclusion, this subsection highlights various techniques for measuring human states in the context of Human-Robot Interaction. While EEG remains prominent, alternative methods like eye tracking, computer vision, and cardiac rhythm analysis offer valuable insights. Each method has its strengths and limitations, making their selection context dependent.

3.3. Immersive Technologies as a Safe Training Ground for Reinforcement Learning in Human-Interactive Robotics

Since the field of robotics often involves risk to humans, as robots are in direct contact with the user, it is necessary to determine the practicality of virtual reality in experiments related to collaborative robotics. Moreover, the aim of this section is to distinguish between virtual reality, mixed reality and augmented reality in order to analyze the possibility of applying each of them to future research projects.
Virtual Reality (VR) refers to a computer-generated environment that immerses the user in a simulated reality, often using a headset or other sensory input devices [63]. In VR, users can interact with and experience a digitally created world that can be entirely different from the physical world [64,65]. Augmented Reality (AR) overlays digital information or virtual elements onto the real-world environment [66]. AR enhances the user’s perception of the physical world by providing additional digital content or information, often through a mobile device’s camera or specialized glasses [67,68].
Mixed Reality (MR) combines elements of both virtual reality and augmented reality. In MR, digital objects or information are integrated into the real world in a way that allows them to interact with physical objects and the user’s environment [69].
In the field of augmented reality and collaborative robotics, a novel solution was proposed, involving a human-robot collaboration method that uses augmented reality to enhance construction waste sorting, aiming to improve both efficiency and safety [70]. Mixed reality is also becoming more popular due to some recently released products, and recent research supports the possibility of applying mixed reality in robotic environments to enhance the interactions between humans and robots [71].
Virtual reality is widely used to simulate real environments, aiming to prevent harm to humans and train users for specific tasks. It’s especially prevalent in the medical field, where virtual reality trains medical students to perform complex surgeries [72]. Additionally, this technology can be used to learn how to interact with robots in medical settings or other environments [73].
The intersection between virtual reality and human interaction is also a hot topic. A system that combines robotics with virtual reality to improve welding, by interpreting the welder’s intentions for robotic execution, was recently introduced. Utilizing virtual reality for intention recognition enables precise and smooth robotic welding, highlighting the potential of integrating robotics and virtual reality in skilled tasks [74].
On the other hand, in order to apply reinforcement learning to train a robotic system that takes into account the different emotions or states of a human being, the ideal solution is virtual reality, as these environments meet the safety requirements needed to guarantee that humans are not harmed during experimentation. A recent study [75] describes a method where the virtual reality environment adapts to participant behavior in real time through reinforcement learning, focusing on encouraging helping behavior towards a victim in a simulated violent assault scenario.
If the robotic training has already been carried out, virtual reality is also a reliable validation technique to test the performance of the system in real-life environments while complying with the required safety conditions. With the virtual reality headsets currently on the market, environments can be generated very simply. One example where this type of technology can be useful is interaction with a factory or robotic facility, where the moving parts that could endanger human safety are simulated by the devices [31,76].
In several studies presented in previous sections that implemented virtual reality to provide a safe environment for experimentation, it became clear that the results are very similar to those obtainable in a real environment. Safety in robotic environments is critical, as the integrity of the users is a major concern [76].

4. Discussion

The purpose of this section is to offer a general overview of the aforementioned studies. This is an important contribution to the world of reinforcement learning in collaborative environments where the human is taken into account and virtual reality is the key for testing.
For this reason, the first part of the section will provide a categorization of the most relevant studies and the areas they target. This organizational structure can be useful for future projects where the main objectives are focused on the intersection of the four areas proposed during the article: reinforcement learning, virtual reality, collaborative robotics, and human-robot sensors, such as EEG. A table of the principal research studies directly related to these topics will be presented in Table 1.
In the table where the four main areas of the review article intersect, clear patterns and trends can be observed. One notable finding is the high prevalence of studies combining human factor sensors with Collaborative Robotics. This trend suggests a strong synergy and interest in human-centered robotics applications. Furthermore, the integration of Reinforcement Learning in several of these studies indicates an inclination towards exploring advanced machine learning techniques in the context of Human-Robot Interaction.
Immersive technologies, on the other hand, seem to be a less explored area in comparison. However, when immersive technologies are mentioned, it is often done in combination with Reinforcement Learning. This could suggest an emerging field of interest that integrates augmented or virtual realities with adaptive learning strategies. The absence of certain thematic combinations, such as Immersive Technologies and Human Factors Sensing with the inclusion of other topics, highlights possible areas of opportunity for future research. The table effectively underscores the significance of the research, emphasizing the key intersections among the various themes. Despite the rising trend in these themes, it becomes apparent that there is a notable gap in substantial research involving the interaction of all four themes, underscoring their importance and potential for groundbreaking study.
After conducting the analysis, it is critical to investigate potential future directions and challenges that researchers could face. This section acts as an essential guide, directing individuals and organizations to well-informed choices and innovative methods that will define the path ahead in several areas of research in collaborative robotics, reinforcement learning and human factors.
One of the key future directions for the development of high-impact projects involves the ability to adapt a robot’s behavior according to the emotions experienced by a human [37,75]. This approach is of particular importance in the medical field, where robot adaptation to patient stress and anxiety is essential to provide effective and comfortable support [48]. However, this adaptation is not limited to the medical field; in any environment with collaborative robots, controlling and adjusting the level of stress to which the user is subjected becomes imperative to ensure efficient and respectful interactions with people’s emotional and psychological needs.
Building on the approach outlined in the previous paragraph, the next emerging studies in the field could focus on achieving substantial changes in robots’ behavior depending on the emotions and stress level of a specific user. This would require parametrization of stress signals to determine when a subject is in a non-optimal emotional state. Once it is possible to detect in real time whether a user is under high levels of stress [29], reinforcement learning can be applied to achieve emergent behaviors from the robot. For training, different strategies can be considered: either real-time training with human subjects or offline training [46,77] with models that replicate previously recorded human behaviors and emotions.
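One common way to couple such a parametrized stress signal to reinforcement learning is reward shaping, where the task reward is penalized whenever the estimated stress exceeds a comfort threshold. The sketch below is a minimal illustration under that assumption; the threshold, weight, and normalization are hypothetical, not drawn from the cited studies:

```python
def shaped_reward(task_reward, stress_level, comfort_threshold=0.7, penalty_weight=0.5):
    """Penalize the task reward when the user's estimated stress level
    (normalized to [0, 1]) exceeds a comfort threshold, so an RL agent is
    driven towards behaviors that keep the human in an acceptable state."""
    excess = max(0.0, stress_level - comfort_threshold)
    return task_reward - penalty_weight * excess
```

With this shaping, a calm interaction (stress 0.5) leaves the reward untouched, while a stressful one (stress 0.9) reduces it, steering the policy away from stress-inducing actions.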
For the recording of a user’s emotions, the various techniques discussed throughout the review are available. Although it is true that all of them can be applied successfully, there are pros and cons to each of them. That is why it is necessary to consider which one is the most suitable for each investigation.
Recording the feelings and emotions of a human when interacting with a robot can be done in a real environment or in a virtual environment. In terms of human safety, a virtual environment can be considered, which can also easily be adapted to the needs of the experiments [76].
In addition, for the results to be truly relevant, the experimental phase must be carried out with enough subjects to support a valid theory and behavior. That is why a sufficiently large group of people should be considered in order to train the robot. However, when testing the robot’s behavior, it is necessary to use different subjects from those involved in training to verify the results obtained and the training process.
Future research projects in the area of robotics, reinforcement learning and human factors can follow the lines mentioned above. However, there are still some unresolved questions that can be addressed in order to carry out a much more promising project. For example, can multiple collaborative robots, under a single controlling brain, work alongside a user while considering the user’s emotional state?
One of the most interesting prospects for the future of reinforcement learning is the possibility of conducting training in real environments using the technique known as "sim2real" (simulation to real world). This promising direction seeks to apply knowledge acquired in virtual environments, where iteration and learning are safer and more efficient, to real robots operating in the physical world [79]. This will open up new opportunities for automation and robotics in a wide range of applications, from manufacturing to space exploration, while addressing the challenges inherent in transferring skills from simulation to the real world. This sim2real transition has the potential to transform the way robots learn and adapt to real-world environments.
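A standard ingredient of sim2real transfer is domain randomization: sampling a fresh set of simulator parameters every training episode so the learned policy does not overfit to one exact physics configuration. The sketch below illustrates the idea; the parameter names and ranges are illustrative assumptions, not values from [79]:

```python
import random

def sample_sim_params(rng=None):
    """Domain randomization: draw per-episode simulator parameters so the
    policy is exposed to a distribution of dynamics rather than a single
    configuration. Ranges here are illustrative only."""
    rng = rng or random.Random()
    return {
        "friction": rng.uniform(0.4, 1.2),
        "payload_kg": rng.uniform(0.0, 2.0),
        "sensor_noise_std": rng.uniform(0.0, 0.05),
        "actuation_delay_ms": rng.uniform(0.0, 30.0),
    }
```

At the start of each simulated episode, the environment would be reconfigured with a new draw from this distribution before the agent acts.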
The integration of collaborative robotics and reinforcement learning in the workplace holds the promise of significantly impacting economic and social aspects. These technologies can automate repetitive and laborious tasks, thereby enhancing worker well-being and productivity [80]. However, this shift towards automation also presents challenges and opportunities, particularly in terms of its effects on job markets and the emergence of new types of employment [81]. Understanding the implications of these changes on job markets is noteworthy, as it can shed light on the potential effects on job satisfaction and workplace injuries [82]. By focusing on the human aspect of technological integration, it is possible to pave the way for a future where technology complements human capabilities, fostering collaboration and well-being [83]. Future investigations and projects may focus on such human-robot collaboration to improve human welfare and achieve better productivity. An interesting idea would be to achieve this through reinforcement learning and the different measurement devices discussed throughout the paper.
Another hot topic is the concept of Human Digital Twins (HDT), which represent a relatively new and powerful approach in the field of Human-Robot Interaction. This means creating detailed digital representations of human beings, offering a new opportunity to enhance how humans interact with robotic systems. The technology is in its early stages; however, it holds the promise of facilitating more intuitive, efficient, and personalized interactions between humans and robots [78].
As research ventures deeper into the realm of collaborative robotics, reinforcement learning and human interaction, it is imperative to address the ethical considerations that come with these technological advancements. The integration of robots into human-centric environments raises important questions about privacy, autonomy, and the potential for unintended consequences [84]. How do we ensure that these technologies respect individual privacy and autonomy? What measures do we take to prevent misuse or abuse of such technologies, particularly in sensitive areas like healthcare and personal assistance? [85] Furthermore, the potential emotional impact on humans interacting with robots—especially those designed to mimic human behaviors and emotions—must be carefully considered [45]. It is essential to develop and adhere to ethical guidelines and standards that prioritize human well-being, ensuring that the deployment of these technologies enhances, rather than diminishes, human experiences and values.
The application of collaborative robotics with reinforcement learning presents a very large horizon of possibilities. As further research continues to advance in this area, new paradigms and applications will emerge aiming to transform a variety of sectors, from manufacturing and logistics to health care and space exploration. This article has explored some of the most exciting possibilities for the future of these technologies. In this context, the article proposed a series of actions that can serve as a guide for professional researchers interested in harnessing the full potential of human factors, collaborative robotics and reinforcement learning.

5. Conclusions

In conclusion, this literature review on reinforcement learning applied to collaborative robots has demonstrated the potential of the intersection between robotics, reinforcement learning, virtual reality and human factors. Throughout this review, a variety of investigations have been explored that address the intricate task of incorporating user emotions and feelings into the design of collaborative artificial intelligent systems.
It is important to note that a wide variety of studies and research journals have been examined. There is additional relevant research not explicitly mentioned, because the effort has been focused on providing a cohesive and focused view of the topic, excluding research that, although valuable, did not directly relate to the intersection between collaborative robotics, reinforcement learning and the understanding of human emotions.
Finally, this review underscores the importance of further research and development of collaborative technologies that can understand and respond appropriately to emotional complexities, leading to a future in which human-machine interaction becomes more intuitive and empathetic. With that being said, future research should focus on developing collaborative robots that adapt their behavior based on real-time detection of users’ emotional states, such as stress levels. In this regard, reinforcement learning can significantly enhance the development of dynamic and empathetic Human-Robot Interactions. Additionally, selecting suitable methods for recording and detecting human-related variables is essential. This step, whether in real or simulated environments, is vital to ensure the validity of the research findings in this field.
This concluding section emphasizes the significance of exploring future directions and challenges in collaborative robotics, reinforcement learning, and human interaction. The need to adapt robot behavior to human emotions is an interesting area, for example, in medical contexts. The importance of recording user emotions, whether in real or virtual environments, should be a key focus for future research. It will be necessary to adapt to a future where the relationship between humans and robots must be appropriate to maintain stability in both personal and work environments.
As a final note from this analysis, it is clear that the synergy between collaborative robotics, reinforcement learning, and human emotions is not only an intriguing field of study, but also fundamental for future technological advances. As progress is made, the focus will need to be on building collaborative robots capable of sensing and responding to human emotions in real time. The systematic logging of human emotion responses, in both real and virtual environments, is indispensable for the authenticity and applicability of these new advances. This review, which encompasses a variety of studies, highlights the significant potential of these technologies to transform the way humans and machines collaborate.

Acknowledgments

This research was supported by the project ACROBA, which has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 101017284, and the project EGIA which has received funding from the ELKARTEK programme from the Basque Government.

Abbreviations

The following abbreviations are used in this manuscript:
EEG Electroencephalography
VR Virtual Reality
HRI Human-Robot Interaction
ICA Independent Component Analysis
SVM Support Vector Machine
KNN K-Nearest Neighbors
RL Reinforcement Learning
BCI Brain Computer Interfaces
ERP Event-Related Potentials
ErrP Error-related Potentials
HAR Human Activity Recognition
BDI Beck Depression Inventory
PSS Perceived Stress Scale
AR Augmented Reality
MR Mixed Reality
HDT Human Digital Twins

References

  1. Weiss, A.; Wortmeier, A.K.; Kubicek, B. Cobots in industry 4.0: A roadmap for future practice studies on human–robot collaboration. IEEE Trans. Hum. Mach. Syst. 2021, 51, 335–345. [Google Scholar] [CrossRef]
  2. Sherwani, F.; Asad, M.M.; Ibrahim, B. Collaborative Robots and Industrial Revolution 4.0 (IR 4.0). 2020 International Conference on Emerging Trends in Smart Technologies (ICETST). IEEE, 2020. [CrossRef]
  3. Parsons, H. Human factors in industrial robot safety. Journal of Occupational Accidents 1986, 8, 25–47. [Google Scholar] [CrossRef]
  4. Bauer, W.; Bender, M.; Braun, M.; Rally, P.; Scholtz, O. Lightweight robots in manual assembly – best to start simply! Examining companies’ initial experiences with lightweight robots; 2016.
  5. Pearce, M.; Mutlu, B.; Shah, J.; Radwin, R. Optimizing makespan and ergonomics in integrating collaborative robots into manufacturing processes. IEEE Trans. Autom. Sci. Eng. 2018, 15, 1772–1784. [Google Scholar] [CrossRef]
  6. Simone, V.D.; Pasquale, V.D.; Giubileo, V.; Miranda, S. Human-Robot Collaboration: an analysis of worker’s performance. Procedia Comput. Sci. 2022, 200, 1540–1549. [Google Scholar] [CrossRef]
  7. Kragic, D.; Gustafson, J.; Karaoguz, H.; Jensfelt, P.; Krug, R. Interactive, Collaborative Robots: Challenges and Opportunities. Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence; International Joint Conferences on Artificial Intelligence Organization: California, 2018. [Google Scholar]
  8. Sheridan, T.B. Human-robot interaction: Status and challenges. Hum. Factors 2016, 58, 525–532. [Google Scholar] [CrossRef]
  9. Kober, J.; Bagnell, J.A.; Peters, J. Reinforcement learning in robotics: A survey. Int. J. Rob. Res. 2013, 32, 1238–1274. [Google Scholar] [CrossRef]
  10. Kober, J.; Bagnell, J.A.; Peters, J. Reinforcement learning in robotics: A survey. The International Journal of Robotics Research 2013, 32, 1238–1274. [Google Scholar] [CrossRef]
  11. Kormushev, P.; Calinon, S.; Caldwell, D. Reinforcement Learning in Robotics: Applications and Real-World Challenges. Robotics 2013, 2, 122–148. [Google Scholar] [CrossRef]
  12. Brunke, L.; Greeff, M.; Hall, A.W.; Yuan, Z.; Zhou, S.; Panerati, J.; Schoellig, A.P. Safe Learning in Robotics: From Learning-Based Control to Safe Reinforcement Learning. Annual Review of Control, Robotics, and Autonomous Systems 2022, 5, 411–444. [Google Scholar] [CrossRef]
  13. Sutton, R.S.; Barto, A.G. Reinforcement learning: An introduction; MIT press, 2018.
  14. Maurtua, I.; Ibarguren, A.; Kildal, J.; Susperregi, L.; Sierra, B. Human–robot collaboration in industrial applications: Safety, interaction and trust. International Journal of Advanced Robotic Systems 2017, 14, 172988141771601. [Google Scholar] [CrossRef]
  15. Wang, W.; Chen, Y.; Li, R.; Jia, Y. Learning and comfort in human–robot interaction: A review. Appl. Sci. (Basel) 2019, 9, 5152. [Google Scholar] [CrossRef]
  16. Sawangjai, P.; Hompoonsup, S.; Leelaarporn, P.; Kongwudhikunakorn, S.; Wilaiprasitporn, T. Consumer Grade EEG Measuring Sensors as Research Tools: A Review. IEEE Sensors Journal 2020, 20, 3996–4024. [Google Scholar] [CrossRef]
  17. Burdea, G. Invited review: the synergy between virtual reality and robotics. IEEE Transactions on Robotics and Automation 1999, 15, 400–410. [Google Scholar] [CrossRef]
  18. Kaelbling, L.P.; Littman, M.L.; Moore, A.W. Reinforcement learning: A survey. J. Artif. Intell. Res. 1996, 4, 237–285. [Google Scholar] [CrossRef]
  19. Salazar-Gomez, A.F.; DelPreto, J.; Gil, S.; Guenther, F.H.; Rus, D. Correcting robot mistakes in real time using EEG signals. 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017.
  20. Goodrich, M.A.; Schultz, A.C. Human-Robot Interaction: A Survey. Foundations and Trends® in Human-Computer Interaction 2007, 1, 203–275. [Google Scholar] [CrossRef]
  21. Rodrigues, P.B.; Singh, R.; Oytun, M.; Adami, P.; Woods, P.J.; Becerik-Gerber, B.; Soibelman, L.; Copur-Gencturk, Y.; Lucas, G.M. A multidimensional taxonomy for human-robot interaction in construction. Automation in Construction 2023, 150, 104845. [Google Scholar] [CrossRef]
  22. Slater, M.; Sanchez-Vives, M.V. Enhancing our lives with immersive virtual reality. Front. Robot. AI 2016, 3. [Google Scholar] [CrossRef]
  23. Google Scholar Search Engine. https://scholar.google.com. Accessed on January 1, 2024.
  24. Scopus Database. https://www.scopus.com. Accessed on January 1, 2024.
  25. Scopus. Search results for ’reinforcement learning AND collaborative robotics’ with publications between 2012 and 2024. https://www.scopus.com, 2024. Accessed on January 1, 2024.
  26. Scopus. Search results for ’reinforcement learning AND EEG’ with publications between 2012 and 2024. https://www.scopus.com, 2024. Accessed on January 1, 2024.
  27. Scopus. Search results for ’reinforcement learning AND virtual reality AND collaborative robotics’ with publications between 2012 and 2024. https://www.scopus.com, 2024. Accessed on January 1, 2024.
  28. Scopus. Search results for ’reinforcement learning AND EEG AND collaborative robotics’ with publications between 2012 and 2024. https://www.scopus.com, 2024. Accessed on January 1, 2024.
  29. Toichoa Eyam, A.; Mohammed, W.M.; Martinez Lastra, J.L. Emotion-driven analysis and control of human-robot interactions in collaborative applications. Sensors (Basel) 2021, 21, 4626. [Google Scholar] [CrossRef]
  30. Alarcão, S.M.; Fonseca, M.J. Emotions Recognition Using EEG Signals: A Survey. IEEE Transactions on Affective Computing 2019, 10, 374–393. [Google Scholar] [CrossRef]
  31. Shayesteh, S.; Ojha, A.; Jebelli, H. Workers’ trust in collaborative construction robots: EEG-based trust recognition in an immersive environment. In Automation and Robotics in the Architecture, Engineering, and Construction Industry; Springer International Publishing: Cham, 2022; pp. 201–215. [Google Scholar]
  32. Caruana, R.; Niculescu-Mizil, A. An empirical comparison of supervised learning algorithms. Proceedings of the 23rd international conference on Machine learning - ICML ’06. ACM Press, 2006, ICML ’06. [CrossRef]
  33. Pontil, M.; Verri, A. Properties of Support Vector Machines. Neural Computation 1998. [Google Scholar] [CrossRef]
  34. Akinola, I.; Wang, Z.; Shi, J.; He, X.; Lapborisuth, P.; Xu, J.; Watkins-Valls, D.; Sajda, P.; Allen, P. Accelerated Robot Learning via Human Brain Signals, 2019. [CrossRef]
  35. Zhang, R.; Lv, Q.; Li, J.; Bao, J.; Liu, T.; Liu, S. A reinforcement learning method for human-robot collaboration in assembly tasks. Robot. Comput. Integr. Manuf. 2022, 73, 102227. [Google Scholar] [CrossRef]
  36. Lagomarsino, M.; Lorenzini, M.; Constable, M.D.; De Momi, E.; Becchio, C.; Ajoudani, A. Maximising Coefficiency of Human-Robot Handovers through Reinforcement Learning 2023. [CrossRef]
  37. Iturrate, I.; Montesano, L.; Minguez, J. Robot reinforcement learning using EEG-based reward signals. 2010 IEEE International Conference on Robotics and Automation. IEEE, 2010.
  38. Millán, J.D.R. Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges. Frontiers in Neuroscience 2010, 1. [Google Scholar] [CrossRef] [PubMed]
  39. Gehring, W.J.; Goss, B.; Coles, M.G.H.; Meyer, D.E.; Donchin, E. A Neural System for Error Detection and Compensation. Psychological Science 1993, 4, 385–390. [Google Scholar] [CrossRef]
  40. Watkins, C.J.C.H.; Dayan, P. Q-learning. Machine Learning 1992, 8, 279–292. [Google Scholar] [CrossRef]
  41. Kar, R.; Ghosh, L.; Konar, A.; Chakraborty, A.; Nagar, A.K. EEG-induced autonomous game-teaching to a robot arm by human trainers using reinforcement learning. IEEE Trans. Games 2022, 14, 610–622. [Google Scholar] [CrossRef]
  42. Yeung, N.; Botvinick, M.M.; Cohen, J.D. The neural basis of error detection: Conflict monitoring and the error-related negativity. Psychol. Rev. 2004, 111, 931–959. [Google Scholar] [CrossRef]
  43. Ferrez, P.W.; del R Millan, J. Error-related EEG potentials generated during simulated brain-computer interaction. IEEE Trans. Biomed. Eng. 2008, 55, 923–929. [Google Scholar] [CrossRef]
  44. Lotte, F.; Congedo, M.; Lécuyer, A.; Lamarche, F.; Arnaldi, B. A Review of Classification Algorithms for EEG-based Brain–computer Interfaces. Journal of Neural Engineering 2007.
  45. Borboni, A.; Elamvazuthi, I.; Cusano, N. EEG-based empathic safe cobot. Machines 2022, 10, 603.
  46. Luo, T.J.; Fan, Y.C.; Lv, J.T.; Zhou, C.L. Deep reinforcement learning from error-related potentials via an EEG-based brain-computer interface. 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE, 2018.
  47. Shu, L.; Xie, J.; Yang, M.; Li, Z.; Li, Z.; Liao, D.; Xu, X.; Yang, X. A Review of Emotion Recognition Using Physiological Signals. Sensors 2018, 18, 2074.
  48. Onose, G.; Grozea, C.; Anghelescu, A.; Daia, C.; Sinescu, C.J.; Ciurea, A.V.; Spircu, T.; Mirea, A.; Andone, I.; Spânu, A.; Popescu, C.; Mihăescu, A.S.; Fazli, S.; Danóczy, M.; Popescu, F. On the feasibility of using motor imagery EEG-based brain-computer interface in chronic tetraplegics for assistive robotic arm control: a clinical test and long-term post-trial follow-up. Spinal Cord 2012, 50, 599–608.
  49. Onnasch, L.; Schweidler, P.; Schmidt, H. The potential of robot eyes as predictive cues in HRI-an eye-tracking study. Front. Robot. AI 2023, 10, 1178433.
  50. Mariscal, M.A.; Ortiz Barcina, S.; García Herrero, S.; López Perea, E.M. Working with collaborative robots and its influence on levels of working stress. Int. J. Comput. Integr. Manuf. 2023, 1–20.
  51. Pérez, L.; Rodríguez, Í.; Rodríguez, N.; Usamentiaga, R.; García, D.F. Robot guidance using machine vision techniques in industrial environments: A comparative review. Sensors (Basel) 2016, 16, 335.
  52. Beddiar, D.R.; Nini, B.; Sabokrou, M.; Hadid, A. Vision-based human activity recognition: a survey. Multimed. Tools Appl. 2020, 79, 30509–30555.
  53. Zhu, X.; Liang, Y.; Sun, H.; Wang, X.; Ren, B. Robot obstacle avoidance system using deep reinforcement learning. Ind. Rob. 2022, 49, 301–310.
  54. Mohindru, V.; Singla, S. A review of anomaly detection techniques using computer vision. In Lecture Notes in Electrical Engineering; Springer: Singapore, 2021; pp. 669–677.
  55. Stamler, J.S.; Goldman, M.E.; Gomes, J.; Matza, D.; Horowitz, S.F. The effect of stress and fatigue on cardiac rhythm in medical interns. J. Electrocardiol. 1992, 25, 333–338.
  56. Xintarakou, A.; Sousonis, V.; Asvestas, D.; Vardas, P.E.; Tzeis, S. Remote cardiac rhythm monitoring in the era of smart wearables: Present assets and future perspectives. Front. Cardiovasc. Med. 2022, 9, 853614.
  57. Hellhammer, D.H.; Wüst, S.; Kudielka, B.M. Salivary cortisol as a biomarker in stress research. Psychoneuroendocrinology 2009, 34, 163–171.
  58. Carere, C.; van Oers, K. Shy and bold great tits (Parus major): body temperature and breath rate in response to handling stress. Physiology & Behavior 2004, 82, 905–912.
  59. Leary, M.R.; Britt, T.W.; Cutlip, W.D.; Templeton, J.L. Social blushing. Psychol. Bull. 1992, 112, 446–460.
  60. Jackson-Koku, G. Beck depression inventory. Occup. Med. (Lond.) 2016, 66, 174–175.
  61. Beck, A.T.; Steer, R.A.; Brown, G. Beck Depression Inventory–II. PsycTESTS Dataset, 2011.
  62. Jumani, A.K.; Siddique, W.A.; Laghari, A.A.; Abro, A.; Khan, A.A. Virtual reality and augmented reality for education. In Multimedia Computing Systems and Virtual Reality; CRC Press: Boca Raton, 2022; pp. 189–210.
  63. LaValle, S.M. Virtual Reality; Cambridge University Press: Cambridge, England, 2023.
  64. Parong, J.; Mayer, R.E. Learning science in immersive virtual reality. J. Educ. Psychol. 2018, 110, 785–797.
  65. Brenneis, D.J.A.; Parker, A.S.; Johanson, M.B.; Butcher, A.; Davoodi, E.; Acker, L.; Botvinick, M.M.; Modayil, J.; White, A.; Pilarski, P.M. Assessing Human Interaction in Virtual Reality With Continually Learning Prediction Agents Based on Reinforcement Learning Algorithms: A Pilot Study. 2021.
  66. Caudell, T.P.; Mizell, D.W. Augmented reality: an application of heads-up display technology to manual manufacturing processes. Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences. IEEE, 1992.
  67. Craig, A.B. Understanding Augmented Reality: Concepts and Applications; Morgan Kaufmann, 2013.
  68. Berryman, D.R. Augmented reality: a review. Med. Ref. Serv. Q. 2012, 31, 212–218.
  69. Hughes, C.; Stapleton, C.; Hughes, D.; Smith, E. Mixed reality in education, entertainment, and training. IEEE Computer Graphics and Applications 2005, 25, 24–30.
  70. Chen, J.; Fu, Y.; Lu, W.; Pan, Y. Augmented reality-enabled human-robot collaboration to balance construction waste sorting efficiency and occupational safety and health. Journal of Environmental Management 2023, 348, 119341.
  71. Szczurek, K.A.; Cittadini, R.; Prades, R.M.; Matheson, E.; Di Castro, M. Enhanced Human–Robot Interface With Operator Physiological Parameters Monitoring and 3D Mixed Reality. IEEE Access 2023, 11, 39555–39576.
  72. Covaciu, F.; Crisan, N.; Vaida, C.; Andras, I.; Pusca, A.; Gherman, B.; Radu, C.; Tucan, P.; Al Hajjar, N.; Pisla, D. Integration of Virtual Reality in the Control System of an Innovative Medical Robot for Single-Incision Laparoscopic Surgery. Sensors 2023, 23, 5400.
  73. Lee, J.Y.; Mucksavage, P.; Kerbl, D.C.; Huynh, V.B.; Etafy, M.; McDougall, E.M. Validation Study of a Virtual Reality Robotic Simulator—Role as an Assessment Tool? Journal of Urology 2012, 187, 998–1002.
  74. Wang, Q.; Jiao, W.; Yu, R.; Johnson, M.T.; Zhang, Y. Virtual Reality Robot-Assisted Welding Based on Human Intention Recognition. IEEE Transactions on Automation Science and Engineering 2020, 17, 799–808.
  75. Rovira, A.; Slater, M. Encouraging bystander helping behaviour in a violent incident: a virtual reality study using reinforcement learning. Sci. Rep. 2022, 12, 3843.
  76. Badia, S.B.i.; Silva, P.A.; Branco, D.; Pinto, A.; Carvalho, C.; Menezes, P.; Almeida, J.; Pilacinski, A. Virtual reality for safe testing and development in collaborative robotics: Challenges and perspectives. Electronics (Basel) 2022, 11, 1726.
  77. Ghadirzadeh, A.; Chen, X.; Yin, W.; Yi, Z.; Björkman, M.; Kragic, D. Human-centered collaborative robots with deep reinforcement learning. arXiv 2020, arXiv:2007.01009.
  78. Wang, B.; Zhou, H.; Li, X.; Yang, G.; Zheng, P.; Song, C.; Yuan, Y.; Wuest, T.; Yang, H.; Wang, L. Human Digital Twin in the context of Industry 5.0. Robot. Comput. Integr. Manuf. 2024, 85, 102626.
  79. Zhao, W.; Queralta, J.P.; Westerlund, T. Sim-to-Real Transfer in Deep Reinforcement Learning for Robotics: a Survey. 2020 IEEE Symposium Series on Computational Intelligence (SSCI). IEEE, 2020.
  80. Marmpena, M.; Garcia, F.; Lim, A.; Hemion, N.; Wennekers, T. Data-driven emotional body language generation for social robotics. 2022.
  81. Paolillo, A.; Colella, F.; Nosengo, N.; Schiano, F.; Stewart, W.; Zambrano, D.; Chappuis, I.; Lalive, R.; Floreano, D. How to compete with robots by assessing job automation risks and resilient alternatives. Sci. Robot. 2022, 7, eabg5561.
  82. Koster, S.; Brunori, C. What to do when the robots come? Non-formal education in jobs affected by automation. Int. J. Manpow. 2021, 42, 1397–1419.
  83. Dunstan, B.J.; Koh, J.T.K.V. A cognitive model for human willingness in human-robot interaction development. SIGGRAPH Asia 2014 Designing Tools For Crafting Interactive Artifacts; ACM: New York, NY, USA, 2014.
  84. van Maris, A.; Zook, N.; Caleb-Solly, P.; Studley, M.; Winfield, A.; Dogramadzi, S. Designing ethical social robots-A longitudinal field study with older adults. Front. Robot. AI 2020, 7, 1.
  85. Draper, H.; Sorell, T. Ethical values and social care robots for older people: an international qualitative study. Ethics Inf. Technol. 2017, 19, 49–68.
Figure 1. The various levels of cooperation between a human worker and a robot. [4]
Figure 2. Classical RL-Loop. [13]
Figure 3. Number of documents per year - Reinforcement learning and collaborative robotics (2012-2023) [25].
Figure 4. Number of documents per year - Reinforcement learning and EEG (2012-2023) [26].
Figure 5. Number of documents per year - Reinforcement learning, collaborative robotics and virtual reality (2012-2023) [27].
Figure 6. Number of documents per year - Reinforcement learning, EEG and collaborative robotics (2012-2023) [28].
Figure 7. The five different brain waves: Delta, theta, alpha, beta, and gamma [30].
Figure 8. Real experimentation using EEG for an assembly task [29].
Figure 9. Different bio-sensors and their position in the human body [47].
Table 1. Categorical classification of relevant documents at the intersection of the four main areas of this review.
Document Reference Reinforcement Learning Immersive technologies Human-Factor Sensors Collaborative Robotics
Toichoa Eyam, A. et al. (2021) [29]
Borboni, A. et al. (2022) [45]
Shayesteh,S. et al. (2022) [31]
Zhang, R. et al. (2022) [35]
Lagomarsino, M. et al (2023) [36]
Brenneis, D.J.A. et al. (2021) [65]
Rovira, A. et al. (2022) [75]
Iturrate, I. et al. (2010) [37]
Kar, R. et al. (2022) [41]
Luo, T.J. et al. (2018) [46]
Badia, S.B.i. et al. (2022) [76]
Ghadirzadeh, A. et al. (2020) [77]
Kragic, D. et al. (2016) [7]
Salazar-Gomez, A.F. et al. (2017) [19]
Onose, G. et al. (2012)[48]
Simone, V.D. et al. (2022) [6]
Pearce, M. et al. (2018) [5]
Sheridan, T.B. et al. (2016) [8]
Zhu, X. et al. (2022) [53]
Wang, B. et al. (2022) [78]
Note: a topic is marked with ✓ if it is relevant throughout the document.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.