Reports of problems with autonomous systems and intelligent machines are not new in the scientific literature. The first evidence and discussions of negative effects produced by interaction with these technologies were reported in the 1980s and 1990s (e.g., [2,54,55]), mainly in the aircraft domain (e.g., [56,57]). Today, most scientists and engineers acknowledge that introducing autonomy into a task is not just a “substitution” of an intelligent system or a machine for a human activity [58]. Autonomous systems do not automatically reduce the amount of work that a human agent needs to allocate to a task, nor do they necessarily make the user’s experience easier. Fifty years of research teach us that the right balance between autonomy and human control must be considered carefully before introducing automated functions and machines [59]. Otherwise, autonomous systems can have a substantial detrimental effect on human decision-making, which can result in dramatic consequences for performance and safety [2]. In this section, we will review the main findings in the scientific literature on the negative effects reported when autonomous systems are introduced into human activity. In doing so, we will restrict the presentation to a description of the effects produced on performance and decision-making. Mediating factors and concepts (e.g., loss of situation awareness, complacency) explaining these outcomes will be discussed in the second part of the manuscript.
The first negative effect that we can discuss concerns, somewhat ironically, multi-tasking. As we have seen in the previous section, autonomous systems are designed to perform tasks that were initially performed by a human agent, with the consequence of reducing the number of actions a user has to perform and/or supporting multi-tasking. However, this benefit holds only as long as autonomy is properly designed and implemented, and as long as the human agent is properly trained in its use. Just as performance decreases with manual multi-tasking, introducing too many automated tasks to supervise can also have a detrimental effect on performance (for a review, see for example [60]). For instance, Chen and Joyner [61] reported that performance on a target gunnery task in a simulated mounted combat system decreased with the introduction of an additional automated task, particularly at low levels of autonomy, while perceived workload increased. Wang et al. [62], in a search and rescue task with multiple robots, found that performance on exploring the environment, on the one hand, and on searching a screen for targets to rescue, on the other hand, increased with the number of robots involved in the mission (4, 8, or 12). However, the authors found that performance decreased from 8 to 12 robots when participants had to control both exploration and on-screen search, and that perceived workload increased with the number of robots in each condition (see also [63,64] for similar results). These examples show that the introduction of autonomous systems is not, by itself, enough to improve human multi-tasking performance, and that, on the contrary, an inappropriate level of automation and task allocation can lead to substantial performance decrements and more errors by the human user.
In particular, studies on multi-tasking suggest that human agents are especially prone to overreliance, which is certainly one of the most important negative effects in human-autonomous system interaction. Overreliance is broadly defined as the tendency of humans interacting with autonomous systems to use the output of the systems (e.g., information cues) as heuristics to reduce effortful activities such as searching for and processing information [65]. More specifically, overreliance is said to occur when the performance of a user decreases because of incorrect information and/or decisions made by an autonomous system [25,66]. It manifests in two types of errors: omission errors and commission errors. An omission error occurs when an autonomous system fails to signal a significant event (e.g., a weapon not detected in a luggage screening task), which results in the user not taking the appropriate decision in that situation (e.g., not checking the screened luggage). Conversely, in a commission error the system makes an incorrect decision about the environment or gives incorrect advice (e.g., the system assumes that there is a weapon in a screened luggage when there is not), which, in turn, also results in an inappropriate response by the user. Thus, overreliance corresponds to the fact that incorrect information or decision cues from the autonomous system, rather than the actual environment, control the decisions and actions of the human agent. Examples of this phenomenon have been reported in almost every task and domain involving autonomy [25,66]. For a long time, overreliance was associated with autonomy used in multi-tasking situations. For example, Mosier et al. [28] found both omission and commission errors in pilots tested in a simulated flight task in which multiple flying tasks had to be monitored and supported by partially unreliable autonomous systems (see also [67,68,69]). However, it now seems evident that overreliance also affects single-task environments [70]. For example, Alberdi et al. [71] found omission errors in a computer-assisted detection task for mammography, while Goddard et al. [72] reported commission errors caused by a clinical decision-support system in a prescription task. In the command-and-control study by Rovira et al. [21] cited above, despite the positive effect of automated decision support on the correct responses made by the participants, the authors also found incorrect responses during unreliable trials. In summary, overreliance occurs in both single- and multiple-task environments, manifests in both omission and commission errors, and is observed for both information and decision automation.
In addition to inappropriate autonomous-task allocation and overreliance, another negative effect of autonomy is the loss of skills ([55]; or skill decay). Loss of skills refers to a deterioration in task performance (motor or cognitive) after more or less prolonged experience of a user with automated tasks. A driver who has difficulty driving an old car after a long period of driving a very modern one with many automated assistance functions is an illustration of this effect. In the scientific literature, evidence of loss of skills was found, for example, in fine-motor flying skills [73] or flight planning [74]. This effect is particularly critical when the system fails and the human operator has to take back manual control of the task. In this context, there is evidence of a decrement in return-to-manual control after system failure. For example, Endsley and Kiris [55] found that decision response times in a navigation task increased when participants had to respond manually and unexpectedly after a period of automated assistance. Similar results were found by Manzey et al. [27], with the highest return-to-manual decrements at higher levels of autonomy (see also [75,76]).
To conclude this section, we would like to discuss another intriguing aspect of human-autonomous system interaction that has attracted growing interest in recent years, namely, the effect of autonomy on human agency [77]. Human agency (or “sense of agency”) refers to the individual experience of controlling one’s own actions and, through those actions, outcomes in the external environment [78]. Recently, scientists have become interested in how interaction with autonomous systems influences how much people feel in control of their own actions. One of the first demonstrations of an effect of autonomy on the sense of agency came from a study by Berberian et al. [26]. In an aircraft supervision task, the authors found a decrease in agency with the introduction of task autonomy, with agency reduced at higher levels of autonomy compared to lower levels (see also [79,80]). This finding is relevant in the context of our review because agency is supposed to play a role in the attribution of responsibility and in the motivation of goal-directed behaviors [81,82]. For example, in the social domain, Caspar and colleagues [83,84,85] found that a decrease in participants’ sense of agency was correlated with an increase in anti-social behaviors. Thus, the evidence that autonomy can reduce the sense of agency of human users points to potential misuses, in addition to the ones described above, particularly in moral or sensitive domains. Consider again the example of a combat drone operator engaged on a battlefield, exposed to the risk of civilian losses and infrastructure damage during attacks. In this case, a decrement in the operator’s sense of agency, combined with omission or commission errors, might have dramatic consequences in terms of human life. It is clear from this situation that the negative effects of autonomous systems are not just annoying “side-effects” without real importance. If we want to avoid such dramatic incidents, we need to understand how interaction with autonomous systems might change our behaviors and our decisions when we face moral situations.