1. Introduction
Cyber-physical systems (CPSs) have become ubiquitous, raising issues of adequate design and of reliable interdependencies between their different components (technical and social), a matter of large societal impact in itself [1,2]. Some of these CPSs are specially designed with interfaces able to convey social communication (such as chatbots in various applications) or education (e.g. LEGO [3] or MIRO [4] robots), as well as to perform psycho-social or pedagogical rehabilitation roles [5,6,7,8].
Numerous factors of user acceptance of such socially functioning CPSs, including cognitive and neuro-cognitive mechanisms, have been intensively studied recently [9]. Psychological effects like reactance to a robot [10] or the uncanny valley phenomenon [11] suggest that CPSs are perceived as a special type of ‘being’, sharing features of a non-living entity/machine with those of a living (human or animal) being. Several theoretical accounts of the emergent ‘human-robot interaction realm’ have been put forward in recent years in support of the following view: it is not straightforward to predict the conditions (extrinsic or intrinsic) for smooth human-robot interaction simply by following the already available knowledge of human-human interactions. The reason is that technology in general, and robots/CPSs in particular, present a novel perceptual, categorical and interactive stimulus for the cognitive system of the user. The psychophysical effects caused by such a novel stimulus need to be explored further (theoretically and experimentally) to establish the relevant cognitive and socio-cognitive laws guiding user behavior in the present-day complex, technologically-mediated social environment.
Some important accounts of robot acceptance by the human are formulated in recent theories, such as the theory of violation of predictive coding [12,13,14], realism inconsistency theory [15], the theory of distortion of categorical perception [16], the theory of robot mediation in social control [18], and several others. An interesting question is whether the subjective response to the robot is independent (in probabilistic terms) from the response to a human, to any non-human, or to both. A positive answer to this question may signify the emergence and validity of a novel subjective socio-cultural category of everyday agents, commonly called social robots, or social CPSs [19,20].
The theoretical account of social CPS acceptance proposed in the present paper follows Moore’s theory of perceptual distortion at category boundaries [16]. Such distortion is most likely to occur where the distributions of the subjective effects in the mental representation, produced by the encounters of the cognitive system with a human or a non-human entity (probabilistically independent events), overlap. We assume probabilistic independence not only between the distributions of the psychological effects caused by the categories ‘human – non-human’, but also between those of the categories ‘human – robot’ and ‘robot – non-human’ (Figure 1). As will be discussed later in the paper, the actual relation is assumed to be similar to the so-called ‘near independence’ in the probabilistic theory of Tulving and Wiseman [21]: a distortion of the normal distribution of the psychological effects towards a somewhat bigger feature overlap, resulting in ‘perceptual confusion’ and possibly causing a negative reaction towards the robot in cases of close perceptual resemblance of the robot to a human/living being.
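The contrast between exact independence and ‘near independence’ can be sketched numerically. Under exact independence the joint probability of two categorical responses is the product of their marginals; the Tulving–Wiseman function instead places the conditional probability slightly above the marginal, P(A|B) ≈ P(A) + c·[P(A) − P(A)²] with c ≈ 0.5, which produces the somewhat bigger overlap described above. The probabilities below are purely illustrative, not data from the paper:

```python
def independent_joint(p_a, p_b):
    """Joint probability of two category responses under exact independence."""
    return p_a * p_b

def tulving_wiseman_conditional(p_a, c=0.5):
    """Conditional P(A|B) under 'near independence' in the form of the
    Tulving-Wiseman function: slightly above the marginal P(A)."""
    return p_a + c * (p_a - p_a ** 2)

# Illustrative marginal probabilities of two category responses
p_robot, p_human = 0.4, 0.6

exact = independent_joint(p_robot, p_human)            # product of marginals
near = tulving_wiseman_conditional(p_robot) * p_human  # larger joint overlap
print(exact, near)
```

The larger joint probability in the ‘near independence’ case corresponds to the bigger feature overlap between categories, and hence to the predicted ‘perceptual confusion’.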
A recent systematic review of human acceptance of social robots provides evidence that humans generally have a positive attitude towards social robots, hypothesizing the following intriguing possibility: to “acknowledge the qualities that mark social robots as not just another technological development but perhaps as an entire new social group with its own complexity” [22], citing the earlier work of one of the authors [23]. In the latter work a new ontological category is proposed: that of robots as technological tools which are perceived as more than just machines, i.e. as entities possessing some distinctive features of ‘agenthood’ [24]. The present paper considers the level of the psychophysical response to the newly emergent complex stimulus – the CPS or the robot – with its instantly presented perceptual features of physical, technical, technological, bio-physical and social appearance.
Psychophysics is traditionally viewed as a science concerned with projecting a mental topology onto physical reality, obeying certain mathematical laws of transformation. Examples of such laws are: Weber’s law of detection of just noticeable differences (JNDs) [25], Fechner’s law of intensity discrimination [26], Stevens’ power law of transformation of stimulus intensity into intensity of sensation [27], and Tulving’s theory of trace ‘near independence’ when memorizing a stimulus [21]. It was convincingly demonstrated in recent works that the manifestation of these cognitive effects, which form the foundation of psychophysics, can be meaningfully accounted for in probabilistic terms and models (as can higher-level cognitive representations) [28,29]. The latter assume that any sensation coming from the external environment produces at its entry point not a single reaction/response, but a distribution of sensory effects, translating the physical intensity via electro-chemical reactions into a set of discrete electrical reactions of the neuron, which forward the signal distribution towards the generation of complex psychophysical and psychological phenomena [29].
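The laws listed above have compact quantitative forms, which the following sketch summarizes (all constants are arbitrary illustrative values; the Stevens exponent of about 0.33 is a commonly cited figure for brightness):

```python
import math

def weber_jnd(intensity, k=0.1):
    """Weber's law: the just noticeable difference grows in
    proportion to stimulus intensity (delta_I = k * I)."""
    return k * intensity

def fechner_sensation(intensity, i0=1.0, k=1.0):
    """Fechner's law: sensation is logarithmic in intensity
    relative to the absolute threshold i0 (S = k * ln(I / i0))."""
    return k * math.log(intensity / i0)

def stevens_sensation(intensity, k=1.0, a=0.33):
    """Stevens' power law: S = k * I**a, with the exponent a
    depending on the sensory modality."""
    return k * intensity ** a

# Doubling the intensity adds a constant amount to the Fechner sensation...
s1, s2 = fechner_sensation(10.0), fechner_sensation(20.0)
# ...but multiplies the Stevens sensation by a constant factor.
r = stevens_sensation(20.0) / stevens_sensation(10.0)
print(s2 - s1, r)
```

The contrast in the last two lines (additive vs. multiplicative effect of doubling the stimulus) is the classical point of divergence between the Fechner and Stevens formulations.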
A bridging theory between sensory psychophysics and the higher psychological levels of decision making is the theory of Thurstone [30,31], who proposed the law of comparative judgement, which is largely compatible with the other psychophysical laws from a probabilistic perspective, as demonstrated in [32] (Table 1).
These psychophysical ‘laws’ deal with human assessment as a function of some cognitive measurement device operating on the abstract representation of the external stimulus in the space of the human mental representation, which may or may not be entirely congruent [37] with the underlying brain processing of the same stimulus, thereby creating a meaningful and truthful picture of the objective world. Such a possibility is supported by studies mapping linear and nonlinear transformations of processes in the psychophysical effects onto neuronal activity, as in fMRI studies like [38].
On the one hand, a CPS, or a robot, can be a physical entity with few or no biologically inspired features; on the other, it can resemble biological tissue and, at even deeper levels, the psychological performance of various creatures, real or imaginary. In such complex cases the emotional/affective component of perception is inseparable from the cognitive component, as revealed by the so-called Thatcher illusion [39]. The illusion consists of a strong emotional response to distorted faces when they are presented in an upright orientation, but not when inverted. Such illusions suggest that the psychophysics of user perception of complex physical objects/entities/counterparts reflects the emotional component as well, not just the objective mental projection of the perceived stimulus. So the psychological (cognitive and emotional) processing, when we ask people how they perceive a robot, is, without any doubt, deep, elaborate, complex and emotionally charged.
2. Classical Psychophysical Assumptions Relevant to CPS Acceptance as a Psychological Reality
Robot acceptance has been interpreted as a psychological reality in [40]. Three abstract processes are outlined in the interaction with service robots: functional, informational and relational. The first two characterize the interaction with any technology, whereas the third deals with the specific relations to be established with the new ‘social’ entity, such as benevolence, satisfaction and understanding. Thurstone’s law of comparative judgement has been successfully used in cases when interval scales of subjective opinions on similar abstract attributes are needed for different purposes, including for understanding, for example, the psychological and neural mechanisms “for accepting and rejecting artificial social partners in the Uncanny Valley” [40] (p. 339). The underlying psychological processes are implicit rather than explicit, and can be modelled in a psychophysical framework based on the descriptions given in Table 1, which presents the above-mentioned psychophysical laws and their psychological relevance to the issue of human-robot interaction (HRI) from a novel acceptance perspective, which we call PRAM (Psychophysics of Robot Acceptance Model).
PRAM asserts that the classical and modern psychophysical assumptions describe the regularities of the internal processing of the complex environment in which humans exist – physical, technological and social – on different conceptual levels of mental abstraction, reflecting the complexities of the attributes of the objects in the world. Justification of this view can be found in [29], referring to the classics of psychophysics from a modern perspective: “Fechner seemed to have a clear notion of what had to be done to translate the study of outer psychophysics to the study of inner psychophysics (Fechner 1860, p. 56): “Quantitative dependence of sensation on the [outer] stimulus can eventually be translated into dependence on the [neural activity] that directly underlies sensation—in short, the psychophysical processes—and the measurement of sensation will be changed to one depending on the strength of these processes.” When people coexist with a robot in various social situations – at work, at home, in hospital, etc. – the attributes of the robot are processed from many facets crucially important to their survival, including via some of the psychophysical laws of stimulus discrimination presented above.
3. A View on the Psychophysical Distance between the Robot and Human Agent Stimuli
Returning to the theory of Moore [16], described in the introduction: it plots the probability of naming a stimulus a ‘robot’ as non-independent from perceiving the stimulus as a representative of the ‘non-human’ category, which has a larger standard deviation (since numerous items can be perceived as non-human in type). The probability of responding ‘human’ to a human stimulus is independent from the probability of responding ‘robot’ to a ‘non-human’ stimulus, according to [16]. An alternative view, as proposed in the present paper, would be to assume the ‘robot’ category independent, or ‘nearly’ independent (in Tulving’s terms [21]), from both the ‘human’ and ‘non-human’ categories. If the foreseen ‘near independence’ is observed in experimental studies, this would provide yet another explanation of the nature of human reactance to humanoid robots, or of the emergence of the uncanny valley phenomenon as a perceptual mismatch of subjectively incompatible categories that overlap in some features – new ontologies of agents emerging in the course of social development in recent centuries. This would support the view, proposed in the present paper, of the user reaction to a robot as a socially evolving phenomenon and of its acceptance by the human as a function of the evolution of the social brain.
As an example illustrating the above statement, consider Figure 2 and Figure 3. Figure 2 presents photos of four agents which we have used in our previous research on user acceptance of robots/CPSs in various roles: a toy, the walking robot BigFoot (a) [41]; a zoology teacher, the humanoid robot NAO (b) [42]; and a counseling assistant, the android type of robot SociBot (c) [43], which we have called Alice. The user reaction to videos of Alice and NAO was compared to the reaction to a video of a human actress (named Violina) (d), along dimensions such as sociability and trust, in [6].
The main outcome of the study was the lack of a statistical effect of the type of face on the feature assessment process, which supported the idea that humanoid robots can perform tasks and roles typical for the human, and even exhibit professional skills [6]. Interestingly, viewers assess the positive and negative features of the presented faces – human, android or robotic (machine-looking) – differently (a main effect of the factor feature type). They tend to be cautious in evaluating neutral faces negatively, and are inclined to a larger extent to see positive features in these faces. One possibility is that the stimuli used – the robots – were selected by design to avoid possibly repulsive features. This would inevitably change the form of Mori’s function. At the same time, the general tendency, in terms of the hypothetical mental distance on the feature dimensions, would be preserved.
Figure 3 plots the hypothetical uncanny valley effect expected at the encounter of each of the above agents, according to the proposed theoretical account of robot acceptance by the human (PRAM). The reactance effect is assumed identical to the uncanny valley effect in terms of valence, but different in intensity of reaction: psychological in the first case (reactance), and visceral in the second (uncanny valley) [6].
The classical uncanny valley function depicts the functional relation between two dimensions: human likeness and affinity [11]. The horizontal axis x represents human likeness, whereas the vertical axis y represents affinity in Mori’s terms. In more recent studies it has become accepted to split the affinity attribute/feature into familiarity and likeability, since the original affinity feature is a complex attribute which is easily understood in Japanese but cannot be translated unambiguously into English. While not statistically independent, familiarity and likeability are used to complement or confirm the main assumptions when investigating various factors which may produce the psychological effect of the uncanny valley at the encounter of an artificial agent [38]. Cases (a) and (b) of Figure 3 hypothetically represent the classical view of the uncanny valley effect, whereas cases (c) and (d) represent the psychophysical view (PRAM) put forward in the present paper.
Consider the probabilistic distributions of the subjective effects, plotted as a Mori function along two dimensions. The unambiguous dimension is the x axis, human likeness. The y axis can be familiarity or likeability, so two cases of y as a function of x are possible: familiarity as a function of human likeness, and likeability as a function of human likeness. The effect is a sudden drop of the y value at high human likeness of an artificial agent, and a sharp increase of y when a human is displayed instead.
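Mori’s curve was only ever drawn qualitatively, not given an analytic form; purely for illustration, the shape just described (a gentle rise in y with human likeness, interrupted by a narrow negative dip near, but not at, full likeness) can be sketched as a linear rise minus a Gaussian dip. All constants below are arbitrary:

```python
import math

def affinity(h, dip_center=0.85, dip_width=0.05, dip_depth=1.2):
    """Illustrative uncanny-valley-shaped function of human likeness
    h in [0, 1]: affinity rises with likeness except for a narrow
    negative dip near (but not at) full human likeness. This is not
    Mori's formula, which was only ever drawn qualitatively."""
    rise = 0.7 * h
    dip = dip_depth * math.exp(-((h - dip_center) ** 2) / (2 * dip_width ** 2))
    return rise - dip

# Moderate likeness: positive affinity; near the dip: negative (the valley);
# fully human stimulus: sharp recovery to the highest value.
print(affinity(0.5), affinity(0.85), affinity(1.0))
```

On such a curve, a machine-looking robot sits on the early rising part, an android near the dip, and a human at the recovered maximum, matching the qualitative placement discussed below.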
The machine-looking toy robot BigFoot is expected to be placed at the left origin of the human likeness dimension in all four cases of Figure 3: it resembles a human the least, and is the least familiar and, possibly, the least liked. The position of the humanoid robot NAO, popular for being designed as cute and likeable for children, will possibly be approximately the same in all four cases too: more human-like than BigFoot, familiar for being encountered quite frequently, and likeable by design. The relative positions of the android and the human faces, however, differ between the classical and the proposed PRAM cases. In the classical case, the uncanny valley function would predict a drop below the zero line of the y axis with the increase of human-like features of the artificial agent Alice. This drop signifies the affective (negative emotional) reaction to an android closely resembling a human agent perceptually. Alice is unfamiliar (a) and possibly, because of this, not much liked (b). In both cases the human face is the most familiar and likeable (though never seen before).
The account proposed here predicts probabilistic independence among the three categories, i.e. among the distributions of the psychophysical effects of the non-human agent (BigFoot), the robot agents (NAO and Alice), and the human (Violina). The crucial prediction is the similarity (or non-independence) of the effects of NAO and Violina. In case (c), only NAO is familiar, whereas BigFoot, Alice and Violina have never been seen before, and the familiarity effect will be similar for all of them. At the same time, the human likeness is distinctly different in all three robotic cases. By applying Thurstone’s scaling procedure, it is possible to determine the exact position of each agent on the human likeness scale/dimension for each experimental condition.
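The scaling step can be sketched with Thurstone’s Case V procedure: pairwise ‘more human-like’ judgements are converted to unit-normal deviates, and each agent’s scale value is the mean of its row. The proportions below are hypothetical, inserted only to show the mechanics, not data from our studies:

```python
from statistics import NormalDist

def thurstone_case_v(p):
    """Thurstone Case V scaling from a matrix of pairwise proportions.

    p[i][j] is the proportion of judges who ranked item i above item j
    (here: 'more human-like'). Proportions are clamped away from 0 and 1
    before taking the inverse normal CDF. Returns interval-scale values
    shifted so the lowest item sits at 0."""
    nd = NormalDist()
    n = len(p)
    z = [[nd.inv_cdf(min(max(p[i][j], 0.01), 0.99)) for j in range(n)]
         for i in range(n)]
    # Case V: the scale value of item i is the mean of its row of z-scores
    scale = [sum(row) / n for row in z]
    low = min(scale)
    return [s - low for s in scale]

# Hypothetical pairwise proportions for BigFoot, NAO, Alice, Violina
# (illustrative numbers only; diagonal entries are the neutral 0.5)
p = [
    [0.50, 0.10, 0.05, 0.02],   # BigFoot judged more human-like than ...
    [0.90, 0.50, 0.20, 0.05],   # NAO
    [0.95, 0.80, 0.50, 0.15],   # Alice
    [0.98, 0.95, 0.85, 0.50],   # Violina
]
print(thurstone_case_v(p))  # ascending: BigFoot < NAO < Alice < Violina
```

The output is an interval scale of human likeness per experimental condition, which is exactly what is needed to place each agent on the x axis of the Mori-style plots.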
Considering case (d), it is not quite possible to predict which agent will be liked most. All faces have neutral expressions, and the classical condition of expected repulsion by the artificial agent will not hold to the full extent. At the same time, since the reaction depends on the individual internal criterion (according to Moore [16]), applying the scaling method of Thurstone [30] will make it possible to design robotic scenarios tailored to the preferences of the individual user of the robot, both extrinsically (verbal report) and intrinsically (psychological/visceral reaction). Consequently, the proposed PRAM approach presents itself as an overall methodological framework applicable to the accessibility design of CPSs, intended to support better access to knowledge with the help of CPSs/robots, including for users with special learning needs.