Preprint
Review

Psychophysics of Social Cyber-Physical System Acceptance

Submitted: 01 April 2024
Posted: 03 April 2024

Abstract
A review of literature, supporting the view on the psychophysical origins of some acceptance effects of cyber-physical systems (CPSs), is presented and discussed in the paper. Psychological effects like the reactance to a robot or the uncanny valley phenomenon suggest that CPSs are perceived as a special type of ‘being’, a phylogenetically emergent category, or a novel ontology in a philosophical sense, culturally developing as a function of the social brain. Justification of this view is provided by the psychophysically-relevant human responses to technologically and socially ambiguous stimuli on the one hand, and the probabilistic independence of the proposed ontology to similar ontologies, subjectively experienced by the human, on the other. The results of the presented in the paper analysis demonstrate the feasibility of the proposed hypothesis of ontological ‘near independence’ of the distribution of the probabilistic psychophysical processes in response to the cyber-physical stimulus as a socially evolving phenomenon.
Keywords: 
Subject: Computer Science and Mathematics  -   Robotics

1. Introduction

Cyber-physical systems (CPSs) have become ubiquitous, raising issues of adequate design and of reliable interdependencies between their technical and social components, which is in itself of large societal impact [1,2]. Some CPSs are specifically designed with interfaces able to convey social communication (such as chat bots in various applications) and education (e.g. the LEGO [3] or MIRO [4] robots), or to perform psycho-social and pedagogical rehabilitation roles [5,6,7,8].
Numerous factors of user acceptance of such socially functioning CPSs, including cognitive and neuro-cognitive mechanisms, have been intensively studied recently [9]. Psychological effects like reactance to a robot [10] or the uncanny valley phenomenon [11] suggest that CPSs are perceived as a special type of ‘being’, sharing features of a non-living entity/machine with a living (human or animal) being. Several theoretical accounts of the emergent ‘human-robot interaction realm’ have been put forward in recent years in support of the following view: it is not straightforward to predict the conditions (extrinsic or intrinsic) for smooth human-robot interaction simply by following the already available knowledge of human-human interaction. The reason is that technology in general, and robots/CPSs in particular, present a novel perceptual, categorical and interactive stimulus for the cognitive system of the user. The psychophysical effects caused by such a novel stimulus need to be further explored, theoretically and experimentally, to establish the relevant cognitive and socio-cognitive laws guiding user behavior in the present-day complex technologically-mediated social environment.
Some important accounts of robot acceptance by humans are formulated in recent theories: the theory of violation of predictive coding [12,13,14], realism inconsistency theory [15], the theory of distortion of categorical perception [16], the theory of robot mediation in social control [18], and several others. An interesting question is whether the subjective response to a robot is independent, in probabilistic terms, from the response to a human, to any non-human, or to both. A positive answer to this question would signify the emergence and validity of a novel subjective socio-cultural category of everyday agents, commonly called social robots, or social CPSs [19,20].
The theoretical account of social CPS acceptance proposed in the present paper follows Moore’s theory of perceptual distortion at category boundaries [16]. Such distortion is most likely to occur where the distributions of the subjective effects in the mental representation overlap – distributions produced by the encounters of the cognitive system with a human or with a non-human entity (probabilistically independent events). We assume probabilistic independence not only between the distributions of the psychological effects caused by the categories ‘human – non-human’, but also between those caused by the categories ‘human – robot’ and ‘robot – non-human’ (Figure 1). As discussed later in the paper, the actual relation is assumed to be similar to the so-called ‘near independence’ of the probabilistic theory of Tulving and Wiseman [21]: a distortion of the normal distribution of the psychological effects towards a somewhat larger feature overlap, resulting in ‘perceptual confusion’ and possibly causing a negative reaction towards the robot when the robot closely resembles a human/living being perceptually.
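The assumed overlap of category distributions can be sketched numerically. The following Python fragment is a minimal illustration, not part of the cited models: the category means and spreads on a notional 0-to-1 ‘human likeness’ axis are invented for the example, and ‘confusion’ is simply one minus the posterior probability of the winning category under uniform priors.

```python
import math

def gauss_pdf(x, mu, sigma):
    """Density of the normal distribution N(mu, sigma^2)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical category distributions on a 0..1 'human likeness' axis;
# the means and spreads are illustrative assumptions, not fitted values.
CATEGORIES = {
    "non-human": (0.15, 0.20),   # broad: many different things are non-human
    "robot":     (0.55, 0.12),
    "human":     (0.90, 0.07),
}

def confusion(x):
    """One minus the posterior of the most likely category (uniform priors):
    near 0 where one category dominates, largest where two categories overlap."""
    likelihoods = [gauss_pdf(x, mu, sd) for mu, sd in CATEGORIES.values()]
    return 1.0 - max(likelihoods) / sum(likelihoods)

# Confusion is small deep inside a category and rises sharply near the
# robot/human boundary, where Moore's account locates the uncanny valley.
for x in (0.20, 0.55, 0.76, 0.95):
    print(f"human likeness {x:.2f}: confusion {confusion(x):.3f}")
```

With these invented parameters, confusion peaks near the region where the ‘robot’ and ‘human’ densities cross, which is the perceptual-confusion zone the text describes.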
A recent systematic review of human acceptance of social robots provides evidence that humans have a generally positive attitude towards social robots, and hypothesizes an intriguing possibility: to “acknowledge the qualities that mark social robots as not just another technological development but perhaps as an entire new social group1 with its own complexity” [22], citing the earlier work of one of the authors [23]. The latter work proposes a new ontological category: robots as technological tools that are perceived as more than just machines, i.e. as entities possessing some distinctive features of ‘agenthood’ [24]. The present paper considers the level of the psychophysical response to the newly emergent complex stimulus – the CPS or the robot – with its instantly presented perceptual features of physical, technical, technological, bio-physical and social appearance.
Psychophysics is traditionally viewed as the science concerned with projecting a mental topology onto physical reality, obeying certain mathematical laws of transformation. Examples of such laws are Weber’s law of detection of just noticeable differences (JNDs) [25], Fechner’s law of intensity discrimination [26], Stevens’s power law of transformation of stimulus intensity into sensation intensity [27], and Tulving’s theory of trace ‘near independence’ when memorizing a stimulus [21]. It has been convincingly demonstrated in recent works that these cognitive effects, which form the foundation of psychophysics, can be meaningfully accounted for in probabilistic terms and models (as can higher-level cognitive representations) [28,29]. The latter assume that any sensation coming from the external environment produces not a single reaction/response but a distribution of sensory effects, translating the physical intensity via an electro-chemical reaction into a set of discrete electrical reactions of the neuron, which forward the signal distribution towards the generation of complex psychophysical and psychological phenomena [29].
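For concreteness, the three classical laws can be written out as one-line functions. The sketch below is illustrative only: the Weber fraction and Stevens exponent are placeholders, since the real constants are modality-specific and measured empirically.

```python
import math

# Illustrative constant: real Weber fractions are modality-specific.
WEBER_FRACTION = 0.1

def weber_jnd(x):
    """Weber: the just-noticeable difference ΔX is a constant fraction of X."""
    return WEBER_FRACTION * x

def fechner(x, x0=1.0):
    """Fechner: sensation is a logarithmic function of intensity, S = ln(X/X0)."""
    return math.log(x / x0)

def stevens(x, a=1.0, beta=0.33):
    """Stevens: S = a * X**beta; beta < 1 compresses the range (e.g. brightness),
    beta > 1 expands it (e.g. pain)."""
    return a * x ** beta

# Weber: a tenfold stronger baseline needs a tenfold bigger increment.
# Fechner: equal stimulus *ratios* map onto equal sensation *differences*.
# Stevens: equal stimulus ratios map onto equal sensation *ratios*.
```

A quick check of the distinguishing signatures: `fechner(20) - fechner(10)` equals `fechner(200) - fechner(100)`, while `stevens(20) / stevens(10)` equals `stevens(200) / stevens(100)`.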
The theory of Thurstone [30,31], who proposed the law of comparative judgement, can be considered a bridge between sensory psychophysics and higher psychological levels of decision making; it is largely compatible with the other psychophysical laws from a probabilistic perspective, as demonstrated in [32] (Table 1).
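A minimal sketch of Case V of the law of comparative judgement, which converts pairwise choice proportions into interval-scale values via unit-normal deviates, may clarify the mechanics. The proportion matrix below is invented for illustration, not taken from any study.

```python
from statistics import NormalDist

# Hypothetical proportions P[i][j]: share of judges rating stimulus i
# above stimulus j (made-up numbers for illustration).
P = [
    [0.50, 0.70, 0.90],
    [0.30, 0.50, 0.80],
    [0.10, 0.20, 0.50],
]

def thurstone_case_v(p):
    """Case V of Thurstone's law of comparative judgement: the scale
    separation of two stimuli is the unit-normal deviate of their choice
    proportion; each stimulus score is the row mean of those deviates."""
    nd = NormalDist()
    n = len(p)
    z = [[nd.inv_cdf(p[i][j]) for j in range(n)] for i in range(n)]
    return [sum(row) / n for row in z]

scale = thurstone_case_v(P)   # interval-scale positions, centred on 0
```

Because `P[j][i] = 1 - P[i][j]`, the resulting scores sum to zero; only their differences (the interval scale) are meaningful.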
These psychophysical ‘laws’ describe human assessment as the work of a cognitive measurement device operating on the abstract representation of the external stimulus in the space of the human mental representation, which may or may not be entirely congruent [37] with the underlying brain processing of the same stimulus, thereby creating a meaningful and truthful picture of the objective world. This possibility is supported by studies mapping linear and nonlinear transformations of processes in the psychophysical effects onto neuronal activity in fMRI, as in [38].
On the one hand, a CPS, or a robot, can be a physical entity with few or no biologically inspired features; on the other, it can resemble biological tissue and, at a deeper level, the psychological performance of various creatures, real or imaginary. In such complex cases the emotional/affective component of perception is inseparable from the cognitive component, as revealed by the so-called Thatcher illusion [39]. The illusion consists of a strong emotional response to distorted faces when they are presented in an upright orientation, but not when inverted. Such illusions suggest that the psychophysics of user perception of complex physical objects/entities/counterparts reflects the emotional component as well, not only the objective mental projection of the perceived stimulus. So the psychological (cognitive and emotional) processing at work when we ask people how they perceive a robot is, without any doubt, deep, elaborate, complex and emotionally charged.

2. Classical Psychophysical Assumptions Relevant to CPS Acceptance as a Psychological Reality

Robot acceptance has been interpreted as a psychological reality in [40]. Three abstract processes are outlined in the interaction with service robots: functional, informational and relational. The first two characterize the interaction with any technology, whereas the third deals with the specific relations to be established with the new ‘social’ entity – such as benevolence, satisfaction and understanding. The law of comparative judgement of Thurstone has been successfully used when interval scales of subjective opinions on such abstract attributes are needed for different purposes, including for understanding, for example, the psychological and neural mechanisms “for accepting and rejecting artificial social partners in the Uncanny Valley” [40] (p. 339). The underlying psychological processes are implicit rather than explicit, and can be modelled in a psychophysical framework, based on the descriptions given in Table 1. It presents the above-mentioned psychophysical laws and their psychological relevance to the issue of human-robot interaction (HRI) from a novel acceptance perspective, which we call PRAM (Psychophysics of Robot Acceptance Model).
PRAM asserts that the classical and modern psychophysical assumptions describe the regularities of the internal processing of the complex environment in which the human exists – physical, technological and social – at different conceptual levels of mental abstraction, reflecting the complexities of the attributes of the objects in the world. Justification of this view can be found in [29], which refers to the classics of psychophysics from a modern perspective: “Fechner seemed to have a clear notion of what had to be done to translate the study of outer psychophysics to the study of inner psychophysics (Fechner 1860, p. 56): “Quantitative dependence of sensation on the [outer] stimulus can eventually be translated into dependence on the [neural activity] that directly underlies sensation—in short, the psychophysical processes—and the measurement of sensation will be changed to one depending on the strength of these processes.” When people coexist with a robot in various social situations – at work, at home, in hospital, etc. – the attributes of the robot are processed from many facets crucially important to their survival, including through some of the psychophysical laws of stimulus discrimination presented above.

3. A View on the Psychophysical Distance between the Robot and Human Agent Stimuli

Returning to the theory of Moore [16], described in the introduction: it plots the probability of naming a stimulus a ‘robot’ as non-independent from perceiving the stimulus as a representative of the ‘non-human’ category, which has a larger standard deviation (since numerous items can be perceived as non-human). The probability of responding ‘human’ to a human stimulus is independent of the probability of responding ‘robot’ to a ‘non-human’ stimulus, according to [16]. An alternative view, proposed in the present paper, is to assume the ‘robot’ category independent, or ‘nearly’ independent (in Tulving’s terms [21]), of both the ‘human’ and ‘non-human’ categories. If the foreseen ‘near independence’ is observed in experimental studies, this would provide yet another explanation of the nature of human reactance to humanoid robots, and of the emergence of the uncanny valley phenomenon as a perceptual mismatch of subjectively incompatible, but partly overlapping, categories, existing as new ontologies of agents that have emerged in the course of social development in recent centuries. This would support the view, proposed in the present paper, of user reaction to a robot as a socially evolving phenomenon, and of its acceptance by the human as a function of the evolution of the social brain.
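The distinction between exact independence and ‘near independence’ reduces to comparing a joint probability with the product of the marginals. The following one-line check uses hypothetical response probabilities, invented purely to illustrate the quantity an experimental study would estimate.

```python
def independence_gap(p_a, p_b, p_ab):
    """Observed joint probability minus the product expected under full
    independence: zero means exact independence; a small systematic excess
    is 'near independence' in the sense of Tulving and Wiseman."""
    return p_ab - p_a * p_b

# Hypothetical response probabilities for one ambiguous stimulus
# (illustrative numbers, not experimental data):
p_human, p_robot = 0.50, 0.40

# Under exact independence the 'human & robot' joint would be 0.20;
# a measured joint of 0.23 would indicate a small overlap excess.
gap = independence_gap(p_human, p_robot, 0.23)
```

In an experiment, a gap consistently close to zero would support independence of the ‘robot’ category, while a small positive gap would support the ‘near independence’ hypothesized here.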
As an example illustrating the above statement, consider Figure 2 and Figure 3. Figure 2 presents photos of four agents used in our previous research on user acceptance of robots/CPSs in various roles: (a) the walking robot BigFoot [41], used as a toy; (b) the humanoid robot NAO [42], used as a zoology teacher; and (c) the android-type robot SociBot [43], which we have called Alice, used as a counseling assistant. The user reaction to videos of Alice and NAO was compared to the reaction to the video of a human actress (by the name of Violina), (d), along dimensions such as sociability or trust, in [6].
The main outcome of the study was the lack of a statistical effect of the type of face on the feature assessment process, which supported the idea that humanoid robots can perform tasks and roles typical for the human, and even exhibit professional skills [6]. Interestingly, viewers assess the positive and negative features of the presented faces – human, android or robotic (machine-looking) – differently (a main effect of the factor feature type). They tend to be cautious in evaluating neutral faces negatively, and are more inclined to see positive features in these faces. One possibility is that the stimuli used – the robots – were selected by design to avoid possibly repulsive features. This would inevitably change the form of Mori’s function. At the same time, the general tendency, in terms of the hypothetical mental distance along the feature dimensions, would be preserved.
Figure 3 plots the hypothetical uncanny valley effect expected at the encounter of each of the above agents, according to the proposed theoretical account of robot acceptance by the human (PRAM). The reactance effect is assumed to be identical to the uncanny valley effect in valence, but different in intensity of reaction: psychological in the first case (reactance) and visceral in the second (uncanny valley) [6].
The classical uncanny valley function depicts the relation between two dimensions: human likeness and affinity [11]. The horizontal axis x represents human likeness, whereas the vertical axis y represents affinity in Mori’s terms. More recent studies split the affinity attribute into familiarity and likeability, since the original affinity feature is a complex attribute, easily understood in Japanese but impossible to translate unambiguously into English. Though not statistically independent, familiarity and likeability are used to complement or confirm the main assumptions when investigating the various factors that may produce the psychological effect of the uncanny valley at the encounter of an artificial agent [38]. Cases (a) and (b) of Figure 3 represent, hypothetically, the classical view of the uncanny valley effect, whereas cases (c) and (d) represent the psychophysical view (PRAM) put forward in the present paper.
Consider the probabilistic distributions of the subjective effects, plotted as Mori’s function along two dimensions. The unambiguous dimension is the x axis, human likeness. The y axis can be either familiarity or likeability, so two cases of y as a function of x are possible: familiarity as a function of human likeness, and likeability as a function of human likeness. The effect is a sudden drop of the y value at high human likeness of an artificial agent, and a sharp increase of y when a human is displayed instead.
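The described shape of Mori’s curve can be sketched with a simple analytic stand-in. The functional form below (a linear rise minus a Gaussian dip near high-but-imperfect likeness) is our illustrative assumption; Mori drew the curve qualitatively, not analytically.

```python
import math

def mori_affinity(h):
    """Stylised uncanny valley curve on human likeness h in [0, 1]:
    a gradual rise, a sharp dip near high-but-imperfect likeness,
    and a recovery at full human likeness. Parameters are illustrative."""
    rise = h                                              # affinity grows with likeness
    valley = 0.9 * math.exp(-((h - 0.8) / 0.07) ** 2)     # dip centred near h = 0.8
    return rise - valley

# The curve dips below zero inside the valley (the negative affective
# reaction to an almost-human agent) and recovers for a real human.
```

With these parameters, `mori_affinity(0.8)` is negative (the valley), while `mori_affinity(1.0)` is close to 1 (the real-human recovery).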
The machine-looking toy robot BigFoot is expected to be placed at the left origin of the human likeness dimension in all four cases of Figure 3: it least resembles a human and is the least familiar and, possibly, the least liked. The position of the humanoid robot NAO, popular for being designed as cute and likeable for children, will possibly also be approximately the same in all four cases: more human-like than BigFoot, familiar for being quite frequently encountered, and likeable by design. The relative positions of the android and the human faces, however, differ between the classical and the proposed PRAM cases. In the classical case, the uncanny valley function would predict a drop below the zero line of the y axis with the increase of the human-like features of the artificial agent Alice. This drop below the y axis signifies the affective (negative emotional) reaction to an android closely resembling a human agent perceptually. Alice is unfamiliar (a) and possibly not much liked because of this (b). In both cases the human face is the most familiar and likeable (though never seen before).
The account proposed here predicts probabilistic independence among the distributions of the psychophysical effects of the three categories: the non-human agent (BigFoot), the robot agents (NAO and Alice), and the human (Violina). The crucial prediction concerns the similarity (or non-independence) of the effects of NAO and Violina. In case (c), only NAO is familiar, whereas BigFoot, Alice and Violina have never been seen before, so the familiarity effect will be similar for all of them. At the same time, human likeness is distinctly different in all three robotic cases. By applying Thurstone’s scaling procedure, it is possible to determine the exact position of each agent on the human likeness scale/dimension for each experimental condition.
Considering case (d), it is not quite possible to predict which agent will be most liked. All faces have neutral expressions, and the classical condition of the expected repulsion by the artificial agent will not hold to the full extent. At the same time, by relying on the individual internal criterion (according to Moore [16]) and by applying the scaling method of Thurstone [30], it becomes possible to design robotic scenarios tailored to the preferences of the individual user of the robot, both extrinsically (verbal report) and intrinsically (psychological/visceral reaction). Consequently, the proposed PRAM approach presents itself as an overall methodological framework, applicable to the accessibility design of CPSs intended to support better access to knowledge with the help of CPSs/robots, including by users with special learning needs.

4. Conclusion

The review presented in this paper analysed the existing approaches to modelling user acceptance of autonomously performing artificial agents/CPSs, often referred to as social robots for their ability to interact meaningfully and usefully with a human user – in rehabilitation, at school, in counseling, or as service robots. Several psychophysical laws were systematised from the point of view of their relevance to modelling the psychological processes underlying user acceptance of these technologies within a novel theoretical framework called PRAM (Psychophysics of Robot Acceptance Model). PRAM was applied to generate hypotheses for future experimental studies, which can provide useful guidance for the design of CPSs that are acceptable and individualized to the sensory and learning needs of their users.

Author Contributions

Conceptualization, M.D. and A.K.; methodology, N.C. and A.M.; validation, M.D. and I.C.; formal analysis, A.M.; investigation, N.C.; writing—original draft preparation, M.D.; writing—review and editing, A.M.; visualization, N.C. and A.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Bulgarian National Science Fund, grant number КP-06-Н42/4 of 08.12.2020, Digital Accessibility for People with Special Needs: Methodology, Conceptual Models and Innovative EcoSystems.

Data Availability Statement

No new data were created for the present paper.

Acknowledgments

The authors are grateful to Dr. Chris Harper, Dr. Dan Withey, and Dr. Virginia Ruis Garate for insightful discussions of the model and their help in programming Alice at Bristol Robotics Lab, UK.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Notes

1. Italics is ours.

References

  1. Losano, C.V.; Vijayan, K.K. Literature review on Cyber Physical Systems Design. Procedia Manuf. 2020, 45, 295–300. [Google Scholar] [CrossRef]
  2. Wang, S.; Gu, X.; Chen, J.J.; Chen, C.; Huang, X. Robustness improvement strategy of cyber-physical systems with weak interdependency. Reliab. Eng. Syst. Saf. 2023, 229. [Google Scholar] [CrossRef]
  3. Lawhead, P.B.; Duncan, M.E.; Bland, C.G.; Goldweber, M.; Schep, M.; Barnes, D.J.; Hollingsworth, R.G. A road map for teaching introductory programming using LEGO© mindstorms robots. ACM SIGCSE Bull. 2002, 35, 191–201. [Google Scholar] [CrossRef]
  4. Collins, E.C.; Prescott, T.J.; Mitchinson, B. Saying it with light: A pilot study of affective communication using the MIRO robot. In Biomimetic and Biohybrid Systems: 4th International Conference, Living Machines 2015, Barcelona, Spain, July 28-31; Proceedings 4, 243-255; Springer International Publishing: Berlin/Heidelberg, Germany, 2015. [Google Scholar] [CrossRef]
  5. Bachrach, L.L. Psychosocial rehabilitation and psychiatry in the care of long-term patients. Am. J. Psychiatry 1992, 149, 1455–1463. [Google Scholar] [CrossRef]
  6. Dimitrova, M.; Garate, V.R.; Withey, D.; Harper, C. Implicit Aspects of the Psychosocial Rehabilitation with a Humanoid Robot. In International Conference in Methodologies and intelligent Systems for Technology Enhanced Learning; Springer Nature: Cham, Switzerland, 2023; pp. 119–128, https://link.springer.com/chapter/10.1007/978-3-031-42134-1_12. [Google Scholar]
  7. Robinson, N.L.; Cottier, T.V.; Kavanagh, D.J. Psychosocial health interventions by social robots: Systematic review of randomized controlled trials. J. Med. Internet Res. 2019, 21, e13203. [Google Scholar] [CrossRef]
  8. Dimitrova, M.; Kostova, S.; Lekova, A.; Vrochidou, E.; Chavdarov, I.; Krastev, A.; Ozaeta, L. Cyber-physical systems for pedagogical rehabilitation from an inclusive education perspective. BRAIN. Broad Res. Artif. Intell. Neurosci. 2021, 11, 187–207, https://brain.edusoft.ro/index.php/brain/article/view/1135. [Google Scholar]
  9. Wolbring, G.; Diep, L.; Yumakulov, S.; Ball, N.; Yergens, D. Social robots, brain machine interfaces and neuro/cognitive enhancers: Three emerging science and technology products through the lens of technology acceptance theories, models and frameworks. Technologies 2013, 1, 3–25. [Google Scholar] [CrossRef]
  10. Ghazali, A.S.; Ham, J.; Barakova, E.; Markopoulos, P. The influence of social cues in persuasive social robots on psychological reactance and compliance. Comput. Hum. Behav. 2018, 87, 58–65. [Google Scholar] [CrossRef]
  11. Mori, M.; MacDorman, K.F.; Kageki, N. The uncanny valley [from the field]. IEEE Robot. Autom. Mag. 2012, 19, 98–100. [Google Scholar] [CrossRef]
  12. Saygin, A.P.; Chaminade, T.; Ishiguro, H.; Driver, J.; Frith, C. The thing that should not be: Predictive coding and the uncanny valley in perceiving human and humanoid robot actions. Soc. Cogn. Affect. Neurosci. 2011, 7, 413–422. [Google Scholar] [CrossRef]
  13. Urgen, B.A.; Plank, M.; Ishiguro, H.; Poizner, H.; Saygin, A.P. EEG theta and Mu oscillations during perception of human and robot actions. Front. Neurorobotics 2013, 7, 19. [Google Scholar] [CrossRef]
  14. Urgen, B.A.; Kutas, M.; Saygin, A.P. Uncanny valley as a window into predictive processing in the social brain. Neuropsychologia 2018, 114, 181–185. [Google Scholar] [CrossRef]
  15. MacDorman, K.F.; Chattopadhyay, D. Reducing consistency in human realism increases the uncanny valley effect; increasing category uncertainty does not. Cognition 2016, 146, 190–205. [Google Scholar] [CrossRef]
  16. Moore, R.K. A Bayesian explanation of the ‘Uncanny Valley’effect and related psychological phenomena. Sci. Rep. 2012, 2, 864. [Google Scholar] [CrossRef]
  17. Xu, J.; Zhang, C.; Cuijpers, R.H.; IJsselsteijn, W.A. How Might Robots Change Us? Mechanisms Underlying Health Persuasion in Human-Robot Interaction from A Relationship Perspective: A Position Paper, Persuasive 2023. In Proceedings of the 18th International Conference on Persuasive Technology, CEUR Workshop Proceedings, Eindhoven, The Netherlands, 19–21 April 2023. https://ceur-ws.org/Vol-3474/paper20.pdf. [Google Scholar]
  18. El-Haouzi, H.B.; Valette, E.; Krings, B.J.; Moniz, A.B. Social dimensions in CPS & IoT based automated production systems. Societies 2021, 11, 98. [Google Scholar] [CrossRef]
  19. Dimitrova, M.; Wagatsuma, H. (Eds.) Cyber-Physical Systems for Social Applications; IGI Global: Hershey, PA, USA, 2019; ISBN 9781522578796. https://www.igi-global.com/book/cyber-physical-systems-social-applications/210606. [Google Scholar]
  20. Naneva, S.; Sarda Gou, M.; Webb, T.L.; Prescott, T.J. A systematic review of attitudes, anxiety, acceptance, and trust towards social robots. Int. J. Soc. Robot. 2020, 12, 1179–1201. [Google Scholar] [CrossRef]
  21. Tulving, E.; Wiseman, S. Relation between recognition and recognition failure. Bull. Psychon. Soc. 1975, 6, 79–82. [Google Scholar] [CrossRef]
  22. Prescott, T.J. Robots are not just tools. Connect. Sci. 2017, 29, 142–149. [Google Scholar] [CrossRef]
  23. Jackson, R.B.; Williams, T. A theory of social agency for human-robot interaction. Front. Robot. AI 2021, 8, 687726. [Google Scholar] [CrossRef]
  24. Deco, G.; Rolls, E.T. Decision-making and Weber's law: A neurophysiological model. Eur. J. Neurosci. 2006, 24, 901–916. [Google Scholar] [CrossRef]
  25. Laming, D. Weber’s Law. In Inside Psychology: A Science over 50 Years; Rabbitt, P., Ed.; Oxford University Press: Oxford, UK, 2009; pp. 177–189. ISBN 9780199228768. [Google Scholar]
  26. Stevens, S.S. On the psychophysical law. Psychol. Rev. 1957, 64, 153–181. [Google Scholar] [CrossRef]
  27. Sanford, E.M.; Halberda, J. A Shared Intuitive (Mis) understanding of Psychophysical Law Leads Both Novices and Educated Students to Believe in a Just Noticeable Difference (JND). Open Mind 2023, 7, 785–801. [Google Scholar] [CrossRef]
  28. Lindskog, M.; Nyström, P.; Gredebäck, G. Can the Brain Build Probability Distributions? Front. Psychol. 2021, 12, 596231. [Google Scholar] [CrossRef]
  29. Johnson, K.O.; Hsiao, S.S.; Yoshioka, T. Neural coding and the basic law of psychophysics. Neuroscientist 2002, 8, 111–121. [Google Scholar] [CrossRef]
  30. Thurstone, L.L. A law of comparative judgment. Psychol. Rev. 1927, 34, 273. [Google Scholar] [CrossRef]
  31. Torgerson, W.S. Theory and methods of scaling; Wiley: Oxford, UK, 1958. [Google Scholar]
  32. Thurstone, L.L. Attitudes can be measured. Am. J. Sociol. 1928, 33, 529–554. [Google Scholar] [CrossRef]
  33. https://www.cis.rit.edu/people/faculty/montag/vandplite/pages/chap_3/ch3p1.html.
  34. Wilkins, L. The Statistical Foundation of Colour Space. bioRxiv 2019, 849984. https://www.researchgate.net/publication/337444621_The_Statistical_Foundation_of_Colour_Space/figures?lo=1.
  35. https://www.cis.rit.edu/people/faculty/montag/vandplite/pages/chap_6/ch6p10.html.
  36. George, G. Testing for the independence of three events. Math. Gaz. 2004, 88, 568. [Google Scholar] [CrossRef]
  37. Osorina, M.V.; Avanesyan, M.O. Lev Vekker and His Unified Theory of Mental Processes. Eur. Yearb. Hist. Psychol. 2021, 7, 265–281. [Google Scholar] [CrossRef]
  38. Rosenthal-Von der Pütten, A.M.; Krämer, N.C.; Maderwald, S.; Brand, M.; Grabenhorst, F. Neural mechanisms for accepting and rejecting artificial social partners in the uncanny valley. J. Neurosci. 2019, 39, 6555–6570. [Google Scholar] [CrossRef]
  39. Milivojevic, B.; Clapp, W.C.; Johnson, B.W.; Corballis, M.C. Turn that frown upside down: ERP effects of thatcherization of misorientated faces. Psychophysiology 2003, 40, 967–978. [Google Scholar] [CrossRef]
  40. Stock, R.M.; Merkle, M. A service Robot Acceptance Model: User acceptance of humanoid robots during service encounters. In 2017 IEEE international conference on pervasive computing and communications workshops (PerCom Workshops), 2017; pp. 339–344. https://ieeexplore.ieee.org/abstract/document/7917585/.
  41. Nikolov, V.; Dimitrova, M.; Chavdarov, I.; Krastev, A.; Wagatsuma, H. Design of educational scenarios with BigFoot walking robot: A cyber-physical system perspective to pedagogical rehabilitation. In International Work-Conference on the Interplay between Natural and Artificial Computation; Springer International Publishing: Cham, Switzerland, 2022; pp. 259–269, https://link.springer.com/chapter/10.1007/978-3-031-06242-1_26. [Google Scholar]
  42. Dimitrova, M.; Wagatsuma, H.; Tripathi, G.N.; Ai, G. Learner Attitudes towards Humanoid Robot Tutoring Systems: Measuring of Cognitive and Social Motivation Influences. In Cyber-Physical Systems for Social Applications; Dimitrova, M., Wagatsuma, H., Eds.; IGI Global: Hershey, PA, USA, 2019; pp. 1–24. [Google Scholar] [CrossRef]
  43. https://wiki.engineeredarts.co.uk/SociBot.
Figure 1. Possible relations of probabilistic ‘near independence’ between the categories of the human, robot and the non-human.
Figure 2. Previously used agents in studies on user acceptance of robots: (a) A walking robot BigFoot; (b) The humanoid robot NAO; (c) The android type of robot SociBot; (d) A volunteer actress.
Figure 3. Plots of the expected uncanny valley function in 4 different cases: (a) Plot of familiarity as a function of human likeness in the classical case; (b) Plot of likeability as a function of human likeness in the classical case; (c) Plot of familiarity as a function of human likeness in the proposed PRAM case; (d) Plot of likeability as a function of human likeness in the proposed PRAM case.
Table 1. Basic psychophysics laws and their psychological relevance to the issue of human-robot interaction from an acceptance perspective.
Name Definition Relevance to HRI
Weber’s law “If X is a stimulus magnitude and X + ∆X is the next greater magnitude that can just be distinguished from X, then Weber’s law states that ∆X bears a constant proportion to X” [25] (p. 177).
(Figure adapted from [33].)
The JND depends on the psychological effect of the background stimuli, meaning that a small incremental change of a stimulus can sometimes lead to a bigger response than the same change of a larger stimulus (i.e. noise). On an abstract decision-making level, small JNDs may invoke large responses depending on the context, i.e. larger for perceptually confusing than for unambiguous human or robot parts (faces, hands, etc.).
Fechner’s law
Definition: “… if ∆X bears a constant proportion to X, so also does it to X + ∆X, and ln(X + ∆X) − ln X = constant” [20] (p. 177). Therefore, the sensation is a logarithmic function of the stimulus intensity: S = ln X + constant.
[Illustration reproduced from [34] under the Creative Commons Attribution (CC BY) license.]
Relevance to HRI: Fechner’s law accounts best for the (almost) linear, midrange-intensity part of any “stimulus-response” dependency measurable in the laboratory. On an abstract decision-making level, it supports the assumption that any cognitive system, as a function of the underlying neurological brain processing, is a measuring device best adapted to a Euclidean topology of representation of the external environment [28].
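The logarithmic mapping can be sketched as follows; the scaling constant k is an arbitrary illustrative choice, and the key property shown is that equal stimulus ratios yield equal sensation increments:

```python
import math

def fechner_sensation(intensity: float, k: float = 1.0, c: float = 0.0) -> float:
    """Fechner's law: sensation S = k * ln(X) + c."""
    return k * math.log(intensity) + c

# Equal stimulus *ratios* produce equal sensation *increments*:
# going from 10 to 100 adds as much sensation as going from 100 to 1000.
step_10_to_100 = fechner_sensation(100.0) - fechner_sensation(10.0)
step_100_to_1000 = fechner_sensation(1000.0) - fechner_sensation(100.0)
```

This ratio-to-increment compression is what makes the midrange of the stimulus-response curve appear approximately linear on a logarithmic intensity axis.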
Stevens’ law
Definition: “… sensation was correctly reflected in magnitude estimation and was related to stimulus magnitude by a power law, S = aX^β … not a log law” [35] (p. 178).
[Illustration adapted from [35].]
Relevance to HRI: Stevens’ law assumes that the human cognitive system is capable of adequately mapping the ratios of the responses onto the ratios of the stimulus intensities, i.e., of higher-level assessment of mathematical dependencies existing in the environment. It therefore translates beyond the (physical/electro-chemical) properties of the sensor to the complex analyser abilities of the integrative function of the brain. It also states that, apart from the linear part of the power function, small increments of intensity can result in an exponentially high increment of the response when the exponent β is > 1 (e.g., pain), whereas for strong light the exponent is < 1 (as in the figure).
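A minimal sketch of the power-law mapping; the exponents used below (β ≈ 3.5 for electric shock/pain, β ≈ 0.33 for brightness) are illustrative values commonly reported in the psychophysics literature, not taken from this paper:

```python
def stevens_sensation(intensity: float, a: float = 1.0, beta: float = 1.0) -> float:
    """Stevens' power law: S = a * X**beta."""
    return a * intensity ** beta

# beta > 1 (e.g. pain): doubling the stimulus MORE than doubles the sensation.
pain_growth = stevens_sensation(2.0, beta=3.5) / stevens_sensation(1.0, beta=3.5)

# beta < 1 (e.g. brightness of a strong light): the response is compressed,
# so doubling the stimulus LESS than doubles the sensation.
light_growth = stevens_sensation(2.0, beta=0.33) / stevens_sensation(1.0, beta=0.33)
```

The sign of β − 1 thus decides whether the response expands or compresses relative to the stimulus, which is the asymmetry described in the text.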
Thurstone’s law
Definition: “the distribution of attitude of a group (of people*) on a specified issue may be represented in the form of a frequency distribution… limited to those aspects of attitudes for which one can compare individuals by the ‘more and less’ type of judgment… The scale is so constructed that two opinions separated by a unit distance on the base line seem to differ as much in the attitude variable involved as any other two opinions on the scale which are also separated by a unit distance” [32] (p. 529).
[Illustration adapted from [32].]
Relevance to HRI: Thurstone demonstrated that Weber’s and Fechner’s laws are independent of each other [27]. He proposed a method of indirect scaling that allows one to construct an interval scale of seemingly non-measurable qualities such as social attitudes. The method is therefore appropriate for formally representing, in quantitative terms, the distances between characteristics of radically different complex items like robots or people. Moreover, it reflects the ability of the brain to perform processing over complex multidimensional probabilistic representations, both physical and social.
Tulving’s law of recognition failure
Definition: Tulving’s law of recognition failure postulates a slight distortion of the independence assumption for two cognitive processes – recognition (Rn) and recall (Rc) – operating over the mental representation of one and the same stimulus [21]. The probability of jointly recognizing and recalling a stimulus is expected to slightly violate the independence assumption, i.e.,
P(Rn ∩ Rc) = P(Rn)P(Rc) + δP(Rc), or
P(Rn|Rc) = P(Rn ∩ Rc)/P(Rc) = P(Rn) + δ.
[A hypothetical plot of the above function.]
Relevance to HRI: It is well known that when the conditional probability P(Rn|Rc) of an event with respect to another event Rc equals its unconditional probability P(Rn), the two processes are independent [36]. In the theory of Tulving and Wiseman [21], this independence relation is slightly violated by a fraction δ, where
δ = c[1 − P(Rn)], with c a coefficient in the range (0, 1].
Tulving’s theory of ‘trace independence’ in memorizing a stimulus demonstrates that what is learnt depends, on the one hand, on the surrounding context of the learning situation and, on the other, on the multiple, almost independent memory traces created during learning [21]. This may apply to memorizing complex stimuli with sophisticated behavior, like robots, as well.
*Added by us.
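The Tulving–Wiseman-type deviation from independence described in the last row of Table 1 can be sketched numerically; the values of c and P(Rn) below are illustrative, not taken from the cited studies:

```python
def conditional_recognition(p_rn: float, c: float = 0.5) -> float:
    """Conditional recognition probability P(Rn|Rc) = P(Rn) + delta,
    with delta = c * (1 - P(Rn)) quantifying the deviation from independence."""
    delta = c * (1.0 - p_rn)
    return p_rn + delta

# Under strict independence P(Rn|Rc) would equal P(Rn); the positive delta
# models the slight violation: e.g. P(Rn) = 0.6 with c = 0.5 gives roughly 0.8.
observed = conditional_recognition(0.6, c=0.5)
```

Note that the deviation vanishes as P(Rn) approaches 1, consistent with δ = c[1 − P(Rn)]: when recognition is certain, recall adds no further information.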
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.