Preprint
Article

Effect of AI Gender on Trust and Grounding


A peer-reviewed article of this preprint also exists. This version is not peer-reviewed.

Submitted: 06 February 2024; Posted: 07 February 2024
Abstract
Artificial intelligence (AI) agents are widely used in the retail and distribution industry, often incorporating human-like elements such as names and genders. This paper examines the influence of AI agent gender and brand concept on trust and grounding within virtual brand spaces, and explores the relationship between users’ identification with AI agents and subsequent brand recommendations. The study revealed that in virtual brand spaces centered on a functional concept, male AI agents generated higher levels of trust than female AI agents, whereas in spaces centered on an experiential concept, female AI agents induced higher levels of grounding than male AI agents. Furthermore, this research indicates that the association between customers’ identification with AI agents and recommendations for actual brand purchases is mediated by trust and grounding. These findings support the idea that users who strongly identify with AI agents are more inclined to recommend brand products after engaging in conversations within the virtual brand space.
Keywords: 
Subject: Business, Economics and Management - Marketing

1. Introduction

Using artificial intelligence (AI) can significantly enhance various business functions, including automating business processes, extracting marketing insights from data, engaging with customers and employees, and developing strategic plans through segmentation and analytics [1]. Many retailers and service companies employ AI to discern their customers’ preferences and needs. Because of the enormous volume of data generated daily, the retail and distribution industry seeks to optimize productivity and efficiency through AI-driven data analysis [2]. As a result, diverse AI-based platforms have been developed and deployed.
AI agents, including chatbots, are widely used technologies in the retail and distribution industry, particularly in service and e-commerce [3]. The proliferation of AI agents has been propelled by advances in computer technology. This research defines them as digital entities with anthropomorphic appearances, controlled by humans or software, and capable of interactions. Virtual brand space represents one of the fastest-growing domains of AI commercialization. These agents facilitate targeted advertising by linking with consumer databases and promptly responding to user queries about products and services. This automated social presence may supplement or replace human service staff, especially when addressing routine inquiries.
Anthropomorphism has often been employed by AI agent designers to enhance animacy. These agents, whether endowed with human-like behaviors or embodied in visual or physical interfaces [4], impact judgments of likeability and animacy. Studies affirm the effects of anthropomorphism on AI agents, with human-like appearances fostering a stronger sense of social presence and enabling richer human–computer interactions (HCIs) than purely functional forms [5]. Consequently, the significance of human–AI interactions is increasingly emphasized in today’s marketplace, driving the development of social AI that adheres to behavioral norms in human interactions [6].
To enhance realism, AI agents are often designed with additional human elements; previous research on HCIs revealed that individuals are influenced by gender stereotypes when interacting with computers. Manipulating a robot’s gender by altering its appearance led to perceptions where male robots were deemed more agentic, suitable for tasks requiring mathematical ability, while female robots were perceived as more communal, fitting tasks requiring verbal ability [7].
Brand strategies must explore emerging technology trends such as virtual stores to create personalized and engaging customer experiences. This study provides an overview of theoretical and practical knowledge concerning AI agents in marketing by investigating the impact of AI agent identification on brand recommendations within the virtual brand space.
This paper scrutinizes the influence of AI agent gender and brand concepts on the trust and grounding of AI agents in virtual brand spaces. From the scrutiny, this study offers valuable insights into the efficacy of AI agents in shaping virtual experiences.

2. Literature Review and Hypotheses

2.1. AI Agents and the Gender of Agents

AI encompasses “programs, algorithms, systems, and machines” [8] that emulate elements of human intelligence and behavior [9,10]. Using various technologies like machine learning, natural language processing, deep learning, big data analysis, and physical robots [1,11,12], AI has seen substantial development and become integral in consumers’ daily lives.
AI agents, defined as “computer-generated graphically displayed entities that represent either imaginary characters or real humans controlled by AI” [13], exist in diverse forms, including animated pictures, interactive 3D avatars in virtual environments [14], and human-like animated customer service agents resembling real sales representatives [15]. These agents simulate human-like interactions by comprehending user queries and executing specific tasks akin to real-person interactions.
Advancements in AI have enabled AI agents to learn and improve from each customer interaction, thereby enhancing their intelligence. They offer consumers an easily accessible means of interaction, assisting customers in online transactions by providing additional information, personalized advice, recommendations [15,16,17], and technical support. Strategically used, these agents enable companies to engage with customers on a personal level and provide continuous support, striving for a seamless, time-efficient, and cost-effective online experience [18,19].
Research has delved into how variations in AI agent morphology influence user evaluations and interactions with technology [20]. AI agents, functioning as social bots, trigger social responses from users despite their awareness of the non-human nature of computers [21]. Visual cues that mirror human characteristics tend to prompt users to treat chat agents as human and engage with them socially [22].
Among the design elements to enhance realism, agents can incorporate “human” characteristics and features. Gender, which has been extensively studied among these characteristics, significantly impacts the effectiveness of agents [23]. Earlier studies highlighted how gender stereotypes influence interactions with computers; participants demonstrated gender-stereotypic responses toward computers with different gender-associated voices, perceiving masculine voices as more valid and associating dominant traits more with men [24].
Gender stereotypes, which affect judgments of competence and warmth, often lead to men being perceived as more competent and women as warmer [25,26]. These biases influence evaluations across various scenarios [27,28,29].

2.2. Brand Concept

Within brand management, functional brands emphasize their functional performance. Previous research has defined functional value as a product’s ability to fulfill its intended functions in a consumer’s everyday life [30]. Functional needs, motivating consumers to seek products that address consumption-related problems [31,32], are met by products demonstrating functional performance. Therefore, a functional brand is designed to meet externally generated consumption needs [33]. Brands, as suggested by Park et al. [34], can be managed to alleviate uncertainty in consumer lives, offering control and efficacy, thus closely associating functional brands with product performance. Visual representations within brands remind or communicate functional benefits to customers [31].
Functional brands aim to effectively convey and reinforce their commitment to aiding customers, thereby strengthening brand–customer relationships [30,35]. Customer satisfaction with functional brands is pivotal in determining customer commitment, aligning with the core concept of brand management. According to the information-processing paradigm, consumer behavior leans toward objective and logical problem-solving [36,37]. Hence, customer confidence in a preferred functional brand is likely higher when the utilitarian value of the product category is substantial. Furthermore, Chaudhuri and Holbrook [37] identified a significant negative correlation between emotional response and a brand’s functional value.
Experiential brand strategies differ from functional ones. Holbrook and Hirschman [38] defined experiential needs as desires for products that provide sensory pleasure; brands emphasizing experiential concepts highlight the brand’s impact on sensory satisfaction, spotlighting the experiential and fantasy aspects of consumption through various elements of the marketing mix. While research often investigates experiential needs in a visual context during purchasing decisions, traditional marketing research recognizes that the other human senses also constitute aesthetic experiences [31]. A complete appreciation of an aesthetic experience results from combining sensory inputs.

2.3. AI Agent Trust and Grounding

Most researchers have primarily focused on interactive AI agents, noting their ability to enhance customer satisfaction with a website or product, credibility, and patronage intentions [39,40]. Regarding HCIs, social-response theory posits that individuals respond to technology endowed with human-like features [41]. Studies suggest that increased anthropomorphism in an agent positively correlates with perceived credibility and competence [42]. However, even if an AI agent is realistic, the lack of anthropomorphism might hinder users’ willingness to engage or communicate due to the absence of perceived social potential [43]. Users tend to apply social rules to technology that exhibits human-like traits despite their conscious acknowledgment that they are interacting with a machine [40]. The degree of social presence embodied in avatars on company websites significantly impacts trust in website information and its emotional appeal, influencing purchase intentions.
Trust in HCI research aligns with discussions of interpersonal communication, exploring whether conversational agents designed with properties known to enhance trust in human relationships are more trusted by users. Rooted in the Computers As Social Actors paradigm, this approach indicates that the social norms guiding human interaction apply to HCIs, as users unconsciously treat computers as independent social entities [41]. Trust in conversational agents is similar to trust in humans, with the belief that agents possessing trustworthy traits foster user trust. Various studies have defined trust similarly, emphasizing positive expectations of reliability, dependability, and confidence [44,45]. Trust in technology centers on expectations of reliable performance, predictability, and dependability. However, debates persist on whether the factors fostering trust in human relationships apply similarly to trust in human–agent interactions.
Nonetheless, divergent views emerge, suggesting that users approach machine interactions distinctively and that the principles governing trust in interpersonal relations may not directly apply to human–machine trust [46,47]. For instance, Clark et al. [46] found that users approach conversations with computers in a more utilitarian manner, distinguishing between social and transactional roles because they perceive agents as tools. Consequently, users view conversations with agents primarily as goal-oriented transactions. They prioritize aspects such as performance and security in their trust judgments concerning machines, questioning the necessity of establishing social interactions or relationships with machines.
Continual interaction with AI agents is significant, generating data that can enhance system efficiency [48]. This symbiotic relationship fosters shared cognitive understanding, mutually benefiting users and the system. Establishing mutual understanding, termed “grounding,” is pivotal in these interactions. While human communication achieves this naturally, achieving collaborative grounding with AI systems presents challenges [49]. Grounding denotes mutual understanding in conversation, involving explicit verbal and nonverbal acknowledgments that signify comprehension of prior conversation elements [50]. True linguistic grounding remains a challenge for machines, despite their ability to exhibit non-linguistic grounding signals.
AI agents can be viewed as “assistants” or “companions.” In the “assistant” perspective, AI technology is a useful aid to humans, assisting in task completion, such as tracking Uber ride arrivals or aiding disabled individuals [39]. These conversations lean toward task-oriented, formal exchanges focused on specific functional goals. In contrast, the “companion” perspective focuses on emotional support, where agents are seen as trustworthy companions engaging users in typical everyday conversations, akin to human interactions [51]. Sophisticated natural language processing capabilities allow AI avatars to mimic human-like behaviors, enabling users to interact as they would with another human [52].
This study proposes that the effects of the two gender-stereotyped traits discussed above, competence and warmth, on evaluations of AI agent trust and grounding will vary with the brand concept (functional vs. experiential). Functional brands emphasize practical purposes, whereas experiential brands focus on sensory experiences [38]. For functional concepts, the AI agent’s competence (representing the brand’s competence) becomes crucial for purchase decisions [53]. In contrast, experiential brands are evaluated based on consumers’ sensory experiences and affective responses, where the warmth exhibited by the AI agent is more significant in influencing emotional decisions and brand sales [54]. Consequently, the following hypothesis is proposed:
H1: For functional brands, a male AI agent will have higher trust and grounding, whereas for experiential brands, a female AI agent will have higher trust and grounding.
Integrating AI agents into customer service signifies a transformative evolution, offering precise recommendations and enhancing company–consumer relationships [55]. These AI-powered systems facilitate interaction and blur the distinction between human service assistants and conversational chatbots. Perceptions of AI-empowered agents are shaped by their anthropomorphic design cues during interaction and significantly influenced by their pre-encounter introduction and framing [16].
Van Looy et al. [56] defined AI agent identification as users’ emotional attachment to the AI agent, categorized into three subcategories: perceived similarity, embodied presence, and wishful identification. Studies indicate that male and younger users are more likely to identify with their avatars, especially if the avatars are idealized [57,58,59]. Several studies have explored the relationship between identification with virtual characters and various outcomes. For instance, Kim et al. [60] demonstrated that identification with virtual characters can enhance player self-efficacy and trust within their virtual communities. Additionally, Yee et al. [61] found that online virtual world visitors who perceive a smaller psychological gap between themselves and virtual avatars express greater satisfaction with avatars and spend more time online. Moreover, identification positively influences trust in virtual avatars [62,63].
Antecedents of trust can vary depending on individual differences (e.g., age or gender) and contextual factors. For instance, research shows that engaging in small talk with an embodied conversational agent can be effective in settings such as customer service, healthcare, or casual information exchange, whereas it might not be as effective in more serious settings like financial transactions or military training [64]. Certain demographic factors or personal traits can moderate the impact of these features [44].
Grounding establishes a connection between conversational turns, confirming active listening and fostering closeness and mutual understanding [65]. Beyond mere confirmation, grounding has a role in relationship development, acknowledging contributions and establishing shared knowledge [66,67]. Understanding, a cornerstone in human relationships, resolves conflicts and nurtures stronger emotional connections through shared thoughts and feelings [65]. These attributions significantly influence customers’ perceptions of salespersons, thereby impacting the formation of trust and inclinations for future interactions. Recognizing and establishing shared knowledge during conversations is fundamental in relationship development [67]. Consequently, the following hypothesis is proposed:
H2: The effect of AI agent identifications on brand recommendation is mediated by AI agent trust and grounding.

3. Materials and Methods

3.1. Data Collection and Sample

Employing a between-subjects design, the research used two independent variables: brand concept (functional vs. experiential) and AI agent gender (male vs. female). The measured dependent variables included AI agent trust and grounding, while brand attitude served as a control variable. Data were collected from individuals who had used the XR platform within the last month.
The final sample consisted of 187 respondents, with 49.7% being men and 50.3% being women, falling within the age range of 25‒40 years. An explanation of the survey’s purpose was provided to the participants, ensuring that they were aware of their right to withdraw at any point. Additionally, participants were assured of data confidentiality in accordance with the Korean Statistical Law, with a commitment to destroy all personal data after a year. Each participant was randomly assigned to one of the four experimental conditions.

3.2. Stimulus Development and Measures

Avatar communication can take various verbal and nonverbal forms, such as text (speech-to-text avatars), gestures, and facial expressions. Perceptions of a conversational agent may be shaped both by anthropomorphic design cues during conversation and by the agent’s pre-introduction. To develop the AI agents for the experiment, two university students specializing in statistics and business administration, respectively, were recruited; to maintain objectivity and ensure the validity of the data collected, a field expert also provided feedback. To enhance the social presence of the AI agents, each was given a name: the male AI agent was named Ethan and the female AI agent was named Anna. Specific store guidance phrases were then formulated for each AI agent within the virtual brand store. The phrase for Anna read: “Welcome to the (brand) Store. I’m Anna, an AI agent here to assist you. Feel free to ask me anything about (brand), and I'll respond promptly.”
The researcher conducted two pre-tests to select brand concepts suitable for the virtual place. In the first pre-test, Grandeur (sold as the Azera in the US) was chosen as the functional brand (n = 15; M functional benefit = 4.98 vs. M experiential benefit = 4.06; p < .001). In the second pre-test, BMW MINI was selected as the experiential brand (n = 23; M functional benefit = 4.11 vs. M experiential benefit = 4.81; p < .001). Next, the AI agents were created using Zepeto, a virtual space platform. Ethan, the male AI agent, was perceived as male (n = 14; M male = 4.85 vs. M female = 2.09; p < .001); Anna, the female AI agent, was perceived as female (n = 18; M male = 3.11 vs. M female = 5.71; p < .001). The items of each measure were assessed on a 7-point scale ranging from 1 = strongly disagree to 7 = strongly agree (Table 1).
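The pre-tests above compare two mean ratings per brand. The paper reports only means and p-values, so the sketch below is an illustrative assumption of how such a check could be run: simulated 7-point ratings and a paired t-test.

```python
# Hypothetical re-creation of the first pre-test (Grandeur, n = 15): each
# participant rates functional and experiential benefits on 7-point scales,
# and the two means are compared. The paired t-test and the simulated data
# are assumptions; the paper does not state which test was used.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 15

# Simulated 7-point ratings centered near the reported means (4.98 vs. 4.06)
functional = np.clip(rng.normal(4.98, 0.8, n), 1, 7)
experiential = np.clip(rng.normal(4.06, 0.8, n), 1, 7)

t_stat, p_val = stats.ttest_rel(functional, experiential)
print(f"M_functional = {functional.mean():.2f}, "
      f"M_experiential = {experiential.mean():.2f}")
print(f"t({n - 1}) = {t_stat:.3f}, p = {p_val:.4f}")
```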

3.3. Procedure

Participants were introduced to the AI agents within the virtual brand space and were tasked with evaluating the AI agents. Participants in the male AI agent group were instructed to create an AI agent that portrayed a male figure, whereas those in the female AI agent group were instructed to develop one resembling a female figure. Before the experiment, participants received information regarding the study’s purpose. Each participant completed a questionnaire that included control variables for brand attitude.
After finishing the control variable task, the participants engaged in a 5-minute session within the virtual brand space. Upon completion, they were prompted to respond to manipulation check items regarding AI agent gender and brand concept. Additionally, participants were asked to express their opinions on AI agent trust, grounding, avatar identification, and recommendation. The entire procedure lasted approximately 15 minutes.

4. Results

4.1. Manipulation Checks

Before analyzing the experimental results, a manipulation check was performed to verify that the experimental stimuli were manipulated properly. The results confirmed successful manipulation. Participants in the Grandeur group perceived functional benefits (M = 4.98) as higher than experiential benefits (M = 4.08; F = 17.238, p < .001). Moreover, participants in the MINI group perceived experiential benefits (M = 4.81) as higher than functional benefits (M = 4.06; F = 47.859, p < .001).
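As an illustration, a manipulation check of this kind can be reproduced with a one-way F-test per brand group. The grouping of ratings and the group sizes below are assumptions (the paper reports only means, F, and p).

```python
# Illustrative manipulation check: within each brand group, compare perceived
# functional vs. experiential benefits with a one-way F-test (an assumption;
# the paper does not specify the exact model). Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def manipulation_check(label, m_functional, m_experiential, n):
    functional = np.clip(rng.normal(m_functional, 1.0, n), 1, 7)
    experiential = np.clip(rng.normal(m_experiential, 1.0, n), 1, 7)
    f_stat, p_val = stats.f_oneway(functional, experiential)
    print(f"{label}: F = {f_stat:.3f}, p = {p_val:.4f}")

# Group sizes are assumed (187 respondents split across the conditions).
manipulation_check("Grandeur (functional concept)", 4.98, 4.08, 94)
manipulation_check("MINI (experiential concept)", 4.06, 4.81, 93)
```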

4.2. Analysis of Trust and Grounding

Analysis of variance (ANOVA) was used for empirical verification, with brand attitude entered as a covariate (Table 2). The results of the 2 × 2 between-subjects analysis on AI agent trust revealed that the main effects of brand concept (F = 1.745, p > .1) and AI agent gender (F = 1.859, p > .1) were not statistically significant. However, the two-way interaction between brand concept and AI agent gender was significant (F = 7.623, p < .01). Specifically, when the virtual brand space had a functional concept, participants reported higher AI agent trust with a male AI agent (M = 4.16) than with a female AI agent (M = 3.66; F = 5.685, p < .05). When the virtual brand space had an experiential concept, however, participants’ trust did not differ significantly by AI agent gender (male AI agent = 3.78 vs. female AI agent = 4.08; F = 3.236, p = .06).
Next, the 2 × 2 between-subjects analysis on grounding revealed that the main effects of brand concept (F = .728, p > .1) and AI agent gender (F = 1.039, p > .1) were not statistically significant, whereas the two-way interaction was significant (F = 4.426, p < .05). Specifically, when the virtual brand space had a functional concept, participants’ grounding did not differ by AI agent gender (male AI agent = 4.31 vs. female AI agent = 4.13; F = .641, p > .1). However, when the virtual brand space had an experiential concept, participants reported a higher level of grounding with a female AI agent (M = 4.34) than with a male AI agent (M = 3.81; F = 4.476, p < .05). These findings support Hypothesis 1. An illustrative code sketch of this analysis follows Figure 1.
Table 2. Results of the 2 × 2 Between-subjects Analysis.

Source                   DV          Type III SS   df    Mean Square   F        Sig.
Corrected Model          Trust       29.708 a      4     7.427         7.588    .000
                         Grounding   19.125 b      4     4.781         3.827    .005
Intercept                Trust       65.061        1     65.061        66.474   .000
                         Grounding   101.102       1     101.102       80.932   .000
Attitude                 Trust       21.548        1     21.548        22.015   .000
                         Grounding   11.162        1     11.162        8.935    .003
Brand Concept (A)        Trust       1.708         1     1.708         1.745    .188
                         Grounding   .007          1     .007          .005     .941
Gender of AI Agent (B)   Trust       1.819         1     1.819         1.859    .174
                         Grounding   .435          1     .435          .348     .556
A × B                    Trust       7.461         1     7.461         7.623    .006
                         Grounding   5.740         1     5.740         4.595    .033
Error                    Trust       178.133       182   .979
                         Grounding   227.359       182   1.249
Total                    Trust       3088.889      187
                         Grounding   3469.444      187
Corrected Total          Trust       207.841       186
                         Grounding   246.485       186
Note: R² = .419 (adjusted R² = .385).
Figure 1. Interaction effects.
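To make the model behind Table 2 concrete, here is a minimal sketch of the 2 × 2 between-subjects analysis with brand attitude as a covariate, estimated by OLS with sum-to-zero contrasts and summarized with Type III sums of squares. The simulated data and effect sizes are assumptions for illustration only; only the model structure mirrors the paper.

```python
# Sketch of the analysis in Table 2: trust regressed on brand attitude
# (covariate), brand concept, AI agent gender, and their interaction.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 187
df = pd.DataFrame({
    "concept": rng.choice(["functional", "experiential"], n),
    "agent_gender": rng.choice(["male", "female"], n),
    "attitude": rng.normal(4.5, 1.0, n),
})
# Simulated crossover pattern: male agents trusted more under the functional
# concept, female agents somewhat more under the experiential concept.
df["trust"] = (
    2.5
    + 0.3 * df["attitude"]
    + 0.5 * ((df["concept"] == "functional") & (df["agent_gender"] == "male"))
    + 0.3 * ((df["concept"] == "experiential") & (df["agent_gender"] == "female"))
    + rng.normal(0, 1.0, n)
)

# Sum-to-zero contrasts so the Type III tests match a classical ANOVA table.
model = smf.ols(
    "trust ~ attitude + C(concept, Sum) * C(agent_gender, Sum)", data=df
).fit()
print(sm.stats.anova_lm(model, typ=3))
```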

4.3. Mediation Effect

A mediation analysis was performed to test Hypothesis 2, which addresses the mechanism proposed in this study, using Model 6 of the PROCESS macro (Table 3). AI agent identification was the independent variable, AI agent trust and grounding were serial mediators, and brand recommendation was the dependent variable. The total indirect effect was significant (effect = .1723, 95% bootstrap CI [.0842, .2625]), driven by the serial path through trust and then grounding (Ind3: effect = .1096, 95% CI [.0495, .1814]); the simple indirect paths through trust alone (Ind1) and grounding alone (Ind2) were not significant. This supports Hypothesis 2. A code sketch of this analysis follows Figure 2.
Table 3. Results of Model 6 of the PROCESS Macro.

Outcome variable: Trust
                 Coeff    Std. coeff   SE       t         p        LLCI      ULCI
Identification   .3002    .3397        .0611    4.9119    .000     .1796     .4208

Outcome variable: Grounding
                 Coeff    Std. coeff   SE       t         p        LLCI      ULCI
Identification   .0494    .0513        .0576    .8577     .3922    −.0642    .1629
Trust            .6830    .6272        .0651    10.4872   .000     .5545     .8115

Outcome variable: Recommendation
                 Coeff    Std. coeff   SE       t         p        LLCI      ULCI
Identification   .1058    .1046        .0624    1.6946    .0918    −.0174    .2290
Trust            .1208    .1055        .0891    1.3550    .1771    −.0551    .2966
Grounding        .5347    .5088        .0798    6.700     .0000    .3773     .6922

Outcome variable: Recommendation (total effect model)
                 Coeff    Std. coeff   SE       t         p        LLCI      ULCI
Identification   .2781    .2749        .0715    3.888     .0001    .1370     .4192

Indirect effects of X (identification) on Y (recommendation)
                                                   Effect   BootSE   BootLLCI   BootULCI
Total                                              .1723    .0452    .0842      .2625
Ind1 (identification → trust → rec.)               .0363    .0395    −.0376     .1196
Ind2 (identification → grounding → rec.)           .0264    .0313    −.0290     .0975
Ind3 (identification → trust → grounding → rec.)   .1096    .0339    .0495      .1814
Note: LLCI = lower-level confidence interval; ULCI = upper-level confidence interval.
Figure 2. Double mediation effects.
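For readers without SPSS, the serial (double) mediation in Table 3 can be approximated directly: three OLS regressions give the path coefficients, and percentile bootstrap resampling yields confidence intervals for the three indirect effects. Everything below is a simplified sketch on simulated data, not the PROCESS macro itself; the simulated coefficients are loosely patterned on those reported in Table 3.

```python
# Sketch of PROCESS Model 6: identification -> trust -> grounding ->
# recommendation, with percentile bootstrap CIs for Ind1 (via trust),
# Ind2 (via grounding), and Ind3 (via trust, then grounding).
import numpy as np

rng = np.random.default_rng(1)
n = 187
ident = rng.normal(4, 1, n)
trust = 0.30 * ident + rng.normal(0, 1, n)
ground = 0.05 * ident + 0.68 * trust + rng.normal(0, 1, n)
recommend = 0.11 * ident + 0.12 * trust + 0.53 * ground + rng.normal(0, 1, n)

def coefs(y, *xs):
    """OLS slopes (intercept estimated, then dropped)."""
    X = np.column_stack([np.ones_like(y), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

def indirect_effects(idx):
    (a1,) = coefs(trust[idx], ident[idx])                      # ident -> trust
    a2, d21 = coefs(ground[idx], ident[idx], trust[idx])       # paths to grounding
    _, b1, b2 = coefs(recommend[idx], ident[idx], trust[idx], ground[idx])
    return np.array([a1 * b1, a2 * b2, a1 * d21 * b2])         # Ind1, Ind2, Ind3

boot = np.array([indirect_effects(rng.integers(0, n, n)) for _ in range(5000)])
lower, upper = np.percentile(boot, [2.5, 97.5], axis=0)
for name, lo_ci, hi_ci in zip(["Ind1", "Ind2", "Ind3"], lower, upper):
    print(f"{name}: 95% bootstrap CI [{lo_ci:.4f}, {hi_ci:.4f}]")
```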

5. Conclusions

The popularity of AI agents is fueled by two macroenvironmental factors. First, advancements in computer/digital technologies have enabled the development of more complex avatars. These often appear in 3D forms, imbued with seemingly distinctive personalities, appearances, and behavioral patterns, and are overall more appealing than the previous simpler versions [72,73]. Second, the increase in the use of AI agents reflects the growing importance of online service experiences, such as education, gaming, banking, and shopping [73,74]. Online customers frequently express frustration when they cannot quickly and easily find relevant information on a website. AI agents can effectively and efficiently provide a solution to this.
Online stores that rely solely on graphical user interfaces give retailers little means to persuade potential customers to buy products, and give customers no opportunity to ask questions and learn more about products as they would with a human salesperson. Communication is crucial in attracting, serving, and retaining customers. One of the most beneficial ways to engage customers anywhere, anytime, and to provide easy and natural interaction is a conversational user interface.
This study examined the impact of AI agent gender on trust and grounding in the virtual brand space, and whether this impact depends on the brand concept. The study reveals the following key findings: in virtual brand spaces with a functional concept, male AI agents elicited higher levels of trust, whereas in virtual brand spaces with an experiential concept, female AI agents elicited higher levels of grounding. Additionally, this research indicates that the relationship between customers’ identification with AI agents and recommendations for actual brand purchases is mediated by trust and grounding. These findings support the notion that users strongly identifying with AI agents are more likely to recommend brand products after engaging in conversation within the virtual brand space.
In contrast to the physical world, the virtual brand space enables a brand experience that transcends time and space. Furthermore, this study establishes trust and grounding as mediators between AI agent identification and brand recommendation. These insights contribute to understanding how the gender of an AI agent’s representation can shape the customer–AI agent relationship in virtual spaces, outlining implications for marketers and designers.
The study results will provide practical insights for marketers and shed light on the evolving nature of consumer–AI agent interactions in virtual environments. Subsequent research can build upon these findings to explore additional factors influencing conversation in the virtual brand space and develop more targeted marketing strategies.
AI agents are attracting attention among virtual space platform users because of their potential to become the next-generation search service, replacing human assistants [75]. The next-generation AI agent, represented by a virtual human, leads the innovative development of intelligent technology to a new historical stage. This technology will profoundly impact how society produces, lives, and communicates and will fundamentally reshape society and humanity [76].
Marketers must understand how to best integrate AI into businesses and contribute to individuals’ higher acceptance to maintain a competitive advantage. Given the advent of AI-generated marketing, it is critical to understand whether consumers will accept AI-generated content and information, and how they accept it.

Author Contributions

The author confirms sole responsibility for the following: study conception and design, data collection, analysis and interpretation of results, and manuscript preparation.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Davenport, T.; Guha, A.; Grewal, D.; Bressgott, T. How artificial intelligence will change the future of marketing. J Acad Mark Sci 2020, 48, 24–42. [Google Scholar] [CrossRef]
  2. Gesing, B.; Peterson, S.J.; Michelsen, D. Artificial intelligence in logistics. DHL Customer Solut Innov 2018, 3. [Google Scholar]
  3. Brandtzaeg, P.B.; Følstad, A. Chatbots: changing user needs and motivations. Interactions 2018, 25, 38–43. [Google Scholar] [CrossRef]
  4. Cassell, J.; Sullivan, J.; Prevost, S.; Churchill, E. Embodied Conversational Agents; MIT Press: Cambridge, MA, 2000. [Google Scholar]
  5. Kwak, S.S. The impact of the robot appearance types on social interaction with a robot and service evaluation of a robot. Arch Res 2014, 27, 81–93. [Google Scholar] [CrossRef]
  6. Bartneck, C.; Forlizzi, J. A design-centred framework for social human-robot interaction. In RO-MAN. 13th IEEE International Workshop on Robot and Human Interactive Communication (IEEE Catalog No. 04TH8759), 2004; Volume 2004; pp. 591–594. [CrossRef]
  7. Eyssel, F.; Hegel, F. (S)he’s got the look: gender stereotyping of robots. J Appl Soc Psychol 2012, 42, 2213–2230. [Google Scholar] [CrossRef]
  8. Shankar, V. How artificial intelligence (AI) is reshaping retailing. J Retailing 2018, 94, vi–xi. [Google Scholar] [CrossRef]
  9. Huang, M.H.; Rust, R.T. Artificial intelligence in service. J Serv Res 2018, 21, 155–172. [Google Scholar] [CrossRef]
  10. Syam, N.; Sharma, A. Waiting for a sales renaissance in the fourth Industrial Revolution: machine learning and artificial intelligence in sales research and practice. Ind Mark Manag 2018, 69, 135–146. [Google Scholar] [CrossRef]
  11. Mariani, M. Big data and analytics in tourism and hospitality: a perspective article. Tourism Rev 2020, 75, 299–303. [Google Scholar] [CrossRef]
  12. Mariani, M.M.; Baggio, R.; Fuchs, M.; Höepken, W. Business intelligence and big data in hospitality and tourism: a systematic literature review. Int J Contemp Hosp Manag 2018, 30, 3514–3554. [Google Scholar] [CrossRef]
  13. Choi, Y.K.; Miracle, G.E.; Biocca, F. The effects of anthropomorphic agents on advertising effectiveness and the mediating role of presence. J Interact Advertising 2001, 2, 19–32. [Google Scholar] [CrossRef]
  14. Jin, S.A.A.; Lee, K.M. The influence of regulatory fit and interactivity on brand satisfaction and trust in e-health marketing inside 3D virtual worlds (second life). Cyberpsychol Behav Soc Netw 2010, 13, 673–680. [Google Scholar] [CrossRef] [PubMed]
  15. Verhagen, T.; Van Nes, J.; Feldberg, F.; Van Dolen, W. Virtual customer service agents: using social presence and personalization to shape online service encounters. J Comput Mediated Commun 2014, 19, 529–545. [Google Scholar] [CrossRef]
  16. Araujo, T. Living up to the chatbot hype: the influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Comput Hum Behav 2018, 85, 183–189. [Google Scholar] [CrossRef]
  17. De Cicco, R.; Silva, S.C.; Alparone, F.R. Millennials’ attitude toward chatbots: an experimental study in a social relationship perspective. Int J Retail Distrib Manag 2020, 48, 1213–1233. [Google Scholar] [CrossRef]
  18. Adam, M.; Wessel, M.; Benlian, A. AI-based chatbots in customer service and their effects on user compliance. Electron Markets 2021, 31, 427–445. [Google Scholar] [CrossRef]
  19. Prentice, C.; Weaven, S.; Wong, I.A. Linking AI quality performance and customer engagement: the moderating effect of AI preference. Int J Hosp Manag 2020, 90, 102629. [Google Scholar] [CrossRef]
  20. Fong, T.; Nourbakhsh, I.; Dautenhahn, K. A survey of socially interactive robots. Robot Auton Syst 2003, 42, 143–166. [Google Scholar] [CrossRef]
  21. Zhao, S. Toward a taxonomy of copresence. Presence Teleoperators Virtual Environ 2003, 12, 445–455. [Google Scholar] [CrossRef]
  22. Go, E.; Sundar, S.S. Humanizing chatbots: the effects of visual, identity and conversational cues on humanness perceptions. Comput Hum Behav 2019, 97, 304–316. [Google Scholar] [CrossRef]
  23. Nass, C.; Yen, C. The Man Who Lied to His Laptop: What Machines Teach Us About Human Relationships; Current: New York, 2010. [Google Scholar]
  24. Nass, C.; Moon, Y.; Green, N. Are machines gender neutral? Gender-stereotypic responses to computers with voices. J Appl Soc Psychol 1997, 27, 864–876. [Google Scholar] [CrossRef]
  25. Broverman, I.K.; Vogel, S.R.; Broverman, D.M.; Clarkson, F.E.; Rosenkrantz, P.S. Sex-role stereotypes: a current appraisal. J Soc Issues 1972, 28, 59–78. [Google Scholar] [CrossRef]
  26. Fiske, S.T.; Cuddy, A.J.; Glick, P. Universal dimensions of social cognition: warmth and competence. Trends Cogn Sci 2007, 11, 77–83. [Google Scholar] [CrossRef] [PubMed]
  27. Ashton-James, C.E.; Tybur, J.M.; Grießer, V.; Costa, D. Stereotypes about surgeon warmth and competence: the role of surgeon gender. PLOS ONE 2019, 14, e0211890. [Google Scholar] [CrossRef] [PubMed]
  28. Heilman, M.E. Gender stereotypes and workplace bias. Res Organ Behav 2012, 32, 113–135. [Google Scholar] [CrossRef]
  29. Huddy, L.; Terkildsen, N. Gender stereotypes and the perception of male and female candidates. Am J Pol Sci 1993, 37, 119–147. [Google Scholar] [CrossRef]
  30. Park, C.W.; Eisingerich, A.B.; Pol, G.; Park, J.W. The role of brand logos in firm performance. J Bus Res 2013, 66, 180–187. [Google Scholar] [CrossRef]
  31. Jeon, J.E. The impact of brand concept on brand equity. Asia Pac J Innov Entrep 2017, 11, 233–245. [Google Scholar] [CrossRef]
  32. Park, C.W.; Jaworski, B.J.; MacInnis, D.J. Strategic brand concept-image management. J Mark 1986, 50, 135–145. [Google Scholar] [CrossRef]
  33. Brakus, J.J.; Schmitt, B.H.; Zarantonello, L. Brand experience: what is it? How is it measured? Does it affect loyalty? J Mark 2009, 73, 52–68. [Google Scholar] [CrossRef]
  34. Park, C.W.; MacInnis, D.J.; Priester, J.; Eisingerich, A.B.; Iacobucci, D. Brand attachment and brand attitude strength: conceptual and empirical differentiation of two critical brand equity drivers. J Mark 2010, 74, 1–17. [Google Scholar] [CrossRef]
  35. Morgan, R.M.; Hunt, S.D. The commitment-trust theory of relationship marketing. J Mark 1994, 58, 20–38. [Google Scholar] [CrossRef]
  36. Moorman, C.; Zaltman, G.; Deshpande, R. Relationships between providers and users of market research: the dynamics of trust within and between organizations. J Mark Res 1992, 29, 314–328. [Google Scholar] [CrossRef]
  37. Chaudhuri, A.; Holbrook, M.B. The chain of effects from brand trust and brand affect to brand performance: the role of brand loyalty. J Mark 2001, 65, 81–93. [Google Scholar] [CrossRef]
  38. Holbrook, M.B.; Hirschman, E.C. The experiential aspects of consumption: consumer fantasies, feelings, and fun. J Con Res 1982, 9, 132. [Google Scholar] [CrossRef]
  39. Chattaraman, V.; Kwon, W.S.; Gilbert, J.E.; Ross, K. Should AI-based, conversational digital assistants employ social-or task-oriented interaction style? A task-competency and reciprocity perspective for older adults. Comput Hum Behav 2019, 90, 315–330. [Google Scholar] [CrossRef]
  40. Holzwarth, M.; Janiszewski, C.; Neumann, M.M. The influence of avatars on online consumer shopping behavior. J Mark 2006, 70, 19–36. [Google Scholar] [CrossRef]
  41. Nass, C.; Moon, Y. Machines and mindlessness: social responses to computers. J Soc Issues 2000, 56, 81–103. [Google Scholar] [CrossRef]
  42. Westerman, D.; Tamborini, R.; Bowman, N.D. The effects of static avatars on impression formation across different contexts on social networking sites. Comput Hum Behav 2015, 53, 111–117. [Google Scholar] [CrossRef]
  43. Nowak, K.L.; Fox, J. Avatars and ComputerMediated communication: a review of the definitions, uses, and effects of digital representations. Rev Commun Research 2018, 6, 30–53. [Google Scholar] [CrossRef]
  44. Bickmore, T.W.; Picard, R.W. Establishing and maintaining long-term human-computer relationships. ACM Trans Comput Hum Interact 2005, 12, 293–327. [Google Scholar] [CrossRef]
  45. Tseng, S.; Fogg, B.J. Credibility and computing technology. Commun ACM 1999, 42, 39–44. [Google Scholar] [CrossRef]
  46. Clark, L.; Pantidi, N.; Cooney, O.; Doyle, P.; Garaialde, D.; Edwards, J.; Spillane, B.; Gilmartin, E.; Murad, C.; Munteanu, C.; Wade, V.; Cowan, B.R. What makes a good conversation? Challenges in designing truly conversational agents. In Proceedings of the CHI Conference on Human Factors in Computing Systems; 2019.
  47. Madhavan, P.; Wiegmann, D.A. Similarities and differences between human–human and human–automation trust: an integrative review. Theor Issues Ergon Sci 2007, 8, 277–301. [Google Scholar] [CrossRef]
  48. Horvitz, E. Principles of mixed-initiative user interfaces. In Proceedings of the S.I.G.C.H.I. Conference on Human Factors in Computing Systems; 1999; pp. 159–166. [Google Scholar] [CrossRef]
  49. Kontogiorgos, D.; Pereira, A.; Gustafson, J. Grounding behaviours with conversational interfaces: effects of embodiment and failures. J Multimodal User Interfaces 2021, 15, 239–254. [Google Scholar] [CrossRef]
  50. Richardson, D.C.; Dale, R. Looking to understand: the coupling between speakers’ and listeners’ eye movements and its relationship to discourse comprehension. Cogn Sci 2005, 29, 1045–1060. [Google Scholar] [CrossRef] [PubMed]
  51. Sundar, S.S.; Jung, E.H.; Waddell, T.F.; Kim, K.J. Cheery companions or serious assistants? Role and demeanor congruity as predictors of robot attraction and use intentions among senior citizens. Int J Hum Comput Stud 2017, 97, 88–97. [Google Scholar] [CrossRef]
  52. Guzman, A.L. Voices in and of the machine: source orientation toward mobile virtual assistants. Comput Hum Behav 2019, 90, 343–350. [Google Scholar] [CrossRef]
  53. Parulekar, A.A.; Raheja, P. Managing celebrities as brands: impact of endorsements on celebrity image. In Creating Images and the Psychology of Marketing Communication; 2006; pp. 161–169. [Google Scholar]
  54. Bennett, A.M.; Hill, R.P. The universality of warmth and competence: a response to brands as intentional agents. J Consum Psychol 2012, 22, 199–204. [Google Scholar] [CrossRef]
  55. Lee, S.; Choi, J. Enhancing user experience with conversational agent for movie recommendation: effects of self-disclosure and reciprocity. Int J Hum Comput Stud 2017, 103, 95–105. [Google Scholar] [CrossRef]
  56. Van Looy, J.; Courtois, C.; De Vocht, M.; De Marez, L. Player identification in online games: validation of a scale for measuring identification in MMOGs. Media Psychol 2012, 15, 197–221. [Google Scholar] [CrossRef]
  57. Bailey, R.; Wise, K.; Bolls, P. How avatar customizability affects children’s arousal and subjective presence during junk food–sponsored online video games. Cyberpsychol Behav 2009, 12, 277–283. [Google Scholar] [CrossRef] [PubMed]
  58. Mohd Tuah, N.; Wanick, V.; Ranchhod, A.; Wills, G.B. Exploring avatar roles for motivational effects in gameful environments. EAI Endorsed Trans Creat Technol 2017, 17, 1–11. [Google Scholar] [CrossRef]
  59. Szolin, K.; Kuss, D.J.; Nuyens, F.M.; Griffiths, M.D. ‘I am the character, the character is me’: a thematic analysis of the user-avatar relationship in videogames. Comput Hum Behav 2023, 143, 107694. [Google Scholar] [CrossRef]
  60. Kim, C.; Lee, S.G.; Kang, M. I became an attractive person in the virtual world: users’ identification with virtual communities and avatars. Comput Hum Behav 2012, 28, 1663–1669. [Google Scholar] [CrossRef]
  61. Yee, N.; Bailenson, J.N.; Ducheneaut, N. The Proteus effect: implications of transformed digital self-representation on online and offline behavior. Commun Res 2009, 36, 285–312. [Google Scholar] [CrossRef]
  62. Jeon, J.E. The effect of ideal avatar on virtual brand experience in XR platform. J Distrib Sci 2023, 21, 109–121. [Google Scholar]
  63. Park, B.W.; Lee, K.C. Exploring the value of purchasing online game items. Comput Hum Behav 2011, 27, 2178–2185. [Google Scholar] [CrossRef]
  64. Bickmore, T.; Cassell, J. Social dialogue with embodied conversational agents. Adv Nat Multimodal Dial Syst 2005, 30, 23–54. [Google Scholar] [CrossRef]
  65. Weger, H.; Castle Bell, G.C.; Minei, E.M.; Robinson, M.C. The relative effectiveness of active listening in initial interactions. Int J Listening 2014, 28, 13–31. [Google Scholar] [CrossRef]
  66. Clark, H.H.; Brennan, S.E. Grounding in communication. In Perspectives on Socially Shared Cognition; Resnick, L.B., Levine, J.M., Teasley, S.D., Eds.; American Psychological Association: Washington, DC, 1991; pp. 222–233. [Google Scholar]
  67. Reis, H.T.; Lemay, E.P., Jr.; Finkenauer, C. Toward understanding: the importance of feeling understood in relationships. Soc Pers Psychol Compass 2017, 11, e12308. [Google Scholar] [CrossRef]
  68. Bergner, A.S.; Hildebrand, C.; Häubl, G. Machine talk: how verbal embodiment in conversational AI shapes consumer–brand relationships. J Consum Res 2023, 50, 742–764. [Google Scholar] [CrossRef]
  69. Schultze, U. Performing embodied identity in virtual worlds. Eur J Inf Syst 2014, 23, 84–95. [Google Scholar] [CrossRef]
  70. Barnes, S.J.; Mattsson, J. Exploring the fit of real brands in the Second Life virtual world. J Mark Manag 2011, 27, 934–958. [Google Scholar] [CrossRef]
  71. Papagiannidis, S.; Pantano, E.; See-To, E.W.K.; Bourlakis, M. Modelling the determinants of a simulated experience in a virtual retail store and users’ product purchasing intentions. J Mark Manag 2013, 29, 1462–1492. [Google Scholar] [CrossRef]
  72. Ahn, S.J.; Fox, J.; Bailenson, J.N. Avatars. In Leadership in Science and Technology: A Reference Handbook; Bainbridge, W.S., Ed.; SAGE Publications: Thousand Oaks, CA, 2012; pp. 695–702. [Google Scholar]
  73. Garnier, M.; Poncin, I. The avatar in marketing: synthesis, integrative framework and perspectives. Rech Appl Mark (Engl Ed) 2013, 28, 85–115. [CrossRef]
  74. Kim, S.; Chen, R.P.; Zhang, K. Anthropomorphized helpers undermine autonomy and enjoyment in computer games. J Con Res 2016, 43, 282–302. [CrossRef]
  75. Singh, H.; Singh, A. ChatGPT: systematic review, applications, and agenda for multidisciplinary research. J Chin Econ Bus Stud 2023, 21, 193–212. [CrossRef]
  76. Hill-Yardin, E.L.; Hutchinson, M.R.; Laycock, R.; Spencer, S.J. A Chat(GPT) about the future of scientific publishing. Brain Behav Immun 2023, 110, 152–154. [CrossRef]
Table 1. Scale.

Functional Brand (Jeon [31]; Park et al. [30])
  - This brand represents the functional benefits that I can expect from the brand.
  - This brand ensures that it assists me in handling my daily life competently.
  - This brand represents a solution to certain problems.

Experiential Brand
  - This brand expresses a luxurious image.
  - I have to pay a lot to buy this brand.
  - This brand makes life richer and more meaningful.

AI Agent Trust (Nass and Moon [41])
  - I trust an AI agent.
  - I have faith in the AI agent.
  - This AI agent gives me a feeling of trust.

Grounding (Bergner et al. [68])
  - This AI agent provided feedback of having understood my input.
  - This AI agent provided feedback on having accepted my input.
  - I felt that this AI agent understood what I had to say.

AI Agent Identification (Schultze [69]; Szolin et al. [59])
  - This AI agent is similar to me.
  - I identify with this AI agent.
  - This AI agent and I are similar in reality.

Recommendation (Barnes and Mattsson [70]; Papagiannidis et al. [71])
  - I would recommend this brand to my friends.
  - If my friends were looking to buy a product, I would tell them to try this brand.

Brand Attitude (Jeon [31]; Park et al. [34])
  - I like this brand.
  - This brand makes me favorable.
  - This brand is good.
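Multi-item scales like those in Table 1 are usually screened for internal consistency before item scores are averaged. The paper does not report its reliability procedure, so the Cronbach's alpha computation below is an illustrative assumption on simulated ratings.

```python
# Cronbach's alpha for a 3-item, 7-point scale (illustrative; data simulated).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (respondents x items) matrix of scale ratings."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(2)
latent = rng.normal(4, 1, (187, 1))           # shared construct score
noise = rng.normal(0, 0.7, (187, 3))          # item-specific noise
trust_items = np.clip(latent + noise, 1, 7)   # e.g., the three trust items
print(f"Cronbach's alpha = {cronbach_alpha(trust_items):.2f}")
```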
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free downloading, distribution, and reuse, provided that the author and preprint are cited in any reuse.