2.1. AI Agents and the Gender of Agents
AI encompasses “programs, algorithms, systems, and machines” [8] that emulate elements of human intelligence and behavior [9,10]. Using various technologies such as machine learning, natural language processing, deep learning, big data analysis, and physical robots [1,11,12], AI has seen substantial development and become integral to consumers’ daily lives.
AI agents, defined as “computer-generated graphically displayed entities that represent either imaginary characters or real humans controlled by AI” [13], exist in diverse forms, including animated pictures, interactive 3D avatars in virtual environments [14], and human-like animated customer service agents resembling real sales representatives [15]. These agents simulate human-like interactions by comprehending user queries and executing specific tasks akin to real-person interactions.
Advancements in AI have enabled AI agents to learn and improve from each customer interaction, thereby enhancing their intelligence. They offer consumers an easily accessible means of interaction, assisting customers in online transactions by providing additional information, personalized advice, recommendations [15,16,17], and technical support. Strategically used, these agents enable companies to engage with customers on a personal level and provide continuous support, striving for a seamless, time-efficient, and cost-effective online experience [18,19].
Research has examined how variations in AI agent morphology influence user evaluations and interactions with technology [20]. AI agents, functioning as social bots, trigger social responses from users despite users’ awareness of the non-human nature of computers [21]. Visual cues that mirror human characteristics tend to prompt users to treat chat agents as human and engage with them socially [22].
Among the design elements used to enhance realism, agents can incorporate “human” characteristics and features. Gender, which has been extensively studied among these characteristics, significantly impacts the effectiveness of agents [23]. Earlier studies highlighted how gender stereotypes influence interactions with computers: participants demonstrated gender-stereotypic responses toward computers with different gender-associated voices, perceiving masculine voices as more valid and associating dominant traits more strongly with men [24].
Gender stereotypes, which affect judgments of competence and warmth, often lead to men being perceived as more competent and women as warmer [25,26]. These biases influence evaluations across various scenarios [27,28,29].
2.2. Brand Concept
Within brand management, functional brands emphasize functional performance. Previous research has defined functional value as a product’s ability to fulfill its intended functions in a consumer’s everyday life [30]. Functional needs, which motivate consumers to seek products that address consumption-related problems [31,32], are met by products demonstrating functional performance. A functional brand is therefore designed to meet externally generated consumption needs [33]. As Park et al. [34] suggested, brands can be managed to alleviate uncertainty in consumers’ lives, offering control and efficacy; functional brands are thus closely associated with product performance. Visual representations within brands serve to remind customers of, or communicate, functional benefits [31].
Functional brands aim to effectively convey and reinforce their commitment to aiding customers, thereby strengthening brand–customer relationships [30,35]. Customer satisfaction with functional brands is pivotal in determining customer commitment, aligning with the core concept of brand management. According to the information-processing paradigm, consumer behavior leans toward objective and logical problem-solving [36,37]. Hence, customer confidence in a preferred functional brand is likely higher when the utilitarian value of the product category is substantial. Furthermore, Chaudhuri and Holbrook [37] identified a significant negative correlation between emotional response and a brand’s functional value.
Experiential brand strategies differentiate themselves from other strategies. Holbrook and Hirschman [38] defined experiential needs as desires for products that provide sensory pleasure; brands emphasizing experiential concepts highlight the brand’s impact on sensory satisfaction, spotlighting the experiential and fantasy aspects of consumption through various elements of the marketing mix. While research often investigates experiential needs in a visual context during purchasing decisions, other human senses also contribute to aesthetic experiences in traditional marketing research [31]. A complete appreciation of an aesthetic experience results from combining sensory inputs.
2.3. AI Agent Trust and Grounding
Researchers have primarily focused on interactive AI agents, noting their ability to enhance customer satisfaction with a website or product, credibility, and patronage intentions [39,40]. In human–computer interactions (HCIs), social-response theory posits that individuals respond socially to technology endowed with human-like features [41]. Studies suggest that increased anthropomorphism in an agent positively correlates with perceived credibility and competence [42]. However, even if an AI agent is realistic, a lack of anthropomorphism might hinder users’ willingness to engage or communicate because of the absence of perceived social potential [43]. Users tend to apply social rules to technology that exhibits human-like traits despite consciously acknowledging that they are interacting with a machine [40]. The degree of social presence embodied in avatars on company websites significantly impacts trust in website information and its emotional appeal, thereby influencing purchase intentions.
Trust in HCI research aligns with discussions of interpersonal communication, exploring whether conversational agents designed with properties known to enhance trust in human relationships are trusted more by users. Rooted in the Computers Are Social Actors (CASA) paradigm, this approach indicates that the social norms guiding human interaction also apply to HCIs, as users unconsciously treat computers as independent social entities [41]. Trust in conversational agents resembles trust in humans, with the belief that agents possessing trustworthy traits foster user trust. Various studies have defined trust similarly, emphasizing positive expectations about reliability, dependability, and confidence in another party [44,45]. Trust in technology centers on expectations of reliable performance, predictability, and dependability. However, debates persist on whether the factors fostering trust in human relationships apply similarly to trust in human–agent interactions.
Nonetheless, divergent views suggest that users approach machine interactions distinctively, highlighting that the principles governing trust in interpersonal relations may not apply directly to human–machine trust [46,47]. For instance, Clark et al. [46] indicated that users approach conversations with computers in a more utilitarian manner, distinguishing between social and transactional roles because they perceive agents as tools. Consequently, users view conversations with agents primarily as goal-oriented transactions. They prioritize aspects such as performance and security in their trust judgments concerning machines, questioning the necessity of establishing social interactions or relationships with machines.
Continual interaction with AI agents is significant, generating data that can enhance system efficiency [48]. This symbiotic relationship fosters shared cognitive understanding, mutually benefiting users and the system. Establishing mutual understanding, termed “grounding,” is pivotal in these interactions. While human communication achieves this naturally, achieving collaborative grounding with AI systems presents challenges [49]. Grounding denotes mutual understanding in conversation, involving explicit verbal and nonverbal acknowledgments that signify comprehension of prior conversation elements [50]. True linguistic grounding remains a challenge for machines, despite their ability to exhibit non-linguistic grounding signals.
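As a concrete illustration, the sketch below shows how a dialogue agent might produce the explicit verbal acknowledgments described above. It is a minimal, hypothetical Python example; the intent names, keyword cues, and function names are our own illustrative assumptions rather than any production system.

```python
# Minimal, hypothetical sketch of explicit verbal grounding in a dialogue
# agent: the agent acknowledges each user turn and paraphrases the understood
# intent before acting on it. Intent names and keyword cues are illustrative
# assumptions, not a real system's vocabulary.

INTENT_KEYWORDS = {
    "your order status": ["where", "order", "delivery"],
    "product details": ["spec", "feature", "how does"],
}

def infer_intent(utterance: str) -> str:
    """Toy classifier: return the first intent whose keyword cues match."""
    text = utterance.lower()
    for intent, cues in INTENT_KEYWORDS.items():
        if any(cue in text for cue in cues):
            return intent
    return "unknown"

def grounded_reply(utterance: str) -> str:
    """Signal comprehension explicitly before responding to the request."""
    intent = infer_intent(utterance)
    if intent == "unknown":
        # Repair move: request clarification instead of guessing.
        return "I want to be sure I understand. Could you rephrase that?"
    # Acknowledgment plus paraphrase: explicit grounding of the prior turn.
    return f"Got it. You are asking about {intent}. Let me help with that."

print(grounded_reply("Where is my order?"))
# -> Got it. You are asking about your order status. Let me help with that.
```

Even this toy exchange exhibits the two grounding moves the literature describes: an acknowledgment plus paraphrase when the agent understands, and a clarification request (a repair move) when it does not.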
AI agents can be viewed as “assistants” or “companions.” In the “assistant” perspective, AI technology is a useful aid to humans, assisting in task completion, such as tracking Uber ride arrivals or aiding disabled individuals [39]. These conversations lean toward task-oriented, formal exchanges focused on specific functional goals. In contrast, the “companion” perspective focuses on emotional support, where agents are seen as trustworthy companions engaging users in typical everyday conversations, akin to human interactions [51]. Sophisticated natural language processing capabilities allow AI avatars to mimic human-like behaviors, enabling users to interact as they would with another human [52].
This study proposes that the effects of two personality traits, competence and warmth, on evaluations of AI agent trust and grounding will vary with the brand concept (functional vs. experiential). Functional brands emphasize practical purposes, whereas experiential brands focus on sensory experiences [38]. For functional concepts, the AI agent’s competence (representing the brand’s competence) becomes crucial for purchase decisions [53]. In contrast, experiential brands are evaluated based on consumers’ sensory experiences and affective responses, where the warmth exhibited by the AI agent is more significant in influencing emotional decisions and brand sales [54]. Consequently, the following hypothesis is proposed:
H1: For functional brands, a male AI agent will elicit higher trust and grounding, whereas for experiential brands, a female AI agent will elicit higher trust and grounding.
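To make the hypothesized pattern concrete, the sketch below simulates how H1 could be examined as a 2 (agent gender) × 2 (brand concept) between-subjects design, where H1 corresponds to a significant interaction term in a two-way ANOVA on trust (the same model would apply to grounding). The data, cell means, and variable names are illustrative assumptions, not results.

```python
# Hedged sketch: testing H1's gender x brand-concept interaction with a
# two-way between-subjects ANOVA on simulated trust ratings.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 50  # simulated participants per cell (assumption)
# Hypothetical cell means on a 7-point trust scale, in the direction H1 predicts.
cells = [("male", "functional", 5.2), ("female", "functional", 4.6),
         ("male", "experiential", 4.5), ("female", "experiential", 5.3)]
rows = [{"agent_gender": g, "brand_concept": c, "trust": s}
        for g, c, m in cells
        for s in rng.normal(m, 1.0, n)]
df = pd.DataFrame(rows)

# H1 maps onto the interaction term of the two-way ANOVA.
model = smf.ols("trust ~ C(agent_gender) * C(brand_concept)", data=df).fit()
print(anova_lm(model, typ=2))
```

A significant agent gender × brand concept term, with cell means in the predicted directions, would be consistent with H1.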
Integrating AI agents into customer service signifies a transformative evolution, offering precise recommendations and enhancing company–consumer relationships [55]. These AI-powered systems facilitate interaction and blur the distinction between human service assistants and conversational chatbots. Perceptions of AI-empowered agents are shaped by their anthropomorphic design cues during interaction and significantly influenced by their pre-encounter introduction and framing [16].
Van Looy et al. [56] defined AI agent identification as users’ emotional attachment to the AI agent, categorized into three subcategories: perceived similarity, embodied presence, and wishful identification. Studies indicate that male and younger users are more likely to identify with their avatars, especially if the avatars are idealized [57,58,59]. Several studies have explored the relationship between identification with virtual characters and various outcomes. For instance, Kim et al. [60] demonstrated that identification with virtual characters can enhance player self-efficacy and trust within their virtual communities. Additionally, Yee et al. [61] found that online virtual world visitors who perceive a smaller psychological gap between themselves and their virtual avatars express greater satisfaction with the avatars and spend more time online. Moreover, identification positively influences trust in virtual avatars [62,63].
Antecedents of trust can vary depending on individual differences (e.g., age or gender) and contextual factors. For instance, research shows that engaging in small talk with an embodied conversational agent can be effective in settings such as customer service, healthcare, or casual information exchange, whereas it might be less effective in more serious settings such as financial transactions or military training [64]. Certain demographic factors or personal traits can moderate the impact of these features [44].
Grounding establishes a connection between conversational turns, confirming active listening and fostering closeness and mutual understanding [65]. Beyond mere confirmation, grounding plays a role in relationship development, acknowledging contributions and establishing shared knowledge [66,67]. Understanding, a cornerstone of human relationships, resolves conflicts and nurtures stronger emotional connections through shared thoughts and feelings [65]. Such perceptions of being understood significantly influence customers’ impressions of salespersons, thereby shaping the formation of trust and inclinations toward future interactions. Recognizing and establishing shared knowledge during conversations is fundamental to relationship development [67]. Consequently, the following hypothesis is proposed:
H2: The effect of AI agent identification on brand recommendation is mediated by AI agent trust and grounding.
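As a methodological sketch, H2 describes a parallel mediation model: identification should influence brand recommendation indirectly through trust and through grounding. The Python example below illustrates one common way to test such a model, estimating the a-path and b-path regressions by OLS and bootstrapping the indirect (a × b) effects; all data and variable names are simulated assumptions for illustration.

```python
# Hedged sketch of H2's parallel mediation (identification -> trust/grounding
# -> brand recommendation) with OLS paths and a percentile bootstrap.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300  # simulated sample size (assumption)
ident = rng.normal(0, 1, n)                   # AI agent identification
trust = 0.5 * ident + rng.normal(0, 1, n)     # mediator 1: trust
ground = 0.4 * ident + rng.normal(0, 1, n)    # mediator 2: grounding
recommend = 0.4 * trust + 0.3 * ground + rng.normal(0, 1, n)
df = pd.DataFrame({"ident": ident, "trust": trust,
                   "ground": ground, "recommend": recommend})

def indirect_effects(data):
    """Return the a*b indirect effects through each mediator."""
    a1 = smf.ols("trust ~ ident", data).fit().params["ident"]
    a2 = smf.ols("ground ~ ident", data).fit().params["ident"]
    b = smf.ols("recommend ~ trust + ground + ident", data).fit().params
    return a1 * b["trust"], a2 * b["ground"]

# Percentile bootstrap of the indirect effects.
boots = np.array([indirect_effects(df.sample(n, replace=True))
                  for _ in range(2000)])
for i, name in enumerate(["trust", "grounding"]):
    lo, hi = np.percentile(boots[:, i], [2.5, 97.5])
    print(f"indirect effect via {name}: 95% CI [{lo:.3f}, {hi:.3f}]")
```

H2 would be supported if the bootstrap confidence intervals for both indirect effects exclude zero.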