
Enhancing Financial Advisory Services with GenAI: Consumer Perceptions and Attitudes through SDL and AIDUA Perspectives

Submitted: 19 August 2024. Posted: 20 August 2024.
This version is not peer-reviewed; a peer-reviewed article of this preprint also exists.
Abstract
Financial institutions are currently undergoing a significant shift from traditional robo-advisors to more advanced generative artificial intelligence (GenAI) technologies. This transformation has motivated us to investigate the factors influencing consumer responses to GenAI-driven financial advice. Despite extensive research on the adoption of robo-advisors, there is a gap in understanding the specific contributors and differences in consumer attitudes and reactions to GenAI-based financial guidance. This study aims to address this gap by analyzing the impact of personalized investment suggestions, human-like empathy, and the continuous improvement of GenAI-provided financial advice on consumers' perceived authenticity, their utilitarian attitude towards the use of GenAI for financial advice, and their reactions to GenAI-generated financial suggestions. A comprehensive research model was developed based on Service-Dominant Logic (SDL) and Artificial Intelligence Device Use Acceptance (AIDUA). The model was subsequently employed in a structural equation modeling (SEM) analysis of survey data from 822 mobile banking users. The findings of this study indicate that personalized investment suggestions, human-like empathy, and the continuous improvement of GenAI's recommendations have a positive influence on consumers' perception of its authenticity. Moreover, we discovered a positive correlation between perceived authenticity and utilitarian attitudes, which ultimately influences consumers' responses to GenAI's financial advisory solutions, manifested as either a willingness to engage or resistance to communication. This study contributes to the research on GenAI-powered financial services and underscores the significance of integrating GenAI financial guidance into the routine operations of financial institutions. Our work builds upon previous research on robo-advisors, offering practical insights for financial institutions seeking to leverage GenAI-driven technologies to enhance their services and customer experiences.
Keywords: 
Subject: Business, Economics and Management - Finance

1. Introduction

The financial sector is undergoing a profound transformation with the advent of sophisticated technologies such as robo-advisors and generative artificial intelligence (GenAI) platforms like ChatGPT. This technological revolution has fundamentally altered how individuals manage their finances and receive financial advice. While robo-advisors provide algorithm-based asset management services with minimal human intervention [1], GenAI technologies have taken this a step further by offering financial advice through conversational engagement [2].
Previous research has demonstrated the significant impact of robo-advisors on the finance industry, highlighting factors such as behavioral biases, trust, perceived risk, and user attitudes in determining the adoption and effectiveness of automated financial advisory systems [3,4,5]. However, the introduction of GenAI has further complicated user interactions, potentially influencing consumers’ attitudes and responses to these services [6].
To gain a deeper understanding of consumers’ responses to GenAI-based financial advice, it is crucial to consider its unique attributes, such as personalized investment suggestions, human-like empathy, and continuous learning and improvement. These factors significantly impact consumers’ perceptions of the authenticity and reliability of GenAI [7]. This study suggests that these characteristics and their influence on consumer responses can be analyzed using service-dominant logic (SDL) and AI Device Use Acceptance (AIDUA) frameworks [8,9].
To comprehensively examine this interaction, we integrate SDL and AIDUA into a research model and employ structural equation modeling to analyze data from 822 mobile banking users. Our study addresses four principal research questions:
1. How do GenAI attributes influence consumers’ perceptions of authenticity in using GenAI for financial advice?
2. What is the relationship between perceived authenticity and utilitarian attitudes towards GenAI financial advice?
3. How do utilitarian attitudes affect consumers’ responses to GenAI financial advice?
4. How does AI literacy moderate the impact of GenAI attributes on perceived authenticity?
This research contributes to both theory and practice in the field of multi-modal GenAI in financial advisory services. It offers insights into the psychological foundations of consumer trust and provides valuable guidance for the design, implementation, and user education strategies in GenAI-powered financial services. By investigating the impact of GenAI attributes on perceived authenticity and subsequent consumer attitudes and behaviors, this study addresses significant gaps in the literature and offers practical insights for the development of effective GenAI-based financial advisory services.

2. Literature Review and Theoretical Framework

2.1. Evolution of Financial Advisory Services: From Robo-Advisors to GenAI

The landscape of financial advisory services has dramatically transformed over the past decade, with the emergence of robo-advisors representing a crucial turning point. Robo-advisors emerged as a response to the demand for cost-effective and accessible financial planning tools, disrupting the traditional finance industry by providing standardized investment solutions to a wider audience [10]. These platforms use algorithms to build portfolios, reducing the need for human financial planners and lowering the overall cost of investment advice [3,11,12]. However, as technology rapidly advances, the limitations of robo-advisors are becoming more evident. These include a lack of customization, an inability to empathize with consumers, and a limited capacity to learn from past data. As a result, there is a growing need for a shift towards more sophisticated tools [13].
The transition from robo-advisors to GenAI represents the next stage in the evolution of financial advisory services. GenAI platforms represent a significant technological leap, delivering interactive and personalized financial advice through advanced natural language processing (NLP) and machine learning (ML) capabilities [14]. In contrast to their robo-advisor predecessors, GenAI tools are capable of engaging in dynamic human-machine interactions, simulating human-like conversations, and offering tailored investment suggestions that adapt to changes in users’ financial situations and market conditions [15,16].
The development of GenAI has been driven by substantial advancements in NLP, which have enabled these systems to understand, interpret, and generate human language with increasing accuracy. These advancements not only increase the effectiveness of AI advisors but also enable them to engage in empathetic conversations, thereby improving the consumer experience [17]. The capacity of GenAI to process complex inquiries and execute transactions through seamless conversations represents a paradigm shift in how consumers manage their investments, offering a more engaging and personalized advisory experience [18].
As we continue to examine the capabilities and consequences of GenAI in finance, it becomes evident that these advancements not only indicate progress within financial institutions but also foreshadow profound alterations in the nature of financial advisory services. The implications for customer engagement, service delivery, and the role of AI advisors are profound. GenAI holds immense potential to redefine the financial services industry [19]. It is imperative that both financial institutions and consumers comprehend this evolutionary trajectory if they are to effectively leverage these technologies and navigate the new landscape of investment advice.

2.2. GenAI’s Attributes: Personalized Investment Suggestion, Human-Like Empathy, and Continuous Improvement

A notable feature of GenAI in financial services is its ability to provide personalized recommendations. Personalization is a key factor in consumer satisfaction and the continued use of technology-based services [20,21]. Unlike robo-advisors, which typically deliver standardized recommendations using limited algorithms, GenAI tools can analyze extensive consumer input and specific data, including financial goals, risk tolerance, investment preferences, and even emotional cues, to tailor recommendations to individual needs [22]. This high level of personalization in GenAI-driven services enhances the relevance and effectiveness of investment advice, potentially leading to better financial outcomes for consumers [18].
In addition to personalization, the continuous improvement of GenAI is another critical attribute, enabled by the embedded machine learning algorithms in GenAI systems. These systems can learn and adapt through interactions with consumers, thereby enhancing their ability to provide accurate and contextual investment advice over time [23]. This self-learning and improvement function is of paramount importance in a dynamic financial market where consumer needs and market conditions are in a state of constant flux. Empirical studies have shown that AI systems capable of continuous learning and adaptation are more likely to gain user trust and be perceived as authentic [24].
Finally, while the analytical capabilities of GenAI have been widely recognized, the role of its human-like empathy has also garnered increasing attention [25]. The incorporation of emotional intelligence into GenAI enables it to recognize and respond to consumers’ emotional cues, thereby elevating interactions beyond mere mechanical responses and providing support that is consistent with consumers’ emotional states. The incorporation of AI tools with human-like empathy can enhance consumer engagement and trust, as emotional connection is an important component of successful consulting relationships [26].
The combination of personalized investment suggestions, human-like empathy, and continuous improvement in GenAI represents a compelling value proposition for consumers. These attributes combine to create a user experience that mirrors interaction with a human advisor while harnessing the effectiveness and efficiency of GenAI technology. GenAI’s approach is notably different from the “one size fits all” model of traditional robo-advisors. GenAI offers a high degree of participation, adaptability, and emotional intelligence that aligns with the complex and diverse needs of consumers.

2.3. Perceived Authenticity of GenAI

The perceived authenticity of GenAI-powered financial advice is a pivotal factor in establishing trust and encouraging user engagement. Users assess the authenticity of platforms like GenAI based on their perception of the truthfulness, dependability, and impartiality of the investment recommendations provided. Research has shown that authenticity is crucial in determining users’ willingness to accept and engage with AI advisors, forming the foundation for trust [27,28]. When GenAI is perceived as authentic, it not only gains users’ confidence more effectively but also fosters a stronger connection, which is vital in the context of financial information and assets, given the sensitivity of such matters.
The essence of GenAI’s authenticity in financial advice lies not only in the accuracy of its information but also in its ability to offer recommendations that align with users’ ethical principles and financial goals [29]. Moreover, it is crucial to ensure transparency in how GenAI handles user data and arrives at its recommendations in order to enhance its perceived authenticity. This transparency, in conjunction with a commitment to ethical AI practices, underscores the significance of clear communication and ethical design principles in the development of GenAI systems [30].

2.4. Utilitarian Attitude towards GenAI and Consumer Response

In evaluating consumer response to GenAI, particularly in financial contexts, the utilitarian perspective offers a compelling lens through which to view the phenomenon. Utility is a key factor in technology adoption and a strong predictor of consumer willingness to engage with AI. If consumers believe that GenAI will enhance the efficiency of their asset management and improve the accuracy of their decisions, their willingness to interact with the technology will increase [31].
The efficacy of GenAI, including its accuracy and relevance in investment suggestions, is of paramount importance in determining consumer willingness to engage with it [32]. The capacity of GenAI to furnish consistent, personalized, and valuable counsel exerts a profound influence on the attitude of the user, which in turn affects their engagement, whether positive or negative. Individuals who have had positive experiences with GenAI are more likely to develop a favorable attitude toward it and to engage with it again in the future [33].
However, it is important to acknowledge that not all consumers are willing to adopt GenAI’s financial advice, despite its potential benefits. Consumer resistance can be attributed to various factors, including a lack of trust, perceived loss of control, privacy concerns, and discomfort with technology [34]. Additionally, perceived complexity and a less anthropomorphic interface may contribute to consumer resistance [35]. Some consumers may perceive GenAI as a threat to their personal autonomy or asset security, which may lead to resistance in communicating with it. This resistance may be further compounded by a lack of comprehension of how GenAI functions or a conviction that it is incapable of replicating the intricate human comprehension essential for financial decision-making. To comprehend the reasons behind the differing attitudes towards the utilization of GenAI, it is essential to investigate the utilitarian attitudes of consumers towards the platform. A nuanced understanding of these attitudes and their underlying determinants can assist in the development of GenAI applications that better align with consumer needs, thereby reducing resistance.

2.5. AI Literacy

The integration of GenAI into financial services is not solely a matter of technological development; it is also a matter of user adaptation, in which AI literacy plays a crucial role. AI literacy refers to the skills and competencies individuals need to effectively use AI technologies and applications [36]. This includes understanding AI capabilities, context, and implementation. The integration of GenAI into financial services underscores the crucial role of AI literacy in influencing the adoption and usage of AI technologies [37].
Previous literature suggests that high AI literacy can alleviate users’ doubts and help them fully harness AI’s potential in financial decision-making, thereby enhancing the use of AI technology [38]. Individuals with higher AI literacy levels are more likely to trust and rely on AI-driven financial advice [39]. Furthermore, AI literacy affects the user experience as a whole. Individuals with a more profound comprehension of AI are better able to navigate the interface with greater efficiency and efficacy, pose specific inquiries to AI, and interpret the recommendations provided by AI with greater accuracy, thereby leading to a more satisfactory experience [40].
Furthermore, AI literacy can mitigate users’ resistance to new technologies by elucidating the nature of AI and rendering its processes more transparent [41]. Once users comprehend the manner in which GenAI generates financial advice, their skepticism may dissipate, thereby reducing their resistance to utilizing such systems and fostering an openness to them. The discrepancy in AI knowledge levels among different user groups results in a knowledge gap. It is therefore imperative to provide education on the functioning of AI in order to bridge this gap and facilitate the more effective adoption of AI among diverse user groups.

2.6. Service-Dominant Logic (SDL) and Artificially Intelligent Device Use Acceptance (AIDUA)

Service-dominant logic (SDL) has emerged as a key framework for understanding value co-creation across industries, including financial services. In accordance with SDL, value is generated through interactions between providers and consumers, as opposed to being inherent in the output itself [8,42]. In the context of GenAI, SDL offers a perspective on how GenAI can facilitate value co-creation processes.
SDL shifts the focus from the traditional goods-dominant logic, which views value as created by companies and distributed to consumers, to a service-centered perspective, where value is co-created by multiple parties, including consumers [43]. This shift is of critical importance for comprehending the relational and interactive nature of financial services provided by GenAI technology [44].
The operation of GenAI financial services is contingent upon the interaction of multiple stakeholders, including financial institutions, technology companies, and consumers. SDL posits that the efficacy of the ecosystem in jointly creating value is pivotal to the success of the service. Consequently, SDL represents a strategic instrument for comprehending and augmenting the value co-creation process in GenAI-driven financial services. The importance of interaction, personalization, and resource integration in shaping user experience and overall service efficiency is emphasized [45].
In addition to SDL, the development of a new theoretical framework is necessary to understand consumer acceptance and usage behavior when integrating AI systems into consumer devices. The Artificial Intelligence Device Use Acceptance (AIDUA) model is a comprehensive framework that reveals the multifaceted nature of consumer interactions with AI technologies such as GenAI.
The AIDUA model delineates several stages for the acceptance of AI devices, including primary appraisal, secondary appraisal, and the outcome stage [9]. Each of these stages is of significant consequence in the evaluation of GenAI by consumers. In light of studies that have applied the AIDUA model, it can be postulated that personalized suggestions, human-like empathy, and continuous improvement serve as the primary drivers in measuring consumers’ assessment of GenAI-powered financial advice. In the secondary appraisal stage, consumers primarily evaluate their decision options and the potential outcomes based on their attitudes. When deciding whether to accept or resist GenAI-driven financial advice, they assess the costs and benefits of using AI devices in service delivery, considering perceived authenticity. Following this intricate appraisal process, consumers develop a utilitarian attitude towards GenAI-based financial advice, which subsequently determines their willingness to communicate with GenAI or their resistance to utilizing GenAI for financial guidance.
Empirical studies have demonstrated the efficacy of the AIDUA model in explaining and predicting consumer behavior toward AI devices. These studies have also validated the model’s utility as a diagnostic and prescriptive tool for businesses [31,46,47]. For practitioners, the AIDUA model suggests that marketing and design strategies for AI devices should address consumers’ concerns about trust, perceived risk, and ease of use in order to increase acceptance.
As artificial intelligence (AI) technology evolves and becomes more prevalent in financial institutions, frameworks like AIDUA will become increasingly essential for understanding and predicting consumer interactions with AI tools. This comprehensive approach allows for the design and implementation of AI technologies that align with consumer expectations and promote acceptance.

2.7. Integrating SDL and AIDUA to Understand Consumer-AI Interaction

The seamless integration of SDL and the AIDUA model provides a comprehensive theoretical foundation for understanding and explaining consumer interactions with GenAI in the service industry, particularly in financial services. Combining SDL’s value co-creation perspective with AIDUA’s focus on consumers’ appraisal stages of AI usage creates a powerful framework for investigating the nuances of consumer interactions with GenAI.
SDL’s focus on value co-creation through interaction and resource integration aligns with the AIDUA model’s emphasis on consumer acceptance and resistance of AI technology. The two frameworks converge in the value-driven usage of AI [48], where consumers are active participants in AI-driven services, rather than passive recipients [49]. The comprehensive framework posits that when services are designed to facilitate consumers’ active role in co-creating personalized value (a fundamental concept of SDL), consumers’ experiences with AI could be enhanced.

3. Hypotheses Development and Research Model

3.1. Personalized Investment Suggestion, Human-Like Empathy, Continuous Improvement

Personalization is increasingly acknowledged as a vital component in enhancing user experience and fostering authenticity in digital interactions [50]. In financial advice, personalized recommendations are particularly impactful, as they demonstrate an understanding of the user’s specific needs and preferences [51]. The delivery of personalized financial advice through GenAI can enhance perceived authenticity, as the advice appears more relevant and trustworthy. Consumer behavior studies indicate that services are often perceived as more authentic when they are closely aligned with the user’s unique circumstances [52,53].
Moreover, empathy, especially in the form of human-like emotional intelligence, is crucial in user interactions. When users feel that AI tools can understand and respond to their emotional states, they are more likely to trust and use the technology [54]. The capacity for human-like empathy in GenAI enables it to comprehend consumers’ financial concerns and objectives on an emotional level, which is crucial for enhancing the perceived authenticity of advice [55]. Empathetic interactions can elevate the nature of financial advice beyond that of a purely transactional nature, thereby creating a sense of care and personal connection.
Furthermore, the ability of artificial intelligence systems to continuously learn and improve over time is essential for maintaining their relevance and ensuring the delivery of high-quality services. The ongoing enhancement of GenAI’s financial counsel could result in more precise and contemporary recommendations, which might enhance the credibility of the counsel. The principle of continual improvement aligns with the dynamic nature of financial markets and consumer expectations [56]. As GenAI adapts and evolves, its advice may be perceived as more authentic, reflecting up-to-date knowledge and a deeper understanding of the financial landscape. Based on these insights, we propose the following hypotheses:
H1: 
Personalized investment suggestion of GenAI is positively associated with consumers’ perceived authenticity.
H2: 
Human-like empathy of GenAI is positively associated with consumers’ perceived authenticity.
H3: 
Continuous improvement of GenAI is positively associated with consumers’ perceived authenticity.

3.2. Perceived Authenticity

Following the initial evaluation of the specific characteristics of GenAI tools, perceived authenticity is crucial in how consumers assess and adopt these services [57]. When consumers perceive the service as authentic and the advice as genuine, they are more likely to find the service useful and practical. This belief fosters a utilitarian attitude towards the service, as consumers prioritize functionality and the ability to effectively achieve their goals [58].
In the realm of AI-driven financial guidance, like the service offered by GenAI, the perceived authenticity of the advice is essential in shaping users’ perceptions of the service’s utility. When recommendations are perceived as truthful, users are more likely to view them as reliable, precise, and tailored to their specific requirements. Consequently, this enhances the perceived usefulness of GenAI’s offerings. The concept of perceived authenticity encompasses the effectiveness, efficiency, and overall usefulness of the suggestions that GenAI provides. The perceived authenticity of GenAI’s financial advice exerts a direct influence on users’ utilitarian attitudes towards the service, which in turn determines its perceived value and adoption [59]. In light of the interrelationship between perceived authenticity, trust, and utility, the following hypothesis is put forth:
H4: 
Consumers’ perceived authenticity is positively associated with their utilitarian attitude towards GenAI.

3.3. Utilitarian Attitude

Utilitarianism in technology usage refers to the extent to which users perceive a technology as efficient and effective in achieving their objectives [60,61]. When consumers view a technology through a utilitarian lens, they evaluate its value based on its ability to help them achieve specific goals and simplify decision-making. Essentially, the stronger the belief in a technology’s utilitarian value, the higher the likelihood of its acceptance and integration into users’ daily lives. This is because users recognize its practical benefits and its ability to streamline tasks and decision-making processes [62].
In considering the role of GenAI in offering financial guidance, a utilitarian perspective suggests that users value the platform’s capacity to deliver efficient, precise, and timely information that can support their financial decision-making process. This mindset is expected to enhance consumers’ readiness to engage with GenAI, as they anticipate that the interaction will assist them in attaining their financial objectives [63]. In other words, when users perceive GenAI as a tool that can effectively streamline their financial planning and provide valuable insights, they are more likely to embrace and utilize the platform. This is driven by the belief that it will contribute to their overall financial well-being and success.
In addition to the adoption of new technology, resistance to its use is often shaped by various factors, including a lack of practicality, increased complexity, perceived risks to personal information, established social norms, and personal habits [64,65]. However, when consumers view a technology through a utilitarian lens, they recognize its potential to streamline tasks and boost productivity. This perception reduces the probability of consumer resistance, as the technology aligns with their values and objectives, and the advantages of its use outweigh the associated efforts, risks, and costs. In essence, a utilitarian attitude towards technology fosters a sense of value and purpose, making users more likely to embrace and incorporate it into their daily lives. They recognize the technology’s practical benefits and its ability to enhance their overall efficiency and effectiveness [66].
In the context of GenAI, the identification of utilitarian advantages such as time savings, cost-effectiveness, and enhanced financial results will result in a decrease in the resistance of users to utilizing this AI-driven platform for financial guidance. The perception of GenAI as a beneficial tool that aligns with their objectives will make users less likely to oppose its adoption and integration [67]. Consequently, they will be more inclined to accept the innovation, recognizing its potential to positively impact their financial decision-making process and overall outcomes [68]. In other words, the more users perceive GenAI as a practical and advantageous tool for managing their finances, the less likely they will be to resist its adoption and use. As a result, there is a greater likelihood of adopting this AI-powered technology in their financial decision-making process. Based on this understanding, the following hypotheses are proposed:
H5: 
Consumers’ utilitarian attitude towards GenAI is positively associated with their willingness to communicate with GenAI.
H6: 
Consumers’ utilitarian attitude towards GenAI is negatively associated with their resistance to communicate with GenAI.

3.4. AI Literacy

In addition to the inherent features of AI-driven financial tools, the level of AI literacy among users plays a critical role in the communication process. The concept of AI literacy encompasses users’ comprehension of AI technology, which is crucial for regulating the interaction with AI tools [69]. As AI literacy increases, users are better equipped to understand complex AI functions, such as personalized recommendations. In the context of GenAI, higher AI literacy enables consumers to better grasp how the platform tailors its recommendations based on user data, which in turn enhances perceptions of its authenticity. Consequently, AI literacy can strengthen the positive relationship between GenAI’s personalized advice and perceived authenticity. In other words, as users become more knowledgeable about AI technology, they are more likely to appreciate and trust the personalized financial guidance provided by GenAI, recognizing its genuine value and relevance to their specific needs and circumstances.
Moreover, the continuous improvement of GenAI represents another advanced AI feature. As users’ AI literacy increases, they are better positioned to comprehend and appreciate this aspect of the platform. They are aware that the AI system will consistently refine and enhance its recommendations based on ongoing interactions, thereby enhancing the perceived authenticity of the advice provided. In this context, AI literacy can act as a moderating factor, enhancing the relationship between continuous improvement and perceived authenticity. Specifically, more knowledgeable users are more likely to place a higher value on the evolution of AI in delivering precise financial guidance [70]. In essence, as consumers become more well-versed in AI technology, they are more apt to acknowledge and trust the ongoing advancements in GenAI’s financial advice. They recognize the genuine benefits of its adaptive nature in providing tailored and relevant recommendations that align with their evolving needs and circumstances.
Finally, the human-like empathy exhibited by GenAI is the result of sophisticated programming that enables empathetic interactions. Individuals with a higher level of AI literacy are better equipped to understand and value these empathetic responses, resulting in an increased perception of authenticity. Conversely, individuals with limited AI literacy may encounter difficulty in comprehending the nuances of empathetic AI, leading to a diminished perception of authenticity. As a result, the development of AI literacy is expected to strengthen the correlation between human-like empathy and perceived authenticity. As users gain a deeper understanding of AI technology, they are more likely to recognize and value the genuine nature of GenAI’s empathetic interactions [71,72], thereby increasing their confidence in the platform’s financial advice. Based on these insights, we propose the following hypotheses:
H7: 
Consumers’ AI literacy positively moderates the relationship between GenAI’s personalized investment suggestion and consumers’ perceived authenticity.
H8: 
Consumers’ AI literacy positively moderates the relationship between GenAI’s continuous improvement and consumers’ perceived authenticity.
H9: 
Consumers’ AI literacy positively moderates the relationship between GenAI’s human-like empathy and consumers’ perceived authenticity.
In essence, as users become more knowledgeable about AI technology, the impact of personalized investment suggestions, human-like empathy, and continuous improvement on the perceived authenticity of GenAI’s financial advice will be amplified, ultimately leading to a higher level of trust and acceptance among consumers.
Figure 1. Research Model.

4. Research Methodology

4.1. Measurement Development

We commenced our investigation by developing a comprehensive questionnaire designed to capture the relevant data necessary for our analysis. In light of the significance of expert input, we solicited evaluations from esteemed professors in the Finance, Information Technology, and Management Science departments. Their invaluable feedback prompted revisions to the questionnaire, allowing us to refine and clarify our questions for greater precision and relevance.
A rigorous methodology was employed to ensure that the questionnaire accurately assessed eight key dimensions. These included the extent to which the investment advice was personalized, GenAI’s capacity for continuous improvement, its ability to demonstrate human-like empathy, consumers’ perceived authenticity of its responses, the utilitarian attitude of consumers towards GenAI, consumers’ willingness and resistance to engage with GenAI for financial guidance, and their overall AI literacy.
The introductory section of the questionnaire clearly outlined the purpose of the study, ensuring participants’ confidentiality and anonymity. Additionally, survey instructions were provided. The initial part of the questionnaire included basic demographic questions, such as age, gender, income level, and education, to establish a foundational understanding of the respondents’ backgrounds. The second part consisted of items carefully designed to assess the eight constructs under investigation.
The measurement items of the personalized investment suggestion assessed respondents’ perceptions of GenAI’s ability to comprehend their individual financial needs and deliver customized recommendations. The evaluation of continuous improvement assessed respondents’ views on GenAI’s ability to learn from interactions and improve its suggestions over time [73]. Human-like empathy was measured through items [26,74,75] that gauged the extent to which GenAI understood and considered respondents’ emotional and financial concerns. The perceived authenticity of GenAI’s financial advice was examined by asking respondents to rate the genuineness and reliability of the advice [24,76]. The usefulness, efficiency, and practicality of GenAI’s recommendations were evaluated in order to assess utilitarian attitudes [68]. The respondents’ willingness to communicate with GenAI was gauged through items [31,77] that determined the likelihood of future engagement with the AI for financial advice. The reluctance of respondents to communicate with GenAI was evaluated by assessing their hesitation or reluctance to use GenAI for financial guidance [31,78]. Finally, the AI literacy scale was used to assess respondents’ knowledge and understanding of AI technologies, particularly their application in financial advice [79,80]. Detailed breakdowns can be found in the Appendix.

4.2. Data Collection

This study used a comprehensive data collection approach to gather insights from mobile banking service users who had engaged with GenAI for financial guidance. The survey was carefully crafted to gather a wide range of information, including participants’ interactions with GenAI, their evaluations of the AI’s authenticity, their level of AI knowledge, and their attitudes towards using AI for financial advice.
The study targeted adult mobile banking users who had interacted with GenAI’s financial advice feature. Purposive sampling was used to select participants meeting this criterion. The final sample included 822 respondents, balanced across gender, age, and economic background to ensure a representative cross-section of the diverse mobile banking user population. Participants were aged 18 to 65 and had varying levels of experience and knowledge regarding AI-powered financial advice tools.
First, a pilot test was conducted with a select group of respondents to identify and resolve any potential issues, ensuring the clarity and comprehensibility of the questionnaire. This preliminary phase was crucial for fine-tuning the survey instrument and optimizing the data collection process.
Following rigorous vetting, we commenced the formal data collection phase by administering the thoroughly reviewed questionnaire to our targeted respondent group. This structured approach, supported by expert validation and meticulous testing, reinforced the integrity of our methodology and improved the quality of the data collected, thereby providing a solid foundation for subsequent analysis.
To ensure the collection of high-quality, relevant data on consumer responses to GenAI financial advice, we collaborated with a professional survey firm. This partnership aimed to leverage the firm’s expertise in survey design, distribution, and data processing to obtain a representative and reliable sample. The survey was distributed through several channels:
1. An email campaign targeted a database of mobile banking users provided by partner banks.
2. In-app notifications were sent to users of the mobile banking application, encouraging survey participation.
3. The survey was shared on relevant financial forums and social media platforms.
The survey was a critical component of our investigation into consumer responses to GenAI’s financial advice. Conducted over a three-month period, it allowed ample time to gather responses from a large number of participants. To ensure data integrity and relevance, the survey company employed advanced filtering techniques to screen the responses. Additionally, stringent measures were taken to ensure the anonymity and confidentiality of respondents’ data, with all responses anonymized before analysis. This approach was essential for maintaining ethical research standards and ensuring the reliability and validity of the collected data. Table 1 summarizes the demographic characteristics of the survey respondents.

5. Data Analysis and Results

5.1. Measurement Model

Ref. [81] suggested that single-source data may be prone to common method variance. To determine the presence of common method bias (CMB) in our collected data, we conducted Harman’s single-factor test. This test involves loading all measurement items into a principal component analysis without rotation. It is widely accepted that CMB is a concern if a single factor accounts for more than 50% of the total variance. In this study, the first factor accounted for 31.95% of the variance, which is below the 50% threshold. Therefore, we can conclude that the data in this study are not affected by common method bias.
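For readers who wish to reproduce this check, the sketch below illustrates Harman’s single-factor test as described above, using an unrotated principal component analysis. It is a minimal sketch only; the `items` data frame and its columns are hypothetical placeholders for the survey’s measurement items, not the study’s actual dataset.

```python
# Harman's single-factor test: load all measurement items into an
# unrotated principal component analysis and check how much variance
# the first factor explains. `items` is a hypothetical DataFrame that
# holds every Likert-scale measurement item for every respondent.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def harman_single_factor_test(items: pd.DataFrame, threshold: float = 0.50) -> bool:
    """Return True when the first unrotated factor stays below the threshold."""
    standardized = StandardScaler().fit_transform(items)
    pca = PCA().fit(standardized)                     # no rotation is applied
    first_factor_share = pca.explained_variance_ratio_[0]
    print(f"First factor explains {first_factor_share:.2%} of total variance")
    return first_factor_share < threshold             # e.g., 31.95% < 50% in this study
```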
The measurement model was assessed by examining factor loading values, composite reliability (CR), and average variance extracted (AVE). As shown in Table 2, all factor loadings exceed the recommended threshold of 0.6. Additionally, Cronbach’s α, which measures internal consistency reliability, ranged from 0.845 to 0.949, surpassing the suggested threshold of 0.7 [82]. These results provide strong evidence supporting the scale’s reliability.
Composite reliability (CR) was used to evaluate the internal consistency of the scale, with higher values indicating greater reliability. Ref. [83] state that CR values between 0.6 and 0.7 are acceptable, while values between 0.7 and 0.9 are considered satisfactory to good. As shown in Table 3, all CR values exceeded 0.8, confirming the scale’s satisfactory composite reliability.
Additionally, the average variance extracted (AVE) values for all variables exceeded 0.5, meeting the criteria for convergent validity [84]. These results collectively indicate that the measurement model demonstrates strong reliability and convergent validity.
To assess discriminant validity, we used the Ref. [84] method, which requires the square root of the AVE to be greater than the correlations among the constructs. Table 3 shows the square root of the AVE values along the diagonal (in bold) and the correlations among the constructs in the off-diagonal cells. The results reveal that the square root of the AVE for each construct is higher than the corresponding off-diagonal correlation values. This indicates that the measurement model has satisfactory discriminant validity, as each construct is more strongly related to its own measures than to those of other constructs.
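The reliability and validity criteria above rest on standard formulas, and the following sketch shows one way they can be computed. The item scores, standardized loadings, and construct labels passed to these helpers are hypothetical inputs that would come from the fitted measurement model; the helper names themselves are illustrative.

```python
# Standard formulas behind the reliability and validity checks:
# Cronbach's alpha, composite reliability (CR), average variance
# extracted (AVE), and the Fornell-Larcker comparison.
import numpy as np
import pandas as pd

def cronbach_alpha(item_scores: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the summed score)."""
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1).sum()
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2)), standardized loadings."""
    s = loadings.sum()
    return s**2 / (s**2 + (1 - loadings**2).sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of the squared standardized loadings."""
    return float((loadings**2).mean())

def fornell_larcker_matrix(ave_by_construct: dict, correlations: pd.DataFrame) -> pd.DataFrame:
    """Place sqrt(AVE) on the diagonal; discriminant validity holds when each
    diagonal value exceeds the off-diagonal correlations in its row and column."""
    matrix = correlations.copy()
    for construct, ave in ave_by_construct.items():
        matrix.loc[construct, construct] = np.sqrt(ave)
    return matrix
```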
Before conducting structural equation modeling (SEM) analysis, a confirmatory factor analysis (CFA) was performed to evaluate the measurement model. The model’s goodness of fit was assessed using various indices and their corresponding thresholds, as recommended by Ref. [85].
The CFA results indicated that the measurement model fit the data well. Specifically, the chi-square to degrees of freedom ratio (χ2/df) was 1.173, which is within the acceptable range. The Goodness of Fit Index (GFI) and Adjusted Goodness of Fit Index (AGFI) values were 0.938 and 0.932, respectively, both exceeding the recommended thresholds. Additionally, the Comparative Fit Index (CFI) and Normed Fit Index (NFI) values were 0.992 and 0.95, respectively, indicating a strong fit. The Incremental Fit Index (IFI) value of 0.992 also met the criteria. Finally, the Standardized Root Mean Square Residual (SRMR) and Root Mean Square Error of Approximation (RMSEA) values were 0.026 and 0.015, respectively, both falling below the recommended thresholds, further supporting the model’s acceptable fit.
As shown in Table 4, all fitting indices of the measurement model met the recommended criteria, confirming that the model adequately represents the data and is suitable for subsequent SEM analysis.
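As a quick sanity check, the reported indices can be compared against commonly cited cutoff conventions. The sketch below does exactly that; the index values are those reported above, while the cutoffs are conventional rules of thumb rather than the exact thresholds listed in Table 4.

```python
# Reported CFA fit indices (Section 5.1) checked against commonly cited
# rules of thumb. The cutoffs below are illustrative conventions only;
# the authoritative thresholds are those given in Table 4.
REPORTED_CFA_FIT = {
    "chi2/df": 1.173, "GFI": 0.938, "AGFI": 0.932, "CFI": 0.992,
    "NFI": 0.95, "IFI": 0.992, "SRMR": 0.026, "RMSEA": 0.015,
}

# (operator, cutoff): "<" means the index should fall below the cutoff,
# ">" means it should exceed it.
CONVENTIONAL_CUTOFFS = {
    "chi2/df": ("<", 3.0), "GFI": (">", 0.90), "AGFI": (">", 0.90),
    "CFI": (">", 0.90), "NFI": (">", 0.90), "IFI": (">", 0.90),
    "SRMR": ("<", 0.08), "RMSEA": ("<", 0.08),
}

for name, value in REPORTED_CFA_FIT.items():
    op, cutoff = CONVENTIONAL_CUTOFFS[name]
    meets = value < cutoff if op == "<" else value > cutoff
    print(f"{name}: {value} {'meets' if meets else 'misses'} cutoff ({op} {cutoff})")
```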

5.2. Structural Model

The structural model was evaluated to examine the relationships between the constructs proposed in the research model. The analysis revealed that all hypothesized paths were significant at the 0.05 level and in the expected directions. Table 5 presents the standardized path coefficients between constructs, significance levels, and explanatory power (R2) for each construct. According to the rule of thumb, R2 values of 25%, 50%, and 75% indicate weak, average, and substantial explanatory power, respectively.
In this study, the R2 values for perceived authenticity, utilitarian attitude, willingness to communicate with GenAI, and resistance to communicate with GenAI were 56.9%, 50.5%, 50.3%, and 54.6%, respectively, indicating a satisfactory level of explanation.
The results in Table 5 show a positive association between personalized investment suggestions and perceived authenticity (β = 0.318, p < 0.001), supporting Hypothesis 1. Similarly, there is a positive association between human-like empathy and perceived authenticity (β = 0.338, p < 0.001), confirming Hypothesis 2. Additionally, continuous improvement positively influences perceived authenticity (β = 0.287, p < 0.001), supporting Hypothesis 3. Together, personalized investment suggestions, human-like empathy, and continuous improvement account for 56.9% of the variance in perceived authenticity.
Furthermore, perceived authenticity positively impacts utilitarian attitude (β = 0.71, p < 0.001), accounting for 50.5% of its variance, thereby supporting Hypothesis 4. In turn, utilitarian attitude positively influences the willingness to communicate with GenAI (β = 0.709, p < 0.001), supporting Hypothesis 5, and negatively affects resistance to communicating with GenAI (β = -0.739, p < 0.001), supporting Hypothesis 6. Utilitarian attitude explains 50.3% of the variance in willingness to communicate with GenAI and 54.6% of the variance in resistance to communicating with GenAI.
After verifying the hypotheses, a structural model test was conducted. The results indicated that the model demonstrated an acceptable fit to the data, according to the criteria recommended by Hu and Bentler (1999). The chi-square to degrees of freedom ratio (χ2/df) was 1.225, which is within the acceptable range. The Goodness of Fit Index (GFI) and Adjusted Goodness of Fit Index (AGFI) values were 0.941 and 0.935, respectively, both exceeding the recommended thresholds. Additionally, the Comparative Fit Index (CFI), Normed Fit Index (NFI), and Incremental Fit Index (IFI) values were 0.990, 0.953, and 0.990, respectively, indicating a strong fit between the model and the data. The standardized root mean squared residual (SRMR) value of 0.038 and the root mean square error of approximation (RMSEA) value of 0.018 were both below the recommended cutoff points, further supporting the model’s acceptable fit. These fit indices, as presented in Table 6, collectively indicate that the structural model adequately represents the relationships among the constructs and provides a satisfactory explanation of the data.
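For illustration, a structural model of this kind can be specified in lavaan-style syntax. The sketch below assumes the Python semopy package; the construct labels follow the paper’s abbreviations, but the item names and counts, the function name, and API details such as the `std_est` flag are assumptions that may differ from the authors’ actual estimation procedure and installed software version.

```python
# Sketch of the structural model corresponding to H1-H6, assuming the
# semopy package (lavaan-style model syntax). PA = perceived
# authenticity, UA = utilitarian attitude, WTC/RTC = willingness/
# resistance to communicate with GenAI; item names are illustrative.
import semopy

STRUCTURAL_MODEL = """
PIS =~ PIS1 + PIS2 + PIS3
HLE =~ HLE1 + HLE2 + HLE3
CI  =~ CI1 + CI2 + CI3
PA  =~ PA1 + PA2 + PA3
UA  =~ UA1 + UA2 + UA3
WTC =~ WTC1 + WTC2 + WTC3
RTC =~ RTC1 + RTC2 + RTC3
PA ~ PIS + HLE + CI
UA ~ PA
WTC ~ UA
RTC ~ UA
"""

def fit_structural_model(data):
    """Fit the SEM and return parameter estimates and model fit statistics."""
    model = semopy.Model(STRUCTURAL_MODEL)
    model.fit(data)                           # data: respondent-by-item DataFrame
    estimates = model.inspect(std_est=True)   # standardized path coefficients
    fit_stats = semopy.calc_stats(model)      # chi2, df, CFI, RMSEA and related indices
    return estimates, fit_stats
```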
In addition to the primary hypotheses, the study proposed that AI literacy moderates the relationships between GenAI’s characteristics (personalized investment suggestion, human-like empathy, and continuous improvement) and perceived authenticity. The results presented in Table 5 show that these moderating effects are significant: the strength of the positive associations between GenAI’s characteristics and consumers’ perceived authenticity varies with consumers’ level of AI literacy.
The interaction term between personalized investment suggestions and AI literacy is positively associated with perceived authenticity (β = 0.101, p < 0.001), indicating that the relationship between personalized investment suggestions and perceived authenticity is strengthened by higher levels of AI literacy. Similarly, the interaction term between human-like empathy and AI literacy is positively associated with perceived authenticity (β = 0.097, p < 0.001), suggesting that the relationship between human-like empathy and perceived authenticity is enhanced by higher levels of AI literacy. Finally, the interaction term between continuous improvement and AI literacy is positively associated with perceived authenticity (β = 0.108, p < 0.001), indicating that the relationship between continuous improvement and perceived authenticity is reinforced by higher levels of AI literacy.
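One common way to test such moderation is to enter mean-centered interaction terms between AI literacy and each GenAI attribute as additional predictors of perceived authenticity. The sketch below shows that construction under the assumption of composite (averaged) construct scores with hypothetical column names; the paper’s own estimation approach, such as a latent interaction method within the SEM, may differ.

```python
# Building mean-centered interaction terms for the H7-H9 moderation
# tests. A sketch only: `scores` is a hypothetical DataFrame of
# composite construct scores per respondent (PIS, HLE, CI, AIL).
import pandas as pd

def add_interaction_terms(scores: pd.DataFrame) -> pd.DataFrame:
    """Add PIS x AIL, HLE x AIL, and CI x AIL product terms."""
    out = scores.copy()
    ail_centered = out["AIL"] - out["AIL"].mean()        # center the moderator
    for attribute in ["PIS", "HLE", "CI"]:
        attribute_centered = out[attribute] - out[attribute].mean()
        out[f"{attribute}_x_AIL"] = attribute_centered * ail_centered
    return out

# The product terms can then be entered alongside PIS, HLE, CI, and AIL
# as predictors of perceived authenticity (PA), mirroring the
# interaction coefficients reported in Table 5.
```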
Figure 2 presents a visual representation of the standardized path coefficients and the significance levels for each hypothesis, including the moderating effects of AI literacy on the relationships between GenAI’s characteristics and perceived authenticity.

6. Conclusion

The objective of this study was to explore the dynamics of consumer responses to GenAI-powered financial advice, addressing a critical gap in the literature on the adoption of GenAI technologies in financial services. Through rigorous empirical analysis, it was shown that personalized investment suggestions, human-like empathy, and the continuous improvement of GenAI significantly enhance consumers’ perceptions of authenticity. These perceptions, in turn, foster a utilitarian attitude towards using GenAI for financial advice, increasing consumers’ willingness to engage with GenAI and reducing their resistance to communicating with it. Notably, the study highlights the role of AI literacy in amplifying the positive effects of GenAI’s features on perceived authenticity.
Our findings delineate a clear pathway through which GenAI’s features influence consumer behaviors. The provision of personalized investment advice, demonstration of human-like empathy, and commitment to continuous improvement enhance the perceived authenticity of GenAI’s financial counsel. These insights align with Refs. [26,86], who emphasized the importance of perceived human-likeness in user interactions with AI systems. Additionally, the work of Refs. [73,87] highlighted the role of personalization and continuous improvement in enhancing consumer trust in AI services.
We also found that perceived authenticity is crucial in developing a utilitarian attitude towards GenAI, which in turn increases the willingness to interact with the AI and reduces resistance. These findings extend previous research on the importance of authentic design for GenAI platforms [88,89].
Furthermore, the significant moderating influence of AI literacy underscores the importance of consumers’ understanding and familiarity with AI technologies in enhancing the effectiveness of GenAI’s features. These findings support past studies on AI literacy [36,80] and demonstrate its value in the field of financial advisory services.

6.1. Academic Implications

This research significantly enhances the understanding of how GenAI influences consumer behavior in the realm of financial advice. The study’s findings contribute to the theoretical landscape by extending the application of SDL, integrating the AIDUA framework, and highlighting the complex interplay between AI attributes and consumer perceptions.
The study’s findings emphasize the importance of personalized investment suggestions, human-like empathy, and continuous improvement in GenAI’s recommendations within the context of consumer value co-creation, as highlighted by SDL theory. By tailoring its services to individual consumer needs and preferences, GenAI facilitates a more interactive and collaborative experience between the service provider and the consumer, thus enabling value co-creation. As demonstrated by Ref. [90], personalization is crucial in enabling this co-creation process. The current study’s findings align with SDL principles and extend the theory by showing how digital technologies enhance personalized value co-creation, surpassing the limitations of traditional human-to-human service frameworks.
Moreover, GenAI’s ability to exhibit human-like empathy significantly influences consumers’ perceived authenticity, encompassing genuine care and concern for others. This finding contributes to the growing body of literature on the importance of designing AI technologies that are not only competent but also genuine and transparent in their interactions [91]. Additionally, GenAI’s capacity for continuous learning allows it to adapt to evolving user needs and preferences, thereby enhancing its perceived authenticity over time [92,93].
These findings underscore the importance of integrating personalized investment suggestions, human-like empathy, and continuous improvement in GenAI-driven financial advice. This integration reflects the processes of SDL and AIDUA by co-creating value through tailored, empathetic, and adaptive financial guidance, ultimately enhancing consumer engagement, trust, and participation in GenAI-powered financial services.
The study also highlights the importance of perceived authenticity in human-bot interactions, especially within the field of artificial intelligence [75,76]. The positive correlation between GenAI’s features and perceived authenticity aligns with the authenticity principle in AI research [29,94,95]. This underscores the necessity for GenAI and similar technologies to demonstrate authenticity to effectively engage and support users.
Additionally, the study identifies a strong correlation between perceived authenticity, utilitarian attitudes, and consumers’ willingness and resistance to communicate with GenAI for financial advice, expanding our understanding of technology adoption theories. The research demonstrates that perceived authenticity enhances utilitarian attitudes toward GenAI, which in turn affects the willingness or resistance to use GenAI for financial advice. This suggests that the value consumers place on authenticity can significantly influence their practical assessment of a technology’s benefits [96]. This finding advocates for a broader interpretation of perceived usefulness in AI technology acceptance, highlighting the importance of authenticity in shaping utilitarian evaluations of AI technology.
Lastly, the study’s focus on AI literacy adds to the theoretical landscape by suggesting that a higher level of AI literacy can enhance the effectiveness of AI features in improving perceived authenticity and, consequently, utilitarian attitudes [97]. This implies that individuals’ interactions with AI technologies are significantly influenced by their understanding of the technology, leading to increased acceptance and willingness to communicate with GenAI. Conversely, lower levels of AI literacy may lead to resistance in communicating with GenAI, highlighting the importance of addressing this factor to facilitate the effective integration of AI-driven services in the consumer value co-creation process.
In conclusion, this study offers a comprehensive integration of key concepts, including personalized investment suggestions, human-like empathy, continuous improvement, perceived authenticity, utilitarian attitudes, and consumers’ willingness and resistance to communicate with GenAI, within the frameworks of SDL and AIDUA. The findings show that GenAI’s personalized and empathetic approach, along with its ability to continuously improve, enhances perceived authenticity and utilitarian attitudes among consumers, facilitating value co-creation as proposed by SDL. Additionally, the study extends the AIDUA model by incorporating continuous improvement as a factor influencing perceived authenticity, a key determinant of AI tool usage. The research also underscores the role of AI literacy in shaping consumers’ willingness or resistance to engage with GenAI, highlighting the importance of addressing this factor to ensure the effective integration of AI-driven services in the value co-creation process. Overall, this study contributes to the growing body of literature on AI-driven services and their impact on consumer behavior, providing valuable insights for both researchers and practitioners in the field.

6.2. Practical Implications

The practical implications of this study are substantial, providing valuable insights for a wide range of stakeholders, including financial institutions, technology developers, and policymakers. For financial service providers, the study emphasizes the importance of developing GenAI technologies with enhanced human-like characteristics, such as the ability to offer personalized advice and empathy. This suggests that financial institutions should invest in AI systems that go beyond basic natural language processing and incorporate the ability to understand and adapt to individual emotional states and preferences. The research indicates that GenAI-driven chatbots capable of recognizing and responding to users’ emotions can significantly enhance user satisfaction and engagement. This underscores the necessity for financial institutions to employ GenAI technologies that can tailor their services to individual needs and preferences.
Furthermore, the study underscores the importance of ongoing learning in maintaining and enhancing consumer trust and engagement with GenAI systems. Financial institutions should prioritize designing AI systems that can continuously update their knowledge base and refine their algorithms based on user interactions. This approach aligns with the continuous improvement aspect of AI development and ensures that AI systems remain relevant and effective in meeting evolving consumer needs and preferences. AI systems capable of continuous learning and improvement are better equipped to build and maintain user trust over time by demonstrating an ongoing commitment to providing accurate and up-to-date information.
The study’s findings also highlight the importance of AI literacy in enhancing the positive impact of GenAI’s attributes on perceived authenticity. This suggests that financial institutions should develop educational programs and resources to improve consumers’ understanding of AI. By investing in initiatives that demystify AI technologies, financial institutions can reduce resistance and increase engagement among consumers. This aligns with the broader goal of enhancing AI literacy and ensuring that consumers have the necessary knowledge and skills to interact effectively with AI-driven services. Consumers with higher levels of AI literacy are more likely to appreciate the benefits of AI-driven services and engage with them more effectively. Therefore, businesses should invest in educational initiatives to promote consumer understanding and acceptance of these technologies.
In conclusion, the study’s implications highlight the importance for policymakers to consider the impact of GenAI-driven financial advice on personalized investment suggestions, human-like empathy, and continuous improvement in consumer financial services. As GenAI becomes increasingly integrated into the sector, policymakers must ensure that consumers receive tailored advice that aligns with their unique financial circumstances, fostering trust and engagement. Additionally, they should prioritize consumer privacy protection while promoting equitable access to AI-driven benefits, addressing the digital divide. This may involve establishing standards for transparency in AI algorithms, ensuring data privacy, and implementing digital literacy programs. By proactively addressing these issues with a focus on personalization, empathy, and ongoing improvement, policymakers can create a regulatory landscape that supports responsible innovation. This approach will ultimately encourage the development and deployment of AI technologies within the financial sector that prioritize individual needs, build meaningful connections, and continuously evolve to better serve consumers.

6.3. Limitations and Future Directions

Although this study provides valuable insights into the factors influencing consumer perceptions and attitudes towards GenAI-driven financial advice, it has limitations. First, the sample consists of mobile banking users, which may constrain the generalizability of the findings to other consumer segments; future research could examine similar questions across different demographics and user groups. In addition, qualitative methodologies, such as in-depth interviews or focus groups, could offer a more nuanced understanding of consumer perceptions and attitudes towards GenAI-driven financial advice.
Another avenue for future research is to examine the influence of cultural differences on consumer reactions to GenAI-powered financial advisors. Given the variability in cultural values, norms, and expectations across societies, it is plausible that the factors influencing perceived authenticity and utilitarian attitudes towards GenAI-driven financial advice may vary. Comparative studies across different cultural contexts could offer valuable insights into designing and deploying GenAI-driven financial advisors to meet the unique needs and preferences of diverse consumer groups.
Finally, ethical considerations and privacy concerns surrounding GenAI-driven financial advice are critical areas for future research. As GenAI systems become more integrated into financial services, ensuring they are designed and deployed to respect consumer privacy, avoid bias, and promote fairness is paramount. Research on the ethical implications of GenAI-driven financial advice could inform the development of guidelines and regulations to ensure these technologies are used responsibly and in the best interests of consumers.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix

Operational definitions and measurement items (sources in brackets)

Personalized Investment Suggestion (PIS) [73]
1. I feel that the investment suggestion by the GenAI is in line with my preferences.
2. I feel that the investment suggestion by the GenAI is in line with my taste.
3. The investment suggestion by the GenAI is what I am interested in.
4. The investment suggestion by the GenAI is better than the suggestions I get from other places.
5. I feel that the quality of the investment suggestion by the GenAI is what I want.
6. My overall evaluation of the GenAI investment suggestion is very high.
7. I think the GenAI investment suggestions are valuable.
8. The investment suggestions of the GenAI are flexible and changeable according to my question.

Human-like Empathy (HLE) [26,74,85]
1. The GenAI makes me feel warm.
2. The GenAI makes me feel that it cares about my needs.
3. The GenAI makes me feel concerned.
4. I feel that the GenAI serves me attentively.
5. I feel that the GenAI puts my interests first.
6. The GenAI gives me personalized attention.
7. The GenAI has expressed being able to empathize with the customer's feelings.
8. The GenAI has indicated it could put itself well in the customer's shoes.
9. The GenAI is able to accurately understand the customer's concerns.
10. The GenAI can adopt my perspective and recommend the desired financial products.
11. The GenAI is preoccupied with offering me the best financial products.

Continuous Improvement (CI) [73]
1. The GenAI can learn from past experience.
2. The GenAI's ability is enhanced through learning.
3. After a period of use, the GenAI's performance gets better and better.
4. I can feel that the GenAI is constantly upgrading.
5. The GenAI fixes previous errors.
6. I feel that the GenAI is getting more and more advanced.
7. The functions of the GenAI have been enhanced.

Perceived Authenticity (PA) [24,76]
1. When I think of the GenAI, I see a unique set of characteristics.
2. I would think of the GenAI as a unique individual.
3. Using the GenAI provided me with genuine experiences.

Utilitarian Attitude (UA) [68]
1. The GenAI is useful.
2. The GenAI is productive.
3. The GenAI is necessary.
4. The GenAI is practical.
5. The GenAI is functional.

Willingness to Communicate with GenAI (WCG) [31,77]
1. I am willing to receive financial advisory services from GenAI.
2. I would feel happy to interact with GenAI.
3. I am likely to interact with GenAI.
4. I would like to utilize the GenAI-powered financial service if there is an opportunity.
5. I intend to utilize the GenAI financial advisory service continuously.
6. I would recommend the GenAI financial advisory service to my friends.

Resistance to Communicate with GenAI (RCG) [31,78]
1. The financial advisory service provided by the GenAI is processed in a less humanized manner.
2. I prefer human contact when looking for investment suggestions.
3. People need emotional exchange during service transactions.
4. Interaction with the GenAI lacks social contact.
5. The existing problems with GenAI make me take a wait-and-see approach to it.
6. I do not plan to continue using GenAI.

AI Literacy (AIL) [79,80]
1. I can use AI to solve problems involving text and words.
2. I know how to decide which data to collect and how to process them for training AI models to solve problems.
3. I know how to interpret results obtained from AI to solve problems.
4. I know how to select AI algorithms to solve problems.
5. I know how to improve my ability to use AI for problem-solving.
6. I can use AI to solve problems involving images and videos.

References

  1. Sironi, P. FinTech Innovation: From Robo-Advisors to Goal Based Investing and Gamification; John Wiley & Sons: Hoboken, NJ, USA, 2016.
  2. Dewasiri, N.J.; Karunarathna, K.S.S.N.; Rathnasiri, M.S.H.; Dharmarathne, D.G.; Sood, K. Unleashing the Challenges of Chatbots and ChatGPT in the Banking Industry: Evidence from an Emerging Economy. In The Framework for Resilient Industry: A Holistic Approach for Developing Economies; Routledge: London, UK, 2024; pp. 23-37.
  3. Brenner, L.; Meyll, T. Robo-Advisors: A Substitute for Human Financial Advice?. J. Behav. Exp. Finance 2020, 25, 100275. [CrossRef]
  4. Bhatia, A.; Chandani, A.; Divekar, R.; Mehta, M.; Vijay, N. Digital Innovation in Wealth Management Landscape: The Moderating Role of Robo Advisors in Behavioural Biases and Investment Decision-Making. Int. J. Innov. Sci. 2022, 14, 693-712. [CrossRef]
  5. Xia, H.; Zhang, Q.; Zhang, J.Z.; Zheng, L.J. Exploring Investors’ Willingness to Use Robo-Advisors: Mediating Role of Emotional Response. Ind. Manag. Data Syst. 2023, 123, 2857-2881. [CrossRef]
  6. Fui-Hoon Nah, F.; Zheng, R.; Cai, J.; Siau, K.; Chen, L. Generative AI and ChatGPT: Applications, Challenges, and AI-Human Collaboration. J. Inf. Technol. Case Appl. Res. 2023, 25, 277-304. [CrossRef]
  7. Pelau, C.; Dabija, D.C.; Ene, I. What Makes an AI Device Human-Like? The Role of Interaction Quality, Empathy, and Perceived Psychological Anthropomorphic Characteristics in the Acceptance of Artificial Intelligence in the Service Industry. Comput. Hum. Behav. 2021, 122, 106855. [CrossRef]
  8. Vargo, S.L.; Lusch, R.F. Evolving to a New Dominant Logic for Marketing. J. Mark. 2004, 68, 1-17. [CrossRef]
  9. Gursoy, D.; Chi, O.H.; Lu, L.; Nunkoo, R. Consumers’ Acceptance of Artificially Intelligent (AI) Device Use in Service Delivery. Int. J. Inf. Manag. 2019, 49, 157-169. [CrossRef]
  10. Huang, M.H.; Rust, R.T. Artificial Intelligence in Service. J. Serv. Res. 2018, 21, 155-172. [CrossRef]
  11. Roh, T.; Park, B.I.; Xiao, S.S. Adoption of AI-Enabled Robo-Advisors in Fintech: Simultaneous Employment of UTAUT and the Theory of Reasoned Action. J. Electron. Commer. Res. 2023, 24, 29-47. https://api.semanticscholar.org/CorpusID:258835831.
  12. Chou, S.Y.; Lin, C.W.; Chen, Y.C.; Chiou, J.S. The Complementary Effects of Bank Intangible Value Binding in Customer Robo-Advisory Adoption. Int. J. Bank Mark. 2023, 41, 971-988. [CrossRef]
  13. Ullah, R.; Ismail, H.B.; Khan, M.T.I.; Zeb, A. Nexus between ChatGPT Usage Dimensions and Investment Decisions Making in Pakistan: Moderating Role of Financial Literacy. Technol. Soc. 2024, 76, 102454.
  14. Roumeliotis, K.I.; Tselikas, N.D. ChatGPT and Open-AI Models: A Preliminary Review. Future Internet 2023, 15, 192. [CrossRef]
  15. Javaid, M.; Haleem, A.; Singh, R.P. A Study on ChatGPT for Industry 4.0: Background, Potentials, Challenges, and Eventualities. J. Econ. Technol. 2023, 1, 127-143. [CrossRef]
  16. Oehler, A.; Horn, M. Does ChatGPT Provide Better Advice than Robo-Advisors?. Finance Res. Lett. 2024, 60, 104898. [CrossRef]
  17. Aldunate, Á.; Maldonado, S.; Vairetti, C.; Armelini, G. Understanding Customer Satisfaction via Deep Learning and Natural Language Processing. Expert Syst. Appl. 2022, 209, 118309. [CrossRef]
  18. Ko, H.; Lee, J. Can ChatGPT Improve Investment Decisions? From a Portfolio Management Perspective. Finance Res. Lett. 2024, 64, 105433.
  19. Chen, B.; Wu, Z.; Zhao, R. From Fiction to Fact: The Growing Role of Generative AI in Business and Finance. J. Chin. Econ. Bus. Stud. 2023, 21, 471-496. [CrossRef]
  20. Srinivasan, S.S.; Anderson, R.; Ponnavolu, K. Customer Loyalty in E-Commerce: An Exploration of its Antecedents and Consequences. J. Retail. 2002, 78, 41-50. [CrossRef]
  21. Tam, K.Y.; Ho, S.Y. Web Personalization as a Persuasion Strategy: An Elaboration Likelihood Model Perspective. Inf. Syst. Res. 2005, 16, 271-291. [CrossRef]
  22. Ali, H.; Aysan, A.F. What Will ChatGPT Revolutionize in Financial Industry?. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4403372 (accessed on 18 August 2024).
  23. Ashta, A.; Herrmann, H. Artificial Intelligence and Fintech: An Overview of Opportunities and Risks for Banking, Investments, and Microfinance. Strateg. Change 2021, 30, 211-222. [CrossRef]
  24. Vo, D.T.; Nguyen, L.T.V.; Dang-Pham, D.; Hoang, A.P. When Young Customers Co-Create Value of AI-Powered Branded App: The Mediating Role of Perceived Authenticity. Young Consum. 2024, [Epub ahead of print]. [CrossRef]
  25. Nazir, A.; Wang, Z. A Comprehensive Survey of ChatGPT: Advancements, Applications, Prospects, and Challenges. Meta-Radiology 2023, 100022. [CrossRef]
  26. Pelau, C.; Dabija, D.C.; Ene, I. What Makes an AI Device Human-Like? The Role of Interaction Quality, Empathy and Perceived Psychological Anthropomorphic Characteristics in the Acceptance of Artificial Intelligence in the Service Industry. Comput. Hum. Behav. 2021, 122, 106855. [CrossRef]
  27. Alboqami, H. Trust Me, I’m an Influencer!-Causal Recipes for Customer Trust in Artificial Intelligence Influencers in the Retail Industry. J. Retail. Consum. Serv. 2023, 72, 103242. [CrossRef]
  28. Glikson, E.; Asscher, O. AI-Mediated Apology in a Multilingual Work Context: Implications for Perceived Authenticity and Willingness to Forgive. Comput. Hum. Behav. 2023, 140, 107592. [CrossRef]
  29. Jones, C.L.E.; Hancock, T.; Kazandjian, B.; Voorhees, C.M. Engaging the Avatar: The Effects of Authenticity Signals during Chat-Based Service Recoveries. J. Bus. Res. 2022, 144, 703-716. [CrossRef]
  30. Stahl, B.C.; Eke, D. The Ethics of ChatGPT–Exploring the Ethical Issues of an Emerging Technology. Int. J. Inf. Manag. 2024, 74, 102700. [CrossRef]
  31. Ma, X.; Huo, Y. Are Users Willing to Embrace ChatGPT? Exploring the Factors on the Acceptance of Chatbots from the Perspective of AIDUA Framework. Technol. Soc. 2023, 75, 102362. [CrossRef]
  32. Niu, B.; Mvondo, G.F.N. I Am ChatGPT, the Ultimate AI Chatbot! Investigating the Determinants of Users’ Loyalty and Ethical Usage Concerns of ChatGPT. J. Retail. Consum. Serv. 2024, 76, 103562. [CrossRef]
  33. Paul, J.; Ueno, A.; Dennis, C. ChatGPT and Consumers: Benefits, Pitfalls and Future Research Agenda. Int. J. Consum. Stud. 2023, 47, 1213-1225. [CrossRef]
  34. Chang, T.S.; Hsiao, W.H. Understand Resist Use Online Customer Service Chatbot: An Integrated Innovation Resist Theory and Negative Emotion Perspective. Aslib J. Inf. Manag. 2024. [CrossRef]
  35. Baek, T.H.; Kim, M. Is ChatGPT Scary Good? How User Motivations Affect Creepiness and Trust in Generative Artificial Intelligence. Telemat. Inform. 2023, 83, 102030. [CrossRef]
  36. Ng, D.T.K.; Leung, J.K.L.; Chu, S.K.W.; Qiao, M.S. Conceptualizing AI Literacy: An Exploratory Review. Comput. Educ. Artif. Intell. 2021, 2, 100041. [CrossRef]
  37. Perchik, J.D.; Smith, A.D.; Elkassem, A.A.; Park, J.M.; Rothenberg, S.A.; Tanwar, M.; Sotoudeh, H. Artificial Intelligence Literacy: Developing a Multi-Institutional Infrastructure for AI Education. Acad. Radiol. 2023, 30, 1472-1480. [CrossRef]
  38. Cardon, P.; Fleischmann, C.; Aritz, J.; Logemann, M.; Heidewald, J. The Challenges and Opportunities of AI-Assisted Writing: Developing AI Literacy for the AI Age. Bus. Prof. Commun. Q. 2023, 86, 257-295. [CrossRef]
  39. Shin, D.; Rasul, A.; Fotiadis, A. Why Am I Seeing This? Deconstructing Algorithm Literacy through the Lens of Users. Internet Res. 2022, 32, 1214-1234. [CrossRef]
  40. Wang, B.; Rau, P.L.P.; Yuan, T. Measuring User Competence in Using Artificial Intelligence: Validity and Reliability of Artificial Intelligence Literacy Scale. Behav. Inf. Technol. 2023, 42, 1324-1337. [CrossRef]
  41. Markus, A.; Pfister, J.; Carolus, A.; Hotho, A.; Wienrich, C. Effects of AI Understanding-Training on AI Literacy, Usage, Self-Determined Interactions, and Anthropomorphization with Voice Assistants. Comput. Educ. Open 2024, 6, 100176. [CrossRef]
  42. Vargo, S.L.; Maglio, P.P.; Akaka, M.A. On Value and Value Co-Creation: A Service Systems and Service Logic Perspective. Eur. Manag. J. 2008, 26, 145-152. [CrossRef]
  43. Grönroos, C. Service Logic Revisited: Who Creates Value? And Who Co-Creates?. Eur. Bus. Rev. 2008, 20, 298-314. [CrossRef]
  44. Riikkinen, M.; Saarijärvi, H.; Sarlin, P.; Lähteenmäki, I. Using Artificial Intelligence to Create Value in Insurance. Int. J. Bank Mark. 2018, 36, 1145-1168. [CrossRef]
  45. Zhu, H.; Vigren, O.; Söderberg, I.L. Implementing Artificial Intelligence Empowered Financial Advisory Services: A Literature Review and Critical Research Agenda. J. Bus. Res. 2024, 174, 114494. [CrossRef]
  46. Lin, H.; Chi, O.H.; Gursoy, D. Antecedents of Customers’ Acceptance of Artificially Intelligent Robotic Device Use in Hospitality Services. J. Hosp. Mark. Manag. 2020, 29, 530-549. [CrossRef]
  47. Kelly, S.; Kaye, S.A.; Oviedo-Trespalacios, O. What Factors Contribute to the Acceptance of Artificial Intelligence? A Systematic Review. Telemat. Inform. 2023, 77, 101925. [CrossRef]
  48. Bag, S.; Srivastava, G.; Bashir, M.M.A.; Kumari, S.; Giannakis, M.; Chowdhury, A.H. Journey of Customers in this Digital Era: Understanding the Role of Artificial Intelligence Technologies in User Engagement and Conversion. Benchmarking 2022, 29, 2074-2098. [CrossRef]
  49. Ameen, N.; Tarhini, A.; Reppel, A.; Anand, A. Customer Experiences in the Age of Artificial Intelligence. Comput. Hum. Behav. 2021, 114, 106548. [CrossRef]
  50. Vesanen, J. What is Personalization? A Conceptual Framework. Eur. J. Mark. 2007, 41, 409-418. [CrossRef]
  51. Musto, C.; Semeraro, G.; Lops, P.; De Gemmis, M.; Lekkas, G. Personalized Finance Advisory through Case-Based Recommender Systems and Diversification Strategies. Decis. Support Syst. 2015, 77, 100-111. [CrossRef]
  52. Napoli, J.; Dickinson, S.J.; Beverland, M.B.; Farrelly, F. Measuring Consumer-Based Brand Authenticity. J. Bus. Res. 2014, 67, 1090-1098. [CrossRef]
  53. Morhart, F.; Malär, L.; Guèvremont, A.; Girardin, F.; Grohmann, B. Brand Authenticity: An Integrative Framework and Measurement Scale. J. Consum. Psychol. 2015, 25, 200-218. [CrossRef]
  54. Chi, N.T.K.; Hoang Vu, N. Investigating the Customer Trust in Artificial Intelligence: The Role of Anthropomorphism, Empathy Response, and Interaction. CAAI Trans. Intell. Technol. 2023, 8, 260-273. [CrossRef]
  55. Chuah, S.H.W.; Yu, J. The Future of Service: The Power of Emotion in Human-Robot Interaction. J. Retail. Consum. Serv. 2021, 61, 102551. [CrossRef]
  56. Huang, M.H.; Rust, R.T. Engaged to a Robot? The Role of AI in Service. J. Serv. Res. 2021, 24, 30-41. [CrossRef]
  57. Li, J.; Huang, J.; Li, Y. Examining the Effects of Authenticity Fit and Association Fit: A Digital Human Avatar Endorsement Model. J. Retail. Consum. Serv. 2023, 71, 103230. [CrossRef]
  58. Alimamy, S.; Al-Imamy, S. Customer Perceived Value through Quality Augmented Reality Experiences in Retail: The Mediating Effect of Customer Attitudes. J. Mark. Commun. 2022, 28, 428-447.
  59. Kwon, J.; Amendah, E.; Ahn, J. Mediating Role of Perceived Authenticity in the Relationship between Luxury Service Experience and Life Satisfaction. J. Strateg. Mark. 2024, 32, 137-151. [CrossRef]
  60. Zamil, A.M.; Ali, S.; Akbar, M.; Zubr, V.; Rasool, F. The Consumer Purchase Intention toward Hybrid Electric Car: A Utilitarian-Hedonic Attitude Approach. Front. Environ. Sci. 2023, 11, 1101258. [CrossRef]
  61. Fu, X. Understanding the Adoption Intention for Electric Vehicles: The Role of Hedonic-Utilitarian Values. Energy 2024, 131703. [CrossRef]
  62. Kim, H.W.; Chan, H.C.; Gupta, S. Value-Based Adoption of Mobile Internet: An Empirical Investigation. Decis. Support Syst. 2007, 43, 111-126. [CrossRef]
  63. Dinh, C.M.; Park, S. How to Increase Consumer Intention to Use Chatbots? An Empirical Analysis of Hedonic and Utilitarian Motivations on Social Presence and the Moderating Effects of Fear across Generations. Electron. Commer. Res. 2023, 1-41. [CrossRef]
  64. Hsieh, P.J. An Empirical Investigation of Patients’ Acceptance and Resistance Toward the Health Cloud: The Dual Factor Perspective. Comput. Hum. Behav. 2016, 63, 959-969. [CrossRef]
  65. Ghosh, M. Empirical Study on Consumers’ Reluctance to Mobile Payments in a Developing Economy. J. Sci. Technol. Policy Manag. 2024, 15, 67-92. [CrossRef]
  66. Attié, E.; Meyer-Waarden, L. The Acceptance and Usage of Smart Connected Objects According to Adoption Stages: An Enhanced Technology Acceptance Model Integrating the Diffusion of Innovation, Uses and Gratification and Privacy Calculus Theories. Technol. Forecast. Soc. Change 2022, 176, 121485. [CrossRef]
  67. Jan, I.U.; Ji, S.; Kim, C. What (De) Motivates Customers to Use AI-Powered Conversational Agents for Shopping? The Extended Behavioral Reasoning Perspective. J. Retail. Consum. Serv. 2023, 75, 103440. [CrossRef]
  68. Priya, B.; Sharma, V. Exploring Users’ Adoption Intentions of Intelligent Virtual Assistants in Financial Services: An Anthropomorphic Perspectives and Socio-Psychological Perspectives. Comput. Hum. Behav. 2023, 148, 107912. [CrossRef]
  69. Carolus, A.; Koch, M.J.; Straka, S.; Latoschik, M.E.; Wienrich, C. MAILS—Meta AI Literacy Scale: Development and Testing of an AI Literacy Questionnaire Based on Well-Founded Competency Models and Psychological Change- and Meta-Competencies. Comput. Hum. Behav. Artif. Humans 2023, 1, 100014. [CrossRef]
  70. Tirado-Morueta, R.; Aguaded-Gómez, J.I.; Hernando-Gómez, Á. The Socio-Demographic Divide in Internet Usage Moderated by Digital Literacy Support. Technol. Soc. 2018, 55, 47-55. [CrossRef]
  71. Baabdullah, A.M.; Alalwan, A.A.; Algharabat, R.S.; Metri, B.; Rana, N.P. Virtual Agents and Flow Experience: An Empirical Examination of AI-Powered Chatbots. Technol. Forecast. Soc. Change 2022, 181, 121772. [CrossRef]
  72. Sperling, K.; Stenberg, C.J.; McGrath, C.; Åkerfeldt, A.; Heintz, F.; Stenliden, L. In Search of Artificial Intelligence (AI) Literacy in Teacher Education: A Scoping Review. Comput. Educ. Open 2024, 100169. [CrossRef]
  73. Chen, Q.; Gong, Y.; Lu, Y.; Tang, J. Classifying and Measuring the Service Quality of AI Chatbot in Frontline Service. J. Bus. Res. 2022, 145, 552-568. [CrossRef]
  74. Fu, J.; Mouakket, S.; Sun, Y. The Role of Chatbots’ Human-Like Characteristics in Online Shopping. Electron. Commer. Res. Appl. 2023, 61, 101304. [CrossRef]
  75. Seitz, L. Artificial Empathy in Healthcare Chatbots: Does it Feel Authentic?. Comput. Hum. Behav. Artif. Humans 2024, 2, 100067. [CrossRef]
  76. Meng, L.M.; Li, T.; Huang, X. Double-Sided Messages Improve the Acceptance of Chatbots. Ann. Tour. Res. 2023, 102, 103644. [CrossRef]
  77. Kim, W.B.; Hur, H.J. What Makes People Feel Empathy for AI Chatbots? Assessing the Role of Competence and Warmth. Int. J. Hum. Comput. Interact. 2023, 1-14.
  78. Yang, B.; Sun, Y.; Shen, X.L. Understanding AI-Based Customer Service Resistance: A Perspective of Defective AI Features and Tri-Dimensional Distrusting Beliefs. Inf. Process. Manag. 2023, 60, 103257. [CrossRef]
  79. Almatrafi, O.; Johri, A.; Lee, H. A Systematic Review of AI Literacy Conceptualization, Constructs, and Implementation and Assessment Efforts (2019-2023). Comput. Educ. Open 2024, 100173. [CrossRef]
  80. Kong, S.C.; Cheung, M.Y.W.; Tsang, O. Developing an Artificial Intelligence Literacy Framework: Evaluation of a Literacy Course for Senior Secondary Students Using a Project-Based Learning Approach. Comput. Educ. Artif. Intell. 2024, 6, 100214. [CrossRef]
  81. Podsakoff, P.M.; Organ, D.W. Self-Reports in Organizational Research: Problems and Prospects. J. Manag. 1986, 12, 531-544. [CrossRef]
  82. Hair, J.F.; Gabriel, M.; Patel, V. AMOS Covariance-Based Structural Equation Modeling (CB-SEM): Guidelines on its Application as a Marketing Research Tool. Braz. J. Mark. 2014, 13, 1-15. [CrossRef]
  83. Raza, S.A.; Qazi, W.; Khan, K.A.; Salam, J. Social Isolation and Acceptance of the Learning Management System (LMS) in the Time of COVID-19 Pandemic: An Expansion of the UTAUT Model. J. Educ. Comput. Res. 2021, 59, 183-208. [CrossRef]
  84. Fornell, C.; Larcker, D.F. Structural Equation Models with Unobservable Variables and Measurement Error: Algebra and Statistics. J. Mark. Res. 1981, 18, 39-50. [CrossRef]
  85. Hu, L.T.; Bentler, P.M. Cutoff Criteria for Fit Indexes in Covariance Structure Analysis: Conventional Criteria Versus New Alternatives. Struct. Equ. Modeling 1999, 6, 1-55. [CrossRef]
  86. Kim, J.; Kang, S.; Bae, J. Human Likeness and Attachment Effect on the Perceived Interactivity of AI Speakers. J. Bus. Res. 2022, 144, 797-804. [CrossRef]
  87. Pitardi, V. Personalized and Contextual Artificial Intelligence-Based Services Experience. In Artificial Intelligence in Customer Service: The Next Frontier for Personalized Engagement; Springer: Cham, Switzerland, 2023; pp. 101-122.
  88. Lee, G.; Kim, H.Y. Human vs. AI: The Battle for Authenticity in Fashion Design and Consumer Response. J. Retail. Consum. Serv. 2024, 77, 103690. [CrossRef]
  89. Pandey, P.; Rai, A.K. Analytical Modeling of Perceived Authenticity in AI Assistants: Application of PLS-Predict Algorithm and Importance-Performance Map Analysis. South Asian J. Bus. Stud. 2024. [CrossRef]
  90. Wen, H.; Zhang, L.; Sheng, A.; Li, M.; Guo, B. From “Human-to-Human” to “Human-to-Non-Human”–Influence Factors of Artificial Intelligence-Enabled Consumer Value Co-Creation Behavior. Front. Psychol. 2022, 13, 863313. [CrossRef]
  91. Markovitch, D.G.; Stough, R.A.; Huang, D. Consumer Reactions to Chatbot Versus Human Service: An Investigation in the Role of Outcome Valence and Perceived Empathy. J. Retail. Consum. Serv. 2024, 79, 103847. [CrossRef]
  92. Baidoo-Anu, D.; Ansah, L.O. Education in the Era of Generative Artificial Intelligence (AI): Understanding the Potential Benefits of ChatGPT in Promoting Teaching and Learning. J. AI 2023, 7, 52-62. [CrossRef]
  93. Raj, R.; Singh, A.; Kumar, V.; Verma, P. Analyzing the Potential Benefits and Use Cases of ChatGPT as a Tool for Improving the Efficiency and Effectiveness of Business Operations. BenchCouncil Trans. Benchmarks Stand. Eval. 2023, 3, 100140. [CrossRef]
  94. Rese, A.; Ganster, L.; Baier, D. Chatbots in Retailers’ Customer Communication: How to Measure Their Acceptance?. J. Retail. Consum. Serv. 2020, 56, 102176. [CrossRef]
  95. Kuhail, M.A.; Thomas, J.; Alramlawi, S.; Shah, S.J.H.; Thornquist, E. Interacting with a Chatbot-Based Advising System: Understanding the Effect of Chatbot Personality and User Gender on Behavior. Informatics 2022, 9, 81. [CrossRef]
  96. Alimamy, S.; Kuhail, M.A. I Will Be with You Alexa! The Impact of Intelligent Virtual Assistant’s Authenticity and Personalization on User Reusage Intentions. Comput. Hum. Behav. 2023, 143, 107711. [CrossRef]
  97. Du, H.; Sun, Y.; Jiang, H.; Islam, A.Y.M.; Gu, X. Exploring the Effects of AI Literacy in Teacher Learning: An Empirical Study. Humanit. Soc. Sci. Commun. 2024, 11, 1-10. [CrossRef]
Figure 2. Path coefficients of the research model.
Table 1. Demographic statistics.
Demographics Frequency Percentage (%)
Gender
  Male 412 50.1
  Female 410 49.9
Age
  18-24 139 16.9
  25-34 322 39.2
  35-44 233 28.3
  45-54 86 10.5
  55-64 35 4.3
  Above 65 7 0.9
Education Background
  High school and below 164 20.0
  3-year college 252 30.7
  Bachelor 363 44.2
  Master and above 43 5.2
Monthly Income
  3000 CNY and below 141 17.2
  3001-5000 CNY 423 51.5
  5001-7000 CNY 140 17.0
  7001-9000 CNY 61 7.4
  9000 CNY and above 57 6.9
Frequency of using GenAI
  Several times per day 29 3.5
  Once a day 78 9.5
  Several times per week 106 12.9
  Once a week 364 44.3
  Several times per month 208 25.3
  Once a month 37 4.5
Table 2. Reliability, CR, and AVE.
Constructs Items Item loadings Cronbach’s Alpha CR AVE
Personalized Investment Suggestion PIS1 0.924 0.926 0.928 0.619
PIS2 0.778
PIS3 0.769
PIS4 0.741
PIS5 0.747
PIS6 0.744
PIS7 0.783
PIS8 0.791
Human-like Empathy HLE1 0.898 0.949 0.95 0.635
HLE2 0.793
HLE3 0.778
HLE4 0.78
HLE5 0.754
HLE6 0.788
HLE7 0.768
HLE8 0.768
HLE9 0.798
HLE10 0.814
HLE11 0.814
Continuous Improvement CI1 0.87 0.915 0.917 0.613
CI2 0.778
CI3 0.718
CI4 0.75
CI5 0.788
CI6 0.771
CI7 0.796
Perceived Authenticity PA1 0.886 0.845 0.853 0.66
PA2 0.764
PA3 0.781
Utilitarian Attitude UA1 0.887 0.865 0.876 0.587
UA2 0.733
UA3 0.685
UA4 0.74
UA5 0.771
Willingness to communicate with GenAI WCG1 0.888 0.894 0.898 0.596
WCG2 0.721
WCG3 0.738
WCG4 0.726
WCG5 0.765
WCG6 0.78
Resistance to communicate with GenAI RCG1 0.863 0.885 0.887 0.57
RCG2 0.8
RCG3 0.672
RCG4 0.686
RCG5 0.762
RCG6 0.728
AI Literacy AIL1 0.768 0.91 0.91 0.629
AIL2 0.757
AIL3 0.844
AIL4 0.818
AIL5 0.76
AIL6 0.808
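For readers who wish to verify the reliability statistics in Table 2, composite reliability (CR) and average variance extracted (AVE) follow directly from the standardized item loadings: CR = (Σλ)² / [(Σλ)² + Σ(1 − λ²)] and AVE = Σλ² / k, where λ denotes an item loading and k the number of items. The short Python sketch below re-derives both statistics for the Perceived Authenticity construct from the loadings reported above; it is an illustrative check, not the authors' analysis code.

    # Illustrative re-computation of CR and AVE from standardized loadings (not the authors' code).
    def composite_reliability(loadings):
        total = sum(loadings)
        error = sum(1 - l ** 2 for l in loadings)
        return total ** 2 / (total ** 2 + error)

    def average_variance_extracted(loadings):
        return sum(l ** 2 for l in loadings) / len(loadings)

    pa_loadings = [0.886, 0.764, 0.781]  # PA1-PA3 loadings from Table 2
    print(round(composite_reliability(pa_loadings), 3))       # 0.853, as reported
    print(round(average_variance_extracted(pa_loadings), 3))  # 0.66, as reported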
Table 3. Discriminant validity.
PIS HLE CI AIL PA UA WCG RCG
PIS 0.787
HLE .442** 0.797
CI .423** .446** 0.783
AIL .150** .160** .174** 0.793
PA .541** .551** .500** .317** 0.812
UA .451** .493** .480** .195** .614** 0.766
WCG .348** .332** .324** .143** .413** .669** 0.772
RCG -.315** -.336** -.371** -.198** -.473** -.677** -.435** 0.755
Note: *, p<0.05; **, p<0.01. Values in bold represent the square root of the AVE.
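Table 3 follows the Fornell-Larcker approach [84], under which discriminant validity is supported when the square root of each construct's AVE (the bold diagonal) exceeds that construct's correlations with every other construct. The sketch below automates that comparison using the values reported in Tables 2 and 3; it is an illustrative re-check, not the authors' analysis script.

    # Illustrative Fornell-Larcker check based on the reported AVEs and correlations.
    import math

    ave = {"PIS": 0.619, "HLE": 0.635, "CI": 0.613, "AIL": 0.629,
           "PA": 0.66, "UA": 0.587, "WCG": 0.596, "RCG": 0.57}
    corr = {("HLE", "PIS"): 0.442, ("CI", "PIS"): 0.423, ("CI", "HLE"): 0.446,
            ("AIL", "PIS"): 0.150, ("AIL", "HLE"): 0.160, ("AIL", "CI"): 0.174,
            ("PA", "PIS"): 0.541, ("PA", "HLE"): 0.551, ("PA", "CI"): 0.500, ("PA", "AIL"): 0.317,
            ("UA", "PIS"): 0.451, ("UA", "HLE"): 0.493, ("UA", "CI"): 0.480, ("UA", "AIL"): 0.195,
            ("UA", "PA"): 0.614, ("WCG", "PIS"): 0.348, ("WCG", "HLE"): 0.332, ("WCG", "CI"): 0.324,
            ("WCG", "AIL"): 0.143, ("WCG", "PA"): 0.413, ("WCG", "UA"): 0.669,
            ("RCG", "PIS"): -0.315, ("RCG", "HLE"): -0.336, ("RCG", "CI"): -0.371, ("RCG", "AIL"): -0.198,
            ("RCG", "PA"): -0.473, ("RCG", "UA"): -0.677, ("RCG", "WCG"): -0.435}

    for construct, variance in ave.items():
        sqrt_ave = math.sqrt(variance)
        strongest = max(abs(r) for pair, r in corr.items() if construct in pair)
        print(construct, round(sqrt_ave, 3), "passes" if sqrt_ave > strongest else "fails")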
Table 4. Measurement model fit.
Fit Indices χ²/df GFI AGFI NFI CFI IFI SRMR RMSEA
Recommended Criteria <3 >0.9 >0.8 >0.9 >0.9 >0.9 <0.08 <0.08
Scores 1.173 0.938 0.932 0.95 0.992 0.992 0.026 0.015
Table 5. Hypotheses test results.
Hypothesis Path β P-value R² Remarks
H1 PIS → PA 0.318 *** 0.569 Supported
H2 HLE → PA 0.338 *** Supported
H3 CI → PA 0.287 *** Supported
H4 PA → UA 0.71 *** 0.505 Supported
H5 UA → WCG 0.709 *** 0.503 Supported
H6 UA → RCG -0.739 *** 0.546 Supported
Moderating Effect Path β P-value Remarks
H7 PIS × AIL → PA 0.101 *** Supported
H8 HLE × AIL → PA 0.097 *** Supported
H9 CI × AIL → PA 0.108 *** Supported
Note: *, p<0.05; **, p<0.01; ***, p<0.001.
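For readers less familiar with how moderated paths such as H7-H9 are typically tested, the sketch below illustrates one common approach in which the interaction term is formed as the product of mean-centered scores before being regressed on the outcome. The variable names and simulated data are purely illustrative and do not reproduce the authors' structural equation modeling estimation.

    # Generic illustration of a moderation test (e.g., PIS x AIL -> PA) with simulated data;
    # not the authors' SEM estimation procedure.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 822
    pis = rng.normal(size=n)   # hypothetical composite score for personalized investment suggestion
    ail = rng.normal(size=n)   # hypothetical composite score for AI literacy
    pa = 0.3 * pis + 0.2 * ail + 0.1 * pis * ail + rng.normal(scale=0.8, size=n)

    pis_c, ail_c = pis - pis.mean(), ail - ail.mean()  # mean-center before forming the product
    X = sm.add_constant(np.column_stack([pis_c, ail_c, pis_c * ail_c]))
    result = sm.OLS(pa, X).fit()
    print(result.params)  # the coefficient on the product term captures the moderating effect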
Table 6. Structural model fit.
Fit Indices χ²/df GFI AGFI NFI CFI IFI SRMR RMSEA
Recommended Criteria <3 >0.9 >0.8 >0.9 >0.9 >0.9 <0.08 <0.08
Scores 1.173 0.938 0.932 0.95 0.992 0.992 0.026 0.015
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.