1. Introduction
The financial sector is undergoing a profound transformation with the advent of sophisticated technologies such as robo-advisors and generative artificial intelligence (GenAI) platforms like ChatGPT. This technological revolution has fundamentally altered how individuals manage their finances and receive financial advice. While robo-advisors provide algorithm-based asset management services with minimal human intervention [1], GenAI technologies have taken this a step further by offering financial advice through conversational engagement [2].
Previous research has demonstrated the significant impact of robo-advisors on the finance industry, highlighting factors such as behavioral biases, trust, perceived risk, and user attitudes in determining the adoption and effectiveness of automated financial advisory systems [3,4,5]. However, the introduction of GenAI has further complicated user interactions, potentially influencing consumers’ attitudes and responses to these services [6].
To gain a deeper understanding of consumers’ responses to GenAI-based financial advice, it is crucial to consider its unique attributes, such as personalized investment suggestions, human-like empathy, and continuous learning and improvement. These factors significantly impact consumers’ perceptions of the authenticity and reliability of GenAI [7]. This study suggests that these characteristics and their influence on consumer responses can be analyzed using the service-dominant logic (SDL) and AI Device Use Acceptance (AIDUA) frameworks [8,9].
To comprehensively examine this interaction, we integrate SDL and AIDUA into a research model and employ structural equation modeling to analyze data from 822 mobile banking users. Our study addresses four principal research questions:
(1) How do GenAI attributes influence consumers’ perceptions of authenticity in using GenAI for financial advice?
(2) What is the relationship between perceived authenticity and utilitarian attitudes towards GenAI financial advice?
(3) How do utilitarian attitudes affect consumers’ responses to GenAI financial advice?
(4) How does AI literacy moderate the impact of GenAI attributes on perceived authenticity?
This research contributes to both theory and practice in the field of GenAI-powered financial advisory services. It offers insights into the psychological foundations of consumer trust and provides valuable guidance for the design, implementation, and user education strategies in GenAI-powered financial services. By investigating the impact of GenAI attributes on perceived authenticity and subsequent consumer attitudes and behaviors, this study addresses significant gaps in the literature and offers practical insights for the development of effective GenAI-based financial advisory services.
4. Research Methodology
4.1. Measurement Development
We commenced our investigation by developing a comprehensive questionnaire designed to capture the relevant data necessary for our analysis. In light of the significance of expert input, we solicited evaluations from esteemed professors in the Finance, Information Technology, and Management Science departments. Their invaluable feedback prompted revisions to the questionnaire, allowing us to refine and clarify our questions for greater precision and relevance.
A rigorous methodology was employed to ensure that the questionnaire accurately assessed eight key dimensions. These included the extent to which the investment advice was personalized, GenAI’s capacity for continuous improvement, its ability to demonstrate human-like empathy, consumers’ perceived authenticity of its responses, the utilitarian attitude of consumers towards GenAI, consumers’ willingness and resistance to engage with GenAI for financial guidance, and their overall AI literacy.
The introductory section of the questionnaire clearly outlined the purpose of the study, ensuring participants’ confidentiality and anonymity. Additionally, survey instructions were provided. The initial part of the questionnaire included basic demographic questions, such as age, gender, income level, and education, to establish a foundational understanding of the respondents’ backgrounds. The second part consisted of items carefully designed to assess the eight constructs under investigation.
The measurement items for personalized investment suggestions assessed respondents’ perceptions of GenAI’s ability to comprehend their individual financial needs and deliver customized recommendations. The evaluation of continuous improvement assessed respondents’ views on GenAI’s ability to learn from interactions and improve its suggestions over time [73]. Human-like empathy was measured through items [26,74,75] that gauged the extent to which GenAI understood and considered respondents’ emotional and financial concerns. The perceived authenticity of GenAI’s financial advice was examined by asking respondents to rate the genuineness and reliability of the advice [24,76]. The usefulness, efficiency, and practicality of GenAI’s recommendations were evaluated in order to assess utilitarian attitudes [68]. Respondents’ willingness to communicate with GenAI was gauged through items [31,77] that determined the likelihood of future engagement with the AI for financial advice. Resistance to communicating with GenAI was evaluated by assessing respondents’ hesitation or reluctance to use GenAI for financial guidance [31,78]. Finally, the AI literacy scale was used to assess respondents’ knowledge and understanding of AI technologies, particularly their application in financial advice [79,80]. Detailed breakdowns can be found in the Appendix.
4.2. Data Collection
This study used a comprehensive data collection approach to gather insights from mobile banking service users who had engaged with GenAI for financial guidance. The survey was carefully crafted to gather a wide range of information, including participants’ interactions with GenAI, their evaluations of the AI’s authenticity, their level of AI knowledge, and their attitudes towards using AI for financial advice.
The study targeted adult mobile banking users who had interacted with GenAI’s financial advice feature. Purposive sampling was used to select participants meeting this criterion. The final sample included 822 respondents, balanced across gender, age, and economic background to ensure a representative cross-section of the diverse mobile banking user population. Participants were aged 18 to 65 and had varying levels of experience and knowledge regarding AI-powered financial advice tools.
First, a pilot test was conducted with a select group of respondents to identify and resolve any potential issues, ensuring the clarity and comprehensibility of the questionnaire. This preliminary phase was crucial for fine-tuning the survey instrument and optimizing the data collection process.
Following rigorous vetting, we commenced the formal data collection phase by administering the thoroughly reviewed questionnaire to our targeted respondent group. This structured approach, supported by expert validation and meticulous testing, reinforced the integrity of our methodology and improved the quality of the data collected, thereby providing a solid foundation for subsequent analysis.
To ensure the collection of high-quality, relevant data on consumer responses to GenAI financial advice, we collaborated with a professional survey firm, leveraging its expertise in survey design, distribution, and data processing to obtain a representative and reliable sample. The survey was distributed through several channels: (1) an email campaign targeting a database of mobile banking users provided by partner banks; (2) in-app notifications sent to users of the mobile banking application, encouraging survey participation; and (3) posts on relevant financial forums and social media platforms.
The survey was a critical component of our investigation into consumer responses to GenAI’s financial advice. Conducted over a three-month period, it allowed ample time to gather responses from a large number of participants. To ensure data integrity and relevance, the survey company employed advanced filtering techniques to screen the responses. Additionally, stringent measures were taken to ensure the anonymity and confidentiality of respondents’ data, with all responses anonymized before analysis. This approach was essential for maintaining ethical research standards and ensuring the reliability and validity of the collected data.
Table 1 summarizes the demographic characteristics of the survey respondents.
5. Data Analysis and Results
5.1. Measurement Model
Ref. [81] suggested that single-source data may be prone to common method variance. To determine the presence of common method bias (CMB) in our collected data, we conducted Harman’s single-factor test. This test involves loading all measurement items into a principal component analysis without rotation. It is widely accepted that CMB is a concern if a single factor accounts for more than 50% of the total variance. In this study, the first factor accounted for 31.95% of the variance, which is below the 50% threshold. Therefore, we can conclude that the data in this study are not affected by common method bias.
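The mechanics of Harman’s single-factor test can be sketched in a few lines: all items enter an unrotated principal component analysis, and the variance share of the first component is compared against the 50% threshold. The following is an illustrative sketch on synthetic data; the function name and the data are our own, not the study’s:

```python
import numpy as np

def harman_single_factor_share(items: np.ndarray) -> float:
    """Share of total variance captured by the first unrotated principal
    component of an item matrix (rows = respondents, columns = items)."""
    corr = np.corrcoef(items, rowvar=False)   # correlation matrix of the items
    eigvals = np.linalg.eigvalsh(corr)[::-1]  # component variances, descending
    return eigvals[0] / eigvals.sum()         # first component's share

# Illustrative synthetic data: 822 respondents, 30 items sharing a weak
# common method factor (NOT the study's data)
rng = np.random.default_rng(0)
common = rng.normal(size=(822, 1))
items = 0.5 * common + rng.normal(size=(822, 30))

share = harman_single_factor_share(items)
print(f"first factor share of variance: {share:.1%}")
# CMB is flagged as a concern only if this share exceeds 50%
```

With a weak shared factor as above, the first component captures well under half of the total variance, mirroring the 31.95% result reported for the study’s data.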
The measurement model was assessed by examining factor loading values, composite reliability (CR), and average variance extracted (AVE). As shown in Table 2, all factor loadings exceed the recommended threshold of 0.6. Additionally, Cronbach’s α, which measures internal consistency reliability, ranged from 0.845 to 0.949, surpassing the suggested threshold of 0.7 [82]. These results provide strong evidence supporting the scale’s reliability.
Composite reliability (CR) was used to evaluate the internal consistency of the scale, with higher values indicating greater reliability. Ref. [83] states that CR values between 0.6 and 0.7 are acceptable, while values between 0.7 and 0.9 are considered satisfactory to good. As shown in Table 3, all CR values exceeded 0.8, confirming the scale’s satisfactory composite reliability.
Additionally, the average variance extracted (AVE) values for all variables exceeded 0.5, meeting the criteria for convergent validity [84]. These results collectively indicate that the measurement model demonstrates strong reliability and convergent validity.
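Both criteria follow directly from the standardized factor loadings: CR = (Σλ)² / [(Σλ)² + Σ(1 − λ²)] and AVE = Σλ²/n. A minimal sketch, using illustrative loadings rather than the study’s values:

```python
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances);
    for standardized loadings, the error variance of an item is 1 - loading^2."""
    s = loadings.sum()
    error = (1.0 - loadings**2).sum()
    return s**2 / (s**2 + error)

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of the squared standardized loadings."""
    return (loadings**2).mean()

# Illustrative standardized loadings for a four-item construct
lam = np.array([0.82, 0.78, 0.85, 0.80])
cr = composite_reliability(lam)
ave = average_variance_extracted(lam)
print(f"CR = {cr:.3f}, AVE = {ave:.3f}")  # both clear the 0.7 / 0.5 thresholds
```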
To assess discriminant validity, we used the method of Ref. [84], which requires the square root of the AVE to be greater than the correlations among the constructs. Table 3 shows the square root of the AVE values along the diagonal (in bold) and the correlations among the constructs in the off-diagonal cells. The results reveal that the square root of the AVE for each construct is higher than the corresponding off-diagonal correlation values. This indicates that the measurement model has satisfactory discriminant validity, as each construct is more strongly related to its own measures than to those of other constructs.
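This criterion reduces to a simple matrix check: each construct’s √AVE (the diagonal) must exceed every correlation in its row and column. A hedged sketch with made-up values, not the study’s:

```python
import numpy as np

def fornell_larcker_ok(ave: np.ndarray, corr: np.ndarray) -> bool:
    """Fornell-Larcker criterion: sqrt(AVE) of each construct must exceed
    its correlations with all other constructs."""
    sqrt_ave = np.sqrt(ave)
    n = len(ave)
    return all(sqrt_ave[i] > abs(corr[i, j])
               for i in range(n) for j in range(n) if i != j)

# Illustrative AVEs and inter-construct correlations (not the study's values)
ave = np.array([0.66, 0.71, 0.59])
corr = np.array([[1.00, 0.52, 0.48],
                 [0.52, 1.00, 0.55],
                 [0.48, 0.55, 1.00]])
print(fornell_larcker_ok(ave, corr))
```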
Before conducting structural equation modeling (SEM) analysis, a confirmatory factor analysis (CFA) was performed to evaluate the measurement model. The model’s goodness of fit was assessed using various indices and their corresponding thresholds, as recommended by Ref. [85].
The CFA results indicated that the measurement model fit the data well. Specifically, the chi-square to degrees of freedom ratio (χ2/df) was 1.173, which is within the acceptable range. The Goodness of Fit Index (GFI) and Adjusted Goodness of Fit Index (AGFI) values were 0.938 and 0.932, respectively, both exceeding the recommended thresholds. Additionally, the Comparative Fit Index (CFI) and Normed Fit Index (NFI) values were 0.992 and 0.95, respectively, indicating a strong fit. The Incremental Fit Index (IFI) value of 0.992 also met the criteria. Finally, the Standardized Root Mean Square Residual (SRMR) and Root Mean Square Error of Approximation (RMSEA) values were 0.026 and 0.015, respectively, both falling below the recommended thresholds, further supporting the model’s acceptable fit.
As shown in Table 4, all fit indices of the measurement model met the recommended criteria, confirming that the model adequately represents the data and is suitable for subsequent SEM analysis.
5.2. Structural Model
The structural model was evaluated to examine the relationships between the constructs proposed in the research model. The analysis revealed that all paths were positive and significant at the 0.05 level.
Table 5 presents the standardized path coefficients between constructs, significance levels, and explanatory power (R2) for each construct. According to the rule of thumb, R2 values of 25%, 50%, and 75% indicate weak, average, and substantial explanatory power, respectively.
In this study, the R2 values for perceived authenticity, utilitarian attitude, willingness to communicate with GenAI, and resistance to communicating with GenAI were 56.9%, 50.5%, 50.3%, and 54.6%, respectively, indicating a satisfactory level of explanation.
The results in Table 5 show a positive association between personalized investment suggestions and perceived authenticity (β = 0.318, p < 0.001), supporting Hypothesis 1. Similarly, there is a positive association between human-like empathy and perceived authenticity (β = 0.338, p < 0.001), confirming Hypothesis 2. Additionally, continuous improvement positively influences perceived authenticity (β = 0.287, p < 0.001), supporting Hypothesis 3. Together, personalized investment suggestions, human-like empathy, and continuous improvement account for 56.9% of the variance in perceived authenticity.
Furthermore, perceived authenticity positively impacts utilitarian attitude (β = 0.71, p < 0.001), accounting for 50.5% of its variance, thereby supporting Hypothesis 4. In turn, utilitarian attitude positively influences the willingness to communicate with GenAI (β = 0.709, p < 0.001), supporting Hypothesis 5, and negatively affects resistance to communicating with GenAI (β = -0.739, p < 0.001), supporting Hypothesis 6. Utilitarian attitude explains 50.3% of the variance in willingness to communicate with GenAI and 54.6% of the variance in resistance to communicating with GenAI.
After verifying the hypotheses, a structural model test was conducted. The results indicated that the model demonstrated an acceptable fit to the data, according to the criteria recommended by Hu and Bentler (1999). The chi-square to degrees of freedom ratio (χ2/df) was 1.225, which is within the acceptable range. The Goodness of Fit Index (GFI) and Adjusted Goodness of Fit Index (AGFI) values were 0.941 and 0.935, respectively, both exceeding the recommended thresholds. Additionally, the Comparative Fit Index (CFI), Normed Fit Index (NFI), and Incremental Fit Index (IFI) values were 0.990, 0.953, and 0.990, respectively, indicating a strong fit between the model and the data. The Standardized Root Mean Square Residual (SRMR) value of 0.038 and the Root Mean Square Error of Approximation (RMSEA) value of 0.018 were both below the recommended cutoff points, further supporting the model’s acceptable fit. These fit indices, as presented in Table 6, collectively indicate that the structural model adequately represents the relationships among the constructs and provides a satisfactory explanation of the data.
In addition to the primary hypotheses, the study proposed that AI literacy moderates the relationships between GenAI’s characteristics (personalized investment suggestion, human-like empathy, and continuous improvement) and perceived authenticity. The results presented in Table 5 demonstrate that AI literacy significantly moderates these relationships: the positive associations between GenAI’s characteristics and consumers’ perceived authenticity strengthen as AI literacy increases.
The interaction term between personalized investment suggestions and AI literacy is positively associated with perceived authenticity (β = 0.101, p < 0.001), indicating that the relationship between personalized investment suggestions and perceived authenticity is strengthened by higher levels of AI literacy. Similarly, the interaction term between human-like empathy and AI literacy is positively associated with perceived authenticity (β = 0.097, p < 0.001), suggesting that the relationship between human-like empathy and perceived authenticity is enhanced by higher levels of AI literacy. Finally, the interaction term between continuous improvement and AI literacy is positively associated with perceived authenticity (β = 0.108, p < 0.001), indicating that the relationship between continuous improvement and perceived authenticity is reinforced by higher levels of AI literacy.
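Mechanically, each of these moderation paths is a product term: the attribute and AI literacy scores are mean-centered and multiplied, and the product is estimated alongside the main effects. A minimal OLS sketch on synthetic data (the variable names and coefficient values below are ours, chosen only to illustrate the procedure):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 822  # matches the study's sample size; the data below are synthetic

# Standardized predictor scores: one GenAI attribute and AI literacy
attribute = rng.normal(size=n)
literacy = rng.normal(size=n)

# Mean-center before forming the product term (reduces collinearity
# between the interaction and the main effects)
attr_c = attribute - attribute.mean()
lit_c = literacy - literacy.mean()
interaction = attr_c * lit_c

# Simulated outcome with a true positive interaction effect of 0.10,
# in the range of the coefficients reported above
authenticity = (0.3 * attr_c + 0.2 * lit_c
                + 0.10 * interaction + rng.normal(scale=0.5, size=n))

# OLS: intercept, main effects, and the interaction term
X = np.column_stack([np.ones(n), attr_c, lit_c, interaction])
beta, *_ = np.linalg.lstsq(X, authenticity, rcond=None)
print(f"estimated interaction coefficient: {beta[3]:.3f}")
```

A positive, significant product-term coefficient corresponds to the strengthening pattern reported in Table 5.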
Figure 2 presents a visual representation of the standardized path coefficients and the significance levels for each hypothesis, including the moderating effects of AI literacy on the relationships between GenAI’s characteristics and perceived authenticity.
6. Conclusion
The objective of this study was to explore the dynamics of consumer responses to GenAI-powered financial advice, addressing a critical gap in the literature on the adoption of GenAI technologies in financial services. Through rigorous empirical analysis, it was shown that personalized investment suggestions, human-like empathy, and the continuous improvement of GenAI significantly enhance consumers’ perceptions of authenticity. These perceptions, in turn, foster a utilitarian attitude towards using GenAI for financial advice, which increases consumers’ willingness to communicate with GenAI and reduces their resistance to it. Notably, the study highlights the role of AI literacy in amplifying the positive effects of GenAI’s features on perceived authenticity.
Our findings delineate a clear pathway through which GenAI’s features influence consumer behaviors. The provision of personalized investment advice, demonstration of human-like empathy, and commitment to continuous improvement enhance the perceived authenticity of GenAI’s financial counsel. These insights align with Refs. [26,86], who emphasized the importance of perceived human-likeness in user interactions with AI systems. Additionally, the work of Refs. [73,87] highlighted the role of personalization and continuous improvement in enhancing consumer trust in AI services.
We also found that perceived authenticity is crucial in developing a utilitarian attitude towards GenAI, which in turn increases the willingness to interact with the AI and reduces resistance. These findings extend previous research on the importance of authentic design for GenAI platforms [88,89].
Furthermore, the significant moderating influence of AI literacy underscores the importance of consumers’ understanding and familiarity with AI technologies in enhancing the effectiveness of GenAI’s features. These findings support past studies on AI literacy [36,80] and demonstrate its value in the field of financial advisory services.
6.1. Academic Implications
This research significantly enhances the understanding of how GenAI influences consumer behavior in the realm of financial advice. The study’s findings contribute to the theoretical landscape by extending the application of SDL, integrating the AIDUA framework, and highlighting the complex interplay between AI attributes and consumer perceptions.
The study’s findings emphasize the importance of personalized investment suggestions, human-like empathy, and continuous improvement in GenAI’s recommendations within the context of consumer value co-creation, as highlighted by SDL theory. By tailoring its services to individual consumer needs and preferences, GenAI facilitates a more interactive and collaborative experience between the service provider and the consumer, thus enabling value co-creation, a point also demonstrated by Ref. [90]. The current study’s findings align with SDL principles and extend the theory by showing how digital technologies enhance personalized value co-creation, surpassing the limitations of traditional human-to-human service frameworks.
Moreover, GenAI’s ability to exhibit human-like empathy significantly influences consumers’ perceived authenticity, encompassing genuine care and concern for others. This finding contributes to the growing body of literature on the importance of designing AI technologies that are not only competent but also genuine and transparent in their interactions [91]. Additionally, GenAI’s capacity for continuous learning allows it to adapt to evolving user needs and preferences, thereby enhancing its perceived authenticity over time [92,93].
These findings underscore the importance of integrating personalized investment suggestions, human-like empathy, and continuous improvement in GenAI-driven financial advice. This integration reflects the processes of SDL and AIDUA by co-creating value through tailored, empathetic, and adaptive financial guidance, ultimately enhancing consumer engagement, trust, and participation in GenAI-powered financial services.
The study also highlights the importance of perceived authenticity in human-bot interactions, especially within the field of artificial intelligence [75,76]. The positive correlation between GenAI’s features and perceived authenticity aligns with the authenticity principle in AI research [29,94,95]. This underscores the necessity for GenAI and similar technologies to demonstrate authenticity to effectively engage and support users.
Additionally, the study identifies a strong correlation between perceived authenticity, utilitarian attitudes, and consumers’ willingness and resistance to communicate with GenAI for financial advice, expanding our understanding of technology adoption theories. The research demonstrates that perceived authenticity enhances utilitarian attitudes toward GenAI, which in turn affects the willingness or resistance to use GenAI for financial advice. This suggests that the value consumers place on authenticity can significantly influence their practical assessment of a technology’s benefits [96]. This finding advocates for a broader interpretation of perceived usefulness in AI technology acceptance, highlighting the importance of authenticity in shaping utilitarian evaluations of AI technology.
Lastly, the study’s focus on AI literacy adds to the theoretical landscape by suggesting that a higher level of AI literacy can enhance the effectiveness of AI features in improving perceived authenticity and, consequently, utilitarian attitudes [97]. This implies that individuals’ interactions with AI technologies are significantly influenced by their understanding of the technology, leading to increased acceptance and willingness to communicate with GenAI. Conversely, lower levels of AI literacy may lead to resistance in communicating with GenAI, highlighting the importance of addressing this factor to facilitate the effective integration of AI-driven services in the consumer value co-creation process.
In conclusion, this study offers a comprehensive integration of key concepts, including personalized investment suggestions, human-like empathy, continuous improvement, perceived authenticity, utilitarian attitudes, and consumers’ willingness and resistance to communicate with GenAI, within the frameworks of SDL and AIDUA. The findings show that GenAI’s personalized and empathetic approach, along with its ability to continuously improve, enhances perceived authenticity and utilitarian attitudes among consumers, facilitating value co-creation as proposed by SDL. Additionally, the study extends the AIDUA model by incorporating continuous improvement as a factor influencing perceived authenticity, a key determinant of AI tool usage. The research also underscores the role of AI literacy in shaping consumers’ willingness or resistance to engage with GenAI, highlighting the importance of addressing this factor to ensure the effective integration of AI-driven services in the value co-creation process. Overall, this study contributes to the growing body of literature on AI-driven services and their impact on consumer behavior, providing valuable insights for both researchers and practitioners in the field.
6.2. Practical Implications
The practical implications of this study are substantial, providing valuable insights for a wide range of stakeholders, including financial institutions, technology developers, and policymakers. For financial service providers, the study emphasizes the importance of developing GenAI technologies with enhanced human-like characteristics, such as the ability to offer personalized advice and empathy. This suggests that financial institutions should invest in AI systems that go beyond basic natural language processing and incorporate the ability to understand and adapt to individual emotional states and preferences. The research indicates that GenAI-driven chatbots capable of recognizing and responding to users’ emotions can significantly enhance user satisfaction and engagement. This underscores the necessity for financial institutions to employ GenAI technologies that can tailor their services to individual needs and preferences.
Furthermore, the study underscores the importance of ongoing learning in maintaining and enhancing consumer trust and engagement with GenAI systems. Financial institutions should prioritize designing AI systems that can continuously update their knowledge base and refine their algorithms based on user interactions. This approach aligns with the continuous improvement aspect of AI development and ensures that AI systems remain relevant and effective in meeting evolving consumer needs and preferences. AI systems capable of continuous learning and improvement are better equipped to build and maintain user trust over time by demonstrating an ongoing commitment to providing accurate and up-to-date information.
The study’s findings also highlight the importance of AI literacy in enhancing the positive impact of GenAI’s attributes on perceived authenticity. This suggests that financial institutions should develop educational programs and resources to improve consumers’ understanding of AI. By investing in initiatives that demystify AI technologies, financial institutions can reduce resistance and increase engagement among consumers. This aligns with the broader goal of enhancing AI literacy and ensuring that consumers have the necessary knowledge and skills to interact effectively with AI-driven services. Consumers with higher levels of AI literacy are more likely to appreciate the benefits of AI-driven services and engage with them more effectively. Therefore, businesses should invest in educational initiatives to promote consumer understanding and acceptance of these technologies.
In conclusion, the study’s implications highlight the importance for policymakers to consider the impact of GenAI-driven financial advice on personalized investment suggestions, human-like empathy, and continuous improvement in consumer financial services. As GenAI becomes increasingly integrated into the sector, policymakers must ensure that consumers receive tailored advice that aligns with their unique financial circumstances, fostering trust and engagement. Additionally, they should prioritize consumer privacy protection while promoting equitable access to AI-driven benefits, addressing the digital divide. This may involve establishing standards for transparency in AI algorithms, ensuring data privacy, and implementing digital literacy programs. By proactively addressing these issues with a focus on personalization, empathy, and ongoing improvement, policymakers can create a regulatory landscape that supports responsible innovation. This approach will ultimately encourage the development and deployment of AI technologies within the financial sector that prioritize individual needs, build meaningful connections, and continuously evolve to better serve consumers.
6.3. Limitations and Future Directions
Although this study provides valuable insights into the factors influencing consumer perceptions and attitudes towards GenAI in the context of financial advice, it is important to recognize its limitations. One limitation is the focus on mobile banking users as the sample population, which may limit the generalizability of the findings to other consumer segments. Future research could address this by exploring similar questions across different demographics. Additionally, utilizing qualitative methodologies, such as in-depth interviews or focus groups, could provide a more nuanced understanding of consumer perceptions and attitudes towards GenAI-driven financial advice.
Another avenue for future research is to examine the influence of cultural differences on consumer reactions to GenAI-powered financial advisors. Given the variability in cultural values, norms, and expectations across societies, it is plausible that the factors influencing perceived authenticity and utilitarian attitudes towards GenAI-driven financial advice may vary. Comparative studies across different cultural contexts could offer valuable insights into designing and deploying GenAI-driven financial advisors to meet the unique needs and preferences of diverse consumer groups.
Finally, ethical considerations and privacy concerns surrounding GenAI-driven financial advice are critical areas for future research. As GenAI systems become more integrated into financial services, ensuring they are designed and deployed to respect consumer privacy, avoid bias, and promote fairness is paramount. Research on the ethical implications of GenAI-driven financial advice could inform the development of guidelines and regulations to ensure these technologies are used responsibly and in the best interests of consumers.