Preprint
Article

This version is not peer-reviewed.

Guidance Principles for the Development of Adaptive Self‐Regulating Guidelines for the Use of Generative Artificial Intelligence in the Business Customer Experience

Submitted:

17 April 2025

Posted:

19 April 2025

You are already at the latest version

Abstract
Generative artificial intelligence (GenAI) continues to influence all aspects of our business and society and is fast becoming unavoidable due to its capabilities in streamlining and automating business processes. However, businesses are continuously facing the moral dilemma of wanting to optimise and leverage the benefits of GenAI without experiencing unintended harms and gaps associated with its use. To this end, to allow businesses to reap the benefits while simultaneously mitigating the risks; this research aims to develop guiding principles for adaptive self-regulation guidelines for the use of GenAI in business customer experiences. This is exploratory qualitative research grounded on the interpretivist paradigm with data collected from 21 semi-structured interviews with technology-savvy business professionals with GenAI experience in South Africa. The empirical analysis yielded seven themes, which were role use and benefits of GenAI, gaps of GenAI, enabler of GenAI self-regulation, self-monitoring Gen AI, judgement of monitored process, self-reaction of monitored GenAI process, self-efficacy of self-regulation. The aforementioned themes highlighted the dynamic and interactive relationship that is necessary to develop guiding principles for adaptive self-regulating guidelines for the use of GenAI in the business customer experience principle. To develop adaptive self-regulatory guidelines the research recognises the role of the principles of clarity, ethical and risk-conscious AI design, security and data protection, empathy simulation (with limits), self-monitoring, judgement of monitored process, self-reaction of monitored process, and self-efficacy of self-regulation. To this end, this research study developed a conceptual model and principles for the development of guidelines that contributes to the academic body of GenAI regulation knowledge, by showing the intersection between Albert Bandura’s Social Cognitive Theory (SCT) of self-regulation and Cybernetics (the field of study of a field of study that examines human-machine (GenAI) interaction, focusing on the principles of feedback, control, and communication AI relationships). This study is also useful for businesses, individuals and policy makers developing adaptive GenAI self-regulatory interventions.
Keywords: 
;  ;  ;  ;  

1. Introduction

Generative artificial intelligence (GenAI) is a system that has the ability to create autonomous content when prompted (Sai et al., 2024). These prompts include text, images, audio, or video, without human input or oversight (Caporusso, 2023; Naqbi et al., 2024). These systems generate new content by learning patterns and relationships within large datasets, as exemplified by advanced language models such as Generative Pretrained Transformer 4 (GPT-4) (Feuerriegel et al., 2024). GenAI also confers significant transformational benefits on business customer experiences, yielding enhancements in productivity, efficiency, automation, speed, and applicability, thereby revolutionising customer interaction and service delivery (AWS, 2023; Malacaria et al., 2023; Yikilmaz & Halis, 2023). GenAI is used in different aspects of business operations and management (McKnight et al., 2024).
A critical role of the adoption of GenAI business customer experience models is in decision making (Ferraro et al., 2024). In business decision making, the use of GenAI algorithms can be transformational, pivotal, and efficient for decision makers and policy makers and require the manager to have a clear understanding of the inward workings of algorithms and the role that data sets play in algorithms (Jawarkar, 2022). However, the use of such GenAI models entirely without guardrails in place can lead to several unintended harms in decision making, including data bias (Ferrara, 2024). Data bias refers to the bias resulting from an AI system that learns from historical information, algorithms, and assembled datasets that reflect a bias (Min, 2023). Therefore, some considerations must be made in the use of algorithmic decision making systems in business customer experience, for example, to protect against privacy breaches, discrimination inaccuracies, and incomplete decisions (Breidbach, 2024). To support this, Ferrara (2024) explained that GenAI offers competitive advantages, but is also susceptible to biases and raises critical concerns about its impact on marginalised groups and accentuates the need for vigilant monitoring and mitigation strategies to address these gaps. This increasing complexity of GenAI-generated content in customer experiences underscores the need for continuous research. Such research in this field is vital to fully understand the risks, limitations, capabilities, implications, and potential of GenAI (Banh & Strobel, 2023; Marie & Mathews, 2021). This is particularly critical, especially in the adoption of GenAI models in the business customer experience due to its multifaceted consequences and potential effects (Daqar & Smoudy, 2019; Kshetri et al., 2024). Furthermore, such transformative GenAI innovations have brought a moral dilemma to society in general; their use also results in unintended adverse ethical issues such as biases, incorrect results, security breaches, and more (Akhtar, 2024). Therefore, this research confronts the moral dilemma where, on the one hand, the benefits of GenAI in the business customer experience compete against the ethical concerns and gaps inherent in its deployment. There is a need to investigate the role of self-regulation of GenAI models to avoid this dilemma. In particular, GenAI models have the propensity to inherit and reinforce the prejudices and discrimination present in the data used to train them (Marie & Mathews, 2021).
This research aims to develop guiding principles for adaptive self-regulation guidelines for the use of GenAI in business customer experiences. This was investigated using the following objectives: (1) to determine the role, use, and benefits of GenAI in the business customer experience, (2) to explore the gaps and shortcomings encountered in GenAl that present a moral dilemma in the business customer experience, and (3) to identify the building blocks required to develop adaptive self-regulation guidelines for the use of GenAI in the business customer experience. The remainder of the article begins with the literature review that synthesises the existing literature, followed by the methodology that provides the ‘how part’ of the research. This is followed by the study findings, discussion which included guiding principles, and implications for management, theory and policy makers. The article closes with a conclusion, limitations, and direction for future research.

2. Review of the Literature

2.1. Social Cognitive Theory of Self-regulation and Cybernetics Theory

This study is underpinned by Albert Bandura’s Social Cognitive Theory (SCT) of self-regulation and Cybernetics Theory. Table 1 provides the features and characteristics of these theories. Both these theories recognise system adaptation, where in SCT of self-regulation, humans self-regulate; in cybernetics, systems adapt via feedback. These theories can be merged to account for AI-human interaction systems or behavioural modelling in technology environments.
The SCT of self-regulation emphasises the interplay between individual agency and environmental influences in self-regulation. This theory posits that individuals can shape their thoughts, emotions, and actions through self-efficacy and self-regulation, which are crucial for learning and performance(Bandura, 1986). The theory suggests that individuals have the capacity to self-regulate, but this capacity can be enhanced or impaired by various factors, including environmental cues, social norms, and beliefs about personal efficacy (Scott et al., 2024). Abdullah (2019) posits that the core concept of SCT is triadic reciprocal determinism in which behaviour, cognition, and environment influence each other. GenAI models mimic the cognitive skills and intelligence of humans (Pandey & Pandey, 2024; Yan et al., 2024). Therefore, these GenAI models, as they mimic human intelligence, have some form of cognitive skills, where they have been trained to plan their actions and experiences (Scott et al., 2024). An example of how this can be done is, for instance, reducing algorithmic bias by carefully selecting machine learning algorithms that inherently exhibit reduced susceptibility to bias just as a human selects nonbiased thoughts. This process takes place through the identification of models with architectures designed to uphold ethical GenAI values (Chadha, 2024).
The integration of these concepts with cybernetics theory, which focuses on feedback loops and self-regulating systems, provides a comprehensive understanding of the self-regulation of GenAI. It is important to also draw additional understanding of the dynamics of GenAI self-regulation from relevant concepts such as Cybernetics of Self-regulation. Cybernetics examines human-machine interaction, focusing on the principles of feedback, control, and communication that are applicable across diverse systems (Mindell, 2000). The foundational Macy Cybernetics Conferences (1946–1953) played a seminal role in interdisciplinary studies that later evolved into cybernetics, cognitive science, and artificial intelligence. These discussions preceded the Dartmouth Summer Research Project (1956), widely recognised as the formal inception (Kirova et al., 2023). In this context, GenAI cybernetic self-regulation refers to the process of self-regulating systems by which they are capable of self-monitoring and controlling their behaviour to adapt to achieve optimal performance (Rivas et al., 2025). Notably, cybernetics research combined with artificial intelligence studies, as cybernetics artificial intelligence, is an emergent scientific paradigm that addresses a historical gap of cybernetics within artificial intelligence research. This discourse underscores the need to delineate professional and personal ethics from immutable axiological constructs, advocating their critical role in self-regulating frameworks guiding AI-driven solutions to complex social imperatives (Groumpos, 2024).Cybernetics examines how individuals and systems maintain stability through feedback loops, which can be applied to understanding self-regulation in behaviour (DeYoung & Weisberg, 2018). Cybernetics studies systems that can self-regulate, adapting to changes without human intervention. These systems are applicable in diverse fields, where they can enhance learning processes through adaptive feedback mechanisms (Rivas et al., 2025).

2.2. GenAI Within Business Customer Experience

Contextually, GenAI models can be adopted to improve business customer experiences (Daqar & Smoudy, 2019). In addition, these GenAI models are being used by many businesses in various domains of industry and commerce domains, including education, banking, architecture, construction, mining, and more, to enhance customer experiences. GenAI-enabled business customer experiences encompass two dimensions: hedonic (memorable, entertaining, and novel) and recognition (feeling valued, respected, and safe) aspects (Ameen et al., 2021; Ullah, 2023). The research of business customer experience and GenAI is still in its formative stages since GenAI technologies are still fairly new, for example, ChatGPT only came out in 2022 and Meta AI has just been released in 2024 (Melise et al., 2024). A Zendesk research revealed that 52% of customers immediately shift from their business to a competitor when they have a dissatisfactory customer experience and that has raised their desire to invest in technologies to augment their business customer experiences (Law, 2024).
GenAI in business customer experiences has been linked to several advantages, including empowering enterprises to refine their messaging strategies, gauge customer satisfaction with precision, and foster more empathetic connections with their audience, enhancing the business customer experience (Oanh, 2024). Furthermore, businesses enjoy competitive advantages through sophisticated advancements, including accelerated content creation capabilities that encompass image, voice, text, and video generation. A notable example is CarMax, a used vehicle retailer that is leveraging GenAI to generate concise text summaries for its car research pages. These summaries are not only precise and engaging, but are also optimised for high search engine rankings, demonstrating the potential of generative AI to transform curated customer experiences that lead to customer satisfaction. (Capgemini, 2023). Additionally, GenAI models support the customer experience supporting the timely delivery of the right goods and services using monitored algorithms (Brzozowska et al., 2023). Commonly, GenAI is beneficial in customer relationship management, offering personalised outputs; which support data-driven decision-making and improve business customer experience, and streamline repetitive tasks, leading to cost savings and revenue growth (Seda et al., 2024). Businesses have been leveraging on this innovation for their business customer experiences by harnessing GenAI’s capabilities, in generating personalised content and data-driven multitier campaigns catering to specific customer segments, leading to enhanced conversion rates and customer satisfaction (Oanh, 2024). GenAI enables the deployment of sophisticated algorithms to analyse individual customer data and deliver highly customised experiences beyond rudimentary product recommendations, but it encompasses personalised email campaigns, dynamic website content, and customised advertising, thus fostering enhanced customer loyalty (Rane et al., 2024).

2.3. Gaps and Shortcomings of GenAI in Business Customer Experience

These gaps underscore the importance of self-regulation and the need for an examination of the moral implications of GenAI adoption in business customer experiences. Some of the gaps that arise from the use of GenAI include (a) Hallucination, (b) Privacy Concerns, (c) GenAI System Downtime, (d) Biases, (e) Compromises Learning development, (f) Surveillance of data, (g) the absence of human intervention, and (h) rise of deepfakes in critical processes. Hallucination occurs when GenAI systems, or language models like GPT-3, produce outputs that are convincing and realistic but entirely fabricated, lacking any connection to facts or context ((Kim, 2024). This phenomenon can manifest in various forms, including text, images, or audio, and is a result of the models’ tendency to generate plausible responses without truly comprehending the context or possessing genuine knowledge (Latifi, 2024). Privacy concerns are a significant challenge in the use of GenAI, privacy concerns, and many other vulnerabilities, particularly when dealing with sensitive information. Thus, making most customers prefer face-to-face interactions due to concerns about online security and data privacy (Akolkar, 2024). The downtime of GenAI systems is also a shortcoming where there is the risk of the GenAI tools having downtime or crushing or not performing correcting, not being able to cope timeously with new updates or version releases, and customer expectations leaving businesses at risk (Akolkar, 2024). Biases also occur when GenAI technologies are trained on a limited data set or have restricted access to data, resulting in biases in the output generated for the business model (BM) (Seda et al., 2024). As a result, the range of propositions presented by generative technology can be narrow and limited, potentially limiting its effectiveness and accuracy in supporting business decision-making. This highlights the importance of ensuring diverse and comprehensive training data to mitigate biases and improve the utility of generative technologies in the development of BM (Lecocq et al., 2024). Compromised learning development is the increasing reliance on AI dialogue systems, which poses a risk to the development of a young human brain in student learning and could compromise their critical cognitive skills, and students are affected by misinformation from GenAI , algorithmic biases, plagiarism, privacy breaches, and transparency issues which can threaten to erode their learning development (Zhai et al., 2024). Data surveillance is occurring as such, GenAI continues to raise significant moral and ethical concerns, including exploitation of surveillance data, in which these technologies collect data, aggregate data, and commodify it (Zuboff, 2017). Interestingly, GenAI capabilities are enabling businesses to mine and disseminate disproportionate amounts of aggregated data about people. Therefore, commercialising, exploiting, and compromising consumers and their privacy resulted in what has been termed “surveillance capitalism” (Capraro et al., 2023). The absence of human interventions in critical processes. Interestingly, although GenAI mods are also being used to add value to customers, for example, in the banking sector, one research concluded that there is still a need to have a human agent involved in critical transactions to enhance the business customer experience, as there is high risk involved like loss of funds from deep fakes (Dana & Zwiegelaar, 2023). GenAI has also been associated with an increase in deepfakes. Deepfakes are artificial intelligence synthetically simulated images or videos that purport to be an actual person (Whittaker et al., 2021). These deep-fakes can have far-reaching consequences, such as being used to commit crimes, manipulate evidence, and scamming and defrauding customers (Kaushik et al., 2024). Understanding these gaps and shortcomings that arise in the adoption of GenAI in business customer experience can aid in an informed deployment of GenAI and self-regulation interventions (Marie & Mathews, 2021; McKnight et al., 2024; Rajaram & Tinguely, 2024; Shukla, 2023). The gaps associated with the adoption of GenAI in the business customer experience need to be monitored. For example, the proliferation of biases generated in a model over a period of time can tend to remain unnoticed for a while, thus escalating prejudices to some customers (Ferrara, 2024a; Salazar, Peeples, and Brooks, 2024).

2.4. Practices in Self-Regulation of GenAI

Self-led measures exist in terms of GenAI self-regulation in business customer experiences, and the following section explores some of the common or frequently recurring measures identified in the literature. In the South African business customer experience landscape GenAI self-regulation is still relatively new, but the overarching piece of new legislation that has recently been promulgated in 2024 is South Africa’s National AI Policy (Funa & Gabay, 2025; South Africa’s National AI Policy Framework, 2024). Therefore, there is a knowledge gap for guidelines specifically for businesses and customer experience in South Africa. This framework was developed with several consultations that included the participation of researchers, citizens, lawyers, the government and other stakeholders. Many other countries have attempted to self-regulate or develop legislatures, policies, frameworks, or guidelines for AI, including GenAI. In Europe, there was also a goal to create guidelines for the orientation of Artificial Intelligence Governance. Therefore, through a consultative process similar to this research, experts were consulted to write what has been Guidelines for a Trustworthy AI’. The guidelines are structured according to three levels or tiers; the top tier addresses ethics premised upon human rights, in the second tier are the key requirements for assessing and AI systems and its lifecycle, lastly the base lists recommendations for operationalising requirements in the first tier from whence each system has been developed.
The nations in the BRICS+ (including Brazil, Russia, India, China, South Africa, the United Arab Emirates (UAE), and others) have also been developing guidelines (Bolotskikh et al., 2024). For example, in China infrastructure companies must be supported by government based on the guidelines termed the ‘2017 National AI Strategy’(Bolotskikh et al., 2024). Another example is in the United Arab Emirates where technology is largely governed by the government and they too have a vision and guidelines where they aim to become the global leader in AI by 2031 focusing also on GenAI growth. They have named this ‘The National Strategy for Artificial Intelligence 2031’. However, legislation is still in development in India, but guidelines have been established. The following articles were reviewed to identify how guidelines are developed for GenAI.
Self-regulation of GenAI is essential to harness its benefits while minimising its risks. The unregulated deployment of GenAI in society and business has raised significant concerns, as exemplified by Italy’s 2023 ban on ChatGPT due to privacy concerns (Meskó & Topol, 2023). This incident highlights the imperative need for self-regulation in GenAI technologies. Although GenAI has the potential to produce benefits, such as reducing healthcare care costs (Palaniappan, Lin, and Vogel, 2024), its unregulated use poses significant risks, including misuse of personal information. In addition, GenAI presents various risks, such as those outlined in the table below, underscoring the need to implement effective regulatory measures to mitigate these risks and ensure the responsible development and deployment of GenAI technologies. By establishing a regulatory framework, we can maximise the benefits of GenAI while minimising its risks and ensuring public trust.
Some of the notable academic literature on self-regulation and other regulation guidelines of GenAI are by Hirsch, (2021) and Gans, (2018) whom discuss the importance of pursuing social good and not just the good of the company in the long term. To this end, there is a need to have integrative building blocks in place, which ensure the implementation of responsible holistic and novel GenAI self-regulation guidelines, that incorporate audit processes of not only the performances but audits of the GenAI business customer experiences, clarifying trade-offs, and ensuring design policies, procedures, and goals are in place, for review and correction of any anomalies.

3. Methodology

The research design is exploratory in nature and is a qualitative research grounded on the interpretivist paradigm (Saunders, 2016) to resolve the GenAI business moral dilemma to advance knowledge and insights towards the development of self-regulating guidelines (Singh, 2021). In this investigation, a nonprobability sampling method is adopted using a purpose sampling technique (Creswell & Clark, 2017; Siva et al., 2019), where a sample of 21 business professionals in South Africa with a technological background and knowledge of generative artificial intelligence (Table 2). The professional had 5 to 30 years in technology, all except one having experience in GenAI. Semi-structured interviews were conducted using a hybrid of online and in-person with online interviews conducted and recorded on the Zoom platform (Archibald et al., 2019). This approach ensured an accurate capture of participants’ responses, allowing for detailed transcription and analysis. Ethics approval was granted by the university with reference RU-HREC 2024-7941-8927, which allowed the research to be carried out and the participants, all interviewed participants consented to be interviewed. The interview guide used in Appendix A.
Data analysis was performed using computer-assisted software, Atlas.Ti 22 was (Ferrario & Stantcheva, 2022). The interviews were transcribed and analysed using a step-by-step Atlas.Ti process adapted from Naeem et al. (2023). The data generated unique codes that were consolidated to 22 codes and then consolidated to seven themes. The study ensured trustworthiness through triangulation (Morse et al., 2002), adequacy, and relevance of the sample (Guetterman, 2015). Data saturation was achieved according to the guidelines by Guest et al. (2020). The study also ensures transferability to similar settings, confirmability, and dependability (Nowell et al., 2017).

4. Findings of the Study

The empirical analysis resulted in seven themes that were role use and benefits in GenAI, gaps of GenAI, enabler of GenAI self-regulation, self-monitoring Gen AI, judgement of monitored process, self-reaction of monitored GenAI process, self-efficacy of self-regulation subfactors (Figure 1).

4.1. Role, Use, and Benefits of GenAI

Throughout the research interviews, a myriad of benefits, use cases, and roles of GenAI were identified. These include time savings, efficiency, better analysis, and actionable insights. Most of the participants’ views cited that ‘time’, more specifically time saving attributes, were viewed as the main benefits reaped from the adoption of GenAI models. Similarly, the word frequency also showed that the participants corroborated each other’s opinions and similar terms were used such as ‘speed’, ‘quicker’ and “faster”.
Participant 15 raised an important facet to do with customers saving time in business customer experience ‘I mean, particularly in South Africa, you will find that there are certain services that, for example, in banking, if you like to wait for going to a queue and waiting for things, it seems like a waste of time. But now the fact that you can do some mobile banking where you are using AI on the platforms to actually help you through the processes is like giving back time to people. These sentiments were also shared by the other participants, and the findings show that not only are the customers saving time, but businesses save time and focus on core business by delegating generic and repetitive tasks to GenAI models, for example, tasks such as answering frequently asked questions. In particular, participant 5 also supported and highlighted the use of GenAI in coding the quote ‘because it saves time.’ Sometimes I was taking eight hours to write code, but when I use AI, it takes me two hours. It looks at bugs and helps you clean up the code. Another common trend in the findings related to the use and benefits of GenAI was the efficiency associated with the adoption of these technologies in the customer experience. Participant 18 highlighted that ‘there is potential for improved efficiency by learning from feedback and adjusting its responses and outputs.’
GenAI can also track inefficiencies by providing businesses with the capacity to monitor and improve the quality of their business customers. Participant 5 explained that a customer repeating the same question or prompt raises a flag to the business about inefficiencies and can prompt the business that customers are not satisfied with the responses they are receiving. Businesses that adopt GenAI for their business customer experience are able to leverage the efficiencies that GenAI brings in other areas of their supply chain. Participant 2 notes that ‘we outsource our coding in most cases and I know they are using generative, I am sure they are using generative AI for them to meet deadlines and contribute to the code base and all that. So, I think it is helpful in the sense that it makes, again, we are going back to the efficiency line, it helps with that... ‘
The participants also highlighted better analysis and that GenAI also provides actionable insights. One technology professional expressed the prowess of GenAI models tasks, in tedious tasks such as summarising reports on business customer experience. Participant 1 explained that ‘cloud AI came up with new features, so it allows us to do better data analysis.’ We can just upload your CSV files there and then go and analyse the data that you have submitted there.’ The participants also highlighted that GenAI models can be used to monitor sales performance in the business, providing information on the performance of their customer experience, which provides actionable insights. Participant 18 noted that ‘like you are speaking to a real-life expert with the data we give you, it can generate actionable insights in a matter of seconds. These sentiments were further corroborated by other participants and it was worth mentioning that the objectivity of these insights was a benefit.

4.2. Gaps and Shortcomings of GenAI

The gaps that are inherent with the adoption of GenAI in business customer experience are unavoidable and need to be explored to better understand the moral dilemma at hand. In the research interviews, the research participants were asked to identify some of the inherent to the adoption of GenAI adoption. A common gap that participants identified was the biases associated with the adoption of GenAI models. Several biases exist including training data biases, human biases and biases that the GenAI model learns on its own. For example, Participant 19 was of the opinion that; “if there’s some supervision that is needed, the bias in that training dataset could creep in there again’. Therefore, the findings from expert insights suggest that it is not uncommon for GenAI models to already be trained using data sets that contain social biases. One of the common trends in the research interviews was the issue of hallucinations. Participant 2 was of the opinion that; “there’s a hallucination factor where generative AI kind of likes to make up its own thing, makes up its own information and data, and it still has a problem when it comes to accuracy’. Another participant stressed the importance of limiting access to sensitive information and protecting customer privacy by providing a sense of trust and security in a customer’s business customer experience. Even more worrying for business is that not only are their customers a prime target, but their logs records are also a target. For example, Participant 10 noted that possibly the exchange of information through the GenAI models for business customer experiences can attract hackers. Therefore, the role of human oversight and security checks cannot be overemphasised. It is clear from history that technology cannot fully escape human beings. The total absence of a human was also a drawback registered by the participants in the interviews as a major gap mentioned by the participants in the adoption of GenAI business customer experiences. GenAI models will always require human oversight and support to provide optimal business customer experience and certain transactions cannot be executed by GenAI machines for example bank card replacements. Participant 18 said; “There will always be a need for human intervention, as there will be concerns about ethical safeguards and transparency’.

4.3. Building Blocks to Develop Adaptive Self-Regulation Guidelines

The participants identified five building blocks of self-regulation and these included self- monitoring GenAI, judgement of monitored processes, self-reaction of monitored GenAI processes, self-efficacy of self-regulation subfactors, as well as feedback loop and error correction.
Self-monitoring contributes to the end goal of achieving adaptive self-regulatory guidelines as it suggests a process of scrutiny and observation of the performance of the GenAI models in business customer experience. Similarly, participants also identified by participants as a building block toward adaptive self-regulation. Most of the participants expressed that they if they have sufficient knowledge and understanding they were able to determine and exercise judgement on the performance of GenAI models. Participant 5 stated that, “So to be able to make a judgement, you have to understand what the biases are, what it means”. Hence, they could determine when the monitored performances were beneficial or a cause for concern, i.e., creating gaps. In addition, to achieve adaptive self-regulation, a level of judgement and assessment must be conducted. Judgement is a subset of standards, and performance is deemed favorable or negative based on the standards under which it is being evaluated for self-regulation. Hence, participants also identified benchmarking against industry standards becomes relevant to South African Businesses self-regulating their GenAI business customer experience models. Therefore, to self-regulate and exercise the judgement subfunction of self-regulation, it is imperative for there to be a human factor that aligns the GenAI in the business customer experience with the acceptable standards in that business, community, country, and region. It is also clear from the experts’ opinions that there needs to be some form of knowledge of standards and acceptable practices that the person providing oversight of the GenAI model oversight needs to possess to discern and exercise judgement on the GenAI model. Another key aspect is that of taking ownership of the experience by being proactive as a form of self-reaction. Participant 15 cited “I am going to share need to know very minimum”. Thus, aanother key building block identified by the participants was self-reaction. Self-reaction is crucial in aiding the role of adaptive self-regulation because if businesses are apathetic towards incorrect GenAI conduct and not reacting to close the gaps, it is problematic for customers. One of the self-reactions that were identified was testing. For example, one interesting strategy by Participant 5 was; “quality assurance testing to check against all of these things you’ve mentioned, make sure the system is robust, it goes through information security, they have to try and hack it. If they manage to hack it, then we patch it before we put it in the market or before we put it to people”. Some participants recognised the critical role that testing plays and the need for frequent tests especially for GenAI customer models to identify the cause of anomalies and quality assurance and also to eliminate future gaps is a building block that also facilitates the self-regulation process. Implementing a rigorous testing framework, which encompasses quality assurance protocols, penetration tests, and simulated hacking exercises, is crucial for continually assessing the integrity of the system, detecting latent vulnerabilities, and forming iterative self-regulatory enhancements. Customers are also testing the models; Participants 1 and 2 described that in the event of a wrong result, they were capable of correcting the model where they realise that they are getting incorrect responses. Furthermore, the participants highlighted that customers also had the onus to restrict the information that they shared and also to self-regulate the models by not compromising their personal information. For example, when sharing information Participant 15 was of the view that; “My starting point is to be as conservatist as possible and I operate on a need-to-know basis”
The participant also increased the self-efficacy of the self-regulation subfactors. About 75% of the participants, when asked if they had mastered their ability to monitor and identify gaps, felt that they were self-efficient. In other words, they had the confidence to monitor, exercise judgements, and react. This is particularly important, as businesses need to be confident in their ability to adapt to their self-regulation. Self-efficacy plays a pivotal role in the self-regulation process. This belief also partially influences how experts perform other self-regulation subfunctions (self-monitoring, judgement, and self-reaction). The findings further underscored the significance of the role feedback mechanisms and systematic evaluation of the performance of GenAI customer-facing models play as a building block to adaptive self-regulation. Continuous feedback allows customers and businesses to engage with GenAI models continuously in an agile manner to eliminate gaps that may arise. The last was the feedback loop, and error correction was constituted by regular updates, knowledge and continuous learning, and human GenAI model collaboration. Due to the fact that GenAI models are predominately trained on data, it is possible that some of the data will be outdated and the models will also become out of date, hence a critical building block for self-regulation is regular updates. For example, Participant 10 highlighted that; “there are tools that help you determine if maybe a certain function has broken or maybe it needs an update. There are tools that help us. ‘The reference made by the participant was that there are tools that can be used to automate and keep track of the regular updates of the GenAI models. Furthermore, the participants alluded to the importance of knowledge as another building block to self-regulation. In particular, participants were of the opinion that GenAI administrators need to equip and train themselves to self-regulate accordingly. Importantly, if businesses and the experts managing these models are continuously learning and staying abreast with trends, threads, and best practises, they become more empowered to self-regulate. In alignment with cybernetics which studies human – machine interactions; Participant 3 expressed that GenAI models and humans must interact for optimal business customer experiences. Participant 7 recognised that there are limitations to what models can do versus what a human is able to achieve in terms of customer experiences. She emphasised that; “it lacks the human factor because I think what most businesses tend to miss is that once you have the chatbot, you just leave the chatbot to answer all the questions. Whereas you need to have also an option where you can say, talk to a human so that you can get all the human answers that you need”. Hence to self-regulate the GenAI model, several research participants were of the opinion that Gen-AI and humans need to collaborate to holistically manage and adaptively self-regulate the business customer experiences.

5. Discussion

5.1. Guidelines for Adaptive Self-Regulation Guidelines

The development of guidelines is significantly influenced by the guiding principles, which serve as foundational frameworks that ensure consistency, transparency, and effectiveness in the guideline creation process (Kaushik et al., 2024; Morgan et al., 2018). These principles help streamline development. Guiding principles serve as heuristic tools that provide narrative guidance for decision-making (Oliver & Jacobs, 2007). These principles were developed from the findings of this study related to the role, use, and benefits of GenAI, the gaps and the shortcomings of GenAI, and building blocks to develop adaptive self-regulation guidelines (Table 3). These were the principles of clarity, ethical and risk-conscious AI design, security and data protection, empathy simulation (with limits), self-monitoring, judgement of monitored process, self-reaction of monitored process and self-efficiency of self-regulation.
  • Principle of clarity. The GenAI as a key driver of efficiency, innovation, and competitive advantage (Woolley, 2024). Additionally, GenAI improves efficiencies on transactions initiated by customers, which could improve customer satisfaction (Melise et al., 2024). Strategically leveraging GenAI can also position businesses for a competitive advantage (Oanh, 2024). One the fact that the efficiencies was of GenAI models were capable of responding to customer questions in real-time, and also providing personalised business customer experiences and more (Moura et al., 2021). The predictive capabilities of GenAI models in recalling customer patterns, purchase history, and trends, customers also enjoy the benefits of saving time in decision making (Tiutiu & Dabija, 2023). Although the customer experiences are enhanced from personalised or customised customer experiences, other nefarious consequences including ethical and data privacy side effects exist (Singh et al., 2024). GenAI offers revolutionary strategic business potential by providing analysis that has the potential of boosting customer experiences and sales performance through the continuous monitoring of trends, customer spending patterns, and customised sales and services (Yusuff, 2024). GenAI models enable the analysis of sales performance in the backend, thereby facilitating further enhancements to business customer experiences (Kumar et al., 2024). Research participants were of the opinion that GenAI models have the ability to assist businesses in their decision-making through their objective and readily available insights (İşgüzar et al., 2024). The principle of clarity states that AI systems should operate in a way that is transparent, understandable, and honest about their identity, purpose, and outputs, thus enabling users to make informed, responsible decisions when interacting with or relying on AI.
  • Principle of accuracy and fairness-conscious AI design. Bias is unavoidable, and that means that it becomes a gap that needs to be managed by the business and the customers as well. The findings accentuate the importance of acknowledging and addressing the potential gaps and shortcomings associated with GenAI adoption, particularly in business customer experience contexts (Lecocq et al., 2024). Noble (2018) posit that search engine algorithms have been criticised for displaying racially skewed image results when searching for terms like “woman”, highlighting the need for greater diversity and inclusivity in AI design (Noble, 2018). Biases also emerge when a GenAI model is built for Western cultures, thus creating an inherent bias toward their preferences (Nyaaba et al., 2024). Additionally, the findings highlighted that one of the most common gaps is hallucinations and outdated results (Akolkar, 2024; Latifi, 2024). This concern was also consistently raised by a majority of participants, highlighting the issue of hallucinations and related inconsistencies as a significant and recurring theme affecting business customer experiences (Williamson & Prybutok, 2024). As such, there is a need for a principle of accuracy and fairness-conscious AI design. This principle must emphasise algorithmic fairness and bias mitigation and active awareness of known risks, covering bias and hallucinations. The intention is to make sure that users and the public understand what AI cannot do well. When roles, uses, and benefits are clear, it is easier to trace errors and hallucinations, thus advancing the interlinkage between the principle of clarity and the principle of accuracy and fairness-conscious AI design.
  • Principle of security and data protection. In particular, the use of GenAI raises critical issues regarding sensitive data privacy, malicious input, and other ethical shortcomings, necessitating a nuanced consideration of its implications (Golda et al., 2024). As identified by the research participants, there are several security risks that are possible, including the leaking of the customers personal information shared with the GenAI models and also the sharing of sensitive business information by the GenAI with the world, so both the business and the customers are at risk (Sekine, 2025). Businesses that collect personal information face these sort of vulnerabilities and risks in the adoption of GenAI models for their business customer experience (Hinds et al., 2020). Potentially of all the gaps the issue to do with security is not only a business concern but a human rights issue, which has also been a concern highlighted with other technological advancements such as social media sites like Facebook (Meta) (Sieber, 2019). The principle of security and data protection promotes that AI systems must be designed, developed, and deployed in a way that safeguards personal data, ensures system integrity, and prevents unauthorised access or misuse of information throughout the AI lifecycle. This principle is essential for self-regulation, where organisations take proactive responsibility for ethical use of AI without needing external enforcement in every instance.
  • Principle of empathy simulation (with limits). The findings show a desire for AI to have warm and empathetic aspects that human beings possess. These limitations underscore the need for improved user education and interface design to facilitate effective human-AI collaboration (Katragadda, 2024). This principle balances emotional intelligence and ethical restraint, helping AI show care without crossing the line into false intimacy or misleading affective behaviour. AI systems designed to simulate empathy must do so responsibly, by acknowledging emotional context, responding in a supportive and respectful tone, while also clearly signalling their non-human nature to avoid emotional manipulation or deception.
  • Principle of self-monitoring.
  • The participants identified the crucial role that human oversight plays as a founding principle that initiates the process of self-regulation, and they also recommended the use of self-monitoring systems. These systems can be machine systems where the self-monitoring subfunction is fulfilled (Varsha, 2023). Several conclusions can be drawn from the findings in relation to the role that GenAI self-monitoring plays in the self-regulation of these models in the business customer experience. Self-regulation, in this context, involves the implementation of internal controls and guidelines that govern the control, design, and the deployment of GenAI models in business customer experiences. Businesses cannot afford to lose control of the output and the function of the GenAI customer experiences, as was the case with the Air Canada, where a customer was given a wrongful discount that the business had to pay including legal costs (Cerullo, 2024). A substantial majority of research participants converged in their opinions that self-monitoring plays an active self-regulatory role in GenAI business customer experiences (Holmström & Carroll, 2024). The former highlights the importance of self-monitoring, which enables businesses to maintain a state of control, oversight, awareness, and connectivity with the customer experiences created by their GenAI models (Gupta et al., 2024). Furthermore, oversight is crucial, to protect customers’ privacy through monitoring and staying informed about the data collection and dissemination of the GenAI model, thus promoting transparency (Chavali et al., 2024).
  • Principle of judgement of monitored process.
  • Judgement in this research refers to judging whether the above-mentioned self-monitored GenAI performances of the model is beneficial or causing gaps in the business customer experience. This is in line with the first two research objectives. Additionally, it was important for the participants to exercise judgement in identifying the benefits of GenAI in business customer experience in order to create a better understanding of the moral dilemma. This supports the role of adaptive self-regulation by ensuring that there are ongoing assessments and discernment of the GenAI customer experience performances. Again, judgement is a subset of personal standards and performance that is evaluated for self-regulation (Bandura, 1986). Some of the discussions of the codes, identified under the theme of judgement, are that judgement is enabled by integrating human intervention, benchmarking and having knowledge.
  • Principle of the self-reaction of monitored process
  • The principle of self-reaction involves the human agents once they have made their judgements and acknowledged the existence of a gap, they are now participating in undertaking interventions to correct the GenAI models. Participants described that in the event of a wrong result, they were capable of countering that response with the correct response and essentially amending the GenAI model. This plays a role in continuously improving GenAI models. The self-reactive role here is instantaneous as opposed to logging a ticket and awaiting lengthy channels to correct the model. Correcting the model is simply typing back to the GenAI that it has made an error and what the correct information is. However, there is a risk that the model can be amended from the correct information to be trained on incorrect information.
  • Principle of self-efficiency of self-regulation.
  • Self-efficacy is central to the entire self-regulation process and is the malleable belief that is formed on its cognitive ability to execute the self-regulation function (Stajkovic & Stajkovic, 2019). This belief also partially influences how one views their own abilities in the other subfunction self-monitoring, judgement, and self-reaction. It is so important that there be high levels of self-efficacy as it increases the belief of attaining self-regulation goals (Bandura, 1986). Furthermore, the results further corroborated Bandura’s sentiments that the higher the beliefs one has in their self-regulatory capabilities the greater is the commitment or mastery to their goals (Bandura, 1986; Nabavi & Bijandi, 2012). These sentiments of the integral role self-efficacy in influencing self-regulation underscore the need for enhanced user education to improve self-efficacy levels to facilitate enhanced self-regulation guidelines of the GenAI model (Katragadda, 2024).

5.2. Implications for Management

The findings of this study’s interview corroborate existing literature, demonstrating a significant concordance between the theoretical foundations and practical implications of the human relationship in the adoption of these models in the customer experience.
This dual approach was a common theme in the research themes. This can strengthen the role of the adaptive self-regulation guidelines, developing guiding principle where both human judgement and GenAI machine learning is integrative creating a more holistic and a robust GenAI self-regulation structure to cater to their business customer experiences. GenAI customer experience models that accurately self-regulate themselves in a way that is analogous to human self-regulatory cognition is an intricate undertaking. This could be another area of future research or a limitation of the study.

5.3. Implications for Policymakers

The implications of the research to policymakers are far reaching because the rapid advancements of GenAI have clearly outpaced have outpaced the development of appropriate adaptive self-regulatory frameworks, creating a significant gap that necessitates the formation and continued updating of adaptive guidelines. Governing authorities should also be adaptive and staying abreast of the latest trends and frequently adapt and review their legislations, policies, or standards. They should also benchmark with other entities. South Africa is in the right direction with a National AI Policy consultation that involved various public, private, and institutional entities. However, frequent review and assessments are necessary to keep it current in line with the emerging technologies. It is vital to include other professions and discipline outside of technology in the self-regulation consultation process of GenAI models to eliminate biases and to take a human-centered approach.

5.3. Theoretical Contribution of the Study

The theoretical contribution of this research is that combining the merits of cybernetics with Albert Bandura’s self-regulation SCT is critical for developing principles and guidelines for AI self-regulation. This area of theory is also critical for understanding the implications of the moral and ethical experiences of business customers engaging with GenAI models and the cognitive self-regulation capabilities of both human and artificial intelligence (McKnight et al., 2024; Singh et al., 2024). Cybernetics is a field of study that focusses on human-machine interactions. In other words, the triangulation of the cybernetics field of study with Albert Bandura’s SCT theory of self-regulation. Thus, creating a nuanced understanding of the self-regulation where humans and machine interaction is concerned in line with artificial intelligence, specifically GenAI. Furthermore, the theoretical contribution is towards the understanding of the self-regulation of GenAI models, and this is necessary in the current state of technology advancements where GenAI models are updating at a faster pace than the self-regulation thereof (Kanbach et al., 2023). Therein lies an opportunity to contribute theory that businesses can apply to be more adaptive and agile in their GenAI self-regulation approach (Pavón, 2025).

5.4. Limitations and Future Studies

This study has several limitations. First, the study’s scope on the use of GenAI was limited to technology professionals/experts, and future research should aim to recruit a broader and more diverse sample perhaps researching on how GenAI models are affecting society as a whole. Second, the study’s focus on GenAI adoption in business customer experience may limit the generalisability of the findings to other contexts. Furthermore, the study’s geographical scope is limited to South Africa, which may not be representative of global trends. The study time frame is 1.5 years, which may not capture long-term effects or changes in GenAI development and self-regulation opinions. By acknowledging these delimitations, this research aims to provide a detailed understanding of the complex issues surrounding the self-regulation of GenAI business customer experience, while recognising the need for future studies to address broader perspectives and contexts.

6. Conclusions

The research contributes with the guidelines to the development of adaptive self-regulation of GenAI models, an area still in its formative stages as this technology is still new. The study demonstrated the critical need for adaptive self-regulatory approaches to mitigate risks and maximising the benefits associated with the use of GenAI. The empirical analysis yielded seven themes and these themes highlighted the dynamic, interactive relationship that is necessary to develop guiding principles for adaptive self-regulating guidelines for the use of GenAI in business customer experience. These were the principle of clarity, ethical and risk-conscious AI design, security and data protection, empathy simulation (with limits), self-monitoring, judgement of monitored process, self-reaction of monitored process, and self-efficiency of self-regulation. The study’s findings support the intersection of Albert Bandura’s social cognitive theory and Cybernetics theory and provide insights into the development of adaptive self-regulatory capacity in individuals and organisations. As GenAI continues to evolve and become increasingly ubiquitous, it is essential that researchers, practitioners, and policymakers prioritise the development of effective adaptive self-regulatory guidelines to ensure the safe, transparent, and responsible use of this technology especially to protect the customers.

Appendix A. Interview Guide

  • What is your background including your technological background and how many years of experience do you have in technology and also in GenAI for customer experiences?
  • In your own view, what are the current benefits of using GenAI in business customer experiences in South Africa? Explain.
  • In your own view, what are the gaps or shortcomings of the use of business customer experience GenAI applications? Explain
  • In view of the identified gaps, what do you suggest can be done in terms of GenAI self-regulation?
  • In your own opinion, advise the use of GenAI in coding and have you also made use of it? For example, ask GenAI to write code for an application or load code for correction in GenAI tools, such as chatbots. Explain
  • Have you developed any GenAI business customer application that was self-regulated adaptively? Explain.
  • What is your opinion of adaptive self-regulation of GenAI? Explain.
  • In your view, have you mastered the ability to monitor GenAI processes? In other words, are you confident in your ability to keep a close eye on any GenAI business customer experience application and immediately pick up irregularities that contradict regulation measures, and can you immediately address or mitigate any risks effectively?
  • How good is your judgement in terms of classifying whether an incident should be regulated especially where it concerns GenAI processes, irregularities, such as biases, inconsistencies, copyright infringements, security breaches and any other undesirable effects?
  • Are you self-monitoring the performance of GenAI applications that you interact with through continuously learning on GenAI to stay aware of any new development or threats and engaging in new materials?
  • Are you able to take steps that show initiative to self-regulate the use of GenAI effectively?
  • Any other comments related to the subject matter under discussion?

References

  1. Abdullah, S.M. Social Cognitive Theory : A Bandura Thought Review published in 1982-2012. Psikodimensia 2019, 18, 85. [Google Scholar] [CrossRef]
  2. Abdullah, S.M. Social Cognitive Theory : A Bandura Thought Review published in 1982-2012. Psikodimensia 2019, 18, 85. [Google Scholar] [CrossRef]
  3. Bin Akhtar, Z. Generative artificial intelligence (GAI): From large language models (LLMs) to multimodal applications towards fine tuning of models, implications, investigations. Comput. Artif. Intell. 2024, 1498–1498. [Google Scholar] [CrossRef]
  4. Akolkar, H. R. (2024). Examining the Impact of Artificial Intelligence on Customer Satisfaction in the Banking Sector: A Quantitative Analysis [D.B.A., Westcliff University]. https://www.proquest.com/docview/2937182457/abstract/986BB7E52F64062PQ/5.
  5. Ameen, N.; Tarhini, A.; Reppel, A.; Anand, A. Customer experiences in the age of artificial intelligence. Comput. Hum. Behav. 2021, 114, 106548–106548. [Google Scholar] [CrossRef] [PubMed]
  6. Archibald, M.M.; Ambagtsheer, R.C.; Casey, M.G.; Lawless, M. Using Zoom Videoconferencing for Qualitative Data Collection: Perceptions and Experiences of Researchers and Participants. Int. J. Qual. Methods 2019, 18. [Google Scholar] [CrossRef]
  7. Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory (pp. xiii, 617). Prentice-Hall, Inc.
  8. Banh, L.; Strobel, G. Generative artificial intelligence. Electron. Mark. 2023, 33, 1–17. [Google Scholar] [CrossRef]
  9. Bolotskikh, M. , Dorokhova, M., & Serov, I. (2024). Generative AI in the BRICS+ Countries: Trends and Outlook. https://www.yakovpartners.com/upload/iblock/a35/ft76bknzh09znv71qpvyr9k7q7c3wsxc/210125_generative_AI_BRICS_ENG.pdf.
  10. Breidbach, C.F. Responsible algorithmic decision-making. Organ. Dyn. 2024, 53. [Google Scholar] [CrossRef]
  11. Brzozowska, M.; Kolasińska-Morawska, K.; Sułkowski, Ł.; Morawski, P. Artificial-intelligence-powered customer service management in the logistics industry. Entrep. Bus. Econ. Rev. 2023, 11, 109–121. [Google Scholar] [CrossRef]
  12. Capgemini. (2023). Imagining A new era of customer experience with Generative AI. https://www.capgeminicom/us-en/insights/research-library/imagining-a-new-era-of-customer-experience-with-generative-ai/.
  13. Capraro, V.; Lentsch, A.; Acemoglu, D.; Akgun, S.; Akhmedova, A.; Bilancini, E.; Bonnefon, J.-F.; Brañas-Garza, P.; Butera, L.; Douglas, K.M.; et al. The impact of generative artificial intelligence on socioeconomic inequalities and policy making. PNAS Nexus 2023, 3, pgae191. [Google Scholar] [CrossRef]
  14. Carver, C.S.; Scheier, M.F. On the Self-Regulation of Behavior: Principles of Feedback Control; Cambridge University Press: 1998; pp. 10–28. [CrossRef]
  15. Cerullo, M. (2024, February 19). Air Canada chatbot costs airline discount it wrongly offered customer—CBS News. https://www.cbsnews.com/news/aircanada-chatbot-discount-customer/.
  16. Chadha, K.S. Bias and Fairness in Artificial Intelligence: Methods and Mitigation Strategies. Int. J. Res. Publ. Semin. 2024, 15, 36–49. [Google Scholar] [CrossRef]
  17. Chavali, D. , Baburajan, B., Kumar, V., & Katari, S. C. (2024). Regulating Artificial Intelligence Developments and Challenges. International Journal of Pharmaceutical Sciences, 02, 1250–1261. https://doi.org/10.5281/zenodo.10898480. [CrossRef]
  18. Dana, N. , & Zwiegelaar, J. (2023). The implementation of AI Technology on banks interactions creating differentiated customer offerings. International Conference on Information Systems 2023 Special Interest Group on Big Data Proceedings. https://aisel.aisnet.org/sigbd2023/1.
  19. Abu Daqar, M.A.; Smoudy, A.K.A. The role of artificial intelligence on enhancing customer experience. Int. Rev. Manag. Mark. 2019, 9, 22–31. [Google Scholar] [CrossRef]
  20. Espejo, R.; Reyes, A. On Control and Communication: Self-regulation and Coordination of Actions; Springer, Berlin, Heidelberg, 2011; pp. 21–32. [CrossRef]
  21. Ferrara, E. Fairness and Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, and Mitigation Strategies. Sci 2024, 6, 3. [Google Scholar] [CrossRef]
  22. Ferrario, B.; Stantcheva, S. Eliciting People’s First-Order Concerns: Text Analysis of Open-Ended Survey Questions. AEA Pap. Proc. 2022, 112, 163–169. [Google Scholar] [CrossRef]
  23. Ferraro, C.; Demsar, V.; Sands, S.; Restrepo, M.; Campbell, C. The paradoxes of generative AI-enabled customer service: A guide for managers. Bus. Horizons 2024, 67, 549–559. [Google Scholar] [CrossRef]
  24. Feuerriegel, S.; Hartmann, J.; Janiesch, C.; Zschech, P. Generative AI. Bus. Inf. Syst. Eng. 2024, 66, 111–126. [Google Scholar] [CrossRef]
  25. Funa, A.A.; Gabay, R.A.E. Policy guidelines and recommendations on AI use in teaching and learning: A meta-synthesis study. Soc. Sci. Humanit. Open 2025, 11, 101221. [Google Scholar] [CrossRef]
  26. Gans, J.S. (2018). Self-regulating Artificial General Intelligence. http://www.nber.org/papers/w24352.
  27. Golda, A.; Mekonen, K.; Pandey, A.; Singh, A.; Hassija, V.; Chamola, V.; Sikdar, B. Privacy and Security Concerns in Generative AI: A Comprehensive Survey. IEEE Access 2024, 12, 48126–48144. [Google Scholar] [CrossRef]
  28. Groumpos, P.P. The Cybernetic Artificial Intelligence (CAI): A new scientific field for modelling and controlling Complex Dynamical Systems. IFAC-PapersOnLine 2024, 58, 145–152. [Google Scholar] [CrossRef]
  29. Guetterman, T. C. (2015). Descriptions of Sampling Practices Within Five Approaches to Qualitative Research in Education and the Health Sciences. http://www.qualitative-research.net/.
  30. Gupta, P.; Ding, B.; Guan, C.; Ding, D. Generative AI: A systematic review using topic modelling techniques. Data Inf. Manag. 2024, 8, 100066. [Google Scholar] [CrossRef]
  31. Harinie, L.T.; Sudiro, A.; Rahayu, M.; Fatchan, A. Study of the Bandura’s Social Cognitive Learning Theory for the Entrepreneurship Learning Process. Soc. Sci. 2017, 6, 1. [Google Scholar] [CrossRef]
  32. Hinds, J.; Williams, E.J.; Joinson, A.N. “It wouldn't happen to me”: Privacy concerns and perspectives following the Cambridge Analytica scandal. Int. J. Human-Computer Stud. 2020, 143, 102498. [Google Scholar] [CrossRef]
  33. Hirsch, D. (2021). Business Data Ethics: Emerging Trends in the Governance of Advanced Analytics and AI Final Report. https://www.researchgate.net/publication/351046077_Business_Data_Ethics_Emerging_Trends_in_the_Governance_of_Advanced_Analytics_and_AI.
  34. Holmström, J.; Carroll, N. How organizations can innovate with generative AI. Bus. Horizons 2024. [Google Scholar] [CrossRef]
  35. Isguzar, S.; Fendoglu, E.; SimSek, A.I. Innovative Applications in Businesses: An Evaluation on Generative Artificial Intelligence. www.amfiteatrueconomic.ro 2024, 26, 511–530. [Google Scholar] [CrossRef]
  36. Jawarkar, R. K. (2022). The consequences of algorithmic decision making. https://www.abacademies.org/articles/the-consequences-of-algorithmic-decisionmaking.pdf.
  37. Katragadda, V. (2024). Leveraging Intent Detection and Generative AI for Enhanced Customer Support. https://www.researchgate.net/publication/381767053_Leveraging_Intent_Detection_and_Generative_AI_for_Enhanced_Customer_Support.
  38. Kaushik, P.; Garg, V.; Priya, A.; Kant, S. Financial Fraud and Manipulation: The Malicious Use of Deepfakes in Business; 2024; pp. 173–196. [CrossRef]
  39. Kim, H. Investigating the effects of generative-ai responses on user experience after ai hallucination. In Proceedings of the MBP 2024 Tokyo International Conference on Management & Business Practices, 18-19 January 2024; pp. 92–101. [Google Scholar]
  40. Kirova, V.D.; Ku, C.S.; Laracy, J.R.; Marlowe, T.J. The Ethics of Artificial Intelligence in the Era of Generative AI. J. Syst. Cybern. Inform. 2023, 21, 42–50. [Google Scholar] [CrossRef]
  41. Kshetri, N.; Dwivedi, Y.K.; Davenport, T.H.; Panteli, N. Generative artificial intelligence in marketing: Applications, opportunities, challenges, and research agenda. Int. J. Inf. Manag. 2023, 75. [Google Scholar] [CrossRef]
  42. Kumar, V.; Ashraf, A.R.; Nadeem, W. AI-powered marketing: What, where, and how? Int. J. Inf. Manag. 2024, 77. [Google Scholar] [CrossRef]
  43. Latifi, H. Challenges of Using Artificial Intelligence in the Process of Shi’i Ijtihad. Religions 2024, 15, 541. [Google Scholar] [CrossRef]
  44. Law, M. (2024, April 4). Generative AI is Reshaping the World of Customer Experience. https://technologymagazine.com/articles/generative-ai-is-reshaping-the-world-of-customer-experience.
  45. Lecocq, X.; Warnier, V.; Demil, B.; Plé, L. Using Artificial Intelligence (AI) Generative Technologies For Business Model Design with IDEATe Process: A Speculative Viewpoint. J. Bus. Model. 2024, 12, 21–35. [Google Scholar] [CrossRef]
  46. Marie, J. , & Mathews, J. (2021). A Self-regulatory Framework for AI Ethics: Opportunities and challenges. https://www.researchgate.net/publication/354969383_A_Self-regulatory_Framework_for_AI_Ethics_-_opportunities_and_challenges.
  47. McKnight, M.A.; Gilstrap, C.M.; Gilstrap, C.A.; Bacic, D.; Shemroske, K.; Srivastava, S. Generative Artificial Intelligence in Applied Business Contexts: A Systematic Review, Lexical Analysis, and Research Framework. J. Appl. Bus. Econ. 2024, 26. [Google Scholar] [CrossRef]
  48. Peruchini, M.; da Silva, G.M.; Teixeira, J.M. Between artificial intelligence and customer experience: a literature review on the intersection. Discov. Artif. Intell. 2024, 4, 1–10. [Google Scholar] [CrossRef]
  49. Meskó, B.; Topol, E.J. The imperative for regulatory oversight of large language models (or generative AI) in healthcare. npj Digit. Med. 2023, 6, 120. [Google Scholar] [CrossRef]
  50. Mindell, D. (2000). Cybernetics. https://web.mit.edu/esd.83/www/notebook/Cybernetics.PDF.
  51. Morgan, R.L.; Florez, I.; Falavigna, M.; Kowalski, S.; Akl, E.A.; Thayer, K.A.; Rooney, A.; Schünemann, H.J. Development of rapid guidelines: 3. GIN-McMaster Guideline Development Checklist extension for rapid recommendations. Heal. Res. Policy Syst. 2018, 16, 63. [Google Scholar] [CrossRef] [PubMed]
  52. Morse, J.M.; Barrett, M.; Mayan, M.; Olson, K.; Spiers, J. Verification Strategies for Establishing Reliability and Validity in Qualitative Research. Int. J. Qual. Methods 2002, 1, 13–22. [Google Scholar] [CrossRef]
  53. Moura, S. , Reis, J. L., & Rodrigues, L. S. (2021). The Artificial Intelligence in the Personalisation of the Customer Journey – a literature review. https://aisel.aisnet.org/cgi/viewcontent.cgi?article=1008&context=capsi2021.
  54. Nabavi, R. T. , & Bijandi, M. S. (2012). (2) (PDF) Bandura’s Social Learning Theory & Social Cognitive Learning Theory. ResearchGate. https://www.researchgate.net/publication/267750204_Bandura’s_Social_Learning_Theory_Social_Cognitive_Learning_Theory.
  55. Naeem, M.; Ozuem, W.; Howell, K.; Ranfagni, S. A Step-by-Step Process of Thematic Analysis to Develop a Conceptual Model in Qualitative Research. Int. J. Qual. Methods 2023, 22. [Google Scholar] [CrossRef]
  56. Al Naqbi, H.; Bahroun, Z.; Ahmed, V. Enhancing Work Productivity through Generative Artificial Intelligence: A Comprehensive Literature Review. Sustainability 2024, 16, 1166. [Google Scholar] [CrossRef]
  57. Nicklin, J.M.; Williams, K.J. Self-Regulation of Goals and Performance: Effects of Discrepancy Feedback, Regulatory Focus, and Self-Efficacy. Psychology 2011, 02, 187–201. [Google Scholar] [CrossRef]
  58. Noble, S.U. Algorithms of Oppression: How Search Engines Reinforce Racism; JSTOR: New York, NY, United States, 2018. [Google Scholar]
  59. Nowell, L.S.; Norris, J.M.; White, D.E.; Moules, N.J. Thematic Analysis: Striving to Meet the Trustworthiness Criteria. Int. J. Qual. Methods 2017, 16, 1–13. [Google Scholar] [CrossRef]
  60. Nyaaba, M. , Wright, A., & Choi, G. (2024). Generative AI and Digital Neocolonialism in Global Education: Towards an Equitable Framework. [CrossRef]
  61. Oanh, V.T.K. Evolving Landscape Of E-Commerce, Marketing, and Customer Service: the Impact of Ai Integration. J. Electr. Syst. 2024, 20, 1125–1137. [Google Scholar] [CrossRef]
  62. Oliver, D.; Jacobs, C. Developing guiding principles: an organizational learning perspective. J. Organ. Chang. Manag. 2007, 20, 813–828. [Google Scholar] [CrossRef]
  63. Palaniappan, K.; Lin, E.Y.T.; Vogel, S. Global Regulatory Frameworks for the Use of Artificial Intelligence (AI) in the Healthcare Services Sector. Healthcare 2024, 12, 562. [Google Scholar] [CrossRef] [PubMed]
  64. American Research, Thoughts; Pandey, P.; Pandey, M.M. American Research Thoughts; Pandey, P.; Pandey, M.M.(2024). Research methodology: Tools & techniques.
  65. Rajaram, K.; Tinguely, P.N. Generative artificial intelligence in small and medium enterprises: Navigating its promises and challenges. Bus. Horizons 2024, 67, 629–648. [Google Scholar] [CrossRef]
  66. Rane, N. , Paramesha, M., Choudhary, S., & Rane, J. (2024). Artificial Intelligence in Sales and Marketing: Enhancing Customer Satisfaction, Experience and Loyalty. SSRN Electronic Journal. [CrossRef]
  67. Rivas, E.O.D.; Chiappe, A.; Sagredo, A.V. Cybernetics of self-regulation, homeostasis, and fuzzy logic: foundational triad for assessing learning using artificial intelligence. 33, 2024; 33, e0254918. [Google Scholar] [CrossRef]
  68. Rivas, E.O.D.; Chiappe, A.; Sagredo, A.V. Cybernetics of self-regulation, homeostasis, and fuzzy logic: foundational triad for assessing learning using artificial intelligence. 33, 2025; 33, e0254918. [Google Scholar] [CrossRef]
  69. Sai, S.; Gaur, A.; Sai, R.; Chamola, V.; Guizani, M.; Rodrigues, J.J.P.C. Generative AI for Transformative Healthcare: A Comprehensive Study of Emerging Models, Applications, Case Studies, and Limitations. IEEE Access 2024, 12, 31078–31106. [Google Scholar] [CrossRef]
  70. Salazar, L.R.; Peeples, S.F.; Brooks, M.E. (2024). Generative AI Ethical Considerations and Discriminatory Biases on Diverse Students Within the Classroom (pp. 191–213). [CrossRef]
  71. Saunders. (2016). Research Methods for Business Students. www.pearson.com/uk.
  72. Schunk, D.H.; DiBenedetto, M.K. (2020). Social Cognitive Theory, Self-Efficacy, and Students with Disabilities: Implications for Students with Learning Disabilities, Reading Disabilities, and Attention-Deficit/Hyperactivity Disorder. Routledge. [CrossRef]
  73. Scott, W.D.; Cervone, D.; Ebiringah, O.U. The social-cognitive clinician: On the implications of social cognitive theory for psychotherapy and assessment—Scott—2024—International Journal of Psychology—Wiley Online Library. https://onlinelibrary.wiley.com/doi/full10.1002/ijop.13125.
  74. Isguzar, S.; Fendoglu, E.; SimSek, A.I. Innovative Applications in Businesses: An Evaluation on Generative Artificial Intelligence. www.amfiteatrueconomic.ro 2024, 26. [Google Scholar] [CrossRef]
  75. Sekine, T. (2025). Security Risks of Generative AI and Countermeasures, and Its Impact on Cybersecurity. NTT DATA. https://www.nttdata.com/global/en/insights/focus/security-risks-of-generative-ai-and-countermeasures.
  76. Shukla, S. (2023, August 16). Four Reasons Regulations on Generative AI May Do More Harm than Good. IEEE Computer Society. https://www.computer.org/publications/tech-news/community-voices/regulations-on-generative-ai/.
  77. Sieber, A. Does Facebook Violate Its Users’ Basic Human Rights? NanoEthics 2019, 13, 139–145. [Google Scholar] [CrossRef]
  78. Singh, A. (2021). An Introduction to Experimental and Exploratory Research. SSRN Electronic Journal. [CrossRef]
  79. Singh, K.; Chatterjee, S.; Mariani, M. Applications of generative AI and future organizational performance: The mediating role of explorative and exploitative innovation and the moderating role of ethical dilemmas and environmental dynamism. Technovation 2024, 133, 103021. [Google Scholar] [CrossRef]
  80. South Africa’s National AI Policy Framework. (2024). South Africa’s National AI Policy Framework. https://fwblaw.co.za/wp-content/uploads/2024/08/South-Africa-National-AI-Policy-Framework.pdf.
  81. Stajkovic, A. , & Stajkovic, K. (2019). Social Cognitive Theory. [CrossRef]
  82. Tiutiu, M.; Dabija, D.-C. Improving Customer Experience Using Artificial Intelligence in Online Retail. Proc. Int. Conf. Bus. Excel. 2023, 17, 1139–1147. [Google Scholar] [CrossRef]
  83. Tzafestas, S. G. (2017). Systems, Cybernetics, Control, and Automation. https://www.riverpublishers.com/book_details.php?book_id=455.
  84. Ullah, A. (2023). Impact of Artificial Intelligence on Customer Experience. https://www.researchgate.net/publication/383561910_Impact_of_Artificial_Intelligence_on_Customer_Journey_Mapping_and_Experience_Design.
  85. Usher, E.L.; Schunk, D.H. (2017). Social Cognitive Theoretical Perspective of Self-Regulation (pp. 19–35). Routledge, 19–35. [CrossRef]
  86. Varsha, P. S. How can we manage biases in artificial intelligence systems – A systematic literature review. Int. J. Inf. Manag. Data Insights 2023, 3. [Google Scholar] [CrossRef]
  87. Whittaker, L.; Letheren, K.; Mulcahy, R. The Rise of Deepfakes: A Conceptual Framework and Research Agenda for Marketing. Australas. Mark. J. 2021, 29, 204–214. [Google Scholar] [CrossRef]
  88. Wiener, N. (1948). CYBERNETICS. J. Wiley. https://books.google.co.zw/books/about/Cybernetics.html?id=9ntYxAEACAAJ&redir_esc=y.
  89. Williamson, S.M.; Prybutok, V. The Era of Artificial Intelligence Deception: Unraveling the Complexities of False Realities and Emerging Threats of Misinformation. Information 2024, 15, 299. [Google Scholar] [CrossRef]
  90. Woolley, D. (2024). South African organisations are embracing GenAI as a catalyst for innovation. Bizcommunity. https://www.bizcommunity.com/article/south-african-organisations-are-embracing-genai-as-a-catalyst-for-innovation-231017a.
  91. Yan, W.; Nakajima, T.; Sawada, R. Benefits and Challenges of Collaboration between Students and Conversational Generative Artificial Intelligence in Programming Learning: An Empirical Case Study. Educ. Sci. 2024, 14, 433. [Google Scholar] [CrossRef]
  92. Yusuff, M. (2024). GenAI-Driven Insights for Proactive Performance Monitoring in Omnichannel Systems. https://www.researchgate.net/publication/386451958_GenAI-Driven_Insights_for_Proactive_Performance_Monitoring_in_Omnichannel_Systems.
  93. Zhai, C.; Wibowo, S.; Li, L.D. The effects of over-reliance on AI dialogue systems on students' cognitive abilities: a systematic review. Smart Learn. Environ. 2024, 11, 28. [Google Scholar] [CrossRef]
  94. Zuboff, S. (2017). The Age of Surveillance Capitalism. https://www.hachettebookgroup.com/titles/shoshana-zuboff/the-age-of-surveillance-capitalism/9781610395694/?lens=publicaffairs.
Figure 1. Thematic analysis to develop a conceptual framework. Source: Authors guided by the approach of Naeem et al. (2023).
Figure 1. Thematic analysis to develop a conceptual framework. Source: Authors guided by the approach of Naeem et al. (2023).
Preprints 156330 g001
Table 1. Features, characteristics, and mechanisms of the SCT and Cybernetics Theories.
Table 1. Features, characteristics, and mechanisms of the SCT and Cybernetics Theories.
Feature Social Cognitive Theory Cybernetics Theory
Focus Human learning, behaviour and cognition, self-regulation
(Schunk & DiBenedetto, 2020; Scott et al., 2024; Usher & Schunk, 2017)
Control and communication through feedback loops
(Carver & Scheier, 1998; Mindell, 2000)
Origin Psychology & Education
(Bandura, 1986)
Engineering, Systems Theory, Cybernetics
(Wiener, 1948)
Key Mechanism Observational learning, self-efficacy, cognitive processes (Scott et al., 2024) Feedback loops, control systems, error correction
(Espejo & Reyes, 2011)
Core Concept Reciprocal determinism: Behaviour, cognition, and environment influence each other (Abdullah, 2019) Feedback and regulation: Input, output, and correction to maintain system balance (Mindell, 2000)
Agency Individuals are active agents with goals and self-regulation Scott et al., 2024; (Usher & Schunk, 2017) Systems may be mechanical, biological, or social, with or without agency ((Tzafestas, 2017)
Learning Process Through observation, imitation, and internal motivation
(Harinie et al., 2017)
Through feedback and adaptation to reduce deviation (Rivas et al., 2025)
Role of Feedback Used for self-regulation, self-reflection (Nicklin & Williams, 2011; Schunk & DiBenedetto, 2020) Central to system functioning and stability (Espejo & Reyes, 2011)
Source: Authors.
Table 2. Profile of the participants.
Table 2. Profile of the participants.
Participant ID Technological Background Years Experience Years Experience in GenAI
Participant 1 IT Lecturer 12 2
Participant 2 Tech entrepreneur 6 1
Participant 3 Entrepreneur in Technology 12 3
Participant 4 Storage Engineer 30 0
Participant 5 Doctorate in Machine Learning and Head of Technology 8 3
Participant 6 Owner of a Tech Company 20 2
Participant 7 Founder of a Tech Company 7 2
Participant 8 Managing Director of a Tech Company 16 5
Participant 9 Software Development in electrical engineering 18 5
Participant 10 Application testing specialist 5 1
Participant 11 Head of Operations in a Technology Company 18 2
Participant 12 Project Manager in a Technology Company 15 2
Participant 13 Coding and Development 6 2
Participant 14 Systems Integration Specialist 10 1
Participant 15 IT Personnel 15 4
Participant 16 Tech Training and Development 10 2
Participant 17 Software Engineering 10 2
Participant 18 Information Systems student and software developer 6 1
Participant 19 Head of Training and Development of a Tech Company 24 15*
Participant 20 Tech entrepreneur 6 1
Participant 21 Modeler 5 1
*- company involved in conceptualisation stages of GenAI. Source: Authors
Table 3. Guiding Principles for Guide Development.
Table 3. Guiding Principles for Guide Development.
Topic Description and rationale Features, characteristics and mechanisms
Focus Mechanism Learning process
  • Principle of clarity
Clarifies what GenAI is designed to do, sets expectations to assist users understand what GenAI can and cannot do, and makes it clear whether a user is engaging with a human or a GenAI system. Clarity, learning Observational learning, Standard Operating Procedures (SOP), Manual Human Learning and Understanding of GenAI from the SOP, manuals and interactions with the model
2.
Principle of accuracy and fairness-conscious AI Design
Ensures that GenAI-generated outcomes (texts, decisions, predictions) are as factually accurate and relevant as possible. Promotes equity and inclusion by identifying and reducing algorithmic bias and encourages intentional, ethical design that reflects fairness across diverse demographics. Accuracy Self-Monitoring algorithm, performance logs, Feedback loops Algorithm learning data, Performance Logs, Feedback
3.
Principle of Security and Data Protection
Protects users’ rights and privacy by ensuring AI does not leak, misuse, or over-collect personal data, maintains trust by minimising vulnerabilities and risk of data breaches, and ensures legal compliance with privacy, and mitigates reputational and operational risks linked to security failures. Security Data Security Control Systems Observational Learning,
4.
Principle of Empathy Simulation (With Limits)
GenAI systems to respond to user emotions in ways that feel human-sensitive, while avoiding over-personification of AI, which can mislead users into forming attachments or expecting understanding AI cannot truly offer. Empathy Cognitive processes (empathy) Observational Learning and Imitation
5.
Principle of self-monitoring
GenAI system performance to be self-monitored to identify any gaps and any anomalies such as bias, hallucinations and security gaps Self-monitoring Observation of GenAI models Observational learning of the patterns and behaviour of the model to mitigate any gaps
6.
Principle of judgement of monitored process
Judging the performance of the GenAI model to discern the quality of output to determine whether business and customers are benefitting or gaps are being created Judgement The imitation of business standards as personal judgement can be biased or subjective Objective learning of business ethics, standards and principles and benchmarking performance in line with expected business standards not personal values
7.
Principle of self-reaction of monitored process
Reacting to performance after discerning the quality. For example, if the GenAI model hallucinates the model requires re-training Self-Reaction Cognitive processes and internal motivation to initiate interventions that regulate the GenAI model Control systems over the GenAI to be able to correct the model when it is malfunctioning
8.
Principle of self-efficiency of self-regulation
The belief in the ability to self-regulate the GenAI model Self-efficacy Cognitive process and internal motivation to become efficient Based on the belief and motivation and confidence that one is efficient and able to self-regulate; making interventions to self-regulate and being efficient in the self-regulation of GenAI model
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permit the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.
Prerpints.org logo

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.

Subscribe

Disclaimer

Terms of Use

Privacy Policy

Privacy Settings

© 2025 MDPI (Basel, Switzerland) unless otherwise stated