The Situation in Corporate America
Generative artificial intelligence has swiftly become a transformative force in corporate America, affecting business functions across industries (Davenport & Ronanki, 2023). Generative AI is the branch of AI concerned with producing new content, and it has emerged as a pivotal force fostering innovation and efficiency (Nyame & Bengesi, 2024). As of 2024, companies increasingly integrate these technologies into their workflows to boost productivity, innovation, and competitive advantage (McKinsey & Company, 2023). The adoption landscape is diverse, with large tech companies and forward-thinking enterprises leading the charge, while smaller businesses are at various stages of exploration and implementation (Ransbotham et al., 2022).
Generative AI is also making significant impacts across multiple corporate domains. In marketing and content creation, it is used to develop personalized campaigns, write copy, and generate visual content (Balis, 2023). Product development and design have been revolutionized, with AI assisting in brainstorming, prototyping, and product design (Verganti et al., 2023). Customer service has been enhanced through AI-powered chatbots and virtual assistants, providing round-the-clock support (Huang & Rust, 2021). Data analytics has been transformed, with AI uncovering trends and patterns to inform decision-making in financial services and retail (Davenport, 2022). Human resources departments use AI to streamline recruitment, generate job descriptions, and develop personalized training modules (Tambe et al., 2023). Legal teams are leveraging AI to draft documents, review for compliance, and suggest amendments based on regulatory changes (Alarie et al., 2022). This widespread adoption is creating a growing demand for AI-related skills and has prompted companies to invest in upskilling programs for their workforce (World Economic Forum, 2023).
As generative AI continues to mature, its impact on corporate America is expected to deepen and potentially reshape business models and competitive landscapes (Iansiti & Lakhani, 2023). However, this technological revolution also brings challenges, including cybersecurity concerns and potential workforce disruptions, which must be addressed to fully harness AI's potential in the corporate world. This study aims to equip corporate strategists and leaders with the knowledge needed to embrace AI, be aware of its possible challenges, and find the best ways to use Generative AI to gain competitive advantage. In addition, it seeks to equip technology experts and professionals with key information about Generative AI to help them make informed choices in their work. The study also targets research management experts and personnel, as it contains vital information that will set the foundation for future research.
Types of Generative AI
AI is considered one of the world’s greatest inventions. Its origins trace back to 1950, when Alan Turing introduced the imitation game to ascertain whether a machine could think. Through its rapid evolution over the years, it has garnered global attention, especially with the release of ChatGPT in November 2022.
Generative AI uses algorithmic models trained on data to produce human-like responses. This section will elaborate on four types of Generative AI, all of which remain under active research and improve their output daily. The first is the Generative Adversarial Network (GAN), which employs two competing neural networks: a generator and a discriminator. The generator produces candidate data such as text, images, and audio, while the discriminator judges whether the data it receives is real or generated (Trevisan de Souza et al., 2023). As this cycle continues, the generator refines its capabilities until the discriminator can no longer reliably distinguish generated data from real data. As a result, Generative Adversarial Networks have proven helpful in generating high-quality image and video outputs.
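The adversarial loop described above can be sketched in a few lines of plain Python. The example below is a deliberately tiny, illustrative GAN: a one-dimensional affine "generator" learns to imitate samples from a Gaussian, while a logistic "discriminator" tries to tell real samples from generated ones. All parameter names, numbers, and learning rates here are invented for illustration; real GANs use deep networks and frameworks such as PyTorch or TensorFlow.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    x = max(-60.0, min(60.0, x))   # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-x))

# Real data distribution the generator must learn to imitate (toy numbers).
REAL_MEAN, REAL_STD = 4.0, 1.25

# Generator: affine map of noise z ~ N(0, 1), parameters w_g and b_g.
# Discriminator: logistic classifier on a scalar, parameters w_d and b_d.
w_g, b_g = 1.0, 0.0
w_d, b_d = 0.1, 0.0
lr = 0.01

for step in range(5000):
    z = random.gauss(0.0, 1.0)
    x_real = random.gauss(REAL_MEAN, REAL_STD)
    x_fake = w_g * z + b_g

    # Discriminator step: ascend log D(x_real) + log(1 - D(x_fake)).
    d_real = sigmoid(w_d * x_real + b_d)
    d_fake = sigmoid(w_d * x_fake + b_d)
    w_d += lr * ((1.0 - d_real) * x_real - d_fake * x_fake)
    b_d += lr * ((1.0 - d_real) - d_fake)

    # Generator step: ascend log D(x_fake) (the non-saturating GAN loss).
    d_fake = sigmoid(w_d * x_fake + b_d)
    grad_x = (1.0 - d_fake) * w_d        # d log D(x_fake) / d x_fake
    w_g += lr * grad_x * z
    b_g += lr * grad_x

fakes = [w_g * random.gauss(0.0, 1.0) + b_g for _ in range(1000)]
print("mean of generated samples:", sum(fakes) / len(fakes))
```

The alternating updates mirror the cycle in the text: the discriminator is pushed to score real samples high and generated samples low, and the generator is pushed toward outputs the discriminator scores as real.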
The second type worth mentioning is the Variational Autoencoder (VAE). A VAE is a generative model that encodes input data into a compressed latent space and then uses a decoder to reconstruct data from that space. The outputs it forms are derived from its inputs yet are different and new, which makes VAEs well suited to image and audio generation tasks (Zhao et al., 2023).
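The encode-sample-decode pipeline can be illustrated with a minimal sketch. Everything below (the toy encoder and decoder weights, the fixed log-variance) is invented for illustration; only the reparameterization step and the analytic KL term follow the standard VAE formulation.

```python
import math
import random

random.seed(1)

# Toy "encoder": maps an input vector to the mean and log-variance of a
# Gaussian in latent space. The weights are illustrative placeholders.
def encode(x):
    mu = [0.5 * x[0], 0.5 * x[1]]
    log_var = [-1.0, -1.0]            # fixed small variance for the sketch
    return mu, log_var

# Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, 1).
# In a trained VAE this keeps sampling differentiable.
def reparameterize(mu, log_var):
    return [m + math.exp(0.5 * lv) * random.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

# Toy "decoder": maps the latent point back toward input space.
def decode(z):
    return [2.0 * z[0], 2.0 * z[1]]

# Analytic KL divergence between the encoder's Gaussian and N(0, I): the
# regularization term in the VAE's evidence lower bound (ELBO).
def kl_divergence(mu, log_var):
    return -0.5 * sum(1.0 + lv - m * m - math.exp(lv)
                      for m, lv in zip(mu, log_var))

x = [1.0, -2.0]
mu, log_var = encode(x)
z = reparameterize(mu, log_var)
x_hat = decode(z)
print("reconstruction:", x_hat, "  KL term:", kl_divergence(mu, log_var))
```

Because the latent point is sampled rather than copied, decoding nearby points in latent space yields outputs that resemble the training data without duplicating it, which is the "different and new" behavior described above.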
The third is the transformer-based model, a branch of the large language model family. Such a model is pre-trained on large raw datasets and then processes prompts to generate human-like output using neural networks called transformers (Kotei & Thirunavukarasu, 2023). ChatGPT adopts this model as part of its architectural design, and most companies use the same technology to power their AI-driven chatbots.
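At the heart of transformer networks is a single operation, scaled dot-product attention, which lets each token weigh every other token when producing its output. The pure-Python sketch below uses toy, hand-picked matrices rather than learned weights, purely to show the mechanics.

```python
import math

def softmax(row):
    m = max(row)                       # subtract max for numerical stability
    exps = [math.exp(v - m) for v in row]
    total = sum(exps)
    return [e / total for e in exps]

# Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
def attention(queries, keys, values):
    d_k = len(keys[0])
    outputs, all_weights = [], []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d_k).
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d_k)
                  for k in keys]
        weights = softmax(scores)
        all_weights.append(weights)
        # Output is a weighted sum of the value vectors.
        outputs.append([sum(w * v[d] for w, v in zip(weights, values))
                        for d in range(len(values[0]))])
    return outputs, all_weights

# Three "tokens" with 2-dimensional query/key/value vectors (toy numbers).
Q = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]

out, attn_weights = attention(Q, K, V)
print("attention weights per token:", attn_weights)
```

Each row of attention weights sums to one, so every token's output is a convex blend of the value vectors; stacking many such attention layers (plus feed-forward layers) is what pre-trained transformer models do at scale.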
Another form of Generative AI is the autoregressive model. It uses machine learning to interpret prior inputs and predict the next data point from the data previously fed into the model (Wei et al., 2023). The autoregressive model generates data in sequential order, and most developers incorporate this technology into applications for visual object tracking and coherent text generation (Wei et al., 2023).
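The sequential, feed-the-prediction-back-in behavior described above can be demonstrated with the simplest possible autoregressive model: a first-order Markov chain over words, where each word is sampled given only the previous one. The corpus and seed below are invented for illustration; production language models condition on long contexts with neural networks rather than a count table.

```python
import random
from collections import defaultdict

random.seed(2)

corpus = "the cat sat on the mat and the cat ran"
tokens = corpus.split()

# Count next-token frequencies given the previous token: an estimate of
# the simplest autoregressive distribution, p(x_t | x_{t-1}).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(tokens, tokens[1:]):
    counts[prev][nxt] += 1

def sample_next(prev):
    options = counts[prev]
    if not options:                    # unseen context: fall back to any token
        return random.choice(tokens)
    population = list(options.keys())
    weights = list(options.values())
    return random.choices(population, weights=weights)[0]

# Generate sequentially: each prediction is fed back in as the next input.
generated = ["the"]
for _ in range(6):
    generated.append(sample_next(generated[-1]))
print(" ".join(generated))
```

The loop at the bottom is the defining trait of every autoregressive generator, from this toy chain to GPT-class models: the output at step t becomes part of the input at step t+1.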
Generative Artificial Intelligence Innovation in Corporate America
Artificial Intelligence (AI) has become a transformative tool in corporate America, significantly altering business processes and contributing to organizational success. This section provides a comprehensive analysis of AI's impact on business processes, organizational outcomes, and the factors that drive its adoption in corporate America from the perspective of a corporate strategist.
Impact Of AI on Business Processes and Organizational Success
Generative AI improves efficiency and productivity by automating routine tasks and performing complex data analyses. AI-driven systems can manage customer service requests, allowing human workers to focus more on strategic activities. This shift reduces operational costs and increases service delivery quality and speed (Chui et al., 2023). The integration of AI fosters innovation by enabling companies to create new products and services. It can be used to develop marketing campaigns, design product prototypes, and even write software code. Organizations that embrace these capabilities often achieve a competitive edge by swiftly adapting to market trends to meet customers' demands (Sedkaoui & Benaichouba, 2024).
AI facilitates informed decision-making by analyzing large datasets and discovering patterns that human analysis may not detect quickly. This results in better strategic decision-making across various domains, such as financial forecasting, supply chain optimization, and customer relationship management (Singh Sengar et al., 2024). Making decisions based on real-time data is beneficial for maintaining agility in a rapidly changing business environment.
AI-driven personalization allows companies to enhance customer experiences by tailoring services to individual needs. The system helps analyze customer behavior, recommend products, forecast future needs, and offer personalized support, which usually leads to higher customer satisfaction and loyalty (Chui et al., 2023). Generative AI is thus transforming corporate America by enhancing business processes, fostering innovation, and promoting organizational success; these gains depend on the company providing technological infrastructure that meets regulatory compliance without disrupting organizational culture or the workforce.
Factors Influencing the Use of AI in Corporate America
The successful implementation of AI in corporate settings largely depends on a company's technological infrastructure. Companies require advanced computational power, efficient data storage, and high-speed internet access to utilize these AI capabilities entirely. Investing in these technologies is essential to take advantage of AI's potential benefits (Sedkaoui & Benaichouba, 2024).
Adopting AI is significantly affected by data privacy, security, and ethical use regulations. These regulations are essential for companies seeking to avoid legal pitfalls while maximizing the potential of AI systems. Observing rules like the General Data Protection Regulation (GDPR) ensures the responsible use of AI (Chui et al., 2023).
An organization's culture and leadership play pivotal roles in embracing AI technologies. Companies that cultivate a culture of innovation, led by forward-thinking executives, are more likely to integrate AI effectively. Management that champions the use of AI and understands its benefits can significantly drive its successful deployment organization-wide (Sedkaoui & Benaichouba, 2024).
Employees' skill sets are a critical factor in determining AI adoption rates. Organizations must ensure their employees are proficient in AI technologies and possess the skill set required to use these systems. This entails investing in human resource development through continuous training programs, which is key to building a team that uses AI systems effectively (Singh Sengar et al., 2024).
Challenges and Regulations
AI systems are vulnerable to cyberattacks, which pose risks to sensitive data and operational continuity. Companies must therefore prioritize developing and implementing advanced cybersecurity measures to safeguard their AI platforms. Ongoing systems monitoring and vulnerability assessments are essential tools to mitigate these risks (Chui et al., 2023).
Ethical issues regarding bias and fairness are crucial to the use of AI systems, which can unintentionally perpetuate biases present in training data and produce skewed outcomes. Establishing ethical guidelines and performing regular audits of AI systems are critical steps to ensure fairness and transparency in AI-driven decision-making (Singh Sengar et al., 2024).
The regulatory landscape is challenging for organizations seeking to adopt AI systems. Data privacy regulations such as the GDPR impose strict rules on how companies handle data, and compliance is necessary to avoid legal consequences and maintain consumer trust. These regulations ensure that businesses use AI responsibly and transparently (Chui et al., 2023).
Benefits of Using Generative AI in Corporate America
Generative artificial intelligence offers numerous benefits in shaping corporate American businesses. If constructive measures are implemented to leverage AI across all aspects of American industry, Generative Artificial Intelligence can put the United States ahead in the global race to dominate the AI space.
Examples of the Use of Generative Artificial Intelligence in Corporate America
Generative Artificial Intelligence has enhanced productivity in corporate America by automating routine and repetitive tasks that would otherwise be complex and tedious for workers to complete within a specified period. This helps employees focus on higher-value work. Consider, for example, content creators and data entry employees who must enter data daily on behalf of the company they work for.
Generative artificial intelligence can enter data automatically without human assistance, with a person needed only to verify that the data imported into a system is accurate. For content creators, instead of developing the material themselves, they can have the system generate content based on their interests, in contrast to the old approach of setting up a video camera and recording oneself creating content (Ooi et al., 2023).
Generative AI has also improved cost efficiency, another key benefit in corporate America. It reduces costs by automating workflows and reducing the need for human effort in tasks like customer service. Chatbots replace the traditional brick-and-mortar approach to the customer service experience, creating an avenue where automated agents interact with customers in real time to answer questions on behalf of customer service personnel. They can sometimes address complex questions that customer service personnel cannot adequately answer, and on the social side, they can defuse arguments or confrontations when customers face complex issues that an organization's customer service department handles poorly (Cook et al., 2024).
When businesses adopt Generative Artificial Intelligence, it helps them deliver personalized marketing messages through product recommendations and customer interactions based on customers' preferences and behaviors. With this personalization, AI-powered chatbots help businesses handle customer inquiries by providing 24/7 customer service that is not limited to a particular geographical location (Ooi et al., 2023).
Another critical benefit of Generative AI in corporate America is its ability to detect and prevent fraud before it causes devastating problems for organizational systems, by identifying patterns of recurring fraudulent activity. It also aids regulatory compliance by analyzing large legal datasets and documentation and flagging potential issues, ensuring organizations stay compliant with the relevant laws and regulations in their jurisdictions (Ooi et al., 2023).
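Pattern-based fraud flagging of the kind described above can be sketched, in highly simplified form, as statistical anomaly detection: transactions whose amounts deviate far from an account's history get flagged for review. The account data and threshold below are invented for illustration; real fraud systems learn far richer behavioral patterns than a single z-score.

```python
import statistics

# Historical transaction amounts for one account (toy data).
history = [42.0, 55.5, 38.2, 61.0, 47.3, 52.8, 44.1, 58.9, 49.5, 53.2]

def flag_anomalies(past, incoming, threshold=3.0):
    """Flag transactions whose z-score against past amounts exceeds threshold."""
    mean = statistics.mean(past)
    stdev = statistics.stdev(past)
    return [amt for amt in incoming if abs(amt - mean) / stdev > threshold]

incoming = [51.0, 49.9, 975.0, 46.5]
suspicious = flag_anomalies(history, incoming)
print("flagged:", suspicious)   # the 975.0 transfer stands far outside history
```

The same shape (model the normal, flag the outlier) underlies the compliance use case as well, where the "history" is a corpus of approved documents and the "outlier" is a clause that deviates from it.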
Risks Associated With the Use of Generative Artificial Intelligence in Corporate America
The introduction and evolution of generative artificial intelligence present a complex array of risks for corporate America. Data privacy and security concerns are paramount, as AI systems require vast amounts of potentially sensitive information, increasing vulnerability to breaches and cyberattacks. Intellectual property issues are also rising, with AI-generated content potentially infringing on copyrights, particularly in creative industries. There are also ethical considerations around the use of generative AI; for instance, AI systems may perpetuate biases and contribute to spreading misinformation that could damage a company's reputation and erode trust. Over-reliance on AI systems also poses operational risks that can lead to flawed decision-making and financial losses. The adoption of AI also raises concerns about workforce disruption and job displacement, which necessitates growing investment in reskilling initiatives.
Financial risks involve high implementation costs and uncertain returns on investment. Regulatory compliance presents many challenges as companies navigate evolving AI legislation and unclear liability frameworks. The quality and reliability of AI outputs pose additional risks, including errors, inconsistencies, and the phenomenon of AI hallucination. Strategic risks include the potential erosion of competitive advantage and the overestimation of AI capabilities. Maintaining customer trust while leveraging AI for personalized services requires careful balance and transparency (Lobschat et al., 2021).
To effectively manage these risks, companies must develop comprehensive strategies, including robust risk management frameworks, ethical AI practices, and transparent communication about AI use (Zednik, 2021). This approach will enable organizations to harness the benefits of generative AI while mitigating its inherent risks, thereby ensuring responsible and sustainable adoption in corporate environments.
Cyber Security Issues Associated with Generative Artificial Intelligence
Generative artificial intelligence (AI) has introduced a range of complex cybersecurity challenges for corporate America, arising from these systems' unique characteristics, reliance on vast amounts of sensitive data, and interactions with existing corporate infrastructure (Liao et al., 2023). Data privacy and theft are primary concerns, as AI systems often require large datasets containing sensitive information, making them prime targets for cybercriminals (Truong et al., 2021). Data poisoning attacks pose another significant threat, where adversaries can manipulate training data to produce biased or malicious outputs to compromise decision-making processes and generate harmful content (Li et al., 2022).
Adversarial attacks on AI models are also a growing concern because cyber criminals can manipulate inputs to trick systems into making incorrect or harmful decisions (Xu et al., 2022). Creating highly realistic synthetic content, including deep fakes, could facilitate risks of impersonation and fraud (Lyu, 2023). Also, model inversion and data leakage can expose sensitive information used to train AI models (Boenisch et al., 2023). The AI supply chain introduces vulnerabilities through open-source software, third-party components, and cloud services, each representing a potential cyberattack entry point (Suresh et al., 2023). Cybercriminals may also leverage generative AI to enhance their attack methods by automating the creation of sophisticated malware and phishing emails (Liao et al., 2023).
The lack of explainability in many AI systems, often called the "black box" problem, creates challenges for corporate cybersecurity teams in understanding how models interact with sensitive data or detect abnormal behavior (Truong et al., 2021). As AI systems grow in complexity and scale, securing them becomes increasingly challenging due to the expanded attack surface and difficulty maintaining consistent security standards (Li et al., 2022). Companies must implement robust cybersecurity frameworks, secure AI supply chains, develop tools to detect AI-generated threats and ensure transparency and accountability in AI decision-making (Xu et al., 2022). Regular audits, employee training, and investment in explainable AI techniques are crucial as generative AI continues to evolve and integrate deeper into corporate operations (Lyu, 2023).
Misuse of Generative AI
The advent of Generative AI has made giant strides in efficiently streamlining everyday tasks, automating processes for optimum results, and driving economic growth. However, the dark side of generative AI, its abuse and exploitation for harmful and deceitful activities, needs urgent attention from lawmakers and regulators, who must establish regulations and industry standards governing its usage. Left unchecked, it will wreak havoc on the technological landscape.
The first instance of Generative AI misuse is deepfake technology: AI-generated videos in which the people depicted carry out actions that never happened in reality. This weaponizes AI by deceiving people, tarnishing the image of the individuals involved, and manipulating the public emotionally and intellectually to earn their trust. In 2022, a deepfake video circulated on Twitter portrayed President Vladimir Putin telling his fellow citizens to lay down their arms for peace. Likewise, a manipulated video popped up on YouTube and Twitter showing President Zelensky telling all Ukrainians to end the war with the Russians (Bakir & McStay, 2022). This exposed him to much public contempt until the video was proven to be AI-generated. In severe cases, such videos can lead to civil unrest and danger stemming from the people's anger and resentment toward their political leaders, further showing the reputational damage deepfake victims suffer when false information circulates widely. In other instances, fraudsters use AI to engineer voices and employ them in fraudulent activities; people lacking security training and awareness fall victim to their schemes. According to a Deloitte Center for Financial Services report, Generative AI fraud losses are estimated to reach US$40 billion by 2027 (Deloitte, 2024).
The second instance is using Generative AI to complete academic work without properly citing sources. In some situations, students use AI tools to produce their projects and assignments and submit them as original work. AI is still considered to be at an early, experimental stage, so the data it generates is sometimes inaccurate and unreliable, and using it as a sole study guide without supporting materials from accredited sources can give students wrong results. Submitting such results degrades the academic integrity of the institution as well as the quality of the graduates it produces. Overreliance on AI can also dull students' critical thinking and analytical abilities, which are pertinent to their transition into the workforce (Farrelly & Baker, 2023).
The third instance of wrongful application of Generative AI is the emergence of dark AI tools such as WormGPT, which operates like ChatGPT but is trained on hacking-related datasets and programmed for malicious intent, and PoisonGPT, a large language model that produces biased and harmful information through bots for illegal purposes (Capraro et al., 2024). Such bots are one of the main channels through which viruses, malware, DDoS attacks, and other remote-execution attacks are released to compromise networks and system applications. Another tool, FraudGPT, enables hackers and cybercriminals to craft deceptive content for delivery to their intended targets, using specific prompts to trigger a response from it (Charfeddine et al., 2024). All the tools mentioned above lack any built-in security framework and are publicly accessible at no charge.
The Need for Regulations on Artificial Intelligence
Regulating Generative AI in corporate America is important for several reasons. Regulation helps the technology address ethical, legal, and societal concerns; without it, Generative AI in corporations can lead to many devastating problems, including harm to individuals within organizations, to businesses, and to society at large. This part highlights some primary reasons why regulation is essential in corporate America.
First, some form of regulation must be implemented for Generative AI use in corporate America. One reason is that regulation helps ensure accountability and transparency, especially with deep learning models, whose decision-making processes are difficult to interpret. Setting regulations makes these systems more transparent and enables stakeholders to understand and trust AI technologies (Cheong et al., 2024).
Second, when stringent rules govern liability for any damage Generative AI causes, they can help establish clear lines of accountability. Deepfakes and misinformation are also a significant problem with Generative AI; for this reason, regulations must be put in place to prevent the spread of misinformation, which can damage reputations and manipulate public opinion. When implemented, regulations can help mitigate the risk of AI being used for malicious endeavors, especially in the corporate setting (Hacker et al., 2023).
Conclusion
Successful innovation within the business climate using Generative AI depends on balancing transparency, ownership, and accountability. AI-enhanced systems can perform tasks efficiently, increasing productivity and contributing to their appeal in corporate America. However, there is much to be said about the margin of error, the risks to data security and privacy, and the intellectual property challenges associated with its use. American corporations must strive to create this balance by investing in their workforce's technical growth and literacy in AI-related matters (Singh Sengar et al., 2024).
This ensures an understanding of the potential risks and mindful interaction between employees and generative AI systems. The result of these mindful interactions is compliance with existing data privacy and security regulations and policies. Management must understand the need to establish and enforce an organization-wide AI interaction policy to govern how data and other sensitive information is shared. In addition, it is necessary to invest in the technical skills and infrastructure within the organization, while encouraging collaboration between technical and non-technical teams to further strengthen user knowledge, understanding, and acceptance.
It is important to emphasize ethical considerations within organizations to continually ensure regulatory compliance and the authenticity of work produced using generative AI systems. Established ethical guidelines and regular audits help ensure the integrity of AI decision support systems while safeguarding intellectual property and authenticity. Workforce training with an ethical focus can help remove biases that influence the output of AI models (Singh Sengar et al., 2024). This reinforces the role of generative AI in data-driven decision-making, resulting in more accurate and well-informed decision support. These well-informed decisions have a positive effect on product and service offerings, ultimately creating avenues for improved customer service and support.
Generative AI is revolutionizing corporate America by enhancing business processes, fostering innovation, and promoting organizational success. However, its implementation is influenced by several factors, such as the strength of a company's technological infrastructure, regulatory compliance, organizational culture, and the level of the workforce. Companies must address the challenges related to AI security, ethics, and compliance to ensure the proper and effective use of this technology. As businesses continue to explore AI's potential, it is clear that generative AI will remain a key factor in the future of corporate innovation.
Recommendations
Generative AI will continue to evolve and improve in corporate America, increasing efficiency and productivity in business operations while contributing to the creative process of developing and selecting new product and service offerings (Kanbach et al., 2023). Corporations must prepare for this constant evolution and remain agile amid continually changing trends. This invites further study in this field, assessing alternative ways in which collaboration between human experts and this technology can be enhanced.
An important aspect of embracing any new technology is understanding how it can be adapted to multiple scenarios. There is an opportunity to explore how generative AI can be used in offensive protection strategies, not just defensive ones (Kanbach et al., 2023). As better models are built to solve more complex problems, it becomes possible to apply this technology to developing offensive mechanisms, with a proactive view toward infrastructure protection.
However, the major concern for the future of this technology is maintaining fairness and accountability while making strides to position the organization competitively. In light of this, further study must examine collaboration with experts across various knowledge areas to ensure ethical compliance and the elimination of bias in building AI models. The importance of setting these ethical limits stretches beyond the current climate to a future where human input in these matters may not be received or supported.
References
- Alarie, B., Niblett, A., & Yoon, A. H. (2022). How artificial intelligence will affect the practice of law. University of Toronto Law Journal, 72(2), 219-241. [CrossRef]
- Bakir, V., & McStay, A. (2022). The nature and circulation of false information. Optimizing Emotions, Incubating Falsehoods, 71–102. [CrossRef]
- Balis, J. (2023). AI Is Transforming Marketing: Here's How to Use It Effectively. Harvard Business Review Digital Articles, 2-6.
- Boenisch, F., Dziedzic, A., Schuster, R., Shamsabadi, A. S., Shumailov, I., & Papernot, N. (2023). When the curious abandon honesty: Federated learning is not private. Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, 2023-2040.
- Capraro, V., Lentsch, A., Acemoglu, D., Akgun, S., Akhmedova, A., Bilancini, E., Bonnefon, J.-F., Brañas-Garza, P., Butera, L., Douglas, K. M., Everett, J., Gigerenzer, G., Greenhow, C., Hashimoto, D., Holt-Lunstad, J., Jetten, J., Johnson, S., Kunz, W. H., Longoni, C., … Viale, R. (2024, January 18). The impact of generative artificial intelligence on Socioeconomic Inequalities and policy making. SSRN. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4666103.
- Charfeddine, M., Kammoun, H. M., Hamdaoui, B., & Guizani, M. (2024). CHATGPT’s security risks and benefits: Offensive and defensive use-cases, mitigation measures, and future implications. IEEE Access, 12, 30263–30310. [CrossRef]
- Cheong, I., Caliskan, A., & Kohno, T. (2024). Safeguarding human values: rethinking US law for generative AI’s societal impacts. AI and Ethics, 1-27. [CrossRef]
- Cook, S., Hagiu, A., & Wright, J. (2024). Turn generative AI from an existential threat into a competitive advantage. Harvard Business Review, 102(1), 118-125.
- Davenport, T. H. (2022). The AI-Powered Organization. Harvard Business Review, 100(4), 108-117.
- Davenport, T. H., & Ronanki, R. (2023). Generative AI: The Next Productivity Frontier. MIT Sloan Management Review, 64(3), 1-5.
- Deloitte. (2024, June 7). Generative AI is expected to magnify the risk of deepfakes and other fraud in banking. Deloitte Insights. Available online: https://www2.deloitte.com/us/en/insights/industry/financial-services/financial-services-industry-predictions/2024/deepfake-banking-fraud-risk-on-the-rise.html.
- Fuentes-Peñailillo, F., Gutter, K., Vega, R., & Silva, G. C. (2024). Transformative technologies in digital agriculture: Leveraging internet of things, remote sensing, and Artificial Intelligence for smart crop management. Journal of Sensor and Actuator Networks, 13(4), 39. [CrossRef]
- Hacker, P., Engel, A., & Mauer, M. (2023, June). Regulating ChatGPT and other large generative AI models. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (pp. 1112-1123).
- Huang, M. H., & Rust, R. T. (2021). A strategic framework for artificial intelligence in marketing. Journal of the Academy of Marketing Science, 49(1), 30-50. [CrossRef]
- Iansiti, M., & Lakhani, K. R. (2023). Competing in the Age of AI: Strategy and Leadership When Algorithms and Networks Run the World. Harvard Business Review Press.
- Kanbach, D. K., Heiduk, L., Blueher, G., Schreiter, M., & Lahmann, A. (2023). The GENAI is out of the bottle: Generative Artificial Intelligence from a Business Model Innovation Perspective. Review of Managerial Science, 18(4), 1189–1220. [CrossRef]
- Kotei, E., & Thirunavukarasu, R. (2023). A systematic review of transformer-based pre-trained language models through self-supervised learning. Information, 14(3), 187. [CrossRef]
- Li, Y., Jiang, Y., Shen, T., Zhang, W., Xia, X., & Zhao, S. (2022). A survey on deep learning backdoor attacks and defenses for image classification. Neurocomputing, 505, 97-113.
- Liao, X., Wang, X., Cui, W., Wang, X., & Hu, H. (2023). Generative AI meets security and privacy: Opportunities and challenges. IEEE Security & Privacy, 21(4), 51-59.
- Lobschat, L., Mueller, B., Eggers, F., Brandimarte, L., Diefenbach, S., Kroschke, M., & Wirtz, J. (2021). Corporate digital responsibility. Journal of Business Research, 122, 875-888.
- Lyu, S. (2023). DeepFake detection: Current challenges and next steps. IEEE MultiMedia, 30(1), 62-68.
- McKinsey & Company. (2023). The State of AI in 2023: Generative AI's Breakout Year. Available online: https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year.
- Nyame, L., & Bengesi, S. (2024). Generative Artificial Intelligence Trend on Video Generation. Preprints. [CrossRef]
- Ooi, K. B., Tan, G. W. H., Al-Emran, M., Al-Sharafi, M. A., Capatina, A., Chakraborty, A., … Wong, L. W. (2023). The Potential of Generative Artificial Intelligence Across Disciplines: Perspectives and Future Directions. Journal of Computer Information Systems, 1–32. [CrossRef]
- Ransbotham, S., Khodabandeh, S., Fehling, R., LaFountain, B., & Kiron, D. (2022). Expanding AI's Impact With Organizational Learning. MIT Sloan Management Review, 63(4), 1-5.
- Suresh, A. T., Subramanya, A., Gamage, N., & Rohrbach, A. (2023). Shortcut learning in large language models in open-ended generation: Detection and mitigation. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 14151-14171.
- Tambe, P., Cappelli, P., & Yakubovich, V. (2023). Artificial Intelligence in Human Resources Management: Challenges and a Path Forward. California Management Review, 61(4), 15-42. [CrossRef]
- Trevisan de Souza, V. L., Marques, B. A., Batagelo, H. C., & Gois, J. P. (2023). A review on generative adversarial networks for image generation. Computers & Graphics, 114, 13–25. [CrossRef]
- Truong, L., Jones, C., Hutchison, B., August, A., Praggastis, B., Jasper, R., Nichols, N., & Tuor, A. (2021). Systematic review of cybersecurity and privacy issues in artificial intelligence. Journal of Information Security and Applications, 61, 102920.
- Verganti, R., Vendraminelli, L., & Iansiti, M. (2023). Design in the Age of Artificial Intelligence. Harvard Business School Working Paper, 23-029.
- Wei, X., Bai, Y., Zheng, Y., Shi, D., & Gong, Y. (2023a). Autoregressive Visual Tracking. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). [CrossRef]
- World Economic Forum. (2023). Jobs of Tomorrow: Large Language Models and Generative AI Impact on the Future of Work. Available online: https://www3.weforum.org/docs/WEF_Jobs_of_Tomorrow_2023.pdf.
- Xu, M., Zhu, Y., Zhao, J., & Lin, J. (2022). A survey on adversarial attacks in natural language processing. ACM Computing Surveys, 55(10), 1-67.
- Zednik, C. (2021). Solving the black box problem: A normative framework for explainable artificial intelligence. Philosophy & Technology, 34(2), 265-288. [CrossRef]
- Zhao, Y., & Linderman, S. W. (2023). Revisiting structured variational autoencoders. Proceedings of the 40th International Conference on Machine Learning. Available online: https://dl.acm.org/doi/10.5555/3618408.3620176.
Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).