Preprint Article

Generative Artificial Intelligence Trend on Video Generation

This version is not peer-reviewed.

Submitted: 02 September 2024
Posted: 03 September 2024
Abstract
This study comprehensively reviews Generative AI and its relationship to video generation, emphasizing video cloning. The study is intended for Geekscode LLC’s strategic development; Geekscode LLC is a new company considering creating video cloning software as its main product. Furthermore, the study seeks to enhance the understanding of researchers in the information technology field and provide information technology professionals with insights into video generation using modern AI tools. This study highlights both the transformative potential and the legal, ethical, and technical challenges of AI-driven video creation. The study examines key aspects of video generation, including diffusion models, autoregressive models, input modalities, and Generative Adversarial Networks. It also highlights the significance of interdisciplinary approaches in navigating AI-powered video production and the need to balance innovation with responsible ethical practice.
Keywords: 
Subject: Computer Science and Mathematics  -   Artificial Intelligence and Machine Learning

I. Introduction

Generative Artificial Intelligence (AI) is a field within artificial intelligence and machine learning that is advancing quickly in the 21st century (Bozkurt et al. [1]). Generative AI systems produce novel outputs based on a user's input and on the data software developers used to train them. These outputs can take many forms, including text, numbers, symbols, music, and more [2], and the tools themselves vary widely in capability [3]. The purpose of this study is to provide a comprehensive review of generative AI and its relationship with video generation. This will help inform Geekscode LLC, a new business with AI video cloning as one of its potential products. Furthermore, the study is meant to give researchers in the field of information technology a deeper understanding of video cloning, and to acquaint information technology professionals with new trends in video generation using modern AI tools.

II. Advancement of Artificial Intelligence

The creation and introduction of tools such as ChatGPT gained the attention of the whole world, according to Bengesi et al. [4]. Moreover, it is important to note that there is more to AI than ChatGPT, as this field of computing keeps evolving steadily. The quick advancement of artificial intelligence (AI) technologies has ushered in a new era in digital content creation, especially in video generation. Generative AI, a subset of AI focused on producing new content, has emerged as a pivotal force fostering innovation and efficiency in video generation. It capitalizes on advanced algorithms and deep learning models to create novel, customized, highly realistic, and engaging video content without extensive human intervention [5]. This evolution in video creation technology is poised to transform various industries [6], including entertainment, marketing, and education, by allowing the production of customized, high-quality video content in very little time, which increases productivity.

III. Generative AI in Video Creation

Integrating generative AI into video production workflows may yield remarkable economic and creative benefits by automating recurring and labor-intensive tasks, including video editing and special effects. Generative AI allows its users to pay more attention to the strategic aspects of video making. Furthermore, the scalability of AI video generation can enable content creators to meet the increasing demand for video content across digital platforms, giving both individuals and companies a competitive edge in gaining audience attention. This paper explores the rising trend of generative AI in video creation, assessing its underlying technologies, types, video generation methods, tools, and challenges. Through a thorough review of current methodologies and case studies, this study seeks to illuminate the transformative potential of generative AI in reshaping the future of video generation.

IV. Types of Generative Artificial Intelligence Models

The rapid development of artificial intelligence (AI) has led to the creation of various AI models that produce new content, including text, images, music, and videos. This paper surveys the main types of generative AI and discusses their structures, functions, and applications. Furthermore, the unique features and capabilities of each model are explained.

A. Generative Adversarial Networks (GANs)

Many key developments support the current trend of generative AI in video creation, as the emergence of Generative Adversarial Networks (GANs) has revolutionized how AI learns and replicates complex visual and audio content and their patterns [7]. GANs, which are made up of two neural networks known as the generator and the discriminator, work together to generate videos that effectively imitate real-life scenarios (Bengesi et al. [4]). In addition, the increased use of natural language processing (NLP) and computer vision has allowed AI to understand and produce contextually applicable video content. These technological advancements have made customized content creation possible, with AI tailoring videos to user preferences and engagement standards. The study seeks to deliver a complete picture of the current landscape of generative AI technologies in video generation.
Generative Adversarial Networks are one of the AI model families that have revolutionized generative AI [8]. A GAN is made up of two neural networks known as the generator and the discriminator [9]. The generator produces synthetic data, while the discriminator evaluates the authenticity of the generator's output [10]. A notable feature of the generator is that it improves its results by trying to deceive the discriminator through adversarial training, which leads to highly realistic content [11]. Together, these two components make GANs a powerful tool widely used to generate videos, synthesize images, and create realistic animations. Furthermore, as the field continues to mature, GANs have developed capabilities for creating more complex art, enhancing picture resolution, and performing style transfer [12].
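The adversarial loop described above can be illustrated with a deliberately tiny sketch: a one-dimensional "generator" that learns only a shift parameter, and a logistic-regression "discriminator." This is an assumption-laden toy (NumPy only, hand-derived gradients), not a production GAN, but the alternating update structure is the same.

```python
import numpy as np

# Toy 1-D GAN: real data ~ N(4, 1); the generator shifts standard noise
# by a learned offset theta, and the discriminator is a logistic
# classifier D(x) = sigmoid(w*x + b).
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

theta = 0.0          # generator parameter: g(z) = z + theta
w, b = 0.1, 0.0      # discriminator parameters
lr, batch = 0.05, 64

for _ in range(2000):
    real = rng.normal(4.0, 1.0, batch)
    fake = rng.normal(0.0, 1.0, batch) + theta

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    s_real = sigmoid(w * real + b)
    s_fake = sigmoid(w * fake + b)
    grad_w = np.mean(-(1 - s_real) * real + s_fake * fake)
    grad_b = np.mean(-(1 - s_real) + s_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator step: try to fool the discriminator (push D(fake) to 1).
    s_fake = sigmoid(w * fake + b)
    theta -= lr * np.mean(-(1 - s_fake) * w)

# After adversarial training, the generated mean should sit near the
# real-data mean of 4.
```

The same tug-of-war, scaled up to deep convolutional networks over frames instead of a single scalar, is what drives GAN-based video synthesis.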

B. Variational Autoencoders

Variational Autoencoders (VAEs) build on the autoencoder framework [13], integrating a probabilistic element that allows them to produce new data [14]. VAEs encode input data into a latent space and then decode it back to the original space; introducing randomness during encoding makes it possible to obtain variations in the generated output [15]. It is worth noting that this property enables new image creation, data reconstruction, and smooth interpolation between different data points.
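The "randomness during encoding" is usually implemented with the reparameterization trick: the encoder outputs a mean and log-variance per latent dimension, and the latent sample is z = mu + sigma * eps. A NumPy sketch, with a stand-in random linear "encoder" (hypothetical and untrained, purely to make the shapes concrete):

```python
import numpy as np

rng = np.random.default_rng(0)

latent_dim = 2
x = rng.normal(size=8)                      # one 8-dimensional input

# Stand-in "encoder": random linear maps to a mean and a log-variance.
W_mu = rng.normal(size=(latent_dim, 8)) * 0.1
W_lv = rng.normal(size=(latent_dim, 8)) * 0.1
mu, log_var = W_mu @ x, W_lv @ x

def sample_latent(mu, log_var, rng):
    # Reparameterization trick: the noise eps is external to the
    # network, so gradients can flow through mu and log_var.
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Two samples from the same input differ: this stochastic latent is
# what lets a trained VAE decode *variations* of the output.
z1 = sample_latent(mu, log_var, rng)
z2 = sample_latent(mu, log_var, rng)
```

In a trained VAE the decoder would map each of z1 and z2 back to data space, yielding distinct but plausible reconstructions of the same input.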

C. Diffusion Models in Artificial Intelligence

Diffusion models learn to reverse a gradual noising process, removing noise from data little by little [16]. Generation starts from random noise and applies iterative refinements to produce coherent and credible outputs. Diffusion models have demonstrated outstanding potential in producing superior images and other media, including 3D content and videos [17].
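A minimal sketch of the two directions of this process, assuming NumPy and a standard linear noise schedule. Here the true noise is known, so the reverse step is exact algebra; a real diffusion model instead trains a network to *predict* that noise and then iterates the reverse step from pure noise.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)           # noise schedule
alpha_bar = np.cumprod(1.0 - betas)          # cumulative signal retention

x0 = np.sin(np.linspace(0, 2 * np.pi, 16))   # a clean 1-D "sample"
t = 600                                      # an intermediate timestep
eps = rng.normal(size=x0.shape)

# Forward (noising) process in closed form:
#   x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps
x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

# Reverse step with the (here, known) noise: exact reconstruction.
# A trained model would substitute its noise prediction for eps.
x0_hat = (x_t - np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alpha_bar[t])
```

Video diffusion models apply this same noising/denoising machinery over stacks of frames rather than a single signal.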

D. Transformers

Transformers process a whole data sequence at the same time instead of using a step-by-step procedure [18]. This is possible because transformers possess attention mechanisms that allow them to process the data simultaneously [19], enabling them to capture long-range dependencies successfully. Transformers are very significant, having performed extraordinarily well on advanced natural language processing tasks, including text generation, translation, and summarization. Furthermore, when combined with convolutional networks, transformers are also used for tasks involving image and video creation [20].
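The attention mechanism the paragraph refers to can be sketched as scaled dot-product attention, which relates every position in a sequence to every other position in one shot. This NumPy sketch omits what a real transformer adds around it (learned projections, multiple heads, feed-forward layers):

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_model = 5, 4
Q = rng.normal(size=(seq_len, d_model))
K = rng.normal(size=(seq_len, d_model))
V = rng.normal(size=(seq_len, d_model))

out, weights = attention(Q, K, V)
# Each output row is a weighted mix of *all* value rows, which is how
# long-range dependencies are captured without stepping through time.
```

For video, the "sequence" can be patches across both space and time, so the same operation lets distant frames influence one another directly.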

E. Autoregressive Models in Artificial Intelligence

Autoregressive models produce data sequentially, with each point conditioned on the points that came before it [21]. These models learn the probability distribution of the data and produce new data by sampling from this distribution. Autoregressive models are very important, as they are often applied to text generation and time-series prediction, and they also produce structured data such as music and code [22].
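A minimal illustration of this factorization: each new element is drawn from a distribution conditioned on what precedes it, so the output is built strictly left to right. The three-note "music" model below is hypothetical toy data standing in for the learned conditionals of a real model (stdlib only):

```python
import random

# Toy learned conditionals: P(next note | current note).
transitions = {
    "C": {"E": 0.6, "G": 0.4},
    "E": {"G": 0.7, "C": 0.3},
    "G": {"C": 0.8, "E": 0.2},
}

def generate(start, length, rng):
    seq = [start]
    for _ in range(length - 1):
        probs = transitions[seq[-1]]          # condition on the last token
        note = rng.choices(list(probs), weights=list(probs.values()))[0]
        seq.append(note)                      # append and move on
    return seq

rng = random.Random(0)
melody = generate("C", 8, rng)
```

Real autoregressive generators (e.g., for text or video tokens) replace the lookup table with a neural network, but sampling proceeds the same way: condition, sample, append, repeat.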

V. Video Generation

The concept of video generation, a swiftly developing area within the broader domain of generative artificial intelligence (AI), has earned profound interest in recent years [23]. This paper focuses on the core aspects of this field, including video cloning, video generation inputs, tools for generating videos, drawbacks associated with generative AI, legal complications and limitations, and the safeguards to consider when using this emerging innovation. The advent of artificial intelligence (AI)-driven video creation has significantly changed how we produce, view, and engage with visual material in the quickly changing world of digital media. The distinction between the artificial and the real has become hazier due to these technical developments, ushering in a new age of creative possibilities.
The emergence of generative AI models has facilitated the creation of superior artistic media in many fields, such as visual arts, music, and literature [24]. The significance of generative AI tools is that, by increasing the productivity of media production, they can transform the creative process [25]. In recent times, Artificial Intelligence (AI) has found widespread use in video production and cloning, where it can produce or alter material in previously unthinkable ways. According to Patel et al. [26], the development of deepfake technology has brought both opportunities and difficulties. Although these technological advancements can facilitate inventive uses, including customized video content and improved visual effects, they also give rise to worries about possible abuse, false information, and a decline in public confidence in digital media [26].
As mentioned in Lu & Ebrahimi [27], the extensive accessibility of tools for altering video content also presents serious concerns regarding security and privacy, as well as the possibility of social and political abuse. Navigating this challenging landscape successfully requires a multidisciplinary approach, drawing insights from areas including computer vision, media studies, and ethics. Users can endeavor to create responsible and ethical frameworks for AI-powered video production and cloning by comprehending the technical foundations of these technologies [28], as well as the societal ramifications of these developments.
Achieving a balance between utilizing the revolutionary potential of AI-powered media creation and reducing its associated legal concerns is crucial as we push the boundaries of this field. To protect the integrity of online content, researchers and developers must consider the ethical ramifications of these technologies as they advance and create reliable detection systems [29]. AI-powered video cloning and creation have far-reaching effects outside of the entertainment sector. Personalized, realistic video content can open new learning, training, and communication channels in diverse fields, including healthcare and education.

VI. Video Cloning

Artificial intelligence is utilized in a new technology called AI video cloning, or deepfakes, to manipulate and synthesize video content, which allows for creating realistic-looking footage in which a person's body or face can be swapped with that of another person [30]. Because it supports a wide range of applications, from educational and creative endeavors to more alarming uses such as political deceit, financial fraud, and revenge porn, this technology has drawn much interest recently [34].
The fundamental component of AI video cloning is the use of machine learning algorithms, particularly generative adversarial networks, which can be trained on massive datasets of head positions, facial expressions, and other features [31]. The technique can smoothly merge the facial traits of one person into the video of another by mapping these qualities from a source video to a target video, producing output that looks remarkably genuine and convincing [32]. An important aspect of AI video cloning is its capacity to produce photorealistic fake videos, or "deepfakes," as they are sometimes called [33]. Deepfakes can show people saying or doing things they never did, which raises severe concerns about the possibility of misuse, fraud, and misinformation [34].
AI video clones are typically developed through several stages. First, a sizable dataset of bodily and facial traits is gathered to train the machine learning models, with the data sizes continuing to grow [35]. Subsequently, the model is trained on both the source and target videos, learning to map features such as facial expressions from the source to the target, resulting in a smooth merging of the two [36]. Although AI video cloning has potential advantages in education and art, the technology has also raised severe moral and legal issues [37]. There have been appeals for further regulation and the creation of efficient detection techniques due to the simplicity with which these modified videos can be produced and the possibility that they could be used maliciously [38].
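The stages described above can be outlined as a skeleton. Every function name here is hypothetical, standing in for components (typically GAN-based, as noted earlier) that a real face-swapping pipeline would implement:

```python
# Purely illustrative pipeline skeleton; no real feature extraction,
# training, or rendering happens here.
def collect_dataset(frames):
    # Stage 1: extract facial/body features (landmarks, expressions,
    # head poses) from a large set of frames.
    return [{"frame": f, "landmarks": None} for f in frames]

def train_mapping_model(source_data, target_data):
    # Stage 2: learn to map source features onto the target identity
    # (in practice, adversarial training on both datasets).
    return {"trained_on": (len(source_data), len(target_data))}

def synthesize(model, source_data):
    # Stage 3: render target frames driven by the source's expressions.
    return [f"cloned_frame_{i}" for i, _ in enumerate(source_data)]

source = collect_dataset(["src_0.png", "src_1.png"])
target = collect_dataset(["tgt_0.png"])
model = train_mapping_model(source, target)
frames = synthesize(model, source)
```

The skeleton makes the data flow explicit: features extracted per identity, a learned source-to-target mapping, then frame-by-frame synthesis.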

VII. Inputs to Video Generation

In recent years, there has been significant progress in video generation as academia has been experimenting with several input modalities to drive the production of dynamic visual material. Additionally, the variety of input sources has added to the depth and adaptability of video-generating techniques, enabling text-to-video, 3D model-to-video, data-driven, and multimodal approaches [39].

A. Text to Video Models

Text-to-video models, like Imagen Video, Meta's Make-A-Video, Phenaki, and Runway Gen-2, are leading examples of this evolution [40], [41]. These models demonstrate the power of language-driven video production by using natural language inputs to produce corresponding video outputs [42]. Apart from text, three-dimensional models have also been used as input sources, allowing three-dimensional representations to be translated into dynamic video sequences [43].

B. Text to Text Models

It is interesting to note that distinct video creation methods frequently straddle multiple domains, incorporating techniques from each. For example, text-to-text generation techniques can be used by data-to-text generation systems to express data in various original and imaginative ways [44]. Multimodal techniques incorporating many input modalities, including text, image, and audio, have become increasingly popular in video production; these approaches yield visually striking video outputs. These developments indicate the revolutionary potential of generative AI in the field of video production, opening a wide range of applications from motion capture and digital human movies to video dubbing and beyond [45]. Despite the tremendous advancements in this field, video generation remains challenging, and academics are always looking for new methods and designs to push the envelope.

VIII. Video Editing Software

Adobe Premiere Pro is widely used for professional video editing across various industries, including film, commercials, and YouTube content creation [46]; it is renowned for its sophisticated editing tools, multi-camera editing, color grading, and seamless connections with other Adobe Creative Cloud applications [47]. Final Cut Pro is another tool widely used by professionals in video production [48]. They frequently choose Final Cut Pro because it offers a feature-rich editing experience tailored for Mac users; its features include a magnetic timeline, sophisticated color grading capabilities, support for virtual reality headsets, and potent motion graphics tools [49]. DaVinci Resolve is the go-to option for professionals seeking a robust and adaptable video editing solution, since it has industry-leading color correction capabilities and strong audio post-production and visual effects features [50].

A. Animation and Motion Graphics

Adobe After Effects is a major application with extensive plugin support and a smooth interface with other Adobe products [51]. After Effects is a top compositing and motion graphics application that enables users to create intricate animations, visual effects, and motion graphics [52]. In addition, Blender is an alternative option for both enthusiasts and professionals in the animation business [53]; this open-source 3D animation and visual effects software offers a flexible platform for 3D modeling, animation, simulation, and video editing [54]. Furthermore, Toon Boom Harmony is another animation and motion graphics application [55]. Toon Boom Harmony is a 2D animation toolkit that includes all the functionality needed to rig, composite, and produce high-end animations for TV series, movies, and video games [56].

IX. Tools for Video Generation

In today's digital world, video content is more important than ever because it engages viewers on various devices. As a result, there is a greater need than ever for flexible video creation tools to meet the various demands of both customers and professionals. Another objective of this study is to present a thorough analysis of the most popular tools for creating videos, including video editing software, motion graphics and animation, text-to-video, audio-to-video, and image-to-video programs [57].

A. Text to Video Tools

One of the main text-to-video tools is Lumen [58]. This tool utilizes AI-powered technology to turn text into engaging video material, and it provides a range of media assets, branding choices, and templates to make the video production process more efficient [59]. Another text-to-video tool is Animoto [60]. It is an intuitive platform that makes creating promotional and marketing videos easy with its adjustable templates, drag-and-drop interface, and extensive music collection.

B. Audio to Video Tools

Headliner is an audio-to-video tool known for its automated transcription and audiogram production capabilities [61]. It offers a unique way to turn podcasts into videos, and these features can increase the interaction with and exposure of audio material on social media platforms. Wavve is another technology for audio-to-video transformation [62]. It excels at creating animated video content from audio files [63] and includes capabilities like editable audio waveform animations, text overlays, and templates to improve how audio-based material is presented.

C. Image to Video Tools

Animoto, as mentioned earlier, is also known for creating exciting video slideshows from photo collections, adding text overlays and transitions [64]. Producers, marketers, and companies can now quickly and affordably make high-caliber video content due to the widespread availability of video creation tools. By utilizing these tools' many functionalities, experts can address a broad spectrum of video production requirements, ranging from high-quality editing to artificial intelligence-driven video creation, meeting the increasing need for visually engaging content on multiple digital channels [65].

X. Challenges Associated with AI Video Generation

The video creation industry has experienced a transformational and complicated impact from the ongoing evolution of generative AI [66]. Although these cutting-edge technologies open new possibilities for content creation, they also introduce difficulties that must be overcome to realize their full potential. Quality and realism are among the limitations of generative AI in video creation [67]. The "uncanny valley" effect, in which the output looks almost human but not quite, can frequently arise in generated videos, especially those depicting human features and motions, unsettling viewers (Zhang et al. [16]). Maintaining narrative and visual coherence across an entire video can also be difficult, especially in longer or more complicated scenes [10].
Significant ethical issues, such as identity theft, disinformation, and defamation, are raised by the possible misuse of generative AI to create deepfakes [68]. Furthermore, using artificial intelligence (AI) to create content that unintentionally imitates already published works may result in copyright infringement [69]; therefore, intellectual property rights must be carefully considered. Artificial intelligence (AI) can produce high-quality videos, but doing so frequently calls for powerful computers and sophisticated hardware, which can be costly and restrict accessibility [70]. Production delays may result from the time needed to create videos, particularly complicated or high-resolution ones [71].
The quality and breadth of the training data significantly impact the overall quality of the generated video (Epstein et al. [10]). Privacy risks and potential breaches may arise when personal or proprietary data is used to train AI models, and poor or biased input can result in inferior or biased outputs [72]. While AI can automate specific steps in the video creation process, it may also restrict human producers' creative freedom, resulting in less distinctive or original work [73]. Furthermore, biases already present in the training data may be reinforced and amplified by AI models, producing unfair or biased representations.
According to Epstein et al [10], upcoming content creators and firms may find it more challenging to obtain generative AI models due to the high costs associated with development and upkeep. Barriers to entry, such as exorbitant expenses and intricate technicalities, may also contribute to industry division. To meet these issues as the area of generative AI develops, cooperation between academics, business leaders, and legislators is essential. The video creation sector can more effectively leverage the revolutionary potential of these cutting-edge technologies while reducing its downsides by looking at solutions to these problems.

XI. Conclusion

The quick evolution of generative AI has sparked curiosity in research and across various industries, including manufacturing, entertainment, banking, healthcare, and insurance, owing to its ability to revolutionize data augmentation, content creation, and mechanisms for solving complex problems. Artificial intelligence video cloning is a rapidly advancing technology that has garnered global attention from academics and the public. Because of its capacity to produce incredibly realistic and compelling visual content, it may be used for both good and bad, emphasizing the necessity of continuing research and of strong safety measures to reduce the hazards connected with this technology.
AI will allow more innovation, eliminating many of the boundaries associated with current video generation systems. AI will open new possibilities in video generation by offering tools that give content creators a seamless blending of different input modalities, including text, photos, audio, and signs. AI will continue to make complex video generation easy and quick with little human effort. However, advanced security mechanisms are needed to ensure privacy and security, and establishing robust ethical frameworks to ensure safety is also important.

References

  1. A. Bozkurt, “Generative artificial intelligence (AI) powered conversational educational agents: The inevitable paradigm shift,” Asian Journal of Distance Education, vol. 18, no. 1, Art. no. 1, Mar. 2023, Accessed: Aug. 18, 2024. [Online]. Available: https://www.asianjde.com/ojs/index.php/AsianJDE/article/view/718.
  2. S. Feuerriegel, J. Hartmann, C. Janiesch, and P. Zschech, “Generative AI,” Bus Inf Syst Eng, vol. 66, no. 1, pp. 111–126, Feb. 2024. [CrossRef]
  3. B. L. Moorhouse, M. A. Yeo, and Y. Wan, “Generative AI tools and assessment: Guidelines of the world’s top-ranking universities,” Computers and Education Open, vol. 5, p. 100151, Dec. 2023. [CrossRef]
  4. S. Bengesi, H. El-Sayed, M. K. Sarker, Y. Houkpati, J. Irungu, and T. Oladunni, “Advancements in Generative AI: A Comprehensive Review of GANs, GPT, Autoencoders, Diffusion Model, and Transformers,” IEEE Access, vol. 12, pp. 69812–69837, 2024. [CrossRef]
  5. C. Longoni, A. Fradkin, L. Cian, and G. Pennycook, “News from Generative Artificial Intelligence Is Believed Less,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, in FAccT ’22. New York, NY, USA: Association for Computing Machinery, Jun. 2022, pp. 97–106. [CrossRef]
  6. N. Kshetri, Y. K. Dwivedi, T. H. Davenport, and N. Panteli, “Generative artificial intelligence in marketing: Applications, opportunities, challenges, and research agenda,” International Journal of Information Management, vol. 75, p. 102716, Apr. 2024. [CrossRef]
  7. S. K. J. Rizvi, M. A. Azad, and M. M. Fraz, “Spectrum of Advancements and Developments in Multidisciplinary Domains for Generative Adversarial Networks (GANs),” Arch Computat Methods Eng, vol. 28, no. 7, pp. 4503–4521, Dec. 2021. [CrossRef]
  8. S. Feuerriegel, J. Hartmann, C. Janiesch, and P. Zschech, “Generative AI,” Bus Inf Syst Eng, vol. 66, no. 1, pp. 111–126, Feb. 2024. [CrossRef]
  9. S. Karthika. and M. Durgadevi, “Generative Adversarial Network (GAN): a general review on different variants of GAN and applications,” in 2021 6th International Conference on Communication and Electronics Systems (ICCES), Jul. 2021, pp. 1–8. [CrossRef]
  10. Z. Epstein et al., “Art and the science of generative AI: A deeper dive,” Science, vol. 380, no. 6650, pp. 1110–1111, Jun. 2023. [CrossRef]
  11. L. Jiang, B. Dai, W. Wu, and C. C. Loy, “Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited Data,” in Advances in Neural Information Processing Systems, Curran Associates, Inc., 2021, pp. 21655–21667. Accessed: Aug. 18, 2024. [Online]. Available: https://proceedings.neurips.cc/paper/2021/hash/b534ba68236ba543ae44b22bd110a1d6-Abstract.html.
  12. A. S. Kumar, L. Tesfaye Jule, K. Ramaswamy, S. Sountharrajan, N. Yuuvaraj, and A. H. Gandomi, “Chapter 12 - Analysis of false data detection rate in generative adversarial networks using recurrent neural network,” in Generative Adversarial Networks for Image-to-Image Translation, A. Solanki, A. Nayyar, and M. Naved, Eds., Academic Press, 2021, pp. 289–312. [CrossRef]
  13. Y. Zhao and S. Linderman, “Revisiting Structured Variational Autoencoders,” in Proceedings of the 40th International Conference on Machine Learning, PMLR, Jul. 2023, pp. 42046–42057. Accessed: Aug. 18, 2024. [Online]. Available: https://proceedings.mlr.press/v202/zhao23c.html.
  14. E. Misino, G. Marra, and E. Sansone, “VAEL: Bridging Variational Autoencoders and Probabilistic Logic Programming,” Advances in Neural Information Processing Systems, vol. 35, pp. 4667–4679, Dec. 2022.
  15. Y. Liu et al., “Cloud-VAE: Variational autoencoder with concepts embedded,” Pattern Recognition, vol. 140, p. 109530, Aug. 2023. [CrossRef]
  16. C. Zhang, C. Zhang, M. Zhang, and I. S. Kweon, “Text-to-image Diffusion Models in Generative AI: A Survey,” Apr. 02, 2023, arXiv: arXiv:2303.07909. [CrossRef]
  17. H. Cao et al., “A Survey on Generative Diffusion Models,” IEEE Transactions on Knowledge and Data Engineering, vol. 36, no. 7, pp. 2814–2830, Jul. 2024. [CrossRef]
  18. D. Rothman, Transformers for Natural Language Processing: Build, train, and fine-tune deep neural network architectures for NLP with Python, Hugging Face, and OpenAI’s GPT-3, ChatGPT, and GPT-4. Packt Publishing Ltd, 2022.
  19. D. Soydaner, “Attention mechanism in neural networks: where it comes and where it goes,” Neural Comput & Applic, vol. 34, no. 16, pp. 13371–13385, Aug. 2022. [CrossRef]
  20. S. Khan, M. Naseer, M. Hayat, S. W. Zamir, F. S. Khan, and M. Shah, “Transformers in Vision: A Survey,” ACM Comput. Surv., vol. 54, no. 10s, p. 200:1-200:41, Sep. 2022. [CrossRef]
  21. D. Djeudeu, S. Moebus, and K. Ickstadt, “Multilevel Conditional Autoregressive models for longitudinal and spatially referenced epidemiological data,” Spatial and Spatio-temporal Epidemiology, vol. 41, p. 100477, Jun. 2022. [CrossRef]
  22. A. Zeng, M. Chen, L. Zhang, and Q. Xu, “Are Transformers Effective for Time Series Forecasting?,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, no. 9, Art. no. 9, Jun. 2023. [CrossRef]
  23. S. Yazdani, N. Saxena, Z. Wang, Y. Wu, and W. Zhang, A Comprehensive Survey of Image and Video Generative AI: Recent Advances, Variants, and Applications. 2024. [CrossRef]
  24. D. Grba, “Deep Else: A Critical Framework for AI Art,” Digital, vol. 2, no. 1, Art. no. 1, Mar. 2022. [CrossRef]
  25. D. Leiker, A. R. Gyllen, I. Eldesouky, and M. Cukurova, “Generative AI for learning: Investigating the potential of synthetic learning videos,” May 03, 2023, arXiv: arXiv:2304.03784. [CrossRef]
  26. M. Patel, A. Gupta, S. Tanwar, and M. S. Obaidat, “Trans-DF: A Transfer Learning-based end-to-end Deepfake Detector,” in 2020 IEEE 5th International Conference on Computing Communication and Automation (ICCCA), Oct. 2020, pp. 796–801. [CrossRef]
  27. Y. Lu and T. Ebrahimi, “Impact of Video Processing Operations in Deepfake Detection,” Mar. 30, 2023, arXiv: arXiv:2303.17247. [CrossRef]
  28. T.-N. Le, H. H. Nguyen, J. Yamagishi, and I. Echizen, “Robust Deepfake On Unrestricted Media: Generation And Detection,” Feb. 13, 2022, arXiv: arXiv:2202.06228. [CrossRef]
  29. M. J. Israel and A. Amer, “Rethinking data infrastructure and its ethical implications in the face of automated digital content generation,” AI Ethics, vol. 3, no. 2, pp. 427–439, May 2023. [CrossRef]
  30. S. Lyu, “DeepFake Detection: Current Challenges and Next Steps,” Mar. 11, 2020, arXiv: arXiv:2003.09234. [CrossRef]
  31. T.-N. Le, H. H. Nguyen, J. Yamagishi, and I. Echizen, “Robust Deepfake On Unrestricted Media: Generation And Detection,” Feb. 13, 2022, arXiv: arXiv:2202.06228. [CrossRef]
  32. J. Akers et al., “Technology-Enabled Disinformation: Summary, Lessons, and Recommendations,” Jan. 03, 2019, arXiv: arXiv:1812.09383. [CrossRef]
  33. J. Akers et al., “Technology-Enabled Disinformation: Summary, Lessons, and Recommendations,” Jan. 03, 2019, arXiv: arXiv:1812.09383. [CrossRef]
  34. T. Kirchengast, “Deepfakes and image manipulation: criminalisation and control,” Information & Communications Technology Law, vol. 29, no. 3, pp. 308–323, Sep. 2020. [CrossRef]
  35. A. K. Tiwari, A. Sharma, P. Rayakar, M. K. Bhavriya, and Nisha, “AI-Generated Video Forgery Detection and Authentication,” in 2024 IEEE 9th International Conference for Convergence in Technology (I2CT), Apr. 2024, pp. 1–8. [CrossRef]
  36. M. Masood, M. Nawaz, K. M. Malik, A. Javed, A. Irtaza, and H. Malik, “Deepfakes generation and detection: state-of-the-art, open challenges, countermeasures, and way forward,” Appl Intell, vol. 53, no. 4, pp. 3974–4026, Feb. 2023. [CrossRef]
  37. A. Swenson, “Teaching digital identity: opportunities, challenges, and ethical considerations for avatar creation in educational settings,” Brazilian Creative Industries Journal, vol. 3, no. 2, Art. no. 2, Dec. 2023.
  38. T.-N. Le, H. H. Nguyen, J. Yamagishi, and I. Echizen, “Robust Deepfake On Unrestricted Media: Generation And Detection,” Feb. 13, 2022, arXiv: arXiv:2202.06228. [CrossRef]
  39. R. Gozalo-Brizuela and E. C. Garrido-Merchán, “A survey of Generative AI Applications,” Jun. 14, 2023, arXiv: arXiv:2306.02781. [CrossRef]
  40. U. Singer et al., “Make-A-Video: Text-to-Video Generation without Text-Video Data,” Sep. 29, 2022, arXiv: arXiv:2209.14792. [CrossRef]
  41. J. Wang et al., “AesopAgent: Agent-driven Evolutionary System on Story-to-Video Production,” Mar. 11, 2024, arXiv: arXiv:2403.07952. [CrossRef]
  42. R. Gozalo-Brizuela and E. C. Garrido-Merchán, “A survey of Generative AI Applications,” Jun. 14, 2023, arXiv: arXiv:2306.02781. [CrossRef]
  43. R. Bhagwatkar, S. Bachu, K. Fitter, A. Kulkarni, and S. Chiddarwar, “A Review of Video Generation Approaches,” in 2020 International Conference on Power, Instrumentation, Control and Computing (PICC), Dec. 2020, pp. 1–5. [CrossRef]
  44. M. Kale and A. Rastogi, “Text-to-Text Pre-Training for Data-to-Text Tasks,” arXiv.org. Accessed: Aug. 19, 2024. [Online]. Available: https://arxiv.org/abs/2005.10433v3.
  45. R. Gozalo-Brizuela and E. C. Garrido-Merchán, “A survey of Generative AI Applications,” Jun. 14, 2023, arXiv: arXiv:2306.02781. [CrossRef]
  46. J. Yang, “Assessment of the strength and weakness of production design platforms in arts and entertainment management,” May 2023, Accessed: Aug. 20, 2024. [Online]. Available: https://hdl.handle.net/2346/96017.
  47. O. Karras and K. Schneider, “Software Professionals are Not Directors: What Constitutes a Good Video?,” Aug. 15, 2018, arXiv: arXiv:1808.04986. [CrossRef]
  48. M. G. Jones and L. Harris, “Audio and Video Production for Instructional Design Professionals,” 2021.
  49. J. Yang, “Assessment of the strength and weakness of production design platforms in arts and entertainment management,” May 2023, Accessed: Aug. 20, 2024. [Online]. Available: https://hdl.handle.net/2346/96017.
  50. D. Wei, “Construction of a Digital Color Grading Laboratory Based on DaVinci Resolve,” FSST, vol. 5, no. 14, 2023. [CrossRef]
  51. L. Fridsma and B. Gyncild, Adobe After Effects Classroom in a Book (2021 release). Adobe Press, 2020.
  52. D. A. Hussain, “The Effective Motion Graphics Production,” vol. 7, no. 8, 2022.
  53. V. Maselli and L. D. Cecca, “Collaborative production model and the animation industry: The role of the Blender community in the making of the Italian short film Arturo e il gabbiano,” Animation Practice, Process & Production, vol. 11, no. 1, pp. 7–29, Jun. 2022. [CrossRef]
  54. O. Villar, Learning Blender. Addison-Wesley Professional, 2021.
  55. B. Hasirci and D. Hasirci, “Fostering Creativity in Education with the Design of Animated Shows for Children,” EDULEARN22 Proceedings, pp. 4970–4974, 2022. [CrossRef]
  56. J. Chambless, “2D Animation of the 21st Century: The Digital Age,” Electronic Theses and Dissertations, 2020-2023, Jan. 2022, [Online]. Available: https://stars.library.ucf.edu/etd2020/986.
  57. E. Navarrete, A. Nehring, S. Schanze, R. Ewerth, and A. Hoppe, “A Closer Look into Recent Video-based Learning Research: A Comprehensive Review of Video Characteristics, Tools, Technologies, and Learning Effectiveness,” Aug. 11, 2023, arXiv: arXiv:2301.13617. [CrossRef]
  58. T. W. Tan, “Mastering Lumen Global Illumination in Unreal Engine 5,” in Game Development with Unreal Engine 5 Volume 1: Design Phase, T. W. Tan, Ed., Berkeley, CA: Apress, 2024, pp. 223–275. [CrossRef]
  59. T. Volarić, Z. Tomić, and H. Ljubić, “Artificial Intelligence Tools for Public Relations Practitioners: An Overview,” in 2024 IEEE 28th International Conference on Intelligent Engineering Systems (INES), Jul. 2024, pp. 000031–000036. [CrossRef]
  60. A. Sufian, “AI-Generated Videos and Deepfakes: A Technical Primer,” Aug. 12, 2024. [CrossRef]
  61. “The Headliner Blog,” The Headliner Blog. Accessed: Aug. 26, 2024. [Online]. Available: https://www.headliner.app/blog/.
  62. J. H. Park, “The Growth of OTT Platforms’ Investments in Korean Content and Opportunities for Global Expansion,” Dec. 31, 2023, Rochester, NY: 4677552. [CrossRef]
  63. “Wavve Blog,” Wavve. Accessed: Aug. 26, 2024. [Online]. Available: https://wavve.co/blog/.
  64. Y. Wu, X. Shen, T. Mei, X. Tian, N. Yu, and Y. Rui, “Monet: A System for Reliving Your Memories by Theme-Based Photo Storytelling,” IEEE Transactions on Multimedia, vol. 18, no. 11, pp. 2206–2216, Nov. 2016. [CrossRef]
  65. W. Kung, “Using the PESTEL Analysis to Determine the Effectiveness of New Digital Media Strategies,” Advances in Economics, Management and Political Sciences, vol. 5, pp. 19–25, Apr. 2023. [CrossRef]
  66. J. Amankwah-Amoah, S. Abdalla, E. Mogaji, A. Elbanna, and Y. K. Dwivedi, “The impending disruption of creative industries by generative AI: Opportunities, challenges, and research agenda,” International Journal of Information Management, vol. 79, p. 102759, Dec. 2024. [CrossRef]
  67. A. Bandi, P. V. S. R. Adapa, and Y. E. V. P. K. Kuchi, “The Power of Generative AI: A Review of Requirements, Models, Input–Output Formats, Evaluation Metrics, and Challenges,” Future Internet, vol. 15, no. 8, Art. no. 8, Aug. 2023. [CrossRef]
  68. T. C. Helmus, “Artificial Intelligence, Deepfakes, and Disinformation: A Primer,” RAND Corporation, 2022. Accessed: Aug. 19, 2024. [Online]. Available: https://www.jstor.org/stable/resrep42027.
  69. R. Abbott and E. Rothman, “Disrupting Creativity: Copyright Law in the Age of Generative Artificial Intelligence,” Fla. L. Rev., vol. 75, p. 1141, 2023.
  70. J. K. P. Seng, K. L. Ang, E. Peter, and A. Mmonyi, “Artificial Intelligence (AI) and Machine Learning for Multimedia and Edge Information Processing,” Electronics, vol. 11, no. 14, Art. no. 14, Jan. 2022. [CrossRef]
  71. M. Zink, R. Sitaraman, and K. Nahrstedt, “Scalable 360 Video Stream Delivery: Challenges, Solutions, and Opportunities,” Proceedings of the IEEE, vol. 107, no. 4, pp. 639–650, Apr. 2019. [CrossRef]
  72. R. Nishant, D. Schneckenberg, and M. Ravishankar, “The formal rationality of artificial intelligence-based algorithms and the problem of bias,” Journal of Information Technology, vol. 39, no. 1, pp. 19–40, Mar. 2024. [CrossRef]
  73. F. Magni, J. Park, and M. M. Chao, “Humans as Creativity Gatekeepers: Are We Biased Against AI Creativity?,” J Bus Psychol, vol. 39, no. 3, pp. 643–656, Jun. 2024. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.