Preprint Review Version 1 This version is not peer-reviewed

Assessing the Societal Risks of AI’s Rapid Advancement: Innovation or Threat to Safety?

Version 1 : Received: 16 September 2024 / Approved: 16 September 2024 / Online: 17 September 2024 (02:55:57 CEST)

How to cite: Zaib, O.; Ali, A. R. Assessing the Societal Risks of AI’s Rapid Advancement: Innovation or Threat to Safety?. Preprints 2024, 2024091217. https://doi.org/10.20944/preprints202409.1217.v1

Abstract

The rapid acceleration of artificial intelligence (AI) has sparked extensive debate about its impact on the safety and security of society. AI has shown transformative potential, particularly in areas such as healthcare and autonomous systems, where it enhances productivity, accuracy, and protection while minimizing human error. However, the misuse of AI poses real risks, including deep fakes, cyber-attacks, and the manipulation of social behavior through disinformation. These risks range from privacy and cybersecurity breaches to moral dilemmas around liability. Unchecked development of AI could threaten the social order, so balancing innovation with governance is essential to ensure that AI-related threats do not overshadow its benefits. This article offers a detailed assessment and review of the literature on the potential dangers that AI poses to society. The cumulative findings underscore the urgent need for comprehensive regulatory frameworks to mitigate these threats.

Keywords

Artificial Intelligence (AI); Deep Fakes; Autonomous Vehicles (AV); Societal Safety; Cyber Security; Regulatory Frameworks

Subject

Computer Science and Mathematics, Computer Science


