Artificial Intelligence (AI) tools and techniques are comparatively recent advances in today's globalized world. Owing to their easy availability and user-friendly interfaces, AI has touched every sphere of human life, and these tools are becoming increasingly popular because of the numerous benefits they bring to society. However, these technologies have not only positive sides but also many negative ones. Alongside the development and adoption of AI, cyber-attacks have increased and are disrupting information systems, as threat actors may misuse artificial intelligence. Hence, tool designers are working to apply the capabilities of generative AI to tip the scales in cybersecurity. In the age of AI, there is thus a constant tension between cyber attackers and defenders over maintaining resilient information systems. As a general rule, defenders must stay well ahead of adversaries. The United Nations and several governments, for example those of the US and UK, along with major AI companies such as Microsoft and OpenAI, are working to counter emerging cyber-attacks. This research paper explores the concept of information resilience in ML and AI systems, focusing on strategies to enhance their ability to withstand uncertainties, adversarial attacks, and data perturbations.