Selecting the optimal activation function for training deep neural networks has always been challenging, since the choice significantly affects a network's performance and training speed. To date, ReLU remains the most widely adopted activation function. Many activation functions have since been proposed to improve on ReLU, but none has consistently surpassed it. Swish, however, outperformed ReLU in several challenging tasks, including classification, object detection, and tracking; simply replacing ReLU units with Swish improves performance on many tasks. In this paper, we propose a new activation function, which we name AIF, that surpasses Google Brain's Swish. Experiments indicate that AIF outperforms Swish across a variety of tasks.
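For context, the two baseline activations discussed here have standard, well-known definitions: ReLU(x) = max(0, x) and Swish(x) = x · σ(βx), where σ is the logistic sigmoid. A minimal sketch in Python with NumPy is given below; note that AIF itself is not defined in the abstract, so it is deliberately omitted here.

```python
import numpy as np

def relu(x):
    # ReLU: zero for negative inputs, identity for positive inputs
    return np.maximum(0.0, x)

def swish(x, beta=1.0):
    # Swish (Ramachandran et al., Google Brain): x * sigmoid(beta * x);
    # with beta = 1 this is also known as SiLU
    return x / (1.0 + np.exp(-beta * x))
```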