Image clustering and classification are fundamental tasks in computer vision, underpinning applications such as image retrieval, object recognition, and image classification. However, the robustness of clustering algorithms against adversarial attacks remains an open question. In this paper, we investigate how adversarial attacks on image classification models affect image clustering, using similarity computed with the dot product, the KNN and HNSW algorithms, and Gradient-Weighted Class Activation Mapping (Grad-CAM). We propose a targeted study of the impact of adversarial attacks on the clustering ability of ResNet50 under various adversarial scenarios. ResNet50, a widely used architecture known for its effectiveness in image classification, serves as the basis for our experiments. The network was subjected to a range of adversarial attacks in order to understand how these perturbations affect its clustering capabilities. By thoroughly examining the resulting clustering outcomes under different attack scenarios, we aim to uncover the vulnerabilities and nuances inherent in clustering algorithms and similarity metrics when confronted with adversarial input.
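To make the similarity setup concrete, the following is a minimal sketch (not the paper's implementation) of the dot-product/KNN step: given feature embeddings such as ResNet50 penultimate-layer outputs, compute pairwise dot-product similarities and retrieve each image's k nearest neighbors. The function name `knn_by_dot_product` and the random feature matrix are illustrative assumptions.

```python
import numpy as np

def knn_by_dot_product(embeddings, k):
    # embeddings: (n, d) array of image feature vectors, e.g. ResNet50
    # penultimate-layer activations (random data is used here purely
    # for illustration).
    sims = embeddings @ embeddings.T        # pairwise dot-product similarity
    np.fill_diagonal(sims, -np.inf)         # exclude self-matches
    # indices of the k most similar images for each row, best first
    return np.argsort(-sims, axis=1)[:, :k]

rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 8))             # 6 hypothetical image embeddings
neighbors = knn_by_dot_product(feats, k=2)
print(neighbors.shape)                      # (6, 2)
```

An HNSW index (e.g. via a library such as hnswlib) would replace the exhaustive `n × n` similarity matrix with an approximate graph-based search, which matters once the image collection grows large.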