Due to the success of artificial intelligence (AI) applications in the medical field over the past decade, concerns about the explainability of these systems have increased. The reliability of black-box algorithms that make decisions affecting patients poses a challenge that goes beyond their accuracy. Recent advances in AI increasingly underscore the need to incorporate explainability into these systems. While most traditional AI methods and expert systems are inherently interpretable, recent literature has focused primarily on explainability techniques for more complex models such as deep learning. This scoping review analyzes the existing literature on the explainability and interpretability of AI methods in the medical and clinical field, providing an overview of past and current research trends, limitations that might impede the development of Explainable Artificial Intelligence (XAI) in medicine, open challenges, and possible research directions. In addition, the review discusses possible ways of leveraging medical knowledge to improve interpretability in clinical settings while taking users' needs into account.