Malaria and typhoid fever are prevalent in tropical regions, where unclear treatment protocols, drug resistance, and environmental factors exacerbate their burden. Prompt, accurate diagnosis is crucial for improving access to care and reducing mortality. Traditional diagnostic methods cannot effectively capture the complexity of these diseases because their symptoms overlap. Although machine learning (ML) models offer accurate predictions, they operate as "black boxes" whose decision-making is not interpretable, making it difficult for healthcare providers to understand how conclusions are reached. This study employs explainable AI (XAI) techniques, specifically Local Interpretable Model-agnostic Explanations (LIME), together with large language models (LLMs) such as ChatGPT, Gemini, and Perplexity, to clarify diagnostic results for healthcare workers, building trust and transparency in medical diagnostics. The models were implemented in Google Colab and Visual Studio Code because of their rich libraries and extensions. Results showed that the Random Forest model outperformed Extreme Gradient Boosting (XGBoost) and Support Vector Machines, and LIME plots identified the features most important for predicting malaria and typhoid. Among the LLMs, ChatGPT 3.5 demonstrated an advantage over Gemini and Perplexity, highlighting the value of a hybrid strategy that combines ML, XAI, and LLMs to improve diagnostic performance and reliability in healthcare applications. The study recommends the integrated ML, LIME, and LLM pipeline because it streamlines the overall development and maintenance workflow, uses resources more efficiently, and improves explainability by allowing the LLM to reason over the complete context that LIME provides.
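The ML stage of the pipeline described above can be sketched as follows. This is a minimal illustration, not the study's code: the symptom features, synthetic data, and model settings are assumptions for demonstration, and the Random Forest's built-in importances stand in for the per-patient LIME explanations used in the study (LIME's `lime.lime_tabular.LimeTabularExplainer` would be applied to the trained model at the same point).

```python
# Minimal sketch of the ML + explanation stage of the pipeline.
# Feature names and data are ASSUMED for illustration; the study
# trains on real patient records and explains individual predictions
# with LIME rather than the global importances shown here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical binary symptom indicators shared by malaria and typhoid.
features = ["fever", "headache", "abdominal_pain", "chills", "diarrhea"]
X = rng.integers(0, 2, size=(500, len(features)))
# Synthetic label: 0 = malaria, 1 = typhoid (toy rule plus label noise).
y = ((X[:, 2] + X[:, 4] > X[:, 3]) ^ (rng.random(500) < 0.1)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Class probabilities for one patient -- what LIME would explain locally.
proba = model.predict_proba(X_test[:1])[0]

# Ranked feature importances as a lightweight stand-in for LIME plots.
importances = dict(zip(features, model.feature_importances_))
print(f"accuracy={model.score(X_test, y_test):.2f}")
print(sorted(importances, key=importances.get, reverse=True))
```

In the study's setup, the ranked features would instead come from a LIME explanation of an individual prediction, and that explanation would then be passed to an LLM such as ChatGPT for a plain-language summary that a healthcare worker can act on.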