Consultation on disaster issues is crucial for effective emergency management, which requires timely and accurate information to support decision-making. This paper explores the integration of large language models (LLMs) with advanced prompt engineering to enhance the performance of AI systems in disaster management. We propose a novel multi-round prompt-engineering approach that guides LLMs to iteratively refine their responses as disaster scenarios evolve. Our method leverages diverse disaster-related datasets and is evaluated with a combination of accuracy metrics and a customized GPT-4 scoring system. We conduct comprehensive experiments across multiple LLM platforms, including Qwen2, ChatGPT, Claude, and GPT-4, comparing our approach with baseline models and the Tree of Thoughts (ToT) method. The results demonstrate significant improvements in both accuracy and response quality, showing that our method provides contextually relevant and actionable insights. Further analysis confirms the robustness and generalizability of the approach across diverse disaster scenarios, establishing it as a valuable tool for enhancing disaster management practice.
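The multi-round refinement loop described in the abstract might be sketched as follows. This is a minimal illustration only, not the paper's actual implementation: `query_llm` is a hypothetical stand-in for a call to any of the evaluated models (Qwen2, ChatGPT, Claude, GPT-4), and the prompt wording is invented for the example.

```python
def query_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real LLM API call.

    In practice this would invoke a model such as GPT-4 or Qwen2;
    here it returns a stub so the sketch is self-contained.
    """
    return f"Draft guidance for: {prompt.splitlines()[0]}"

def multi_round_consult(scenario: str, rounds: int = 3):
    """Iteratively refine an LLM's disaster-consultation answer.

    Each round feeds the previous answer back to the model with a
    refinement instruction, so the response can be adjusted as the
    scenario evolves -- the core idea of multi-round prompting.
    """
    prompt = f"Disaster scenario: {scenario}\nProvide actionable guidance."
    answer = query_llm(prompt)
    history = [answer]
    for i in range(1, rounds):
        # Build the next-round prompt from the scenario plus the prior answer.
        prompt = (
            f"Disaster scenario: {scenario}\n"
            f"Previous answer (round {i}): {answer}\n"
            "Revise the answer: correct errors, add missing steps, "
            "and keep only actionable guidance."
        )
        answer = query_llm(prompt)
        history.append(answer)
    return answer, history

final_answer, history = multi_round_consult("River flooding near a hospital", rounds=3)
```

In a real deployment, `query_llm` would also carry conversation context and the evaluation step (accuracy metrics or GPT-4 scoring) would decide whether another round is needed.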