Forest fire identification is important for forest resource protection. Effective monitoring of forest fires requires deploying multiple monitors with different viewpoints, yet most traditional recognition models can only recognize images from a single source effectively: they ignore the correlation between images captured from different viewpoints, which leads to inaccurate visual similarity estimation across multi-source samples and results in missed detections and high false alarm rates. To address these problems, this paper proposes a similarity-guided graph neural network model based on the dynamic characteristics of images. The method builds pairs (nodes) that represent different viewpoint images and gallery images, and converts the input node features on the graph into relational features of different probe-gallery pairs. Dynamically updating the image gallery with the resulting feature-bank relations enables the model to estimate the similarity between images and improves its recognition rate. In addition, to reduce the complexity of pre-processing and extract the key features in the images effectively, this paper also proposes a dynamic feature extraction method for fire regions based on image segmentation: by thresholding in the HSV color space, the fire region is segmented from the image, and the fire region frames are computed for dynamic feature extraction. Experimental results on an open-source forest fire dataset and our collected forest fire dataset show that the performance of the proposed method is improved by 4% compared with ResNet, and that the method can be adapted to different fire scenarios with good generalization and interference resistance.
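The abstract mentions segmenting the fire region with an HSV threshold and computing the fire region frames. A minimal sketch of that step is shown below, assuming OpenCV is used; the threshold values, the morphological clean-up, and the minimum-area filter are illustrative assumptions, not the values tuned in the paper.

```python
import cv2
import numpy as np

def extract_fire_regions(bgr_image,
                         lower_hsv=(0, 120, 180),
                         upper_hsv=(35, 255, 255),
                         min_area=50):
    """Segment candidate fire regions by HSV thresholding and return
    the binary mask plus the bounding frames of each region."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    # Remove isolated pixels before locating connected regions.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours
             if cv2.contourArea(c) >= min_area]
    return mask, boxes  # boxes: (x, y, w, h) frames of the fire regions
```

The abstract also describes turning node features into relational features of probe-gallery pairs and scoring their similarity. The sketch below illustrates one common way such pair-relation scoring can be set up; the elementwise-difference encoding, the MLP sizes, and the weighted gallery update are assumptions for illustration only, not the paper's exact design.

```python
import torch
import torch.nn as nn

class PairRelationHead(nn.Module):
    """Toy pair-relation head: encode (probe, gallery) feature pairs
    and predict a similarity score per gallery image."""
    def __init__(self, feat_dim=512, hidden_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1))

    def forward(self, probe_feat, gallery_feats):
        # probe_feat: (D,), gallery_feats: (N, D)
        rel = (probe_feat.unsqueeze(0) - gallery_feats).abs()   # pairwise relation features
        scores = torch.sigmoid(self.mlp(rel)).squeeze(-1)       # similarity per gallery image
        # Similarity-weighted refinement of the gallery (feature-bank style update).
        weights = scores / scores.sum().clamp_min(1e-6)
        updated = gallery_feats + weights.unsqueeze(-1) * probe_feat.unsqueeze(0)
        return scores, updated
```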