In recent years, the viability of multi-agent reinforcement learning for adaptive traffic signal control has been extensively validated. However, owing to restricted communication among agents and the partial observability of the traffic environment, mapping road-network states to actions remains challenging. To address this problem, this paper proposes a multi-agent deep reinforcement learning model that emphasizes communication content (CMARL). The model decouples the complex relationships among signal agents through centralized training with decentralized execution. Specifically, we first pass the traffic state through an improved deep neural network to extract high-dimensional semantic information and learn a communication matrix. The agents then interact selectively according to the learned communication matrix and generate the final state features. Finally, these features are fed into a QMIX network for action selection. We compare CMARL with six baseline algorithms on real traffic networks. The results show that CMARL significantly reduces vehicle congestion and runs stably across a variety of scenarios.
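
The three-stage pipeline described above (feature extraction, selective communication, QMIX-style value mixing) can be sketched as follows. This is a minimal NumPy illustration with random stand-in weights, not the paper's actual architecture: the encoder, the affinity-based communication matrix, and the mixing weights are all hypothetical simplifications of the learned components.

```python
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, OBS_DIM, FEAT_DIM, N_ACTIONS = 4, 8, 16, 3

# Hypothetical encoder weights, standing in for the paper's
# "improved deep neural network".
W_enc = rng.normal(size=(OBS_DIM, FEAT_DIM)) / np.sqrt(OBS_DIM)
W_q = rng.normal(size=(FEAT_DIM, N_ACTIONS)) / np.sqrt(FEAT_DIM)

def encode(obs):
    """Stage 1: map each agent's local observation to a feature vector."""
    return np.tanh(obs @ W_enc)                      # (N, FEAT_DIM)

def communication_matrix(feats):
    """Stage 2a: a stand-in for the learned communication matrix,
    here row-normalized pairwise feature affinities."""
    scores = feats @ feats.T / np.sqrt(FEAT_DIM)     # (N, N)
    np.fill_diagonal(scores, -np.inf)                # no self-messages
    exp = np.exp(scores - scores.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)

def exchange(feats, comm):
    """Stage 2b: selective interaction, mixing neighbours' features
    according to the communication weights."""
    return feats + comm @ feats                      # final state features

def qmix_total(per_agent_q, state):
    """Stage 3: monotonic mixing in the spirit of QMIX; non-negative
    weights keep dQ_tot/dQ_i >= 0 (weights here are a toy stand-in
    for a state-conditioned hypernetwork)."""
    w = np.abs(state[:N_AGENTS])
    return float(per_agent_q @ w)

obs = rng.normal(size=(N_AGENTS, OBS_DIM))           # local observations
feats = encode(obs)
comm = communication_matrix(feats)
final = exchange(feats, comm)
q = final @ W_q                                      # per-agent Q-values
actions = q.argmax(axis=1)                           # decentralized execution
q_tot = qmix_total(q.max(axis=1), obs.mean(axis=0))  # centralized training signal
```

Each agent picks its own action from its local Q-values (decentralized execution), while `q_tot` would only be used during centralized training to propagate a shared learning signal.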