Current neural network models can demonstrate behavior that some developers and researchers consider human-like. For example, the large language model GPT-3 is susceptible to human-like cognitive biases. Yet there are no data on such models solving emotional intelligence (EI) tasks, which draw on abilities previously considered specifically human. EI is an important aspect of human communication: the ability to understand and respond to emotional cues is essential for effective interaction. It is therefore important to determine whether and how AI models such as ChatGPT demonstrate EI. The present research aims to measure the EI of GPT-4, a large language model trained by OpenAI. Sections B, C, D, F, G, and H of the Russian version of the Mayer–Salovey–Caruso Emotional Intelligence Test (MSCEIT) were used. High scores were obtained on the Understanding Emotions scale and on Strategic EI, average scores on the Managing Emotions scale, and low, less reliable scores on the Using Emotions to Facilitate Thought scale. Thus, GPT-4 already appears capable of identifying emotions in text and describing techniques for managing them. However, complex cases and irregular situations requiring qualitative analysis of emotions would remain difficult for GPT-4.