The observation of moving bio-objects is currently a topic of great interest in both fundamental and applied research. For example, when developing and testing pharmaceuticals and vaccines, it is imperative to study their impact on the physiological state and behavior of an animal or human subject. The advent of deep learning algorithms has enabled the automation of qualitative and quantitative analysis of the behavior of bio-objects recorded on video. Before deep learning models can be trained, the video data must undergo appropriate preprocessing. Additional factors must also be considered, such as background noise in the frame, the speed of bio-object movement, and the need to encode information about the previous (past) and subsequent (future) poses of the bio-object in a single video frame. Moreover, the preprocessed dataset should be suitable for verification by human experts. This paper proposes a data preprocessing method for the identification of bio-object behavior, using video data collected in laboratory animal experiments as an illustrative example. The method combines information about a behavior event presented across a sequence of frames with the native image, followed by boundary extraction with a Sobel filter. The resulting behavior event representation is readily comprehensible to both human experts and neural networks of varying architectures. The paper presents the outcomes of training multiple neural networks on the resulting dataset and proposes an effective neural network architecture (accuracy = 0.95) for identifying discrete behavior events of bio-objects.
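To make the described preprocessing idea concrete, the following is a minimal sketch of one possible realization: a short sequence of grayscale frames is merged with the native (current) frame and boundaries are then extracted with a Sobel filter. The abstract does not specify how the frames are combined, so the averaging/blending scheme, the `preprocess_event` function name, and the Sobel parameters below are assumptions for illustration, not the authors' exact pipeline.

```python
# Illustrative sketch only: the frame-combination scheme and parameters are
# assumptions; the abstract only states that sequence information, the native
# image, and Sobel-based boundary extraction are combined.
import cv2
import numpy as np

def preprocess_event(frames: list[np.ndarray], native_idx: int) -> np.ndarray:
    """Combine a short frame sequence into one image and extract boundaries.

    frames     -- grayscale frames covering past, current and future poses
    native_idx -- index of the "native" (current) frame in the sequence
    """
    # Assumed combination: average the sequence so past/future poses leave
    # faint traces, then blend in the native frame so it stays dominant.
    sequence_mean = np.mean(np.stack(frames, axis=0), axis=0).astype(np.uint8)
    combined = cv2.addWeighted(sequence_mean, 0.5, frames[native_idx], 0.5, 0)

    # Boundary extraction with a Sobel filter (gradient magnitude).
    gx = cv2.Sobel(combined, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(combined, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = cv2.magnitude(gx, gy)
    return cv2.normalize(magnitude, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Usage with hypothetical file names:
# frames = [cv2.imread(f"frame_{i}.png", cv2.IMREAD_GRAYSCALE) for i in range(5)]
# event_image = preprocess_event(frames, native_idx=2)
```

The resulting single-channel image can then be fed to classification networks of different architectures, while remaining visually interpretable for a human expert.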