The present work addresses the human pose estimation problem, developing a method that enables automated ergonomic risk assessment.
A research methodology is developed to calculate joint angles from digital images or videos, using computer vision and machine learning techniques to obtain a more accurate ergonomic risk assessment. Starting from an ergonomic analysis, the study explores the use of a semi-supervised training method to detect the skeletons of workers. The methodology aims to infer the positions and angles of the joints, and then to calculate a criticality index based on RULA scores and fuzzy rules.
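The joint-angle step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes 2D keypoints (e.g. shoulder, elbow, wrist) are already available from a pose estimator, and the function name and coordinates are hypothetical.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at keypoint b, formed by segments b->a and b->c."""
    a, b, c = map(np.asarray, (a, b, c))
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip guards against numerical error pushing cos slightly outside [-1, 1]
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical pixel coordinates for shoulder, elbow, wrist keypoints
shoulder, elbow, wrist = (100, 100), (100, 200), (180, 230)
elbow_angle = joint_angle(shoulder, elbow, wrist)
```

Angles computed this way per joint would then feed a scoring stage (e.g. RULA score bands combined through fuzzy rules), as the methodology describes.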
Then, to prevent work-related musculoskeletal disorders (WMSD), improve production capacity and decrease the ergonomic risk, the system works jointly with a collaborative robot that supports the worker in carrying out the most critical operations. The method has been tested on a real industrial case involving manual assembly of electrical components. The approach developed can overcome the limitations of recent developments based on computer vision or wearable measurement sensors by performing the assessment with an objective and flexible approach to postural analysis.