Activity Recognition from Newborn Resuscitation Videos

Document Type

Article

Department

Obstetrics and Gynaecology (East Africa)

Abstract

Objective: Birth asphyxia is one of the leading causes of neonatal deaths. A key to survival is immediate and continuous high-quality newborn resuscitation. A dataset of signals recorded during newborn resuscitation, including videos, has been collected in Haydom, Tanzania, and the aim is to analyze the treatment and its effect on newborn outcome. An important step is to generate timelines of relevant resuscitation activities, such as ventilation, stimulation, and suction, during the resuscitation episodes.

Methods: We propose a two-step deep neural network system, ORAA-net, which uses low-quality video recordings of resuscitation episodes to perform activity recognition during newborn resuscitation. The first step detects and tracks relevant objects using Convolutional Neural Networks (CNNs) and post-processing; the second step analyzes the proposed activity regions from step 1 with 3D CNNs to recognize activities.
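
The sketch below illustrates the general shape of such a two-step pipeline, assuming PyTorch/torchvision; the class and function names (ActivityClassifier3D, recognize_activities) and the use of a pretrained Faster R-CNN as the stand-in detector are illustrative assumptions, not the authors' actual ORAA-net implementation.

```python
# Minimal sketch of a two-step "detect regions, then classify clips" pipeline.
# Step 1: per-frame object detection (stand-in: torchvision Faster R-CNN).
# Step 2: a small 3D CNN over the cropped activity region across frames.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class ActivityClassifier3D(nn.Module):
    """Step 2: classify a stack of cropped frames (an activity region) with a 3D CNN."""

    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, channels, frames, height, width)
        feats = self.backbone(clip).flatten(1)
        return self.head(feats)


def recognize_activities(frames, detector, classifier, clip_len: int = 16):
    """Detect a region per frame, crop it, and classify the resulting clip."""
    crops = []
    for frame in frames:
        det = detector([frame])[0]  # dict with "boxes", "labels", "scores"
        if len(det["boxes"]) == 0:
            continue
        # Placeholder for tracking/post-processing: take the highest-scoring box.
        best = det["scores"].argmax()
        x1, y1, x2, y2 = det["boxes"][best].round().int().tolist()
        x2, y2 = max(x2, x1 + 1), max(y2, y1 + 1)  # avoid degenerate crops
        crop = frame[:, y1:y2, x1:x2]
        crops.append(F.interpolate(crop.unsqueeze(0), size=(112, 112)).squeeze(0))
    if len(crops) < clip_len:
        return None  # not enough detections to form a clip
    clip = torch.stack(crops[:clip_len], dim=1).unsqueeze(0)  # (1, C, T, H, W)
    with torch.no_grad():
        return classifier(clip).softmax(dim=-1)


if __name__ == "__main__":
    detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
    classifier = ActivityClassifier3D(num_classes=4).eval()
    dummy_frames = [torch.rand(3, 240, 320) for _ in range(16)]  # stand-in for video frames
    print(recognize_activities(dummy_frames, detector, classifier))
```

In practice the post-processing step would link detections over time into stable activity regions rather than taking a single box per frame, and the 3D CNN would be trained on labeled resuscitation clips; this sketch only shows how the two stages connect.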

Results: The system recognized the activities newborn uncovered, stimulation, ventilation, and suction with a mean precision of 77.67 %, a mean recall of 77.64 %, and a mean accuracy of 92.40 %. Moreover, the accuracy of the estimated number of Health Care Providers (HCPs) present during the resuscitation episodes was 68.32 %.

Conclusion: The results indicate that the proposed CNN-based two-step ORAA-net could be used for object detection and activity recognition in noisy, low-quality newborn resuscitation videos.

Significance: A thorough analysis of the effect the different resuscitation activities have on newborn outcome could potentially allow us to optimize treatment guidelines, training, debriefing, and local quality improvement in newborn resuscitation.

Publication (Name of Journal)

IEEE Journal of Biomedical and Health Informatics
