Preprint Article

Dataset and System Design for Orthopedic Walker Fall Detection and Activity Logging Using Motion Classification

Submitted: 22 September 2023; Posted: 25 September 2023. This version is not peer-reviewed; a peer-reviewed article of this preprint also exists.
Abstract
An accurate, economical, and reliable device for detecting falls in persons ambulating with the assistance of an orthopedic walker is critically important for the elderly and patients with limited mobility. Existing wearable devices such as wristbands are not designed for walker users, and patients may not wear them at all times. This research proposes the novel idea of attaching an internet-of-things (IoT) device with an inertial measurement unit (IMU) sensor directly to an orthopedic walker to perform real-time fall detection as well as activity logging. A dataset is collected and labeled for walker users across four activities: idle, motion, step, and fall. Classic machine learning algorithms are evaluated on the dataset by comparing their classification performance. Deep learning with a convolutional neural network (CNN) is also explored. Furthermore, a hardware prototype is designed by integrating a low-power microcontroller for onboard machine learning, an IMU sensor, a rechargeable battery, and Bluetooth wireless connectivity. The results show the promise of improved safety and well-being for walker users.
Subject: Engineering - Electrical and Electronic Engineering

1. Introduction

According to a report from the World Health Organization (WHO), the share of the world’s population over 60 years old is projected to nearly double from 12% to 22% between 2015 and 2050, with the number of people in this age group increasing from 1 billion in 2020 to 1.4 billion by 2030 and nearly 2.1 billion by 2050 [1]. Accidental falls are a common and sometimes life-threatening problem in this population. The rising demand for fall detection systems, algorithms, and techniques is evident in Google Trends, a platform that has monitored internet users’ search patterns since 2004: the search term “fall detection” has attained an unprecedented peak, registering a surge of over 500% within the past five years [2].
There are around 6.1 million people in the United States who use some type of mobility assistance device, including walking canes, orthopedic walkers, and crutches. Life-threatening falls in the older population are a crucial health and safety issue: about 1.5 million elderly people are injured in falls each year, and about 47,300 people using walking aids suffer fall injuries that require an emergency room visit annually [3].
In addition, life-threatening falls often occur during rehabilitation after major surgery to the hip or leg area, an increasing concern for patients and medical experts alike. The risk of repeat falls has been shown to be especially high in patients who have already sustained a hip fracture [4]. Thus, there is an urgent need for a smart device that can detect falls of orthopedic walker users in real time and alert caregivers for emergency assistance.
In recent years, wearable smart devices such as Fitbit and Apple Watch have become very popular for activity tracking and physiological monitoring; however, these smart watches are not designed for walker users. Even if a walker user wears such a device on the wrist, it may not accurately detect motions and hazards, since both hands rest on the walker during movement. In addition, some elderly users may not wear such devices reliably on their own. Traditional at-home health monitoring products often abandon tracking devices, owing to their irregularities and false positives, in favor of simple medical alert systems self-actuated by the subject. Such systems do not account for the possibility that a fall may cause life-threatening injuries that incapacitate the subject and leave them unable to trigger an alert; impairments to speech and motion can render medical alert systems that rely on direct communication between emergency services and the user ineffective.
The primary objective of this research is to develop a novel, smart, and low-cost IoT device, namely SafeStride, that overcomes the limitations of existing wearable and stationary fall detection systems. SafeStride is attached directly to an assistive walker and continuously records and processes the walker’s motion data in real time. SafeStride is intended to facilitate a comprehensive fall detection system that efficiently tracks movement and promptly generates alerts in the event of a fall. In addition, it provides an activity log, such as walking steps and duration, standing time and stability, etc., to caregivers and physicians to help them assess a patient’s recovery progress and monitor whether the patient has met the prescribed daily movement goals. This paper presents the detailed design and implementation of the hardware prototype, highlighting its potential advantages over existing solutions. The integration of this prototype on the walker holds promise for enhancing wearable fall detection capabilities, ultimately leading to improved safety and well-being of walker users.
The other objective of this work is to develop a dataset for assistive walker fall detection and motion classification. Machine learning is a powerful technique for data analysis tasks such as classification and detection; however, such a dataset for assistive walkers is not available in the literature. In this work, we performed data collection and labeling, followed by evaluations of various machine learning models to validate their accuracy. Intuitively, a drastic fall-like motion that causes a user to lose balance or collapse to the ground usually results in significant spikes in acceleration on all three axes and noticeable deviations in gyroscopic orientation [5]. Similarly, acceleration and rotation data can be used to detect intervals of walking and stopping. The primary challenge is to separate walking motions from falls while eliminating false positives and negatives. Missing a life-threatening fall is clearly critical, but false alarms can also seriously jeopardize the adoption of this new technology. Our labeled dataset is shared publicly on Kaggle [6] for easy access by the research community.
The main contributions of this work are the following: (1) We developed an IMU-based fall detection dataset for assistive walkers, which is published online for sharing with the IoT and healthcare research communities; (2) We evaluated different machine learning models and compared their performance using the dataset; (3) We built a low-power, real-time IoT device prototype and validated its functions through experiments. The IoT device can also communicate with other devices via Bluetooth and Wi-Fi, enabling it to transmit pertinent information to smartphones or a web server and facilitating prompt alerts in the event of falls. Additionally, the device collects and maintains various statistics, including the number of steps taken by the user and their durations. These statistics can offer valuable insights to physicians who monitor a patient’s progress in recovery.
In the following sections of this paper, the design, implementation, and evaluation of the proposed fall detection system are presented. Section 2 reviews related work to recognize existing approaches and identify gaps. Section 3 presents the detailed design and implementation of the novel hardware prototype. Section 4 describes the procedure for data acquisition and labeling, and Section 5 covers the machine learning and deep learning models used for fall detection. Section 6 presents the results and discussion of the SafeStride system evaluation. Finally, Section 7 summarizes the project, describes the main findings, and identifies potential areas for future work.

2. Related Work

A wide range of approaches have been explored to design effective fall detection systems, including the use of wearable devices [7,8,9,10,11] and smartphones [12,13,14,15] with microphones [16], cameras [17], accelerometers and gyroscopes [18,19], GPS [20], and combinations of multiple sensors [21].
In addition to wearable devices, alternative approaches have been extensively investigated in the development of fall detection systems. Depth sensors, such as Microsoft Kinect [22] or time-of-flight cameras [23], have been implemented with the primary goal of enhancing the accuracy and effectiveness of fall detection systems. Infrared sensors have been used to enable the detection of changes in infrared radiation within specific environments [24]. Doppler radar systems provide a non-invasive and privacy-preserving approach for timely and accurate fall detection among the elderly, analyzing unique time-frequency characteristics to identify fall events regardless of lighting conditions [25].
Vision-based methods analyze video data from cameras, employing techniques such as optical flow analysis [26], object tracking [27], and human pose estimation to detect falls based on changes in motion or posture [28]. Acoustic sensors, such as microphones or sound arrays, capture audio signals and employ signal processing techniques to identify sudden impact sounds, screams, or other acoustic patterns associated with falls [29].
Signal processing techniques play a crucial role in all the aforementioned fall detection systems, as they extract meaningful features from sensor data, facilitating the accurate detection of fall events across diverse system types. Multiple signal processing techniques have been employed in fall detection systems, including orientation filters [9], quaternions [10], thresholding techniques [13,18,19,20], histograms of oriented gradients [17], clustering algorithms [21], Bayesian segmentation approaches [23], spatial-temporal analysis methods [27], Kalman Filters [30], sensor data fusion [31], and wavelet transforms [32].
Furthermore, machine learning methods have been applied extensively in recent years. In this field, a variety of machine learning models have been employed, ranging from simpler approaches like decision trees [22] and k-nearest neighbors (kNN) [7,11,14,21,24,29], to more complex methods such as Bayesian classifiers [23,25], support vector machines (SVM) [28], neural networks [16], and deep learning models [15,26].
A closely related research field is human activity recognition (HAR), which classifies human activities from motion sensor data. A number of public datasets for fall detection using wearable sensors [33] are available; a dataset [34] in which data were labeled as fall, near fall, and activities of daily living (ADL) was most useful for us. Both signal processing methods [35] and machine learning approaches [5,36] are widely used in this field. Among signal processing approaches, threshold-based methods compare data with preset thresholds to detect a step [37], peak-detection methods count steps by counting the peaks of sensor readings [38], and correlation-based methods count steps by calculating and comparing the correlation coefficients between two neighboring windows of sensor readings [39]. Among machine learning methods, authors have designed HAR algorithms based on convolutional neural networks (CNN) [40,41], and researchers in [36] have developed an algorithm based on long short-term memory (LSTM) to recognize human activities. It is worth mentioning that all existing HAR datasets are based on wearable IMU sensors directly attached to the human body. In contrast, this project fixes the sensor to a walker, so the data characteristics are quite different.
The research and development of fall detection systems specifically for assistive walkers and rollators is relatively limited. However, there have been several notable endeavors to create technology-integrated devices to improve the safety and functionality of these mobility aids.
In [42] a rollator-based ambulatory assistive device with an integrated non-obtrusive monitoring system was introduced, aiming to enhance the functionality and capabilities of traditional rollators. The Smart Rollator prototype incorporates multiple subsystems, including distance/speed measurement, acceleration analysis, force sensing, seat usage tracking, and physiological monitoring. Collected data is stored locally within a microprocessor system and periodically transferred to a remote data server via a local data terminal. This enables remote access and analysis of the data, contributing to improved monitoring and support for individuals using rollators.
The research work in [43] presents a fall detection system designed specifically for smart walkers. The system combines signal processing techniques with the Probability Likelihood Ratio Test and Sequential Probability Ratio Test (PLT-SPRT) algorithm to achieve accurate fall detection. Simulation experiments were conducted to identify the control model of the walker and analyze limb movements, while real-world experiments were performed to validate the system’s performance.
The study in [44] demonstrates experimental results of fall detection and prevention using a cane robot. Non-disabled male participants walked with the cane robot, and their "normal walking" and "abnormal walking" data were recorded. The fall detection rate was evaluated using the Center of Pressure-Based Fall Detection (COP-FD) and Leg-Motion-Based Fall Detection (LM-FD) methods. COP-FD detected falls by calculating the center of pressure (COP) during walking and comparing it to predefined thresholds. LM-FD used a laser range finder (LRF) to measure the relative distance between the robot and the user’s legs, enabling the detection of leg motion abnormalities associated with stumbling. The results showed successful fall detection for both COP-FD and LM-FD, with some instances of false positives and false negatives.
While various machine learning techniques using sensor data have been explored for fall detection systems, a major obstacle in these projects is the lack of reliable data. Obtaining large and diverse datasets that capture different fall scenarios and environmental conditions is crucial for training robust models, posing a significant challenge that researchers and developers need to address for effective machine learning-based fall detection systems.
The study in [33] presents a comprehensive analysis of publicly available datasets used for research on wearable fall detection systems. The study examines twelve datasets and compares them based on various factors, including experimental setup, sensor characteristics, movement emulation, and data format. The datasets primarily use accelerometers, gyroscopes, and magnetometers as the main sensors for capturing movement and orientation data. However, there is a lack of consensus and standardization among the datasets in terms of the number and position of sensors, as well as the specific models and characteristics of the sensors employed. This heterogeneity makes it challenging to compare and evaluate fall detection systems effectively.
In summary, current research on fall detection systems has focused primarily on wearable devices and stationary systems, with limited exploration of technology integration in walkers, rollators, canes, and wheelchairs. The integration of technology into walkers and rollators has great potential to improve fall detection capabilities and meet the specific needs of people who rely on these mobility aids. However, it is important to note that data availability remains a major challenge in this field, and the creation of reliable datasets is crucial to further progress in this research.

3. System Design and Hardware Prototype

This section provides an overview of SafeStride, a novel prototype designed as an add-on device for assistive walkers, as seen in Figure 1. SafeStride offers comprehensive fall detection, step counting, and activity tracking capabilities. The hardware design covers the selection of components, their functionalities, and the integration process; the device also collects statistical data on step count and duration of use.
An important feature of SafeStride is its onboard machine learning capability, which enables the device to run machine learning algorithms directly in real time. By leveraging data from the inertial measurement unit (IMU) and microphone sensors, SafeStride analyzes movement patterns and acoustic features to accurately detect falls, even when the walker is not in motion. Through its IoT connectivity, SafeStride can establish wireless connections with other devices, such as smartphones and web servers, enabling real-time alerts and intelligent monitoring. Moreover, healthcare professionals can access the data log to monitor a patient’s recovery progress and provide personalized care.

3.1. Microcontroller

The decision to use a microcontroller instead of a microprocessor as the core of the SafeStride prototype was motivated by the need to meet low-power requirements. Given the device’s intended use, which entails prolonged operation on a battery without frequent recharging, power efficiency is imperative. A microcontroller minimizes power usage, ensuring the prototype performs efficiently for a long period per battery charge.
Considering the requirement for machine learning capabilities, it might be assumed that the use of a microprocessor is essential. However, recent advancements have enabled the direct deployment of machine learning algorithms onto microcontrollers [45]. There are cost-effective hardware development boards available off-the-shelf that integrate machine learning capabilities, low power consumption, IMU units, wireless connectivity, and a microphone into a single board, making them well-suited for the SafeStride prototype.
The Arduino Nano 33 BLE Sense, which incorporates the nRF52840 microcontroller, the LSM9DS1 IMU, the MP34DT05 microphone, and BLE connectivity, is a prominent example of such a device [46]. It has been widely used in research experiments, demonstrating its effectiveness in various applications that demand machine learning capabilities, low power consumption, and integrated sensor functionalities [47,48,49]. Alternatively, the Seeed Studio XIAO nRF52840 provides comparable capabilities [50]. In this project, the dataset was collected using the Arduino Nano 33 BLE Sense board.
In this IoT application, the communication characteristics of the hardware components are critical. Specifically, while the Arduino Nano is equipped with Bluetooth Low Energy, it does not support the Wi-Fi wireless protocol. To address this limitation, the ESP8266 has been incorporated as the main core, providing Wi-Fi connectivity and cloud communication with a web server; together with the BLE radio on the Arduino Nano, the system can communicate with both Bluetooth and Wi-Fi devices. The nRF52840 microcontroller on the Arduino Nano board, meanwhile, is used for sensor data sampling and machine learning classification tasks. As illustrated in Figure 2, this configuration enables extensive wireless communication functions, paving the way for seamless connectivity and cloud-based interactions across multiple devices.

3.2. Power Source and Protective Casing

The SafeStride fall detection system is designed to operate on a LiPo battery, chosen for its high energy density, light weight, and rechargeability. The LiPo battery provides the power capacity needed to support the device’s operation for extended periods without frequent recharges. To facilitate charging, a charging circuit powered from a micro USB port is incorporated. In addition, a boost circuit elevates the battery’s voltage from 3.7 volts to the 5 volts required to power the circuit; the logic levels within the circuit remain at 3.3 volts.
To ensure the protection and integration of the system components, a PCB design has been developed. The components are carefully planned to be integrated and enclosed within a 3D printed case, providing secure and robust housing. This protective case is specifically designed to withstand potential impacts from falls and protect the electronics inside. Special precautions are taken to securely fasten the case to the walker using Velcro straps to ensure stability during use.

4. Building the Walker Dataset

A significant challenge arises in this particular research due to the lack of data available for the proposed IMU-based fall detection method. As a result, it is necessary to create a dataset that includes IMU data samples, covering various scenarios including accidental falls and normal daily activities when using the walker, in order to train and evaluate a machine learning model. The data generated during this research are compiled into the "Walker Dataset" that is published on Kaggle [6]. The dataset serves as an open-source platform that we share with other researchers in this field.

4.1. Sampling Setup

The sampling setup in the fall detection prototype focused on wireless communication and power autonomy. The microcontroller’s built-in BLE capability enabled the wireless transmission of sensor data to a computer, eliminating the need for cables and allowing unrestricted movement during data collection. Real-time communication with the computer was established using a Python script and BLE protocols.
To ensure precise capture of motion and orientation information, the sampling rate was optimized to achieve the highest possible data rate, reaching approximately 100 samples per second. This high sampling rate enabled a detailed analysis of movement patterns. The acquired sensor data, which included XYZ acceleration and gyroscope measurements, was stored in JSON format on the computer, simplifying subsequent parsing and extraction during the data processing stages.
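To make this setup concrete, the following is a minimal sketch of such a logging script in Python using the bleak BLE library. The device name, characteristic UUIDs, and the comma-separated payload format are illustrative assumptions, not the exact protocol used by the prototype:

```python
# Minimal BLE logging sketch. Device name, characteristic UUIDs, and payload
# format are hypothetical; adapt them to the actual firmware.
import asyncio
import json
import time

from bleak import BleakClient, BleakScanner

ACC_UUID = "0000aaaa-0000-1000-8000-00805f9b34fb"   # hypothetical
GYRO_UUID = "0000bbbb-0000-1000-8000-00805f9b34fb"  # hypothetical
log = {"acc": [], "gyro": []}

def make_handler(key):
    def handler(_, data: bytearray):
        # Assume each notification carries "x,y,z" as text.
        log[key].append({"t": time.time(),
                         "v": [float(v) for v in data.decode().split(",")]})
    return handler

async def main():
    device = await BleakScanner.find_device_by_name("SafeStride")  # hypothetical name
    if device is None:
        raise RuntimeError("device not found")
    async with BleakClient(device) as client:
        await client.start_notify(ACC_UUID, make_handler("acc"))
        await client.start_notify(GYRO_UUID, make_handler("gyro"))
        await asyncio.sleep(60.0)  # record for one minute
    with open("recording.json", "w") as f:
        json.dump(log, f)

asyncio.run(main())
```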

4.1.1. Data Acquisition

For the IMU-based fall detection algorithm, the data acquisition process involves capturing motion and orientation data using the sensor integrated into the device. In this project, the fall detection problem is treated as a classification task, where four classes are considered: idle, motion, step, and fall.
  • Idle: The idle class represents periods of no movement or minimal activity when the walker is stationary and the user is not actively engaged in any motion.
  • Motion: The motion class captures random motion, which means that the walker is being moved, but not necessarily by someone actively walking with it.
  • Step: The step class corresponds to the specific movement when someone takes a step with the walker. This class focuses on capturing the gait pattern associated with normal walking using the walker.
  • Fall: The fall class represents situations where the walker falls to the floor, indicating a potential fall event.
Data for this study were collected using the walker in five groups. Each group contains about 100 to 150 samples of fall, motion, and step data.
To record idle data, multiple 5-minute duration files were recorded with the walker placed in various positions and kept still throughout the recording. The walker was placed in different orientations to capture variations in sensor readings, ensuring a diverse dataset.
For motion data, random movements were recorded by actively moving the walker in different directions and at varying speeds. This allowed for the capture of a wide range of motion patterns, simulating everyday movements that may occur between adjacent walking steps.
To simulate the walking of an elderly person, walker data were recorded in an indoor open space. Pauses were introduced between steps to mimic the typical walking pattern of an elderly person and to facilitate the separation of steps.
For falls, deliberate actions were performed in which the walker was intentionally thrown to the floor multiple times and then picked up again. Additionally, external forces, such as pushing the walker, were applied to simulate the motion associated with a fall event. These intentional actions aimed to capture the distinct patterns and characteristics of falls in the acquired sensor data.

4.2. Training Data Labeling

After recording the data on the computer, several steps were taken to process it. The first step involved downsampling the data from approximately 100 Hz to 80 Hz. This adjustment was necessary because the original sampling rate was not consistent, and the acceleration and gyro data were received separately, each with its own timestamp. By downsampling to a lower rate, interpolation could be applied to enforce a fixed time delay of 12.5 ms between samples, corresponding to a sampling rate of 80 Hz. This process effectively unified the gyro and acceleration data, resulting in six dimensions for each timestamp.
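The resampling step can be sketched as follows, assuming two separately timestamped streams like those produced by the logging script above (file layout and names are illustrative):

```python
import json

import numpy as np

DT = 0.0125  # fixed 12.5 ms period, i.e., 80 Hz

def resample(t, x, grid):
    """Linearly interpolate each of the three axes onto a uniform time grid."""
    return np.stack([np.interp(grid, t, x[:, i]) for i in range(x.shape[1])], axis=1)

rec = json.load(open("recording.json"))
t_acc = np.array([s["t"] for s in rec["acc"]])
acc = np.array([s["v"] for s in rec["acc"]])
t_gyr = np.array([s["t"] for s in rec["gyro"]])
gyr = np.array([s["v"] for s in rec["gyro"]])

# Resample both streams onto the overlapping portion of their time ranges.
t0, t1 = max(t_acc[0], t_gyr[0]), min(t_acc[-1], t_gyr[-1])
grid = np.arange(t0, t1, DT)

# Unified 6-D stream: one row of [ax, ay, az, gx, gy, gz] per 12.5 ms tick.
data = np.hstack([resample(t_acc, acc, grid), resample(t_gyr, gyr, grid)])
```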
Once the sampling rate was normalized and the data dimensions unified, the next step was to separate the groups of steps within the recorded data. In the experimental setup, each recording file contained multiple clusters of steps mixed with some turning movements. Typically, 20 to 25 steps forward were taken, followed by a turn and then another series of walking steps.
To facilitate this separation, a Java interface was developed that allows step groups to be manually identified and isolated from turning movements, ensuring that each group is extracted individually for further analysis and classification. The interface provides the ability to set a lower and upper threshold, as seen in Figure 3, allowing data within the defined range to be exported to a new, separate JSON file. This feature enables the extraction of specific segments of logged data, providing more focused datasets for further analysis and processing.
An algorithm was developed to extract and analyze individual steps after isolating groups of steps. This algorithm uses signal processing and machine learning techniques to process the recorded data. To create a single motion metric, accelerometer and gyroscope data are combined using root mean square (RMS) calculation. This involves normalizing the data and calculating the square root of the average of the squared values, providing a concise representation of the overall motion intensity for each timestamp.
After RMS calculation and normalization, a running window averaging filter with 50 samples is applied to smooth the data. This filter reduces noise and jitter in the signal, resulting in a smoother representation of the intensity of motion over time. Subsequently, the dataset is analyzed using a Hidden Markov Model (HMM) algorithm to identify hidden states. The HMM algorithm is a statistical model capable of analyzing sequential data with unobservable states. In this context, it distinguishes between steps and non-steps based on the intensity of movement captured from the RMS values, as observed in Figure 4.
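A minimal sketch of this stage is shown below, using the hmmlearn library; the normalization details and HMM settings are assumptions rather than the exact configuration of the original pipeline:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# `data` is the unified (N, 6) stream from the resampling step.
z = (data - data.mean(axis=0)) / data.std(axis=0)  # per-axis normalization
rms = np.sqrt((z ** 2).mean(axis=1))               # single motion-intensity metric

# 50-sample running-average filter to smooth out noise and jitter.
smooth = np.convolve(rms, np.ones(50) / 50, mode="same")

# Two-state HMM: one hidden state for "step", one for "non-step".
hmm = GaussianHMM(n_components=2, n_iter=100)
hmm.fit(smooth.reshape(-1, 1))
states = hmm.predict(smooth.reshape(-1, 1))        # 0/1 state per timestamp
```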
Step identification using the HMM algorithm allows for various analyses and measurements. It can determine the average length of a step by analyzing the step patterns identified in the dataset. This information provides valuable insight into the characteristics of the steps recorded during the experiment and allows the selection of an appropriate window size.
By selecting the window size based on the average length of steps recorded in the dataset, it is possible to capture the correct number of features needed to represent a step, optimizing classification accuracy and adhering to microcontroller limitations. Choosing a window size that is too short can result in insufficient information for accurate step identification, while a window size that is too long could consume too much memory and exhaust the processing capabilities of the microcontroller.
After analyzing the step lengths in the recorded data, a window size of 160 samples was chosen for further processing. Using this window size, all steps were extracted by calculating the centroid of each step, as determined by the HMM algorithm. The centroid represents the center of the step pattern in the time domain and provides a reference point for extracting the step segment.
To extract the step sample, 80 samples to the left of the centroid and 80 samples to the right were selected, ensuring that each step was captured within a total of 160 samples. By extracting the step segments in this way, individual JSON files were created for each step, resulting in 600 unique and independent step samples extracted from the logged data.
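Continuing from the HMM state sequence, the centroid-based window extraction might look like the following sketch (the choice of the step state and the segment bookkeeping are illustrative):

```python
# Identify contiguous runs of the "step" state and slice a 160-sample window
# around each run's centroid (80 samples on each side).
step_state = int(np.argmax([smooth[states == s].mean() for s in (0, 1)]))
mask = states == step_state

windows = []
start = None
for i, m in enumerate(np.append(mask, False)):  # sentinel closes the last run
    if m and start is None:
        start = i                               # run of step samples begins
    elif not m and start is not None:
        centroid = (start + i - 1) // 2         # center of the step pattern
        if centroid >= 80 and centroid + 80 <= len(data):
            windows.append(data[centroid - 80 : centroid + 80])  # (160, 6)
        start = None
```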
Similarly, for the idle and motion classes, 620 samples were randomly selected from the raw files that were recorded during the data acquisition phase. The same procedure described for the steps was applied, using the HMM algorithm, to isolate the falls from the recorded data. This resulted in the creation of 620 fall samples.
All extracted samples were consolidated into a unified dataset. The dataset includes a label column with the corresponding class for each sample, which can be "steps", "idle", "motion", or "fall". In addition, each sample has 960 features: all six dimensions (XYZ acceleration and XYZ gyroscope), with 160 samples contributed per dimension.

5. Machine Learning and Deep Learning Classification

IMU-based fall detection is a widely explored approach that leverages the motion and orientation data captured by inertial measurement units to detect fall events. By analyzing the patterns and characteristics of the IMU data, it becomes possible to identify sudden changes or anomalies that may indicate a fall. This section outlines a study of different machine learning and deep learning algorithms for assistive walker fall detection and motion classification using the walker dataset that we developed in Section 4.

5.1. Machine Learning Classifiers and Performance

The optimal classification algorithm for this task was identified using Scikit-Learn’s all_estimators tool, which allows 35 different classification algorithms to be tested. Ten-fold cross-validation was applied to each algorithm, giving 10 rounds of training and testing. During this process, performance indicators such as average accuracy, precision, recall, and F1 score were evaluated.
Based on the average accuracy, the 10 best classification algorithms were selected. It should be noted that, during this initial testing phase, each algorithm operated with its default parameters. This implies that there was potential for improvement by fine-tuning the hyperparameters.
The dataset was split 80:20 for training and testing, respectively: 80% of the data was used to train the model, while the remaining 20% was held out as separate data for testing the selected model.
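A condensed sketch of this screening procedure follows, assuming the consolidated dataset is available as a feature matrix X of shape (n_samples, 960) with a label vector y; estimators that fail with their default parameters are simply skipped:

```python
import numpy as np
from sklearn.model_selection import cross_validate, train_test_split
from sklearn.utils import all_estimators

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y)

results = {}
for name, Cls in all_estimators(type_filter="classifier"):
    try:
        scores = cross_validate(
            Cls(), X_train, y_train, cv=10, error_score="raise",
            scoring=("accuracy", "precision_weighted",
                     "recall_weighted", "f1_weighted"))
        results[name] = {k: np.mean(v) for k, v in scores.items()
                         if k.startswith("test_")}
    except Exception:
        continue  # skip estimators that need extra arguments or fail on this data

# Rank the candidates by mean cross-validated accuracy and keep the top 10.
top10 = sorted(results, key=lambda n: results[n]["test_accuracy"], reverse=True)[:10]
```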
Table 1 documents the performance of the top 10 algorithms, evaluated by the above metrics.
After identifying the RandomForestClassifier as the best performer, further analysis was conducted to explore its potential for accuracy improvement. Due to the incorporation of randomness in its training process, the algorithm was executed multiple times to evaluate variations in performance. A total of 100 runs were performed, and the highest accuracy attained during these runs was found to be 99.07%. It is important to note that these results were obtained using a testing set that was different from the training set, indicating that the RandomForestClassifier’s accuracy is already close to its maximum potential without any additional hyperparameter tuning.
Figure 5 presents the confusion matrix obtained by applying the trained algorithm to the test dataset. As expected, the model performs well on the fall detection task: falls, with their extensive motion and clear differentiation from the other classes, are easily identified. Occasional failures are observed when the algorithm discriminates between motion and steps, attributable to the similar inertial characteristics of these classes. Despite this, the model delivers strong overall performance.

5.2. A Deep Learning Model

We also explore the performance of deep learning using a six-layer convolutional neural network (CNN) adapted from the classic LeNet-5 architecture for image classification. The CNN architecture is shown in Figure 6. The input to the CNN is 100 points of 6-axis data collected by the IMU sensor, so the input feature size is 6-by-100. Data are first fed through four convolutional layers used for feature extraction; each layer comprises 2-D convolution filters, data pooling, and ReLU activations. Data then enter two fully connected layers and a softmax layer. Finally, the CNN outputs the 4-class classification result.
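A minimal Keras sketch of such a network is shown below. The filter counts, kernel sizes, and dense-layer width are illustrative assumptions; only the overall structure, four convolutional stages followed by two fully connected layers and a softmax output, follows the description above:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_cnn(num_classes=4):
    # Input: 6 IMU channels x 100 time steps, treated as a 1-channel "image".
    inputs = tf.keras.Input(shape=(6, 100, 1))
    x = inputs
    for filters in (8, 16, 32, 32):  # four feature-extraction stages
        x = layers.Conv2D(filters, (3, 3), padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(pool_size=(1, 2))(x)  # pool along the time axis only
    x = layers.Flatten()(x)
    x = layers.Dense(64, activation="relu")(x)        # two fully connected layers
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = build_cnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, validation_split=0.2, epochs=10)
```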
A CNN was trained using the aforementioned walker dataset to perform the classification task. 80% of samples were randomly selected for training and the remaining 20% were used for validation. The model was trained for 10 epochs. Results show that the CNN reached an overall classification accuracy of 97.4%, which is lower than that of the Random Forest model presented in Section 5.1. Although many high-performance deep CNN models could be trained to reach superior accuracy on the same dataset, the microcontroller of the hardware prototype can only handle a simple CNN with a small number of layers and parameters due to its limited processing power and memory.

5.3. Software Implementation

From the study of the machine learning and deep learning classifiers, we chose the Random Forest model for embedded software implementation. To implement the trained machine learning model in a microcontroller environment, the next step is to convert the model into an executable C code file. This conversion was performed using the m2cgen Python library [51], which transforms a trained machine learning model into a format compatible with microcontrollers. By generating the C code file, the model can be uploaded to the microcontroller, enabling it to perform classification tasks directly on the device. The generated code file, totaling 1.2 MB in size (approximately 25,500 lines of code), is fully compatible with microcontrollers without raising concerns regarding available flash memory capacity.
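The conversion itself reduces to a few lines; a sketch assuming clf is the trained RandomForestClassifier from Section 5.1:

```python
import m2cgen as m2c

# Transpile the trained scikit-learn model into dependency-free C code that
# can be compiled into the microcontroller firmware.
c_code = m2c.export_to_c(clf)
with open("rf_model.c", "w") as f:
    f.write(c_code)
```

The emitted C file is self-contained, with no external dependencies, which is what makes it suitable for direct compilation into the firmware alongside the sensor-sampling code.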
The input for the trained model consists of 160 samples, each capturing data from six dimensions: accelerometer and gyroscope data in X, Y and Z. These samples are directly obtained from the built-in Inertial Measurement Unit (IMU) within the microcontroller board. The combination of the six dimensions results in a total of 960 features, which are used as input for the trained model.
Once the model is uploaded to the microcontroller, it can process the input data from the IMU and execute the machine learning C code to obtain an output. This output corresponds to the classification of the sampled movement as idle, motion, step, or fall, providing real-time fall detection capabilities directly within the microcontroller.
By converting the trained model into executable C code and uploading it to the microcontroller, the fall detection system can operate autonomously without relying on external computational resources. This integration enables efficient and prompt classification of movement data, enhancing the overall performance and responsiveness of the fall detection system.

6. Results and Discussion

This section presents the results of the evaluation of the SafeStride system and discusses its implications. The system was tested in a lab environment and proved successful in detecting falls and steps for walker users.
Continuous, unseen data was used to test the IMU-based detection method, aiming to replicate a real-time setup. With the machine learning model integrated into the SafeStride microcontroller, experiments could be run by transmitting features to the MCU via serial communication. To streamline data processing, a graphical interface was developed for loading and visualizing continuous samples and transmitting them to the controller, as seen in Figure 7. The implementation employed a sliding-window technique with a window size of 160 samples and 50% overlap. The data was received by the microcontroller, classification was performed, and the corresponding result was returned. Comparison of the recorded and transmitted data yielded highly favorable outcomes.
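The sliding-window streaming used in this test can be sketched as follows; the serial framing, feature ordering, and port name are illustrative assumptions:

```python
import numpy as np

def sliding_windows(stream, size=160, overlap=0.5):
    """Yield successive windows over a continuous (N, 6) stream."""
    hop = int(size * (1 - overlap))  # 80 samples for 50% overlap
    for start in range(0, len(stream) - size + 1, hop):
        yield stream[start : start + size]

# Example (pyserial; port name and framing assumed): flatten each window to
# the 960-feature layout expected by the on-device model, send it over the
# serial link, and read back the classification result.
# import serial
# port = serial.Serial("/dev/ttyUSB0", 115200)
# for w in sliding_windows(data):
#     port.write(w.T.flatten().astype(np.float32).tobytes())
#     label = port.readline()  # classification returned by the MCU
```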
In contrast to traditional machine learning testing methods, this experimental setup distinguishes itself by employing continuous data instead of pre-processed and curated static samples. Within this framework, various critical aspects were comprehensively evaluated, including accurate step detection, the mitigation of missed steps or indications of falls arising from random motion, and the immediate triggering of fall events when they occur. Real-time laboratory tests demonstrated the satisfactory performance of the trained algorithm, thereby confirming the robustness of both the model and the training dataset. Additionally, this experiment convincingly demonstrated the feasibility of implementing complex machine learning algorithms on an IoT device, particularly for tasks involving real-time and continuous data processing.
In addition, we take advantage of the existing microphone on the Arduino Nano board by implementing a keyword spotting function on SafeStride as an extra layer of safety in case a person falls. The keyword spotting fall detection technique is based on the Cyberon Speech Recognition Engine, which provides the ability to download a pre-trained machine learning model from its web platform for easy, fast, and effective implementation on microcontrollers such as the Arduino Nano 33 BLE Sense. This technique is used in speech recognition systems to detect predefined "keywords" in continuous speech. In the context of fall detection, it can be employed to detect verbal expressions commonly uttered during or immediately after a fall, such as exclamations of surprise, pain, or calls for help. Keyword spotting offers the potential advantage of capturing not only the fall event itself but also its urgency or severity. For instance, if the system detects phrases like "Help", "Help me" or "Help, I’ve fallen", it can provide additional evidence of a fall event and potentially trigger more urgent assistance.
In the initial phase of this project, an attempt was made to create a machine learning model using a publicly available dataset for keyword spotting; however, no dataset could be identified that addressed the specific contextual requirements of detecting words or phrases potentially uttered during a fall event. Creating such a dataset is very difficult due to the inherent variability in pitch, accent, speed, and other speech characteristics. To overcome these challenges, the built-in library of the Cyberon speech recognition engine was used. The engine-generated model was tested in a lab environment, where it accurately identified the specific keyword "help". Keyword spotting was included as a supplementary tool in case a rare fall event was not captured by the motion-based detection algorithm.

7. Conclusion and Future Work

In this research, an IMU-based walker dataset was developed for fall detection and activity classification with assistive walkers. Various machine learning algorithms were evaluated using the dataset, and the Random Forest classifier was among the best, achieving an overall accuracy of 98.51% with no missed falls. The proposed SafeStride system has therefore demonstrated accurate and reliable fall detection performance. Additionally, the integration of IoT capabilities enhances the system’s functionality by enabling communication with cloud resources, further expanding its capabilities and potential for improved fall detection and response.
For future work, one promising avenue is to explore the integration of other sensing modalities into the SafeStride system, such as cameras, radars, and pressure sensors, to create a more robust and comprehensive fall detection system. Moreover, user feedback and field testing should be pursued to validate the system in real-world scenarios, which could involve various types of falls and environmental conditions.
An important contribution of this work is the creation of the walker dataset shared on Kaggle [6]. This dataset serves as a valuable resource for training and evaluating machine learning and deep learning models for fall detection. The availability of this dataset to the public also encourages further research and development in the field, thereby driving progress in fall detection and enhancing care for the elderly and individuals with physical disabilities. Overall, the SafeStride system represents a significant step forward in the field of fall detection, harnessing the power of machine learning to provide a more reliable and responsive solution. The results obtained are very encouraging and demonstrate the potential for new advances in IoT for healthcare.

References

  1. “WHO Ageing and Health,” World Health Organization, 2023. [Online]. Available: https://www.who.int/news-room/fact-sheets/detail/ageing-and-health.
  2. X. Wang, J. Ellul, and G. Azzopardi, “Elderly fall detection systems: A literature survey,” Frontiers in Robotics and AI, vol. 7, 2020. [CrossRef]
  3. “CDC Newsroom,” Jan. 2016. [Online]. Available: https://www.cdc.gov/media/releases/2016/p0922-older-adult-falls.html.
  4. T. P. Weil, “Patient falls in hospitals: An increasing problem,” Geriatric Nursing, vol. 36, no. 5, pp. 324–347, 2015. [CrossRef]
  5. A. Chelli and M. Pätzold, “A Machine Learning Approach for Fall Detection and Daily Living Activity Recognition,” IEEE Access, vol. 7, pp. 38670–38687, 2019.
  6. A. G. Gonzalez, “Walker Fall Detection Dataset,” Kaggle, 2023. [Online]. Available: https://www.kaggle.com/dsv/6493939.
  7. T. de Quadros, A. E. Lazzaretti, and F. K. Schneider, “A Movement Decomposition and Machine Learning-Based Fall Detection System Using Wrist Wearable Device,” IEEE Sensors Journal, vol. 18, no. 12, pp. 5082–5089, Jun. 2018. [CrossRef]
  8. S. Majumder, T. Mondal, and M. J. Deen, “Wearable Sensors for Remote Health Monitoring,” Sensors, vol. 17, no. 1, p. 130, Jan. 2017.
  9. P. Pierleoni and others, “A high reliability wearable device for elderly fall detection,” IEEE Sensors Journal, vol. 15, no. 8, pp. 4544–4553, 2015. [CrossRef]
  10. F. Wu, H. Zhao, Y. Zhao, and H. Zhong, “Development of a wearable-sensor-based Fall Detection System,” International Journal of Telemedicine and Applications, vol. 2015, pp. 1–11, 2015. [CrossRef]
  11. F. Attal and others, “Physical human activity recognition using wearable sensors,” Sensors, vol. 15, no. 12, pp. 31314–31338, 2015. [CrossRef]
  12. W. Sousa Lima, E. Souto, K. El-Khatib, R. Jalali, and J. Gama, “Human activity recognition using inertial sensors in a smartphone: An overview,” Sensors, vol. 19, no. 14, p. 3213, 2019. [CrossRef]
  13. Y. Cao, Y. Yang, and W. Liu, “E-FallD: A fall detection system using Android-based smartphone,” in 2012 9th International Conference on Fuzzy Systems and Knowledge Discovery, 2012, pp. 1509–1513, place: Chongqing, China.
  14. P. Tsinganos and A. Skodras, “A smartphone-based fall detection system for the elderly,” in Proceedings of the 10th International Symposium on Image and Signal Processing and Analysis, 2017, pp. 53–58, place: Ljubljana, Slovenia.
  15. M. M. Hassan, A. Gumaei, G. Aloi, G. Fortino, and M. Zhou, “A Smartphone-Enabled Fall Detection Framework for Elderly People in Connected Home Healthcare,” IEEE Network, vol. 33, no. 6, pp. 58–63, Nov. 2019. [CrossRef]
  16. M. Cheffena, “Fall Detection Using Smartphone Audio Features,” IEEE Journal of Biomedical and Health Informatics, vol. 20, no. 4, pp. 1073–1080, Jul. 2016. [CrossRef]
  17. K. Ozcan and S. Velipasalar, “Wearable Camera- and Accelerometer-Based Fall Detection on Portable Devices,” IEEE Embedded Systems Letters, vol. 8, no. 1, pp. 6–9, Mar. 2016. [CrossRef]
  18. Y. He, Y. Li, and S.-D. Bao, “Fall detection by built-in tri-accelerometer of smartphone,” in Proceedings of 2012 IEEE-EMBS International Conference on Biomedical and Health Informatics, 2012, pp. 184–187, place: Hong Kong, China.
  19. A. Z. Rakhman, L. E. Nugroho, Widyawan, and Kurnianingsih, “Fall detection system using accelerometer and gyroscope based on smartphone,” in 2014 The 1st International Conference on Information Technology, Computer, and Electrical Engineering, 2014, pp. 99–104, place: Semarang, Indonesia.
  20. N. M. Fung, J. Wong Sing Ann, Y. H. Tung, C. Seng Kheau, and A. Chekima, “Elderly Fall Detection and Location Tracking System Using Heterogeneous Wireless Networks,” in 2019 IEEE 9th Symposium on Computer Applications & Industrial Electronics (ISCAIE), 2019, pp. 44–49, place: Malaysia.
  21. P. Tsinganos and A. Skodras, “On the comparison of wearable sensor data fusion to a single sensor machine learning technique in fall detection,” Sensors, vol. 18, no. 2, p. 592, 2018. [CrossRef]
  22. E. E. Stone and M. Skubic, “Fall Detection in Homes of Older Adults Using the Microsoft Kinect,” IEEE Journal of Biomedical and Health Informatics, vol. 19, no. 1, pp. 290–301, Jan. 2015. [CrossRef]
  23. G. Diraco, A. Leone, and P. Siciliano, “An active vision system for fall detection and posture recognition in elderly healthcare,” in 2010 Design, Automation & Test in Europe Conference & Exhibition (DATE 2010), 2010, pp. 1536–1541, place: Dresden, Germany.
  24. W.-H. Chen and H.-P. Ma, “A fall detection system based on infrared array sensors with tracking capability for the elderly at home,” in 2015 17th International Conference on E-health Networking, Application & Services (HealthCom), 2015, pp. 428–434, place: Boston, MA, USA.
  25. Q. Wu, Y. D. Zhang, W. Tao, and M. G. Amin, “Radar-based fall detection based on Doppler time–frequency signatures for assisted living,” IET Radar, Sonar & Navigation, vol. 9, no. 2, pp. 164–172, 2015.
  26. Y.-Z. Hsieh and Y.-L. Jeng, “Development of Home Intelligent Fall Detection IoT System Based on Feedback Optical Flow Convolutional Neural Network,” IEEE Access, vol. 6, pp. 6048–6057, 2018. [CrossRef]
  27. Y.-S. Lee and H. Lee, “Multiple object tracking for fall detection in real-time surveillance system,” in 2009 11th International Conference on Advanced Communication Technology, 2009, pp. 2308–2312, place: Gangwon, Korea (South).
  28. Z. Huang, Y. Liu, Y. Fang, and B. K. P. Horn, “Video-based Fall Detection for Seniors with Human Pose Estimation,” in 2018 4th International Conference on Universal Village (UV), 2018, pp. 1–4, place: Boston, MA, USA.
  29. Y. Li, K. C. Ho, and M. Popescu, “A Microphone Array System for Automatic Fall Detection,” IEEE Transactions on Biomedical Engineering, vol. 59, no. 5, pp. 1291–1301, May 2012. [CrossRef]
  30. J. He, S. Bai, and X. Wang, “An unobtrusive fall detection and alerting system based on Kalman filter and Bayes network classifier,” Sensors, vol. 17, no. 6, pp. 1393–1405, 2017. [CrossRef]
  31. H. Li, A. Shrestha, F. Fioranelli, J. Le Kernec, H. Heidari, M. Pepa, E. Cippitelli, E. Gambi, and S. Spinsante, “Multisensor data fusion for human activities classification and fall detection,” in 2017 IEEE SENSORS, Oct. 2017, pp. 1–3. [CrossRef]
  32. L. Palmerini et al., “A wavelet-based approach to fall detection,” Sensors, vol. 15, no. 5, pp. 11575–11586, 2015. [CrossRef]
  33. E. Casilari, J.-A. Santoyo-Ramón, and J.-M. Cano-García, “Analysis of Public Datasets for Wearable Fall Detection Systems,” Sensors, vol. 17, no. 7, p. 1513, Jul. 2017. [Online]. Available: https://www.mdpi.com/1424-8220/17/7/1513.
  34. O. Ojetola, E. Gaura, and J. Brusey, “Data set for fall events and daily activities from inertial sensors,” in Proceedings of the 6th ACM Multimedia Systems Conference (MMSys ’15), New York, NY, USA: Association for Computing Machinery, Mar. 2015, pp. 243–248. [CrossRef]
  35. F. Gu, K. Khoshelham, J. Shang, F. Yu, and Z. Wei, “Robust and Accurate Smartphone-Based Step Counting for Indoor Localization,” IEEE Sensors Journal, vol. 17, no. 11, pp. 3453–3460, Jun. 2017.
  36. F. J. Ordóñez and D. Roggen, “Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition,” Sensors, vol. 16, no. 1, p. 115, Jan. 2016. [Online]. Available: https://www.mdpi.com/1424-8220/16/1/115.
  37. J. A. B. Link, P. Smith, N. Viol, and K. Wehrle, “FootPath: Accurate map-based indoor navigation using smartphones,” in 2011 International Conference on Indoor Positioning and Indoor Navigation, Sep. 2011, pp. 1–8.
  38. H. Zhang, W. Yuan, Q. Shen, T. Li, and H. Chang, “A Handheld Inertial Pedestrian Navigation System With Accurate Step Modes and Device Poses Recognition,” IEEE Sensors Journal, vol. 15, no. 3, pp. 1421–1429, Mar. 2015.
  39. M.-S. Pan and H.-W. Lin, “A Step Counting Algorithm for Smartphone Users: Design and Implementation,” IEEE Sensors Journal, vol. 15, no. 4, pp. 2296–2305, Apr. 2015.
  40. S. Ha and S. Choi, “Convolutional neural networks for human activity recognition using multiple accelerometer and gyroscope sensors,” in 2016 International Joint Conference on Neural Networks (IJCNN), Jul. 2016, pp. 381–388.
  41. M. Zeng, L. T. Nguyen, B. Yu, O. J. Mengshoel, J. Zhu, P. Wu, and J. Zhang, “Convolutional Neural Networks for human activity recognition using mobile sensors,” in 6th International Conference on Mobile Computing, Applications and Services, Nov. 2014, pp. 197–205.
  42. A. D. C. Chan and J. R. Green, “Smart Rollator Prototype,” in 2008 IEEE International Workshop on Medical Measurements and Applications, 2008, pp. 97–100, place: Ottawa, ON, Canada.
  43. D.-M. Ding, Y.-G. Wang, W. Zhang, and Q. Chen, “Fall Detection System on Smart Walker Based on Multisensor Data Fusion and SPRT Method,” IEEE Access, vol. 10, pp. 80932–80948, 2022. [CrossRef]
  44. P. Di et al., “Fall Detection and Prevention Control Using Walking-Aid Cane Robot,” IEEE/ASME Transactions on Mechatronics, vol. 21, no. 2, pp. 625–637, Apr. 2016. [CrossRef]
  45. P. Warden and D. Situnayake, TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers. O’Reilly Media, 2019.
  46. Arduino, “Arduino Nano 33 BLE Sense,” 2023. [Online]. Available: https://store-usa.arduino.cc/products/arduino-nano-33-ble-sense.
  47. D. M. Waqar, T. S. Gunawan, M. A. Morshidi, and M. Kartiwi, “Design of a Speech Anger Recognition System on Arduino Nano 33 BLE Sense,” in 2021 IEEE 7th International Conference on Smart Instrumentation, Measurement and Applications (ICSIMA), 2021, pp. 64–69, place: Bandung, Indonesia.
  48. V. A. Wardhany, Subono, A. Hidayat, S. W. Utami, and D. S. Bastiana, “Arduino Nano 33 BLE Sense Performance for Cough Detection by Using NN Classifier,” in 2022 6th International Conference on Information Technology, Information Systems and Electrical Engineering (ICITISEE), 2022, pp. 455–458, place: Yogyakarta, Indonesia.
  49. K. Trivedi and H. Shroff, “Identification of Deadliest Mosquitoes Using Wing Beats Sound Classification on Tiny Embedded System Using Machine Learning and Edge Impulse Platform,” in 2021 ITU Kaleidoscope: Connecting Physical and Virtual Worlds (ITU K), 2021, pp. 1–6, place: Geneva, Switzerland.
  50. Seeed Studio, “Seeed Studio XIAO nRF52840 Sense - TinyML/TensorFlow Lite - IMU / Microphone - Bluetooth5,” 2023. [Online]. Available: https://www.seeedstudio.com/Seeed-XIAO-BLE-Sense-nRF52840-p-5253.html.
  51. BayesWitnesses, “m2cgen: Transform ML models into native code (Java, C, Python, Go, JavaScript, Visual Basic, C#, R, PowerShell, PHP, Dart, Haskell, Ruby, F#, Rust) with zero dependencies,” 2023. [Online]. Available: https://github.com/BayesWitnesses/m2cgen.
Figure 1. Visual depiction of a SafeStride prototype integrated into a walker device.
Figure 2. Graphical representation of the elements incorporated into this design. The Main Core is an ESP8266 board and the Data Acquisition Core is based on the nRF52840.
Figure 3. The Java interface is a user-friendly graphical tool designed for identifying and separating step groups from turning movements.
Figure 4. RMS motion values and Hidden Markov Model observed states, representing detected steps.
Figure 5. Confusion matrix with the performance of the RandomForestClassifier on the test dataset.
Figure 6. Architecture of a simple CNN for motion classification.
Figure 7. A graphical interface was developed to test the SafeStride prototype with continuous data. Recorded data was loaded and sent to the prototype via serial communication, utilizing a sliding window (indicated by red vertical lines). The prototype performed machine learning classification using its trained model and promptly returned the result to the application (see the table on the right side).
Table 1. Performance Metrics of Multiple Machine Learning Classifiers for the IMU Dataset
Classifier Model Accuracy Precision Recall F1-Score
RandomForest 98.51% 98.56% 98.51% 98.51%
HistGradientBoosting 97.86% 97.93% 97.86% 97.85%
ExtraTreesClassifier 97.86% 97.94% 97.86% 97.84%
BaggingClassifier 96.74% 96.87% 96.74% 96.73%
DecisionTreeClassifier 94.04% 94.32% 94.04% 94.05%
MLPClassifier 92.08% 92.79% 92.08% 92.05%
ExtraTreeClassifier 90.13% 90.63% 90.13% 90.11%
BernoulliNB 89.94% 91.17% 89.94% 89.37%
GaussianNB 78.49% 81.08% 78.49% 74.21%
AdaBoostClassifier 78.12% 80.10% 78.12% 73.74%