
Machine Learning-Based Human Life Detection behind Walls Exploiting a UWB Radar Sensor

Submitted: 05 March 2024; Posted: 06 March 2024

Abstract
The existence of low-cost and accurate tools that can equip First Responders (FRs) to detect trapped victims and provide insights about their health condition in the aftermath of a disaster is imperative. To address the problem of detecting trapped victims behind walls or debris, we have developed a tool that exploits an Ultra-Wide Band (UWB) radar sensor for data collection and Machine Learning (ML) algorithms for data analysis. To evaluate the efficacy of our approach, we collected data from nine humans, in both standing and lying down positions. Next, we applied various ML algorithms to the collected dataset for two discrete sub-tasks that are of interest from an FR’s perspective. The first task is the detection of the victim’s presence, where the algorithms attained more than 95% accuracy and F1-Score. The second task is the estimation of the distance between the radar sensor and the victim, where the tool showed an average error of less than 40 cm.
Subject: Engineering - Electrical and Electronic Engineering

1. Introduction

First Responders (FRs) are the vanguard in cases of natural or man-made disasters, as they save lives, prevent the spread of panic, and generally manage aftermath crises. To be as effective as possible, FRs must be equipped with a broad gamut of tools that assist their senses during field operations. The most desirable characteristics of these tools are: a) low cost, so that all members of an FR unit can be equipped and the detection time of trapped victims is thereby minimized; b) ease of operation, light weight, and fast deployment, so that any FR team member can carry the tool around easily, with minimum training and regardless of their body shape and gender; and c) high accuracy and reliability, so that FRs avoid searching areas where no victims are present and thus do not lose valuable time.
In prior art, various technologies and methods based on radar sensors have been proposed [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17] to detect human presence behind walls and large obstacles. These solutions mainly aim to sense the victim’s chest movement due to breathing and identify it as a sign of life. In these works, various methods were employed, e.g., frequency-domain analysis, arctangent demodulation, the ensemble empirical mode decomposition-based frequency accumulation method, the discrete wavelet transform, convolutional neural networks, and variational mode decomposition, and various tasks were addressed, mainly: i) the estimation of the breathing and heart rate, obtaining less than 10% error compared with ground truth measurements [1,2,3,4,5,6,7,8,9,10,12,13,14,17]; ii) the estimation of the distance between the victim and the radar, with an error of up to several cm [2,4,5,6,7,8,10,13,14,15,16]; iii) the estimation of the number of humans within the scanned area, attaining high accuracy in the particular cases examined [11,14]; and iv) the designation of human presence or absence [15,16].
In addition to the various solutions proposed in the literature, a number of commercially available tools are already on the market [18,19,20,21,22,23,24,25]. A comparative analysis has been performed in [26], showing that their accuracy can be severely limited by a number of factors, such as sensitivity to the motion of other humans around the radar, the wall’s thickness and material, the victim’s position, and the extent of his/her chest movement. Moreover, most of them are bulky and of high cost, limiting the possibility of equipping more than one search and rescue team member.
In this work, we build on the available knowledge and present a tool that has been designed and implemented during the RESCUER project [27] to meet the needs of FRs during search and rescue operations. This tool can provide valuable insights regarding the presence of a victim and his/her distance from the radar. To the best of our knowledge, this is the only work that employs ML algorithms to estimate both human presence and the distance between the victim and the radar. In addition, the proposed method is validated for both standing and lying down humans. Further, the proposed approach is of much lower cost compared to commercial tools; e.g., our tool’s cost upon reaching the market is estimated at circa 600 EUR, which is one to two orders of magnitude less than that of the available commercial tools [26]. Furthermore, it is small (~10 × 10 × 10 cm³ in volume) and very lightweight (~600 g). Combined, these characteristics are a great advantage, as the tool can be purchased en masse (low cost) and be readily deployed by more than one member of an FR unit (lightweight, small, and easy to carry around). Individual FRs will thus be able to simultaneously scan for survivors over a large area and pinpoint the exact location of trapped victims within the scene of operations. For these reasons, this tool can dramatically reduce the detection time, especially when a number of rooms/places must be searched for trapped victims in a very short time, e.g., in the case of a fire.
The developed tool employs a commercially available radar, the X4M200 Ultra-Wide Band (UWB) radar sensor by Novelda [28], and Machine Learning (ML) methods to process the received signal and estimate the target quantities. To validate our approach, two datasets with nine humans were collected, the first in a standing and the second in a lying down position. The ML algorithms were applied to both datasets, demonstrating high accuracy for both target quantities, i.e., human presence/absence and the distance between the human and the radar. The results validate the proposed solution’s efficacy and confirm that implementing a radar-based tool that is low-cost, lightweight, low-volume, and easy to use, and that can detect trapped victims behind walls, is feasible.
The main contributions of this work compared with the previous works in [15,16,26] are the following: a) the proposed solution employs ML methods for victim and distance detection, extending our previous approach which incorporated closed-form expressions, leading to significantly higher accuracy, especially in the case of lying down victims; b) the collected dataset incorporates ground truth measurements of the victims’ breathing rate; and c) both datasets are published in an open access repository [29], inviting other researchers to explore, study, and even improve the accuracy of the estimations and, as a consequence, aid the important field of detecting victims behind large obstacles using radar-based solutions.

2. Solution for Signs of Life Detection

2.1. Technical Requirements and Specifications

The FRs within the RESCUER project [27] have defined a number of technical requirements that are used to derive the specifications of the candidate tool and to select the most suitable sub-systems for this purpose, e.g., the radar sensor, the Single Board Computer (SBC), etc. The tool (i) should have a bootup time of less than 60 s (from turning on to the first sensor readout), (ii) should be turned on and off by the user, and (iii) could have a battery life of at least 45 minutes. The short bootup time is needed because time-saving in a search-and-rescue operation is critical. Manual on/off operation is important for the FRs as it provides data on demand, minimizing both non-useful information and the energy consumption of the tool, which can be critical during a search and rescue operation. Finally, a battery life of at least 45 minutes is considered sufficient for a search and rescue operation, given that the batteries last more than 45 minutes and can be conveniently swapped with backup ones.
To satisfy these requirements, we exploited a radar sensor, the corresponding firmware and software needed to communicate with it and save the data in the proper format on the computing machine, e.g., an NVIDIA Jetson Nano [30], as well as the algorithms for detecting the victim’s presence and his/her vital signs. Moreover, regarding the tool’s capabilities, it needs to operate efficiently even when large obstacles or walls are present between the radar and the victim. These walls and obstacles are expected to be built of materials that allow signal penetration, e.g., brick and wood (which covers the majority of domestic structures). Further, the effective range of observation can be up to 9 m with an unobstructed view, using a resolution of ~5 cm, which is sufficient to detect the human’s chest movement and obtain the breathing pattern. It is worth mentioning that the attainable range of observation strongly depends on the obstacle’s thickness, material, and density and is expected to be smaller when an obstacle is present compared to the free-space case. Another critical capability is the collection time, i.e., the time the sensor needs to collect measurements in order to deliver reliable results; in the proposed solution, this time is set to 20 seconds.
In Section 2.2, we discuss the relevant tools proposed in the literature as well as the commercially available solutions, while in Section 2.3, we present the developed method and the two datasets collected for the purposes of the current study.

2.2. Related Work, Existing Tools and Capabilities

In our previous work [26], we reviewed the most relevant commercially available tools and literature-proposed methods for the detection of victims behind walls and large obstacles. In particular, [26] summarizes the operational characteristics, the algorithms employed, and the capabilities of a) UWB-based solutions, b) through-the-wall imaging (TWI) methods, and c) available products.
From [26], it can be derived that the main tasks that UWB radar-based solutions can address are five: the estimation of the breathing rate, the estimation of the heart rate, the estimation of the distance between the victim and the radar, the estimation of the number of humans within the scanned area, and the designation of human presence or absence. In these solutions, the most examined metric is the breathing rate, and for this purpose, the UWB radar sensor is the most favorable solution. To derive the target quantities, radars with an emission frequency of 0.4 GHz or within the range of 7-10 GHz were exploited. The former frequency allows for efficient detection of the breathing rate, heart rate, and distance from the victim, penetrating thick walls of up to 1 m, while the latter frequency range can operate efficiently behind walls of up to 40 cm thickness. For the extraction of the breathing and heart rate, the frequency domain was selected by the majority of the methods, as, using the Fast Fourier Transform (FFT), the breathing rate can be extracted simply by selecting the dominant frequency and multiplying it by 60. For longer distances or in cases with a low signal-to-noise ratio, e.g., due to higher signal attenuation from a denser wall, more sophisticated algorithms need to be employed, such as arctangent demodulation, the ensemble empirical mode decomposition-based frequency accumulation method, the discrete wavelet transform, convolutional neural networks, and variational mode decomposition. All these methods have attained very high accuracy in the tasks they addressed. However, they have been tested in a limited number of scenarios, e.g., behind specific walls and with a limited number of humans. For this reason, there is a strong need to validate those methods in a significantly larger number of scenarios and human subjects, including humans with different characteristics, such as breathing patterns, weight, height, etc. In addition, a very small number of openly accessible datasets is available, e.g., one in [16]; for this reason, a significant number of datasets needs to be published in open access, such as the one collected for the purposes of the current work [29], to give all researchers around the globe the opportunity to work on the very important task of human detection behind large obstacles, walls, and debris. Further, more sophisticated algorithms, such as Deep Learning methods, need to be considered in order to detect humans and their vital signs, especially in cases of a low signal-to-noise ratio (longer distances, denser walls, etc.).
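For illustration, the following Python sketch shows how such an FFT-based estimate can be obtained from the slow-time signal of a single range bin. The 17 frames-per-second rate matches the sensor used later in this work, while the search band, variable names, and function name are illustrative assumptions rather than the exact processing of any cited method.

```python
import numpy as np

def estimate_breathing_rate(slow_time_signal, fs=17.0):
    """Estimate breaths/min from the slow-time signal of a single range bin.

    slow_time_signal : 1-D array of radar amplitude over time
    fs               : radar frame rate in frames per second (assumed 17 Hz)
    """
    x = slow_time_signal - np.mean(slow_time_signal)        # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.7)                   # assumed breathing band, ~6-42 breaths/min
    dominant_hz = freqs[band][np.argmax(spectrum[band])]     # dominant frequency within the band
    return dominant_hz * 60.0                                # convert Hz to breaths per minute
```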
A broad gamut of products is also available on the market. The comparative analysis in [26] revealed that most of them are bulky and heavy (more than 3 kg and more than 30 cm in size), rendering them difficult to deploy. This is a limiting factor in search and rescue operations that need to be completed in a very short time interval, e.g., in the case of a fire. Further, the penetration capability of these tools is at least 30 cm for standard building materials, such as brick, concrete, wood, etc. Moreover, the majority of the radar sensors transmit signals in the Super High Frequency (SHF) spectrum. A significant drawback is that some of these products are affected by motion within 15 meters of the sensor, e.g., due to the FRs, wind-blown grass, overhead trees, or debris, and this type of motion must be minimized to obtain reliable results. Further, the outputs of these solutions range from simply designating human presence to estimating his/her breathing rate.

2.3. Proposed Tool and Solution

Based on the conclusions derived in the previous sub-section, we developed a method that exploits the X4M200 UWB radar sensor [28] to collect data and the NVIDIA Jetson Nano [30] to store and locally process them, providing the victim detection and the metrics requested by the FR teams of the RESCUER project. In our previous works [15,16], we concluded that the optimal radar height for the detection of standing victims is 1 m, so that the radar directly faces the human’s chest, while in the case of lying down victims, the optimal height is 20 cm with the radar facing the wall. Moreover, in this study, the radar is placed between 20 and 50 cm from the wall, and the wall’s material is Ytong, a lightweight, precast, cellular concrete building material. The two cases of interest are pictorially described in Figure 1. A video demonstrating the tool and its capabilities can be found in [31].
The presented work extends [15,16], as it incorporates ML methods to increase the accuracy of the estimations, especially for the lying down position, where the accuracy in true positive (estimated human presence when a human was present within the area of observation) and true negative (estimated human absence when a human was absent from the area of observation) cases was 95% and 70%, respectively. In addition, the estimation of the distance between the human and the radar is erroneous when using the method of [16], as, in the case of lying down victims, the signal is very weak due to the higher attenuation, and more sophisticated methods, such as the ML methods considered here, need to be employed.
Two datasets were collected to validate the accuracy of the proposed approach. The first includes data recorded from standing humans positioned behind a wall of 30 cm thickness, and the second incorporates data gathered from lying down humans positioned behind a wall 20 cm thick, at a distance from the radar between 1 and 4 m. In both datasets, the BioHarness Zephyr 3 belt [32], which measures the breathing rate, was exploited in order to enrich the provided dataset with additional data, i.e., the ground truth measurements of the breathing rate in each data collection session, with the aim of being more useful to the research community in future endeavors. Each session lasted about 90 seconds, and the belt provided the breathing rate every second. To obtain reliable measurements, we averaged the breathing rate after the first 60 seconds to acquire a single value for all samples of the same data collection session. Next, the human was moved 0.5 m away from his/her current position to gather data for the next session. This procedure was followed until the human reached a 4 m distance from the radar, allowing the acquisition of data from a total of 7 sessions (from 1 to 4 m). The details of the dataset with the lying down persons are tabulated in Table 1, and the details of the dataset with the standing persons are tabulated in Table 2. In both cases, all samples include data of 20-s duration, using an overlap of 15 s between consecutive 20-s windows in order to increase the number of available samples. We can observe that, in both datasets, the breathing rates span a wide range, with a lower limit of approximately 6.8 breaths per minute and an upper limit of approximately 22 breaths per minute. Additionally, we collected a number of samples from the same places without any human presence to train our algorithms to detect both human presence and absence efficiently. In particular, the number of samples without human presence is 1569 for the dataset including standing persons, and 1058 for the dataset incorporating lying down persons.
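A minimal sketch of this window segmentation, assuming each session is stored as a 2-D array of shape (time frames × range bins) and using the 17 samples-per-second rate reported below; the function name and argument defaults are illustrative.

```python
import numpy as np

def segment_session(session, fs=17, window_s=20, overlap_s=15):
    """Split one recording session into overlapping windows.

    session : 2-D array of shape (n_time_frames, n_range_bins)
    Returns an array of shape (n_windows, window_s * fs, n_range_bins).
    """
    win = window_s * fs                  # 340 frames for a 20-s window at 17 frames/s
    hop = (window_s - overlap_s) * fs    # 85 frames, i.e., a 15-s overlap between windows
    windows = [session[start:start + win]
               for start in range(0, session.shape[0] - win + 1, hop)]
    return np.stack(windows)
```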
For human presence detection, we examined six ML algorithms of different computational complexity (Logistic Regression (LR), Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), k-Nearest Neighbors (kNN), and Convolutional Neural Network (CNN)) and an analytical method, named the “threshold method”, as in [16], in which a pre-set threshold is compared with the average standard deviation of each sample as follows: if the average standard deviation exceeds the pre-set threshold, human presence is designated, as a result of the torso movement due to breathing; conversely, when no victim is present, the average standard deviation is expected to be low, and human absence is designated. For distance estimation, four ML algorithms are examined, Support Vector Regression (SVR), DT, RF, and kNN, as well as a low-complexity, edge-deployable method, hereinafter named the “std method”, which designates presence at the distance with the highest standard deviation.
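The two low-complexity baselines can be summarized with the following sketch, under the assumption that each sample is a (time frames × range bins) matrix; the threshold value is left as a parameter and is not the one tuned in [16].

```python
import numpy as np

def presence_threshold_method(sample, threshold):
    """Threshold method as in [16]: declare human presence when the average (over
    range bins) of the per-bin standard deviation over time exceeds a pre-set
    threshold; sample has shape (n_time_frames, n_range_bins)."""
    std_per_bin = sample.std(axis=0)
    return std_per_bin.mean() > threshold

def distance_std_method(sample, range_axis):
    """'std method': estimate the victim's distance as the range bin with the
    highest standard deviation over time (strongest chest-movement signal)."""
    std_per_bin = sample.std(axis=0)
    return range_axis[np.argmax(std_per_bin)]

# For the detection range used here, the range axis can be built as, e.g.:
# range_axis = 0.5 + 0.05144 * np.arange(n_range_bins)   # 0.5-6 m, ~5 cm step
```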
To feed the ML methods (except the CNN), we examined the impact of ten time-dependent features, namely (a) maximum value, (b) minimum value, (c) average value, (d) median value, (e) standard deviation, (f) kurtosis, (g) skewness, (h) number of zero crossings (for the normalized signal), (i) 25th percentile, and (j) 75th percentile; only the standard deviation feature was selected, as the other nine did not provide any significant improvement in prediction accuracy. The standard deviation was computed for each distance, reducing the dimension of each input sample from 340 × 109 to 1 × 109. The value 109 denotes the number of distances in each sample, as the detection range is between 0.5 and 6 m with a distance step of about 0.05144 m, while 340 is the number of samples in time, for a sample rate of 17 samples per second.
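A corresponding sketch of the selected feature extraction, reducing each 340 × 109 sample to a 1 × 109 vector of per-distance standard deviations (array shapes taken from the text; the function name is illustrative):

```python
import numpy as np

def extract_std_features(samples):
    """samples: (n_samples, 340, 109) windows -> (n_samples, 109) feature matrix."""
    return samples.std(axis=1)   # standard deviation over the 340 time frames, per range bin
```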
Next, for the training of the CNN for both presence and distance detection, as well as for the training of the other five ML methods for presence detection, the radar sensor’s values at each distance were normalized by subtracting the mean value and dividing by the standard deviation:

Z_i = (S_i − μ) / σ

where S_i denotes the i-th sample of the radar sensor (e.g., at a specific distance), Z_i is its normalized representation, and μ and σ denote the mean and standard deviation, respectively. A custom-built CNN was developed to solve the victim detection problem. After a random hyperparameter search, the proposed CNN consists of the following layers (a minimal sketch of this architecture is given after the list):
layer 1: 16 convolutional filters with a size of (1,25). This is followed by a ReLU activation function, a (4,4) strided max-pooling operation and a dropout probability equal to 0.5.
layer 2: 24 convolutional filters with a size of (1,20). Similar to the first layer, this is followed by a ReLU activation function, a (4,4) strided max-pooling operation and a dropout probability equal to 0.5.
layer 3: 32 convolutional filters with a size of (3,7). The 2D convolution operation is followed by a ReLU activation function, a 2D global max-pooling operation and a dropout probability equal to 0.5.
layer 4: 2 fully connected hidden units, followed by a sigmoid activation function.
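The following Keras (Python) sketch reproduces the listed architecture. The input orientation (range bins × time frames × 1 channel), the 'valid' padding, and the loss and optimizer are assumptions made for illustration and are not stated in the text.

```python
from tensorflow.keras import layers, models

def build_presence_cnn(n_range_bins=109, n_frames=340):
    """Sketch of the described CNN for human presence detection."""
    model = models.Sequential([
        # layer 1: 16 filters of size (1, 25), ReLU, (4, 4) strided max-pooling, dropout 0.5
        layers.Conv2D(16, (1, 25), activation="relu",
                      input_shape=(n_range_bins, n_frames, 1)),
        layers.MaxPooling2D(pool_size=(4, 4)),
        layers.Dropout(0.5),
        # layer 2: 24 filters of size (1, 20), ReLU, (4, 4) strided max-pooling, dropout 0.5
        layers.Conv2D(24, (1, 20), activation="relu"),
        layers.MaxPooling2D(pool_size=(4, 4)),
        layers.Dropout(0.5),
        # layer 3: 32 filters of size (3, 7), ReLU, 2D global max-pooling, dropout 0.5
        layers.Conv2D(32, (3, 7), activation="relu"),
        layers.GlobalMaxPooling2D(),
        layers.Dropout(0.5),
        # layer 4: 2 fully connected units with a sigmoid activation
        layers.Dense(2, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```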
Finally, for the data analysis for (i) presence detection and (ii) distance detection, which includes the data processing and the training and evaluation of the ML methods, we used Python as the programming language; specifically, the NumPy library for matrix multiplications and data pre-processing, and the Pandas library to tabulate the radar input values and to load and store them in Comma Separated Values (CSV) formatted files.

4. Results

In this section, we present the results for the two tasks of interest set by the FRs. These tasks are (a) the detection of human presence or absence (a binary classification task), presented in sub-section 4.1, and (b) the estimation of the distance from the human (a regression task), presented in sub-section 4.2.

4.1. Results for Human Presence Detection

In the first set of results, we estimate human presence or absence, with a wall of 20 cm thickness for the dataset with lying down victims (presented in Table 1) and 30 cm for the dataset with standing victims (presented in Table 2). The employed ML algorithms were trained on the dataset that includes lying down victims following a Leave-One-Subject-Out (LOSO) strategy. Then, for additional validation, the trained algorithms were applied to the dataset with standing victims. In each case, ten runs were performed, and the average accuracy and F1-Score are shown in Figure 2 and Figure 3, respectively. The threshold method applied in [16] is also shown. As is evident, all six ML algorithms attain an accuracy of more than 98.4% in the lying down humans’ case and up to 95.9% in the standing humans’ case, validating the efficacy of the developed solution. Finally, the calculated F1-Score is more than 98.4% in the lying down case and up to 95.9% in the standing case, verifying that the developed method can detect both human absence and presence with very high accuracy.
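For reference, a minimal sketch of the LOSO evaluation using scikit-learn's LeaveOneGroupOut; the classifier shown (an SVM with an RBF kernel, one of the six examined algorithms) and the handling of the no-human samples are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, f1_score

def loso_presence_evaluation(X, y, subject_ids):
    """Leave-One-Subject-Out evaluation of a presence classifier.

    X           : (n_samples, n_range_bins) per-bin standard-deviation features
    y           : (n_samples,) labels, 1 = human present, 0 = absent
    subject_ids : (n_samples,) group label per sample; no-human samples also need
                  a group assignment (e.g., distributed across the subjects)
    """
    accs, f1s = [], []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subject_ids):
        clf = SVC(kernel="rbf")
        clf.fit(X[train_idx], y[train_idx])
        pred = clf.predict(X[test_idx])
        accs.append(accuracy_score(y[test_idx], pred))
        f1s.append(f1_score(y[test_idx], pred))
    return np.mean(accs), np.mean(f1s)
```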
To further probe these results, Figure 4 illustrates the confusion matrices for all seven methods. The first observation is that the CNN and SVM not only attained very high accuracy in both datasets, but their number of false negatives (predicting human absence while a human is present) is also zero or close to zero. This is critical in a search and rescue operation, as it allows the detection of the victim at an almost 100% rate. A second observation is that RF outperforms DT in both true positives and true negatives; this is attributed to the fact that RF exploits a higher number of trees than DT. Third, LR and kNN show almost similar performance in both datasets; however, their accuracy is lower than that of SVM and CNN. Finally, the threshold method, which is the least computationally complex, fails to provide reliable results, especially in the lying down case, due to the low received signal strength.

4.2. Results for Distance Detection

In the second set of results, four ML algorithms are employed to estimate the distance between the radar and the victim. CNN models, including the one employed in the previous task, did not provide satisfactory accuracy and are therefore omitted from this subsection. As in the case of presence detection, the ML algorithms were trained on the dataset with lying down victims following the LOSO strategy. Then, for additional validation, the trained algorithms were applied to the dataset with standing victims. In each case, ten runs were considered, and the average absolute mismatch is shown in Figure 5. The “std method” is also shown as a benchmark. As is evident, DT and RF attained a discrepancy of less than 0.3 m in the lying down humans’ case and less than 0.4 m in the standing humans’ case, validating the high accuracy of the developed solution in distance detection. The kNN and SVR showed the highest error in the lying down humans’ case, up to 0.53 m, which is considered acceptable for the extent of the examined area, ranging from 1 to 4 m. Finally, the “std method” failed to estimate the correct distance in the case of lying down victims; however, it managed to predict the distance with an average discrepancy of less than 0.3 m in the case of standing victims. This can be attributed to the fact that a) the signal of the chest movement due to breathing is stronger in a standing position than in a lying down position, and b) a standing human may slightly move back and forth by a few cm, contributing in this way to an increase of the standard deviation. Overall, the results verify that the developed method, when employing ML algorithms, can accurately predict the distance between the radar sensor and the victim both when the victim is standing and when he/she is lying down.
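A corresponding sketch for the distance task, reporting the average absolute mismatch under LOSO; the regressor and its settings are illustrative assumptions, not the tuned models of the study.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import LeaveOneGroupOut

def loso_distance_error(X, d, subject_ids):
    """Average absolute distance mismatch (in metres) under LOSO.

    X : (n_samples, n_range_bins) features; d : (n_samples,) true distances in metres.
    """
    errors = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, d, groups=subject_ids):
        reg = RandomForestRegressor(n_estimators=100, random_state=0)  # settings are illustrative
        reg.fit(X[train_idx], d[train_idx])
        errors.append(mean_absolute_error(d[test_idx], reg.predict(X[test_idx])))
    return sum(errors) / len(errors)
```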

5. Conclusions and Future Directions

In this paper, we have analyzed and evaluated the efficacy of a low-cost, lightweight, low-volume, and easy-to-use solution for victim detection behind large obstacles, such as walls and doors. This tool can provide results in real time, as it incorporates lightweight ML algorithms that can be deployed on an SBC. The results demonstrate that the developed tool detected the victim correctly in more than 95% of the cases in two datasets comprising humans in standing and lying down positions, while it also attained more than 95% accuracy in correctly classifying human absence. Moreover, the average error in distance detection was less than 40 cm in all cases, for both standing and lying down humans. These results are very promising, making the proposed solution a strong candidate to equip and aid the members of an FR team in detecting trapped victims behind walls or debris. Moreover, the accuracy of the proposed method can be further enhanced by exploiting signal processing techniques that improve the signal’s quality, e.g., by filtering out unwanted frequencies, while our proposed approach can be further evaluated in real-life operations and with different types of large obstacles, e.g., walls made of different materials and rooms of various sizes. Finally, another task of interest for future work is breathing rate estimation, since this information may offer FRs the opportunity to remotely assess the health condition of trapped victims.
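As an example of the suggested signal-quality enhancement, a band-pass filter around the breathing band could be applied to the slow-time signal before further processing; the cut-off frequencies and filter order below are illustrative assumptions.

```python
from scipy.signal import butter, sosfiltfilt

def bandpass_breathing(slow_time_signal, fs=17.0, low=0.1, high=0.7, order=4):
    """Zero-phase Butterworth band-pass around an assumed breathing band of 0.1-0.7 Hz."""
    sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, slow_time_signal)
```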

Author Contributions

Conceptualization, methodology approach development, editing, writing, and reviewing: D.U., P.K., C.P., S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work has received funding from the European Union’s Horizon 2020 Research & Innovation Programme under RESCUER project, Grant Agreement No. 101021836.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the subjects to publish this paper.

Data Availability Statement

The dataset can be found at: https://zenodo.org/records/10779256.

Acknowledgments

The authors would like to thank V. Doulgerakis for his aid in the box creation as well as the smooth placement of the Jetson Nano and the radar sensor inside, and C. Chatzigeorgiou for his participation in the interfacing of the radar sensor with the Jetson Nano. This work has received funding from the European Union’s Horizon 2020 Research & Innovation Programme under RESCUER project, Grant Agreement No. 101021836.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. J. Li, L. Liu, Z. Zeng and F. Liu, “Advanced Signal Processing for Vital Sign Extraction With Applications in UWB Radar Detection of Trapped Victims in Complex Environments,” in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 3, pp. 783-791, March 2014. [CrossRef]
  2. A.N. Gaikwad and K. S. Dongre, “Extraction of life sign of human being hidden behind the wall,” 2016 11th International Conference on Industrial and Information Systems (ICIIS), Roorkee, India, 2016, pp. 925-929. [CrossRef]
  3. J. Yan, H. Hong, H. Zhao, Y. Li, C. Gu, and X. Zhu, “Through-Wall Multiple Targets Vital Signs Tracking Based on VMD Algorithm,” Sensors, vol. 16, no. 8, p. 1293, Aug. 2016. [CrossRef]
  4. X. Liang, H. Zhang, G. Fang, S. Ye and T. A. Gulliver, “An Improved Algorithm for Through-Wall Target Detection Using Ultra-Wideband Impulse Radar,” in IEEE Access, vol. 5, pp. 22101-22118, 2017. [CrossRef]
  5. Liang, Xiaolin, et al. “Improved denoising method for through-wall vital sign detection using UWB impulse radar”, in Digital Signal Processing, vol. 74, pp. 72-93, 2018. [CrossRef]
  6. Liang, X., Lv, T., Zhang, H. et al. “Through-wall human being detection using UWB impulse radar”, in J. Wireless Com. Network, 2018. [CrossRef]
  7. Liang, Xiaolin, et al. “Ultra-wide band impulse radar for life detection using wavelet packet decomposition”, in Physical Communication, vol. 29, pp. 31-47, 2018. [CrossRef]
  8. Liang, X., Deng, J., Zhang, H. et al., “Ultra-Wideband Impulse Radar Through-Wall Detection of Vital Signs”, in Sci Rep 8, 13367 (2018). [CrossRef]
  9. A. Sarkar and D. Ghosh, “Through-Wall Heartbeat Frequency Detection Using Ultra-Wideband Impulse Radar,” 2019 International Conference on Range Technology (ICORT), Balasore, India, 2019, pp. 1-5. [CrossRef]
  10. Liang, Xiaolin, and Hao Zhang, “Remotely detectable signs of life based on impulse UWB radar”, in Multimedia Tools and Applications, vol. 78, pp. 10583-10599, 2019. [CrossRef]
  11. C. Shi, Z. Zheng, J. Pan, Z.-K. Ni, S. Ye, and G. Fang, “Multiple Stationary Human Targets Detection in Through-Wall UWB Radar Based on Convolutional Neural Network,” Applied Sciences, vol. 12, no. 9, p. 4720, May 2022. [CrossRef]
  12. W. Xian, Q. Qi, S. Liu, T. Ma, H. Cheng and J. Chai, “Improved Denoising Method for UWB Vital Signs Detection and Extraction,” 2022 7th International Conference on Signal and Image Processing (ICSIP), Suzhou, China, 2022, pp. 39-43. [CrossRef]
  13. A.A. Pramudita et al., “Radar System for Detecting Respiration Vital Sign of Live Victim Behind the Wall,” in IEEE Sensors Journal, vol. 22, no. 15, pp. 14670-14685, Aug. 2022. [CrossRef]
  14. Sarkar, Amit, and Debalina Ghosh. “Accurate sensing of multiple humans buried under rubble using IR-UWB SISO radar during search and rescue.” Sensors and Actuators A: Physical 348 (2022): 113975. [CrossRef]
  15. D. Uzunidis, P. Kasnesis, E. Margaritis, M. Feidakis, C. Z. Patrikakis and S. A. Mitilineos, “Real-Time Human Detection Behind Obstacles Based on a low-cost UWB Radar Sensor,” 2022 IEEE 8th World Forum on Internet of Things (WF-IoT), Yokohama, Japan, 2022, pp. 1-6. [CrossRef]
  16. D. Uzunidis, E. Margaritis, C. Chatzigeorgiou, C. Z. Patrikakis and S. A. Mitilineos, “A Dataset for Aftermath Victim Detection Behind Walls or Obstacles Using an UWB Radar Sensor,” 2023 12th International Conference on Modern Circuits and Systems Technologies (MOCAST), Athens, Greece, 2023, pp. 1-5. [CrossRef]
  17. Q. H. Ramadhamy, A. A. Pramudita and F. Y. Suratman, “Clutter Reduction in Detecting Trapped Human Respiration Under Rubble for FMCW Radar System,” 2023 International Seminar on Intelligent Technology and Its Applications (ISITIA), Surabaya, Indonesia, 2023, pp. 716-721. [CrossRef]
  18. Geophysical, LifeLocator TRx, available at https://www.geophysical.com/products/lifelocator-trx. (accessed on 11/8/2023).
  19. Geozondas portable through-the-wall radar, available at https://geozondas.com/radar_moving_detection.html (accessed on 11/8/2023).
  20. NQ-Defense see through wall radar system, available at https://www.nqdefense.com/products/see-through-wall-radar-system/nd-sv003-see-through-wall-radar-system/ (accessed on 11/8/2023).
  21. Retia A.S. radar system model ReTwis 5, available at https://retwis.eu/ (accessed on 11/8/2023).
  22. YSR-120 Handheld, Through Wall Radar available at https://www.bjltsj.com/en/product/1004.html (accessed on 11/8/2023).
  23. Xaver™ 400 available at https://camero-tech.com/xaver-products/xaver-400/ (accessed on 11/8/2023).
  24. CPR4+ Radar available at https://cinside.se (accessed on 11/8/2023).
  25. WTPL Device https://pdf.indiamart.com/impdf/12350623512/MY-714958/wtpl-see-through-wall-radar-human-detection-device.pdf (accessed on 11/8/2023).
  26. D. Uzunidis, S. A. Mitilineos, C. Ponti, G. Schettini and C. Z. Patrikakis, “Detection of trapped victims behind large obstacles using radar sensors: a review on available technologies and candidate solutions,” 2023 IEEE Conference on Antenna Measurements and Applications (CAMA), Genoa, Italy, 2023, pp. 1025-1030.
  27. “RESCUER Project” https://rescuerproject.eu/ (accessed on 19/2/2023).
  28. Novelda X4M200 UWB radar sensor, https://novelda.com/ (accessed on 19/2/2023).
  29. Dataset collected for the current work, https://zenodo.org/records/10779256.
  30. NVIDIA Jetson Nano Developer Kit, https://developer.nvidia.com/embedded/jetson-nano-developer-kit (accessed on 19/2/2023).
  31. Demonstration video of the developed tool, https://www.youtube.com/watch?v=2LiWm33e6Z0.
  32. Zephyr BioHarness 3 User Manual, https://www.zephyranywhere.com/media/download/bioharness3-user-manual.pdf (accessed on 19/2/2023).
Figure 1. Schematic illustration of (a) the studied cases considering standing and lying down victims and (b) the developed tool [30].
Figure 2. Classification accuracy per method of victim detection behind walls of 20 cm (lying down humans) and 30 cm (standing humans).
Figure 3. F1-Score per method of victim detection behind walls of 20 cm (lying down humans) and 30 cm (standing humans).
Figure 4. Confusion matrices for human presence/absence estimation generated by training six ML methods and an analytical one in lying down victims and applying them to (a) lying down victims and (b) standing victims.
Figure 5. Relative mismatch for different distances estimated using five methods.
Table 1. Details of the created dataset, including breathing rate estimation and lying down victims.

Person number | Number of samples | Average Breathing Rate (breaths/min)
1             | 143               | 13.89
2             | 147               | 7.98
3             | 122               | 7.59
4             | 123               | 9.60
5             | 135               | 13.30
6             | 146               | 15.66
7             | 138               | 19.79
Total         | 954               |
Table 2. Details of the created dataset, including breathing rate estimation and standing victims.

Person number | Number of samples | Average Breathing Rate (breaths/min)
1             | 98                | 21.94
2             | 98                | 7.57
3             | 212               | 13.09
4             | 165               | 18.52
5             | 161               | 11.43
6             | 146               | 17.41
7             | 148               | 13.66
8             | 159               | 14.37
9             | 154               | 15.69
Total         | 1341              |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.