The use of ultrasonic technology in sensor implementation for identifying finger motions in prosthetic applications has been researched over the last ten years. A ground-breaking study by Zheng et al. investigated whether ultrasound imaging of the forearm might be used to control a powered prosthesis, and the term ‘sonomyography’ (SMG) was coined by the group [
34]. Ultrasound signals have recently garnered the interest of researchers in the area of HMIs because they can collect information from both superficial and deep muscles and so provide more comprehensive information than other techniques [
35]. Due to the great spatiotemporal resolution and specificity of ultrasound measurements of muscle deformation, researchers have been able to infer fine volitional motor activities such as finger motions and dexterous control of robotic hands [
36,
37]. To perform well, a prosthesis that responds to the user's physiological signals must respond quickly. sEMG, EEG, and other intuitive interfaces can detect neuromuscular signals before motion begins, and so are able to anticipate the motion itself [
38,
39,
40]. However, ultrasound imaging can detect skeletal muscle kinematic and kinetic characteristics [
41], which reflect the ongoing formation of cross-bridges during motor unit recruitment, prior to the generation of muscular force [
39,
42], and these changes occur during sarcomere shortening, when muscle force exceeds segment inertial forces, and before the beginning of joint motion [
39]. Since these kinetic and kinematic changes precede joint motion, ultrasound-based interfaces offer the potential for faster-responding prosthetic hands.
3.1. Ultrasound modes used in SMG
Real-time dynamic images of muscle activity can be provided by US imaging systems. There are five different ultrasound modes, each generating different information, but only some of them are applicable to the control of artificial robotic hands. The most popular ultrasound modes utilized in prosthesis control are A-mode, B-mode, and M-mode.
1) A-mode SMG: One of the most basic types of US is A-mode, which offers one-dimensional data in the form of a graph in which the y axis indicates echo amplitude and the x axis represents time, similar to the way EMG signals indicate muscle activity.
In 2008, Guo et al. [
43] introduced a novel HMI method called one-dimensional sonomyography (1D SMG) as a viable alternative to EMG for assessing muscle activity and controlling prostheses. In this study, nine healthy volunteers were asked to perform different types of hand and wrist movements, during which various data were collected, including joint angles, EMG signals of the forearm muscles, and muscle activity captured by A-mode ultrasound. The results showed that the 1D SMG technique can be reliable and has the potential to be used for controlling one-degree-of-freedom bionic hands.
A study by Guo et al. [
44] was carried out to assess and compare the performance of one-dimensional A-mode SMG and sEMG signals while following guided patterns of wrist extension, and to examine the possibility of using 1D SMG to control bionic hands. Sixteen healthy right-handed participants performed a variety of wrist motions, following several guided waveforms at different movement speeds. During the wrist motions, a 1D SMG transducer and a sEMG electrode were attached to each participant's forearm, allowing the activity of the forearm muscle groups to be recorded simultaneously. After the SMG and sEMG signals were collected and normalized, root mean square (RMS) tracking errors were computed for the extensor carpi radialis. A paired t test was used to compare the abilities of SMG and sEMG to follow the guiding waveform patterns, and one-way analysis of variance (ANOVA) was used to determine differences in SMG performance at different movement speeds. For sinusoidal, square, and triangular guiding waveforms, the mean RMS tracking errors of SMG were between 13.6% and 21.5%, whereas those of sEMG were between 24% and 30.7%. The paired t test revealed that the RMS tracking errors of SMG were significantly lower than those of sEMG.
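As an illustration of this kind of evaluation, the percentage RMS error between a tracked trajectory and its guiding waveform can be computed as in the short Python sketch below; the peak-to-peak normalization and the synthetic signals are illustrative assumptions, not the authors' exact definitions.

```python
import numpy as np

def rms_tracking_error(tracked, guide):
    """Percentage RMS error between a tracked signal and its guiding
    waveform, normalized by the guide's peak-to-peak range (assumed)."""
    tracked = np.asarray(tracked, dtype=float)
    guide = np.asarray(guide, dtype=float)
    span = guide.max() - guide.min()  # normalization range
    return 100.0 * np.sqrt(np.mean(((tracked - guide) / span) ** 2))

# Example: a noisy attempt at following a sinusoidal guiding waveform
t = np.linspace(0, 10, 1000)
guide = np.sin(2 * np.pi * 0.25 * t)
tracked = guide + 0.1 * np.random.randn(t.size)
print(f"RMS tracking error: {rms_tracking_error(tracked, guide):.1f}%")
```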
When Guo and her colleagues [
45] successfully tested A-mode US on healthy participants, they used the same procedure on an amputee (
Figure 1A). Participants in the study were instructed to extend their phantom wrist in order to control the prosthetic hand. The study found a strong correlation between muscle thickness and wrist extension angle, with a correlation coefficient of 0.94. Furthermore, the relationship between wrist angle and muscle thickness was studied, and the mean ratio of muscle deformation to wrist angle was calculated to be around 0.13%.
As a continuous part of their research, Chen et al. [
46] investigated the feasibility of controlling a one-degree-of-freedom prosthetic hand using muscle thickness variations recorded by a one-dimensional SMG system. Nine right-handed healthy individuals were instructed to operate a prosthetic hand with their wrist motions, matching visual feedback to a target track with varying patterns and movement speeds. The opening position of the prosthesis was controlled by SMG signals from the subject's extensor carpi radialis muscle and was measured using an electronic goniometer. The tracking error between the opening position of the prosthetic hand and the target track was computed to evaluate the performance of the control system. The findings indicated that the mean RMS tracking errors of SMG control ranged from 9.6% to 19.4% at various movement speeds.
In a study published in 2013, Guo et al. [
47] further employed three different machine learning approaches to estimate the angle of the wrist using a one-dimensional A-mode ultrasonic transducer, and the results were promising. During the experiment, nine healthy volunteers were instructed to execute wrist extension exercises at speeds of 15, 22.5, and 30 cycles per minute, while an A-mode ultrasound transducer recorded data from the participants' forearm muscles (
Figure 1B-C).
Figure 1.
A: The original image of the experimental setting, conducted by Guo and her colleagues in 2010: an A-mode SMG setting for collecting SMG and EMG signals from a residual forearm for controlling a prosthesis and comparing their performance, with the screen showing the A-mode ultrasound signal (lower half) and the guiding signal for muscle contraction (upper half). B: The placement of the electrogoniometer and sensors on healthy volunteers. C: Placement of a small A-mode transducer (7 mm diameter) between sEMG electrodes to collect EMG and SMG signals from the extensor carpi radialis muscle simultaneously [
47].
Because of the ability of US transducers to detect morphological changes in deep muscles and tendons, Yang et al. [
48] presented a US-driven HMI as a viable alternative to sEMG for dexterous motion identification. Four A-mode piezoelectric ceramic transducers were built for their study, and a custom armband was constructed to hold the four transducers while capturing the activity of the flexor digitorum superficialis (FDS), flexor digitorum profundus (FDP), flexor pollicis longus (FPL), extensor digitorum communis (EDC), and extensor pollicis longus (EPL), all of which play a critical part in finger movements, including flexion and combined finger motions. Participants were asked to make 11 different hand gestures and hold each gesture for 3 to 5 seconds throughout the offline trial. Because the raw echo signals obtained from the A-mode ultrasound transducer are constantly distorted by scattering noise and attenuation in tissues, signal processing was accomplished using time-gain compensation (TGC), Gaussian filtering, the Hilbert transform, and log compression [
49].
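The processing chain named above can be outlined as in the following Python sketch; the exponential TGC curve, attenuation coefficient, and Gaussian filter width are illustrative assumptions rather than the parameters used in the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import hilbert

def preprocess_a_mode(rf, fs=20e6, c=1540.0, alpha_db_per_m=50.0, sigma=3):
    """Condition one A-mode RF line: time-gain compensation (TGC),
    Gaussian smoothing, Hilbert envelope, then log compression."""
    n = rf.size
    depth = np.arange(n) * c / (2 * fs)              # echo depth per sample (m)
    tgc = 10 ** (alpha_db_per_m * depth / 20.0)      # compensate attenuation
    rf_smooth = gaussian_filter1d(rf * tgc, sigma)   # suppress scattering noise
    envelope = np.abs(hilbert(rf_smooth))            # demodulate to an envelope
    return 20 * np.log10(envelope / envelope.max() + 1e-6)  # dB scale
```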
In 2020 Yang et al. [
50] suggested subclass discriminant analysis (SDA) and principal component analysis (PCA) to simultaneously predict wrist rotation (pronation/supination) and finger motions using a wearable 1D SMG system. They carried out trials both offline and online. In the offline studies, eight small A-mode ultrasound transducers were mounted on eight healthy volunteers to capture forearm muscle activity. Wrist rotations and eight kinds of finger motions (rest, fist, index point, fine pinch, tripod grasp, key grip, peace sign, and hang loose) were investigated. In the online test, a customized graphical user interface (GUI) was employed to conduct a tracking task in order to validate simultaneous wrist and hand control. The results showed that finger gestures and wrist rotation could be classified simultaneously using the SDA machine learning algorithm, with accuracies of 99.89% and 95.2%, respectively.
In 2020, Engdahl et al. [
51] proposed a unique wearable low-power SMG system for controlling a prosthetic hand. The proposed SMG system comprised four single-element transducers driven by a 7.4 V battery and operated at a constant frequency. In their investigation, a portable ultrasound transducer was fixed to the hands of five healthy participants in order to obtain muscle activity data, which were then used to train an AI model to classify different finger movements. The results showed that, using the proposed method, it was possible to classify nine different finger movements with an accuracy of around 95%.
2) B-mode SMG: B-mode, or 2D mode, provides a cross-sectional image of tissues or organs and is one of the most popular US modes, used in a wide range of medical applications. In B-mode US, organs and tissues appear as points of varying brightness in 2D greyscale images constructed from the echoes. B-mode ultrasound can provide a real-time image of muscles under contraction.
Zheng et al. [
34] for the first time studied the potential of a portable B-mode ultrasound scanner for evaluation of the dimensional change of muscles and control of prosthetic hands. In their study six healthy volunteers and three amputee participants were asked to perform wrist flexion and extension in order to capture the activities of forearm muscles (
Figure 2). The morphological deformation of the forearm muscles during these activities was effectively identified and was linearly related to wrist angle, and the mean ratio of wrist angle to percentage of forearm muscle contraction was evaluated in the healthy participants. When the three amputee participants engaged their residual forearm muscles, the SMG signals from their residual forearms were likewise recognized and recorded satisfactorily. They concluded that SMG could be used for prosthesis control and for the assessment and monitoring of musculoskeletal disorders.
A study by Shi et al. [
52] analysed the possibility of real-time control of a one-degree-of-freedom prosthetic hand using muscle thickness fluctuations recorded by a US probe. They investigated the feasibility of controlling a prosthetic hand using the thickness deformation of the extensor carpi radialis and found that a 1-DOF prosthetic hand can be controlled by a single forearm muscle using the SMG technique.
Shi et al. [
53] employed B-mode ultrasound imaging to capture muscle activity during finger flexion and extension. Machine learning was then utilized to determine which fingers had been flexed in various directions. All data were processed offline. A total of 750 sets of US images were obtained, with the images in each group captured from the forearm muscles during finger flexion and extension.
Ortenzi et al. [
17] reported the use of ultrasound as a hand prosthesis HMI. Using a portable ultrasonic scanner equipped with a linear transducer, US images were captured and processed in B-mode (2D imaging) to show the transverse section of the forearm beneath the transducer as a greyscale image. During testing, the US transducer was held in position on the wrist by an elastic band attached to a special plastic cradle, in order to limit motion artefacts. The goal of this research was to evaluate the classification of ten different hand postures and grasp forces.
Employing a computationally efficient approach to distinguish between complicated hand movements, Akhlaghi and colleagues [
54] presented a real-time control system relevant to stroke rehabilitation, basic research into motor control biomechanics, and artificial robotic limb control, aiming to analyse the feasibility of using 2D-mode US as a robust muscle-computer interface and to evaluate possible therapeutic applications. They used a B-mode ultrasound transducer to evaluate the classification of complex hand gestures and dexterous finger movements. In their study, dynamic ultrasound images of six healthy volunteers' forearm muscles were acquired and evaluated to map muscle activity based on muscle deformation during diverse hand movements.
In 2017, McIntosh et al. [
55] examined how suitable different forearm mounting positions (transverse, longitudinal, diagonal, wrist, and posterior) were for a wearable ultrasound device, since the location of a device has a major impact on both its comfort and its performance. In their study, a 3D-printed fixture and strap were designed to hold the B-mode US transducer on the participants' arms. Participants also wore gloves with flexible sensors sewn into them, so that the precise flexion angle of each finger could be measured.
In a 2019 study, Akhlaghi et al. [
56] evaluated the impact of employing a sparse set of ultrasound scanlines, in order to find the best location on the forearm for capturing the maximal deformation of the primary forearm muscles during finger motions and to classify different types of hand gestures and finger movements. Five subjects were asked to make four different hand movements in order to capture the activity of the FDS, FDP, and FPL muscles.
In 2021, Fernandes et al. [
57] developed a wearable HMI that made use of 2D ultrasonic sensors and non-focused ultrasound. The ultrasound radiofrequency (RF) signals were captured using a B-mode linear array ultrasound probe while five healthy volunteers performed individual finger flexions. To intentionally diminish the lateral resolution of the ultrasound data, the RF signals were averaged into fewer lateral columns. At full resolution, the first and third quartiles of classification accuracy were between 80% and 92%. Using the suggested feature extraction approach with the discrete wavelet transform, averaging into four RF signals achieved a median classification accuracy of 87%. Based on these results, the authors concluded that low-resolution images can reach nearly the same level of accuracy as high-resolution images.
3) M-mode SMG: An M-mode (motion mode) scan uses a series of A-mode scan signals, normally obtained by selecting one line in B-mode imaging, to depict tissue motion over time. Using M-mode, it is possible to estimate the velocity of individual organ structures. Compared with the A and B modes, motion mode scans at a higher temporal rate along the selected line and thus provides more detailed information about tissue motion.
Li et al. [
35] conducted a study to determine the possibility of using M-mode ultrasound to detect wrist and finger movements, comparing the performance of M-mode and B-mode ultrasonography in the classification of 13 wrist and finger movements performed by eight healthy participants. Stable ultrasound data were collected by fixing the ultrasound probe to the arm with a custom-made transducer holder. In order to cover the forearm muscles responsible for finger flexion and extension, the transducer was positioned about halfway along the forearm. To ensure a fair comparison, the M-mode and B-mode ultrasound signals were both collected from the forearm during the same procedure. Their investigation showed that M-mode SMG signals were as accurate as B-mode SMG signals in detecting wrist and finger movements and in distinguishing between diverse hand gestures, and that they may be employed in HMIs.
3.3. Feature extraction algorithms
To classify finger movements and different hand gestures, it is important to use algorithms that extract features from the signals or images captured by US transducers, because machine learning algorithms cannot efficiently process all of the raw information. It is worth mentioning that a machine learning algorithm can classify different hand gestures without feature extraction, but the accuracy is significantly lower.
Shi et al. [
53] captured forearm muscle activity and controlled a hand prosthesis with B-mode ultrasound, with AI used to classify the finger movements. Before the collected data were used to train their model, each group's ultrasound image pair was registered using the demons registration algorithm, and a deformation field was constructed from which features were extracted. Ortenzi et al. [
17] used the SMG technique as a valid HMI method to control a robotic hand. In order to classify ten different hand gestures and grasp forces, visual characteristics such as region-of-interest gradients and Histogram of Oriented Gradients (HOG) features were extracted from the collected images, and these features were used to train three machine learning algorithms.
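As a rough sketch of this pipeline, HOG features can be extracted from greyscale frames and fed to one of the classifiers (here LDA) using scikit-image and scikit-learn; the HOG parameters and the random stand-in frames are illustrative assumptions, not the study's settings.

```python
import numpy as np
from skimage.feature import hog
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def hog_features(frame):
    """HOG descriptor of one greyscale B-mode frame (parameters assumed)."""
    return hog(frame, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), feature_vector=True)

frames = np.random.rand(40, 128, 128)   # stand-in for recorded US frames
labels = np.repeat(np.arange(4), 10)    # four hypothetical gesture classes
X = np.array([hog_features(f) for f in frames])
clf = LinearDiscriminantAnalysis().fit(X, labels)
print(f"training accuracy: {clf.score(X, labels):.2f}")
```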
The activity pattern was generated using an image processing method developed by Akhlaghi et al. [
54]. MATLAB (MathWorks, Natick, MA, USA) was used to extract the activity patterns for each kind of hand movement from the B-mode ultrasound image frames. Pixel-wise differences between sequential frames of each series were computed and then averaged across a time span to identify the spatial distribution of intensity variations corresponding to muscle activity (the raw activity pattern). In this way, each hand motion was mapped to a single activity pattern. Based on a global thresholding level and a decimation block size, the raw activity pattern was then converted into a binary image. This database was then used to train the nearest neighbour classification algorithm.
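A minimal sketch of this activity-pattern idea is given below; the threshold, block size, and normalized cross-correlation similarity are assumed stand-ins for the paper's exact parameters.

```python
import numpy as np

def activity_pattern(frames, threshold=0.5, block=4):
    """Average absolute inter-frame intensity change, block-decimated
    and binarized (threshold and block size are illustrative)."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    raw = diffs.mean(axis=0)                          # raw activity pattern
    h, w = raw.shape
    raw = raw[:h - h % block, :w - w % block]         # crop to a block grid
    raw = raw.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    return (raw > threshold * raw.max()).astype(float)

def xcorr(a, b):
    """Normalized cross-correlation coefficient between two patterns."""
    a, b = a - a.mean(), b - b.mean()
    return (a * b).sum() / (np.sqrt((a**2).sum() * (b**2).sum()) + 1e-12)

def nearest_pattern(test, database):
    """Label whose stored activity pattern correlates best with `test`."""
    return max(database, key=lambda label: xcorr(test, database[label]))
```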
McIntosh et al. [
55] collected data from the forearm muscles of subjects in order to evaluate the effect of probe position on the control of a hand prosthesis, using a B-mode US transducer to capture the volunteers' muscle activity. Before the collected data were used to train their model, the optical flow between the first frame of the new session and the base frame of the training set was estimated. The flow was then averaged to produce a single 2D translation, reducing errors caused by displacement of the US probe, which could otherwise introduce differing anatomical features. The current video was then adjusted so that it better matched the characteristics of the training samples.
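A possible Python sketch of this alignment step is shown below, using OpenCV's Farnebäck dense optical flow as an assumed stand-in for the estimator used in the study; frames are assumed to be single-channel 8-bit images.

```python
import cv2
import numpy as np

def align_to_base(new_frame, base_frame):
    """Estimate dense flow from the training base frame to a new session's
    first frame, average it into one 2D translation, and shift the new
    frame back to compensate for probe displacement."""
    flow = cv2.calcOpticalFlowFarneback(base_frame, new_frame, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    dx, dy = flow[..., 0].mean(), flow[..., 1].mean()  # mean 2D translation
    M = np.float32([[1, 0, -dx], [0, 1, -dy]])         # undo the shift
    h, w = new_frame.shape
    return cv2.warpAffine(new_frame, M, (w, h))
```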
In the study conducted by Yang et al. [
48], before using the collected data to train the machine learning model, the feature extraction process was carried out using segmentation and linear fitting to increase the accuracy of classification. Inspired by Castellini and colleagues [
59,
60], first-order spatial features were used to guide the feature extraction procedure: after selecting an evenly spaced grid of interest points in the ultrasound image, plane fitting was used to identify the spatial first-order features. In their technique, however, the plane fitting was replaced with linear fitting [
61]; this change allowed the approach to be applied to one-dimensional ultrasonic data.
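A minimal sketch of such linear fitting on a one-dimensional ultrasound signal follows; the segment count is an assumed illustrative choice, and each segment contributes its fitted slope and intercept as first-order features.

```python
import numpy as np

def linear_fit_features(a_line, n_segments=20):
    """Split a 1D A-mode envelope into segments and keep each segment's
    fitted slope and intercept as spatial first-order features."""
    feats = []
    for seg in np.array_split(np.asarray(a_line, dtype=float), n_segments):
        slope, intercept = np.polyfit(np.arange(seg.size), seg, 1)
        feats.extend([slope, intercept])
    return np.array(feats)
```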
Yang et al. [
50] in 2020 classified and detected simultaneous wrist and finger movements using the SDA and PCA algorithms. To train their model, features were extracted from the data collected from participants, and the TreeBagger function, an implementation of the random forest method, was used to evaluate the importance of the features. Two kinds of statistically significant features were then concatenated for further analysis.
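The study used MATLAB's TreeBagger; an analogous sketch with scikit-learn's RandomForestClassifier, whose impurity-based importances play the same role, might look as follows (the stand-in data and the median cutoff are illustrative assumptions).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(200, 40)            # stand-in SMG feature matrix
y = np.random.randint(0, 8, 200)       # stand-in gesture labels

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
importance = forest.feature_importances_     # per-feature significance
keep = importance > np.median(importance)    # retain informative features
X_reduced = X[:, keep]
print(f"kept {keep.sum()} of {X.shape[1]} features")
```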
Fernandes et al. [
57] used the LDA method to classify finger movements using B-mode SMG. To make the classification more reliable and accurate, the authors used two different methodologies to extract features from the data collected from volunteers. In the first, the averaged RF signals were pre-processed using the discrete wavelet transform (DWT), and the mean absolute value (MAV) of the detail coefficients at each decomposition level was computed. The second technique involved fitting a linear function by linear regression (LR) over segmented portions of the envelope along the depth; the slopes and intercepts of the fitted linear functions were used as spatial features.
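The DWT-based feature could be sketched as below with PyWavelets; the db4 wavelet and four decomposition levels are assumed illustrative choices rather than the paper's settings.

```python
import numpy as np
import pywt

def dwt_mav_features(rf, wavelet="db4", level=4):
    """Mean absolute value (MAV) of the DWT detail coefficients at each
    decomposition level of one RF signal."""
    coeffs = pywt.wavedec(rf, wavelet, level=level)  # [cA_n, cD_n, ..., cD_1]
    return np.array([np.mean(np.abs(d)) for d in coeffs[1:]])
```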
Li et al. [
35] compared the performance of B-mode and M-mode ultrasound transducers for controlling an artificial robotic hand. In their study, data were collected from participants; the features of the signals collected with the M-mode probe were extracted using a linear fitting approach, while the features of the images captured with the B-mode transducer were extracted using a static ultrasound image method. These features were then used to train an SVM algorithm.
3.4. Artificial intelligence in classification
To achieve dexterous and precise control over prostheses, different deep learning and machine learning algorithms have been developed that classify different hand gestures and intended movements from SMG signals with high accuracy.
To control a prosthetic device in real time, Shi et al. [
52] investigated the sum of absolute differences (SAD) algorithm, the two-dimensional logarithmic search (TDL) algorithm, and the cross-correlation (CC) method, as well as SAD and TDL implemented with streaming single-instruction multiple-data extensions (SSE). They utilized a block-matching method to measure muscle deformation during contraction. Comparing TDL with and without SSE, the findings revealed good execution efficiency, with a mean correlation coefficient of about 0.99, a mean standard root-mean-square error of less than 0.75, and a mean relative root-mean-square error of less than 8.0%. Their tests showed that a prosthetic hand could be controlled by the position of a single muscle, which also allows proprioception of muscle tension, and they concluded that SMG is well suited to controlling prosthetic hands, allowing them to open and close proportionally and quickly.
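A minimal sketch of SAD block matching for tracking a muscle boundary between two frames is shown below; the block and search-window sizes are assumed, and a TDL variant would search the same window in a coarse-to-fine logarithmic pattern instead of exhaustively.

```python
import numpy as np

def sad_block_match(ref, cur, top, left, size=16, search=8):
    """Find the displacement of a block from `ref` to `cur` that
    minimizes the sum of absolute differences (SAD)."""
    block = ref[top:top + size, left:left + size].astype(float)
    best, best_dy, best_dx = np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > cur.shape[0] or x + size > cur.shape[1]:
                continue
            sad = np.abs(cur[y:y + size, x:x + size] - block).sum()
            if sad < best:
                best, best_dy, best_dx = sad, dy, dx
    return best_dy, best_dx  # vertical shift tracks muscle thickness change
```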
In order to capture muscle activity during finger flexion and extension and to evaluate the potential of using an ultrasound device in an HMI, Shi et al. [
53] employed B-mode ultrasound imaging. The deformation field was used to extract features, which were then input into an SVM classifier for the identification of finger movements. The experimental results revealed an overall mean recognition accuracy of around 94%, indicating that this method has high accuracy and reliability. They asserted that the suggested approach might be utilized in place of surface electromyography for determining which fingers move in distinct ways.
Guo and her colleagues [
47] conducted a study in which nine healthy volunteers performed different wrist extensions while an A-mode portable probe captured the activity of the extensor carpi radialis muscle. An SVM, a radial basis function artificial neural network (RBF ANN), and a back-propagation artificial neural network (BP ANN) were trained on data collected from extension exercises at 22.5 cycles per minute, and the rest of the data were used for cross-validation. To evaluate the accuracy of the predictions made by the AI models, correlation coefficients and relative root mean square errors (RMSE) were calculated. The findings revealed that the SVM method was the most accurate in predicting the wrist angle, with a relative RMSE of 13% and a correlation coefficient of 0.975.
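As an illustration of this kind of regression, an SVM can be trained to predict wrist angle from muscle features and scored with the same two metrics; the synthetic data, kernel, and train/test split below are assumptions for demonstration only.

```python
import numpy as np
from sklearn.svm import SVR

X = np.random.rand(300, 5)                       # stand-in thickness features
angle = 40 * X[:, 0] + 5 * np.random.randn(300)  # stand-in wrist angles

model = SVR(kernel="rbf", C=10.0).fit(X[:200], angle[:200])
pred, true = model.predict(X[200:]), angle[200:]

r = np.corrcoef(true, pred)[0, 1]                # correlation coefficient
rel_rmse = np.sqrt(np.mean((true - pred) ** 2)) / (true.max() - true.min())
print(f"r = {r:.3f}, relative RMSE = {100 * rel_rmse:.1f}%")
```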
In 2015, Ortenzi et al. [
17] proposed an advanced HMI method using US devices. In their study, data were collected from three healthy participants using B-mode ultrasound in order to train machine learning algorithms to classify different hand gestures. The first dataset included US images of six hand postures and four functional grasps, each with just one degree of grip force. The second dataset was used to evaluate the capacity to recognize various degrees of force for each kind of grasp. An LDA classifier, a Naive Bayes classifier, and a Decision Tree classifier were used to classify the images. The LDA classifier trained with HOG features outperformed the others, achieving 80% success in categorizing the 10 postures/grasps and 60% success in classifying functional grasps with varied degrees of grip force in an experiment involving three intact human volunteers.
In order to classify complex hand gestures and dexterous finger movements, Akhlaghi et al. [
54] collected forearm muscle activity during different hand gestures in conjunction with wrist pronation. Using the activity patterns collected during the training phase, a database of potential hand movements was created, and the nearest neighbour classifier was used to categorize the various activity patterns against this database. The feature vectors for nearest neighbour classification were created from the two-dimensional activity pattern images, and the distance metric of the classification algorithm was the cross-correlation coefficient between two patterns. For each participant, a database of activity patterns corresponding to various hand gestures was created during the training portion of the study. During the testing phase, unseen activity patterns were categorized using the database, with an average classification accuracy of 91%. A virtual hand could also be controlled in real time using an image-based control system, with an average accuracy of 92%.
McIntosh et al. [
55] collected data from participants' forearm muscles in order to classify 10 different hand gestures using US. Two machine learning algorithms were used to identify finger positions and estimate finger angles. Because the raw images contain more information than the classifiers can process directly, optical flow features were used for classifying discrete gestures, and a first-order surface fit was used for estimating finger angles. SVM and MLP algorithms were used to classify the different gestures and finger flexion at different joints. The results showed that finger flexion and extension across 10 different hand gestures were classified with an accuracy above 98% after image processing and neural network training. They also found that the MLP algorithm had a slight advantage over the SVM method at every location. After analysing the data collected from finger flexion and extension at different joints, they reported that the flexion and extension of each finger at different joints could be classified with an accuracy of 97.4%.
In an experiment reported by Yang et al. [
48], the muscle activity of participants performing 11 different hand gestures was collected in order to classify and identify finger movements using a wearable 1D SMG system. The data were then used to train LDA and SVM algorithms to classify the hand movements. A five-fold cross-validation method was used: all the data were gathered into one database, which was randomly and evenly divided into five sections, one of which was designated as the testing set while the other four served as training sets. The trial findings indicated an offline recognition accuracy of up to 98.83% ± 0.79%, a real-time motion completion rate of 95.4% ± 8.7%, and an online motion selection time of 0.243 s ± 0.127 s.
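The protocol described here corresponds to standard five-fold cross-validation, which can be sketched with scikit-learn as follows; the stand-in features, labels, and linear-kernel SVM are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X = np.random.rand(220, 30)          # stand-in SMG feature vectors
y = np.tile(np.arange(11), 20)       # labels for 11 hypothetical gestures

scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)  # five folds
print(f"offline accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```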
In order to classify the finger movements, Akhlaghi et al. [
56] used a B-mode ultrasound probe to capture the activity of the main forearm muscles. In addition, three different scanline-reduction strategies were used to limit the number of US scanlines. After being collected and reduced, the data were used to train a nearest neighbour algorithm to classify different finger movements and hand gestures. Using the complete 128-scanline image, the classification accuracy was 94.6%, while using four equally spaced scanlines it averaged 94.5%. Moreover, there was no significant difference in classification ability when the best scanlines were selected using the Fisher criterion (FC) or mutual information (MI). They therefore suggested that, instead of the whole imaging array, a select subset of ultrasonic scanlines may be employed without reducing classification accuracy for multiple degrees of freedom. Wearable sonomyography muscle-computer interfaces (MCIs) may also benefit from selecting a restricted number of transducer elements to decrease computation, instrumentation, and battery use.
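Both kinds of scanline reduction can be sketched compactly: equally spaced selection is simple indexing, and one assumed mutual-information criterion scores each scanline's mean intensity profile against the gesture labels. The array shapes and the per-line summary statistic are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def equally_spaced_scanlines(frames, n_keep=4, n_lines=128):
    """Keep n_keep equally spaced scanlines; frames: (samples, lines, depth)."""
    idx = np.linspace(0, n_lines - 1, n_keep, dtype=int)
    return frames[:, idx, :]

def rank_scanlines_by_mi(frames, labels):
    """Score each scanline by the mutual information between its mean
    intensity and the gesture label (higher = more informative)."""
    per_line = frames.mean(axis=2)             # (samples, lines)
    return mutual_info_classif(per_line, labels)
```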
To detect finger movements and wrist rotation simultaneously, Yang et al. [
50] collected muscle activity data during different finger movements combined with wrist rotation. Before the collected data were used to train the machine learning algorithms, different techniques were used to extract features. Simultaneous wrist rotation and finger motions were then predicted using an SDA technique and a PCA approach. The results indicated that SDA is capable of accurately classifying both finger movements and wrist rotations in the presence of dynamic wrist rotations: using three subclasses to categorize wrist rotations, around 99% of finger movements and 93% of wrist rotations were classified correctly. They also discovered that the wrist rotation angle is linearly related to the first principal component (PC1) of the chosen ultrasonography features, independent of the finger motions being performed. With just two minutes of user training, a wrist tracking precision (R2) of 0.954 and a finger gesture classification accuracy of 96.5% were achieved using PC1.
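The reported linear relation between PC1 and wrist angle suggests a simple calibration scheme, sketched below on synthetic stand-in data: fit PCA to the SMG features, then map PC1 to angle with a first-order polynomial. All data and dimensions here are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

angle = np.linspace(-60, 60, 200)                       # pronation..supination
feats = np.outer(angle, np.random.rand(12)) + 0.5 * np.random.randn(200, 12)

pc1 = PCA(n_components=1).fit_transform(feats).ravel()  # first principal comp.
a, b = np.polyfit(pc1, angle, 1)                        # linear PC1 -> angle
r2 = np.corrcoef(angle, a * pc1 + b)[0, 1] ** 2
print(f"wrist-tracking R^2 from PC1 alone: {r2:.3f}")
```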
Fernandes et al. [
57] developed a wearable SMG technology to classify finger flexion and extension. In their study, 2D-mode US was used to collect the muscle activity of five subjects during finger movements. Before the LDA method was employed to classify the finger motions, a feature selection process was carried out to reduce the number of extracted spatial and temporal features, which aids in differentiating the various forms of finger flexion. An accuracy of 80–92% (first to third quartile, at full resolution) was achieved across 10 separate arm trials. Using the suggested feature extraction approach in conjunction with the discrete wavelet transform, they demonstrated that a median classification accuracy of 87% could be maintained when averaging down to four radio-frequency signals. According to their findings, reduced resolutions can achieve accuracy levels comparable to those of full resolution. Furthermore, they carried out pilot research employing a multichannel single-element ultrasound system with flexible wearable ultrasonic sensors (WUSs) that use non-focused ultrasound. Three WUSs were attached to one subject's forearm, and ultrasonic RF signals were recorded while the subject flexed each finger individually. Using the WUS sensors, finger movements were classified with an accuracy of about 98%, with F1 scores ranging between 95% and 98%.
Li et al. [
35] collected participants' muscle activity using M-mode ultrasound. The acquired data were used to train SVM and BP ANN classifiers, which were then used to categorize wrist and hand movements. Across the eight subjects' 13 movements, the SVM classifier had an average classification accuracy (CA) of 98.83% for M-mode and 98.77% for B-mode. For the BP classifier, the average CA of M-mode and B-mode was around 98.7% ± 0.99% and 98.76% ± 0.91%, respectively. CAs did not differ significantly between M-mode and B-mode (p > 0.05). In addition, M-mode appears to have a potential advantage in feature analysis. Their findings indicate that M-mode ultrasonography can be used to detect wrist and finger motions and, more generally, that M-mode ultrasound is suitable for use in HMIs.
Table 1 presents a summary of the different machine learning algorithms, feature extraction methods, and modes of ultrasound devices used to classify different types of finger movements and hand gestures since 2006.