Dynamic Patient-Robot Assignment in a Simulated Stochastic Robotic Rehabilitation Gym
Pub Date: 2025-03-23 | DOI: 10.1109/TMRB.2025.3573025
Benjamin A. Miller;Varun R. Bharadwaj;Chao Jiang;Vesna D. Novak
A robotic rehabilitation gym is a venue where multiple patients perform motor rehabilitation as a group using multiple robots. Such gyms produce positive outcomes, but it can be hard to create patient-robot assignment schedules that maximize patient skill improvement during exercise sessions. We thus present a neural-network-informed scheduler methodology that monitors patient exercise performance and dynamically assigns patients to robots based on their performance. These schedulers were trained and tested using synthetic datasets from a simulated rehabilitation gym. They were evaluated in 36 scenarios that differed in several ways (e.g., number of robots/patients, degree of stochasticity). Results showed that our neural-network-informed schedulers led to greater mean skill improvement compared to several baseline schedulers (e.g., only switching robots once). Additionally, the outcome difference between neural-network-informed schedulers and baseline schedulers increased as skill improvement became more stochastic. We also performed secondary studies where we showed that our schedulers still outperform baselines when patients can arrive or leave midsession and analyzed how the training dataset size affects scheduler performance. While several limitations need to be addressed before the scheduler is ready for use in real-world gyms, our study represents a step toward the use of artificial intelligence for patient monitoring in group settings.
"Dynamic Patient-Robot Assignment in a Simulated Stochastic Robotic Rehabilitation Gym," IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 3, pp. 1139-1151, 2025.
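The assignment step the abstract describes can be illustrated with a toy sketch. This is not the authors' neural-network scheduler: `predicted_gain` below is a hypothetical stand-in for the trained network's per-patient skill-gain prediction, and the gym sizes, skill values, and difficulty values are made up. Assuming at least as many robots as patients, the sketch exhaustively searches small patient-robot assignments for the highest total predicted gain.

```python
import itertools

def predicted_gain(patient_skill: float, robot_difficulty: float) -> float:
    # Hypothetical stand-in for the trained neural network: gain is largest
    # when robot difficulty slightly exceeds the patient's current skill.
    return max(0.0, 1.0 - abs(robot_difficulty - patient_skill - 0.1))

def assign(patients: dict, robots: dict) -> dict:
    """Pick the patient->robot assignment maximizing total predicted gain by
    exhaustive search (feasible only for small gyms). `patients` maps name to
    current skill; `robots` maps name to difficulty."""
    names, skills = zip(*patients.items())
    best, best_total = None, -1.0
    # Each permutation is one way to hand out robots to the patients in order.
    for perm in itertools.permutations(robots.items(), len(names)):
        total = sum(predicted_gain(s, diff) for s, (_, diff) in zip(skills, perm))
        if total > best_total:
            best_total = total
            best = {n: r for n, (r, _) in zip(names, perm)}
    return best
```

Re-running this search after every exercise block, with updated skill estimates, is the "dynamic" part of dynamic assignment; a learned scheduler replaces both the gain model and the brute-force search.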
Enhancing Robot Transparency in Human–Robot Prosthesis Interaction to Mitigate Terrain Misrecognition Error
Pub Date: 2025-03-19 | DOI: 10.1109/TMRB.2025.3552924
I-Chieh Lee;Ming Liu;He Huang
Clear and effective communication between humans and robots is crucial when they work closely together. As wearable robots become more intelligent and automated, anticipatory control is limited for amputees because they lack prior knowledge of the timing and nature of changes in the robot’s motion, making human-machine collaboration more challenging. This study addresses the need for improved wearable robot transparency by enhancing a prosthetic controller to provide users with advance notification of locomotion mode changes. Five transfemoral amputees (TFAs) wore our designed knee prosthesis and walked on a treadmill. We simulated a terrain misrecognition error by switching the locomotion mode from treadmill walking to stair ascent. Our study focused on three main questions: 1) What notification timing do TFAs need to mitigate machine errors? 2) How do TFAs compensate for prosthetic knee errors? And 3) How does the robotic prosthetic leg respond to the TFAs’ corrective actions? We found that the enhanced transparency system helps TFAs anticipate changes and adjust their gait to compensate for the terrain misrecognition error. Specifically, providing notification about 650 milliseconds before a locomotion mode change significantly reduced the effect of robot errors. Although the error compensation from TFAs resulted in a larger magnitude of error induced by the prosthetic knee, the TFAs were able to tolerate it and improve balance stability. According to questionnaires on user preferences, with notification of prosthetic knee motion, the TFAs could trust the device more even though it might make occasional errors. This study demonstrates that simple notifications of the robot’s movement intent enhance the predictability of prosthetic motion, facilitating anticipatory adjustments that improve safety and user trust.
"Enhancing Robot Transparency in Human–Robot Prosthesis Interaction to Mitigate Terrain Misrecognition Error," IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 2, pp. 734-742, 2025.
Effects of Ankle Exoskeleton Motor Location on Gait Biomechanics and User Perceptions: The Bowden Cable Dilemma
Pub Date: 2025-03-19 | DOI: 10.1109/TMRB.2025.3550661
Shanpu Fang;Riley J. Shepard;Collin D. Bowersock;Zachary F. Lerner
Motor-powered ankle exoskeletons have been shown to improve walking and rehabilitation outcomes in individuals with and without gait impairments (e.g., cerebral palsy (CP)). To date, ankle exoskeleton designs have either placed the motors on the shanks (direct or quasi-direct drive) or around the waist with Bowden cable transmissions. The former offers better transmission efficiency, while the latter reduces the biomechanical penalty of added mass. The biomechanical effects of motor placement may be magnified for individuals with CP due to weakened lower limb strength, yet no study has compared how motor placement alters the biomechanical responses and user perceptions of individuals with or without gait impairment. In this study involving 7 individuals with CP and 9 unimpaired individuals, we compared metabolic cost of transport, lower limb muscle activity, and user perceptions when using ankle exoskeletons with either waist-mounted motors (and Bowden cables) or shank-mounted motors that were otherwise identical. Despite changes in lower leg muscle recruitment, results showed no statistical differences in the metabolic cost of transport. Shank-mounted motors were preferred by more participants in both cohorts (e.g., 6/7 in CP). These results help inform the ergonomics and mechanical designs of ankle exoskeletons and how they may be perceived.
"Effects of Ankle Exoskeleton Motor Location on Gait Biomechanics and User Perceptions: The Bowden Cable Dilemma," IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 2, pp. 699-710, 2025.
Design and Analysis of a Compact and Foldable Master Device Based on Binocular Near-Infrared Optical Navigation Technology for Minimally Invasive Surgery Robots
Lizhi Pan;Xu Jiang;Zhikang Ma;Bo Guan;Bo Yi;Jianchang Zhao
Pub Date: 2025-03-19 | DOI: 10.1109/TMRB.2025.3550659
Traditional minimally invasive surgical (MIS) robots generally have bulky leader manipulators with relatively fixed working positions, which limits their use in special scenarios such as remote surgery. This study proposes a compact and foldable leader device for MIS robots based on passive binocular near-infrared (NIR) optical navigation technology. The device requires no mechanical arm linkage constraints and provides a large range of position and orientation tracking, enabling surgeons to perform continuous leader-follower manipulations more steadily. Moreover, the polyhedral, foldable structure of the optical leader device further reduces the spatial footprint of the MIS robot. A prototype weighing 186 g was constructed and evaluated through testing: the maximum average absolute errors in position and orientation tracking were 0.90 mm and 0.45°, respectively, and the prototype exhibited acceptable stability across a wide tracking range. The leader device features a compact, foldable structure with enhanced portability and excellent position and orientation tracking capabilities, facilitating precise surgical maneuvers in remote surgery scenarios.
"Design and Analysis of a Compact and Foldable Master Device Based on Binocular Near-Infrared Optical Navigation Technology for Minimally Invasive Surgery Robots," IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 2, pp. 514-527, 2025.
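Point-based rigid registration is the core computation behind passive optical tracking of this kind: the binocular cameras triangulate marker positions, and the device pose is the rigid transform aligning the known marker model to those measurements. A minimal sketch of the standard Kabsch/SVD estimate follows; it is an assumption about the general pipeline, not the authors' implementation.

```python
import numpy as np

def rigid_pose(model_pts: np.ndarray, measured_pts: np.ndarray):
    """Estimate rotation R and translation t mapping model marker positions
    to measured (triangulated) positions via the Kabsch/SVD method.
    Both inputs are (N, 3) arrays of corresponding points, N >= 3."""
    cm, cd = model_pts.mean(axis=0), measured_pts.mean(axis=0)
    H = (model_pts - cm).T @ (measured_pts - cd)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cm
    return R, t
```

With noiseless, non-degenerate marker data this recovers the pose exactly; in practice it is run per frame on the triangulated NIR marker centroids, and tracking error is the residual of this fit.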
Robotic Path Re-Planning for US Reconstruction of the Spine
Kaat Van Assche;Ruixuan Li;Ayoob Davoodi;Matthias Tummers;Mouloud Ourak;Gianni Borghesan;Nicola Cavalcanti;Philipp Fürnstahl;Emmanuel Vander Poorten
Pub Date: 2025-03-19 | DOI: 10.1109/TMRB.2025.3550662
Robotic ultrasound (US) scanning of the spine is becoming an increasingly viable radiation-free alternative to CT scans and fluoroscopy. However, due to the complex shape of the vertebrae, three-dimensional (3D) US reconstructions generated from two-dimensional (2D) US scans often lack important anatomic information, such as the spinous process. This paper investigates scanning strategies that reorient the probe during US scanning to improve the surface coverage of 3D US reconstructions. A two-scan procedure with a path re-planning algorithm is presented: the proposed algorithm uses information from a first exploratory scan to generate an improved imaging trajectory in which the US probe is near-perpendicular to the targeted bone surface. The results show a 30.4%, 42.3%, and 75.0% improvement in surface coverage on a synthetic phantom, a cadaver, and human volunteers, respectively, achieving up to 56% surface coverage on human volunteers. These results emphasise the value of exploiting information about the underlying anatomy to optimise the scanning trajectory. The increased surface coverage of the 3D US reconstructions will provide higher quality radiation-free visualisation, extending the role of US as a complementary imaging modality for safe and effective diagnosis and spine interventions.
"Robotic Path Re-Planning for US Reconstruction of the Spine," IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 2, pp. 755-767, 2025.
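The re-planning idea (reorient the probe toward the estimated bone-surface normal, within mechanical limits) can be sketched as follows. The vertical default probe axis and the 30° tilt limit are illustrative assumptions, not values from the paper, and the surface normals would come from the exploratory scan's reconstruction.

```python
import numpy as np

def replan_orientations(normals: np.ndarray, max_tilt_deg: float = 30.0) -> np.ndarray:
    """Given estimated bone-surface normals along the exploratory scan path
    ((N, 3) array), return unit probe-axis directions that point into the
    surface, clamped to a mechanical tilt limit about the vertical pose."""
    vertical = np.array([0.0, 0.0, -1.0])       # default probe axis, pointing down
    max_tilt = np.radians(max_tilt_deg)
    out = []
    for n in normals:
        target = -n / np.linalg.norm(n)         # ideal: anti-parallel to the normal
        ang = np.arccos(np.clip(target @ vertical, -1.0, 1.0))
        if ang > max_tilt:                      # clamp to the tilt limit
            axis = np.cross(vertical, target)
            axis /= np.linalg.norm(axis)
            # Rodrigues' formula: rotate `vertical` about `axis` by max_tilt
            target = (vertical * np.cos(max_tilt)
                      + np.cross(axis, vertical) * np.sin(max_tilt)
                      + axis * (axis @ vertical) * (1.0 - np.cos(max_tilt)))
        out.append(target)
    return np.array(out)
```

Feeding these directions back into the robot's trajectory generator for the second scan is what makes the probe near-perpendicular to the bone where the kinematics allow it.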
Design and Prototyping of a Cable-Driven Parallel Robot for At-Home Upper Extremity Rehabilitation
Shane Forbrigger;Shammas Mohyaddin;Ashkan Rashvand;Andrew Jerabek;Matt Robertson;Vincent DePaul;Keyvan Hashtrudi-Zaad
Pub Date: 2025-03-19 | DOI: 10.1109/TMRB.2025.3552975
At-home stroke rehabilitation robots could improve access to rehabilitation therapies for stroke survivors. However, because the home is a challenging design environment, it is essential that such designs are closely linked to stakeholder needs. This paper continues previous work by the authors linking stakeholder needs to the design of an at-home stroke rehabilitation robot for the upper limb. The proposed design is a constrained cable robot with a vertical workspace, capable of supporting and measuring the motion of a stroke survivor’s arm and hand during therapy activities, with a modular end effector design to simulate a variety of activities of daily living. The technical requirements of the design are described and linked to research on therapy activities, activities of daily living, and anthropometry, and the kinematic and dynamic requirements are validated in experiments. Potential improvements include adding powered hand modules to assist users with hand impairments, adding a third rotational degree of freedom, and investigating parallel-spring motor designs that could reduce power consumption.
"Design and Prototyping of a Cable-Driven Parallel Robot for At-Home Upper Extremity Rehabilitation," IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 2, pp. 743-754, 2025.
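For a cable-driven parallel robot, the basic inverse kinematics is simple enough to sketch: each cable's length is the distance from its fixed frame anchor to its attachment point on the end effector. A minimal planar sketch for a vertical workspace follows; the square anchor geometry is made up for illustration and is not the paper's design.

```python
import math

def cable_lengths(anchors, offsets, ee_pos):
    """Planar inverse kinematics of a cable-driven parallel robot: each cable
    runs from a fixed frame anchor to an attachment point on the end effector
    (end-effector centre + offset), so its length is the straight-line
    distance between the two. Platform rotation and cable sag are ignored."""
    x, y = ee_pos
    return [math.hypot(ax - (x + ox), ay - (y + oy))
            for (ax, ay), (ox, oy) in zip(anchors, offsets)]
```

Commanding winch positions from these lengths, and reading them back from encoder counts, is how such a robot both drives and measures the motion of the supported arm.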
IEEE Transactions on Medical Robotics and Bionics Society Information
Pub Date: 2025-03-14 | DOI: 10.1109/TMRB.2025.3563286
IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 2, pp. C3-C3, 2025.
IEEE Transactions on Medical Robotics and Bionics Information for Authors
Pub Date: 2025-03-14 | DOI: 10.1109/TMRB.2025.3563288
IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 2, pp. C4-C4, 2025.
IEEE Transactions on Medical Robotics and Bionics Publication Information
Pub Date: 2025-03-14 | DOI: 10.1109/TMRB.2025.3563284
IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 2, pp. C2-C2, 2025.
A Global–Local Fusion Model Exploring Temporal–Spatial Dependence for Multimodal Hand Gesture Recognition
Pub Date: 2025-03-12 | DOI: 10.1109/TMRB.2025.3550646
Shengcai Duan;Le Wu;Aiping Liu;Xun Chen
Hand Gesture Recognition (HGR) employing surface electromyography (sEMG) and accelerometer (ACC) signals has garnered increasing interest in the areas of bionic prostheses and human-machine interaction. However, existing multimodal approaches predominantly extract global specificity at a single temporal scale, neglecting local dynamic characteristics. This limitation hinders the effective capture of global-local temporal information, resulting in restricted performance and frequent misclassification of dynamic gestures. To this end, we propose a novel global-local fusion model, termed Temporal-spatial Dependence Fusion (TsdFusion), for sEMG-ACC-based HGR. TsdFusion harnesses temporal-spatial dependencies (Tsd) from multi-time-scale handcrafted features and employs a Convolution-Transformer framework for global-local fusion, thus enriching local dynamic information while preserving global insights. Specifically, the Tsd inputs are independently constructed from sEMG and ACC through multi-time-scale window segmentation and feature engineering. Furthermore, the global and local temporal-spatial correlations within unimodal Tsd inputs are characterized by a unimodal transformer module and a dimension-wise convolution module, respectively. Subsequently, a convolution-coupled-transformer progressive hierarchical fusion module effectively integrates intramodal specificity and intermodal hierarchical relationships for the final prediction. Evaluations on four public datasets, including transradial amputees and healthy subjects, demonstrate that TsdFusion outperforms state-of-the-art multimodal HGR methods. TsdFusion effectively recognizes dynamic gestures, facilitating promising HGR-based interaction for prostheses and assistive robotics.
"A Global–Local Fusion Model Exploring Temporal–Spatial Dependence for Multimodal Hand Gesture Recognition," IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 2, pp. 723-733, 2025.
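The multi-time-scale window segmentation and feature engineering step can be illustrated with a minimal single-channel sketch. Only one handcrafted feature (RMS) is shown, and the window lengths and hop size are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def multiscale_rms(signal: np.ndarray, scales=(32, 64, 128), hop: int = 32):
    """Segment a 1-D sEMG (or ACC) channel with sliding windows at several
    time scales and compute the RMS of each window, yielding one feature
    sequence per scale. Returns {window_length: 1-D array of RMS values}."""
    feats = {}
    for w in scales:
        starts = range(0, len(signal) - w + 1, hop)
        feats[w] = np.array([np.sqrt(np.mean(signal[s:s + w] ** 2)) for s in starts])
    return feats
```

Stacking such per-scale feature sequences across channels and feature types gives the kind of multi-scale input that a convolution branch (local) and a transformer branch (global) can then process jointly.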