Pub Date: 2025-04-18 | DOI: 10.1109/TMRB.2025.3562266
Naveed Ahmad Khan;Fahad Hussain;Tanishka Goyal;Prashant K. Jamwal;Shahid Hussain
Robotic-assisted rehabilitation for wrist movements demands adaptive systems capable of balancing patient autonomy with robotic support. The integration of artificial intelligence (AI) into robotic-assisted rehabilitation offers transformative potential for delivering personalized, dynamic, and effective therapeutic interventions. This study introduces a novel neuromechanical control framework integrating a passivity observer with Quantum-Enhanced Deep Reinforcement Learning (QDRL) for adaptive impedance scaling in wrist rehabilitation robotics. The passivity observer continuously monitors energy exchanges to classify patient states into passive (patient requiring robotic assistance) and non-passive (patient actively participating) categories, dynamically guiding the robot's impedance adjustments. Experiments were conducted with ten unimpaired human subjects (eight male and two female), who were instructed to simulate rehabilitation scenarios, focusing on three key wrist movements: flexion/extension (FL/EX), abduction/adduction (AB/AD), and pronation/supination (PR/SU). Experimental results showed high correlations (> 0.83) between energy-based and electromyography (EMG)-based passivity classifications, confirming the reliability of the proposed approach. Furthermore, the designed QDRL model significantly outperformed traditional reinforcement learning methods, achieving superior adaptability, stability, and higher average rewards during robotic impedance control. The framework advances the optimization of robotic assistance during motor recovery, promoting personalized rehabilitation by tailoring interventions to the specific needs of each patient.
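The passivity observer described above can be sketched as a running energy integral over the human-robot interaction port. The sign convention, zero threshold, and all names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def passivity_observer(force, velocity, dt, threshold=0.0):
    """Classify each sample as passive (patient needs assistance) or
    non-passive (patient actively participating) from the cumulative
    human-robot energy flow.

    Assumed sign convention: positive force*velocity means the robot
    injects energy into the human limb.
    """
    power = np.asarray(force) * np.asarray(velocity)   # instantaneous power [W]
    energy = np.cumsum(power) * dt                     # running energy integral [J]
    # Patient is labelled passive while net energy flows robot -> human.
    labels = np.where(energy > threshold, "passive", "non-passive")
    return energy, labels
```

In a real controller, the label stream would then gate the impedance scaling: higher stiffness/damping while passive, lower while non-passive.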
Title: Quantum Driven Dynamic Passivity-Based Neuromechanical Control for Wrist Rehabilitation Robot
IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 3, pp. 1237-1247.
Pub Date: 2025-04-17 | DOI: 10.1109/TMRB.2025.3561865
Ziqian Li;Zhengyu Wang;Xinzhou Xu;Yongfa Chen;Björn W. Schuller
Accurate semantic segmentation of surgical instruments is crucial in robot-assisted minimally invasive surgery, mainly regarded as a core module in surgical-instrument tracking and operation guidance. Nevertheless, it is usually difficult for existing semantic surgical-instrument segmentation approaches to adapt to unknown surgical scenes, particularly due to their insufficient consideration of reducing the domain gaps across different scenes. To address this issue, we propose an unsupervised domain-adaptive semantic segmentation approach for surgical instruments, leveraging Dropout-enhanced Dual Heads and a Coarse-Grained classification branch (D2HCG). The proposed approach comprises dropout-enhanced dual heads for diverse feature representation and a coarse-grained classification branch for capturing complexities across varying granularities. It incorporates consistency loss functions targeting fine-grained features and coarse-grained granularities, aiming to reduce cross-scene domain gaps. We then perform experiments on cross-scene surgical-instrument semantic segmentation cases, with the experimental results demonstrating the effectiveness of the proposed approach compared with state-of-the-art semantic segmentation methods.
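The consistency objective between the dropout-enhanced dual heads can be illustrated with a minimal numpy sketch. The MSE-on-probabilities form and all names are assumptions; the paper's exact loss is not reproduced here:

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def consistency_loss(logits_a, logits_b):
    """Mean squared difference between the class-probability maps of two
    dropout-perturbed heads; minimizing it pushes the heads to agree on
    unlabelled target-domain pixels."""
    return float(np.mean((softmax(logits_a) - softmax(logits_b)) ** 2))
```

Heads that agree exactly incur zero loss; disagreeing heads are penalized, which is the mechanism such consistency terms use to regularize domain adaptation.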
Title: Unsupervised Domain-Adaptive Semantic Segmentation for Surgical Instruments Leveraging Dropout-Enhanced Dual Heads and Coarse-Grained Classification Branch
IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 3, pp. 950-961.
The optimized hardware implementation of neurons and biological cells in the neuromorphic domain is of significant importance. In this paper, a novel method is presented that reduces any number of nonlinear terms sharing a common variable in the differential equations describing the behavior of neurons or biological cells to a single nonlinear term with high precision. This approach significantly improves implementation efficiency by reducing hardware resource consumption while maintaining high frequency and accuracy. The proposed method was applied to Cardiac Purkinje Fiber Cells, and its validity was demonstrated through time-domain analysis, noise condition analysis, Lyapunov stability analysis, and bifurcation analysis to validate the model under various conditions. These validations ensure the accuracy and stability of the proposed approach across different operating conditions. To assess large-scale applicability, the model was tested in a 300-cell Purkinje fiber network, demonstrating accurate synchronization, equilibrium states, and cross-spectral consistency while maintaining computational efficiency. The digital hardware implementation on a Virtex-7 FPGA board demonstrated a frequency improvement of 3.49 times compared to the original model and 1.79 times compared to the best implementation of this model to date. We also simulated a network of 4500 cells to analyze correlation and implemented it on hardware to demonstrate that the proposed model, based on the method presented in this paper, can efficiently and accurately scale to large-scale applications. This efficient and scalable approach paves the way for applications in medical research, bioengineering, and neuromorphic hardware development, including the creation of hardware-accelerated tools for simulating biological systems, designing bio-inspired devices, and enabling large-scale real-time simulations for understanding and treating cardiac or neurological conditions.
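One common hardware reading of "several nonlinear terms sharing one variable collapsed into a single nonlinearity" is a precomputed lookup table evaluated once per step. The sketch below (uniform grid, linear interpolation, hypothetical names) illustrates that general idea, not the authors' exact reduction method:

```python
import numpy as np

def build_lut(fns, coeffs, v_min, v_max, n=2001):
    """Fold several nonlinear terms sharing the variable v,
    h(v) = sum_i c_i * f_i(v), into one table so the hardware
    evaluates a single nonlinearity instead of many."""
    grid = np.linspace(v_min, v_max, n)
    table = sum(c * f(grid) for c, f in zip(coeffs, fns))
    return grid, table

def lut_eval(grid, table, v):
    """Single-nonlinearity evaluation by linear interpolation,
    the piece that maps naturally onto FPGA block RAM."""
    return np.interp(v, grid, table)
```

On an FPGA, the table lives in block RAM and the interpolation is two multiplies and an add, which is where the resource saving over evaluating each nonlinear term separately comes from.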
Title: FPGA-Optimized Neuromorphic Modeling of Cardiac Purkinje Fibers for Next-Generation Bionic Implants
Authors: Gilda Ghanbarpour;Muhammad Akmal Chaudhary;Maher Assaad;Milad Ghanbarpour
Pub Date: 2025-04-17 | DOI: 10.1109/TMRB.2025.3561836
IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 3, pp. 926-937.
Pub Date: 2025-04-15 | DOI: 10.1109/TMRB.2025.3560394
Daniel Rodríguez-Jorge;Sainan Zhang;Jin Sen Huang;Ivan Lopez-Sanchez;Nitin Srinivasan;Qiang Zhang;Xianlian Zhou;Hao Su
Exoskeletons can improve human mobility, but discomfort remains a significant barrier to their widespread adoption. This paper presents a comfort-centered mechatronics design of portable hip exoskeletons comprising three factors: (i) actuation, (ii) wearable interface, and (iii) assistive controller. We introduced an analytical multibody model to predict the human-exoskeleton contact forces during gait. Informed by this model, we designed a wearable interface that significantly improved the three considered objective metrics: (i) undesired contact forces at the wearable interface, (ii) wobbling, and (iii) metabolic reduction, as well as the subjective metric of a post-test System Usability Scale questionnaire. Our experiments with two exoskeleton controllers (gait-based and reinforcement learning-based) demonstrated that the design of the wearable physical interface has a greater impact on reducing metabolic rate and minimizing wobbling than the choice of controller. Our actuation design method leads to highly backdrivable, lightweight quasi-direct drive actuators with high torque tracking performance. By leveraging this wearable design, we achieved up to a 60% reduction in undesired contact forces and a 74% reduction in exoskeleton wobbling about the frontal axis compared to a traditional configuration. Additionally, the net metabolic cost reduction was 18% compared to the no-exoskeleton condition.
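The headline comparisons (60% contact-force reduction, 74% less wobbling, 18% metabolic saving) all reduce to the same relative-change arithmetic against a baseline condition, sketched here for clarity; the numeric inputs are illustrative:

```python
def percent_reduction(baseline, value):
    """Relative reduction of a metric (contact force, wobble amplitude,
    metabolic rate) versus a baseline condition, in percent."""
    return 100.0 * (baseline - value) / baseline
```

For instance, a metabolic rate falling from a hypothetical 100 units walking without the exoskeleton to 82 units with it corresponds to the reported 18% net reduction.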
Title: Biomechanics-Informed Mechatronics Design of Comfort-Centered Portable Hip Exoskeleton: Actuator, Wearable Interface, Controller
IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 2, pp. 687-698.
Pub Date: 2025-04-14 | DOI: 10.1109/TMRB.2025.3560390
V. Mainardi;M. Dal Canto;T. Melillo;N. Lorenzini;G. Bagnoni;S. Moccia;G. Ciuti
Skin cancer affects over 2 million people worldwide each year. Although dermoscopy is the gold standard screening technique, it only assesses the superficial features of skin lesions. Novel approaches based on thermal investigation have revealed a correlation between thermal recovery and vascular pattern alterations, which is an important factor in discriminating malignant and benign lesions. In this study, a dynamic thermal-imaging system was designed, developed, and validated in a real clinical scenario. The system is non-invasive, compact, and cost-effective, comprising a cooling probe and an image acquisition system equipped with RGB and thermal cameras. The system incorporates a machine-learning classification algorithm for skin cancer screening. The system showed an accuracy of 89.7% in distinguishing between malignant and benign lesions in a case study involving 58 patients and classified sub-classes of lesions (i.e., melanoma and nevi) with an accuracy of 95.5%. These findings underscore the potential benefit of the proposed dynamic thermal-imaging system as a support tool for non-invasive screening and early detection of malignant skin lesions.
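Thermal recovery after active cooling is typically summarized by a time constant of the rewarming curve, which the classifier can then consume as a feature. Below is a minimal log-linear fit assuming a single-exponential recovery model and a known steady-state skin temperature; both assumptions, and all names, are illustrative rather than the paper's pipeline:

```python
import numpy as np

def recovery_time_constant(t, temp, t_inf):
    """Estimate the exponential time constant tau of a cooled lesion's
    rewarming curve T(t) = t_inf - (t_inf - T0) * exp(-t / tau)
    via least squares on the log-linearized data."""
    y = np.log(t_inf - np.asarray(temp, dtype=float))  # linear in t with slope -1/tau
    slope, _ = np.polyfit(np.asarray(t, dtype=float), y, 1)
    return -1.0 / slope
```

Altered vascular patterns in malignant lesions would show up as a shifted tau relative to surrounding healthy skin, which is the kind of discriminative feature the study's machine-learning classifier exploits.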
Title: A Thermal-Imaging System and Machine-Learning Classification Algorithm for Skin Cancer Screening
IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 3, pp. 938-949.
The advent of robotic surgery has brought about a paradigm shift in the medical field, necessitating the development of corresponding surgical skills training and assessment methods. These methods aim to enable surgeons to acquire the requisite skills for robotic surgery in the most efficient manner. Despite the progression from a master-apprentice system to manual objective assessment and then automated performance assessment methods, certain limitations have been observed. Our research aims to address these limitations by exploring muscle activity and state information during training via surface electromyography (sEMG) signals. This approach is intended to eventually provide interpretable information that can enhance the trainee's understanding of assessment feedback and facilitate skill improvement. Building on our first study, which validated the feasibility of motion primitive recognition based on sEMG signals, this work compares the performance of various machine learning (ML) methods for motion primitive recognition. It also investigates the effect of different sliding window parameters on recognition accuracy. Our findings indicate that the deep neural network (DNN), when paired with optimal sliding window parameters, achieves the best average accuracy of 61.76% in this study. The findings also provide a reference for parameter settings of the variable-length sliding window approach and ML methods in recognizing robotic surgery motion from sEMG data. By demonstrating feasibility and exploring the most effective analysis method, this work lays the groundwork for integrating muscle information into multimodal surgical skill training and assessment.
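Sliding-window feature extraction from an sEMG channel can be sketched as follows. The RMS/MAV feature pair, the window and step sizes, and the function names are generic choices for illustration, not necessarily those tuned in the study:

```python
import numpy as np

def window_features(emg, win_len, step):
    """Segment one sEMG channel into (possibly overlapping) windows and
    extract two classic time-domain features per window:
    root mean square (RMS) and mean absolute value (MAV)."""
    emg = np.asarray(emg, dtype=float)
    feats = []
    for start in range(0, len(emg) - win_len + 1, step):
        w = emg[start:start + win_len]
        feats.append((np.sqrt(np.mean(w ** 2)),   # RMS: signal power proxy
                      np.mean(np.abs(w))))        # MAV: contraction level proxy
    return np.array(feats)
```

Varying `win_len` and `step` trades recognition latency against feature stability, which is exactly the parameter sweep the study performs across its ML models.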
Title: sEMG-Based Motion Recognition for Robotic Surgery Training Using Machine Learning and Variable-Length Sliding Windows—A Preliminary Study
Authors: Chenji Li;Chao Liu;Arnaud Huaulmé;Nabil Zemiti;Pierre Jannin;Philippe Poignet
Pub Date: 2025-04-14 | DOI: 10.1109/TMRB.2025.3560389
IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 2, pp. 572-582.
Pub Date: 2025-04-14 | DOI: 10.1109/TMRB.2025.3560331
Reese R. Peterson;Jennifer K. Leestma;Inseung Kang;Aaron J. Young
Fall incidents due to slips are some of the most common causes of injuries for industry workers and older adults, motivating research to assist balance recovery following slips. To assist balance recovery during a slip, a detection algorithm that can work with an assistive device, such as an exoskeleton, needs to be able to detect slips rapidly after onset, which remains a critical gap in the field. Here, we compared the ability of linear discriminant analysis (LDA), extreme gradient boosting (XGBoost), and convolutional neural networks (CNN) to detect slips using only native sensors on a hip exoskeleton. We trained and evaluated user-independent models on early-stance (ES) and late-stance (LS) slips of various magnitudes collected through treadmill-based slips. All models, except LDA with LS slips, detected slips with ≥90% accuracy. Overall, the best model was XGBoost, with its fastest results achieving average detection times and median accuracies of 155.06 ms at 96.25% for ES slips and 228.88 ms at 93.75% for LS slips, while also achieving 100% sensitivity at 195.64 ms (ES) and 266.24 ms (LS). Our results indicate a promising direction for further research into designing a generalizable model for balance recovery during slip perturbations using robotic hip exoskeletons.
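The two headline numbers, detection latency and sensitivity relative to slip onset, can be computed as below. The 300 ms acceptance window and all names are illustrative assumptions, not the paper's evaluation criterion:

```python
import numpy as np

def detection_metrics(onset_times, detect_times, window_ms=300.0):
    """Per-trial slip-detection latency and sensitivity: a trial counts
    as detected if the first positive prediction lands within
    `window_ms` after slip onset. `None` means no detection fired."""
    latencies = []
    for onset, det in zip(onset_times, detect_times):
        if det is not None and 0.0 <= det - onset <= window_ms:
            latencies.append(det - onset)
    sensitivity = 100.0 * len(latencies) / len(onset_times)
    mean_latency = float(np.mean(latencies)) if latencies else float("nan")
    return mean_latency, sensitivity
```

Reporting both numbers together matters because a classifier can trade one for the other: delaying the decision usually raises accuracy while worsening latency, the central tension the paper navigates.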
Title: Machine Learning Enables Rapid Detection of Slips Using a Robotic Hip Exoskeleton
IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 2, pp. 666-677.
Pub Date: 2025-04-14 | DOI: 10.1109/TMRB.2025.3560399
Jiaqi Chen;Guochen Ning;Longfei Ma;Hongen Liao
Deformable tissue retraction is a common but time-consuming task in robotic surgery. An autonomous robotic deformable tissue retraction system has the potential to help surgeons reduce cognitive burden and focus more on critical aspects of the surgery. However, the uncertain deformation and complex constraints of deformable tissues pose significant challenges. We propose an autonomous deformable tissue retraction framework that incorporates visual representation and learning models, along with a 7-degree-of-freedom robotic system. For extracting deformation representations and learning to manipulate deformable tissues from 2D images, we introduce a Sequential-information-based Contrastive State Representation Learning (SC-SRL) algorithm and a reinforcement learning model with asymmetric inputs and auxiliary losses. Experimental results show that the proposed framework achieved a 93.0% success rate on the tissue retraction task in a simulated environment. Furthermore, our method achieves a safe retraction trajectory proportion of 92.5% according to a novel evaluation method based on the histogram of feature angles of the tissue particles. The proposed framework can also be deployed on a real robotic system through a sim-to-real transfer pipeline, acquire policies for nearby tasks, and exhibit robustness to dynamic visual disturbances. This study paves a new path for the application of vision-based intelligent systems in surgical robotics.
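A generic InfoNCE-style contrastive loss over temporally paired state embeddings conveys the flavor of contrastive state representation learning; the exact objective, temperature, and pairing scheme used by SC-SRL are not public here, so everything below is an assumed stand-in:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Contrastive objective over state embeddings: each anchor should
    be most similar to its temporally adjacent positive and dissimilar
    to the other states in the batch. Rows of `anchors` and `positives`
    are matched pairs."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))      # matched pairs sit on the diagonal
```

Minimizing this loss pulls embeddings of consecutive observations together while pushing apart unrelated states, giving the downstream RL policy a deformation-aware state space instead of raw pixels.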
{"title":"Autonomous Deformable Tissue Retraction System Based on 2-D Visual Representation and Asymmetric Reinforcement Learning for Robotic Surgery","authors":"Jiaqi Chen;Guochen Ning;Longfei Ma;Hongen Liao","doi":"10.1109/TMRB.2025.3560399","DOIUrl":"https://doi.org/10.1109/TMRB.2025.3560399","url":null,"abstract":"Deformable tissue retraction is a common but time-consuming task in robotic surgery. An autonomous robotic deformable tissue retraction system has the potential to help surgeons reduce cognitive burden and focus on the critical aspects of the surgery. However, the uncertain deformation and complex constraints of deformable tissues pose significant challenges. We propose an autonomous deformable tissue retraction framework that incorporates visual representation and learning models, along with a 7-degree-of-freedom robotic system. To extract deformation representations and learn to manipulate deformable tissues from 2D images, we introduce a Sequential-information-based Contrastive State Representation Learning (SC-SRL) algorithm and a reinforcement learning model with asymmetric inputs and auxiliary losses. Experimental results show that the proposed framework achieved a 93.0% success rate on the tissue retraction task in a simulated environment. Furthermore, our method achieves a safe retraction trajectory proportion of 92.5%, measured with a novel evaluation method based on the histogram of feature angles of the tissue particles. The proposed framework can also be deployed on a real robotic system through a sim-to-real transfer pipeline, acquire policies for related tasks, and withstand dynamic visual disturbances. This study paves a new path for the application of vision-based intelligent systems in surgical robotics.","PeriodicalId":73318,"journal":{"name":"IEEE transactions on medical robotics and bionics","volume":"7 2","pages":"595-606"},"PeriodicalIF":3.4,"publicationDate":"2025-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144084776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
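The SC-SRL algorithm in the abstract above learns state representations from 2D images by contrasting temporally adjacent frames. The paper's exact loss is not given here; the sketch below illustrates the standard InfoNCE-style contrastive objective often used for sequential state representation learning, with all embedding sizes and the temperature value chosen purely for illustration.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Contrastive loss pairing each anchor embedding with its temporal
    positive; the other positives in the batch act as negatives."""
    # L2-normalize so dot products become cosine similarities.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                  # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The matching pair for anchor i is positive i (the diagonal).
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z_t = rng.normal(size=(8, 16))                  # embeddings of frames at time t
z_t1 = z_t + 0.05 * rng.normal(size=(8, 16))    # nearby frames at time t+1
loss = info_nce_loss(z_t, z_t1)
print(f"contrastive loss: {loss:.4f}")
```

With temporally adjacent frames as positives, the loss is small; pairing each anchor with an unrelated frame drives it up, which is what pushes the encoder toward temporally consistent representations.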
Pub Date : 2025-04-14DOI: 10.1109/TMRB.2025.3560383
Hamid Ladjal;Michael Beuve;Behzad Shariat
Respiratory-induced organ motion is a technical challenge for radiation therapy of lung cancer. Breathing is controlled by two independent muscle groups: the thoracic and diaphragm muscles. Modeling their action constitutes an important step toward a respiratory motion model. The amplitudes of the diaphragm forces and rib displacements are patient-specific and depend on the geometrical and physiological characteristics of the patient. This article presents a patient-specific biomechanical model (PSBM) of the diaphragm, together with rib kinematics. To determine the appropriate patient-specific diaphragm forces over a whole respiratory cycle, an inverse finite element (FE) analysis methodology was implemented to match the FE simulation results to the experimental results. Rib kinematics were extracted and computed directly from 4D Computed Tomography (CT) scan images. We investigated the effect of element type, finite deformation, and elasticity on accuracy and computation time. The results demonstrate that the proposed FE model, including rib kinematics, can accurately predict diaphragm motion with an average surface error in the diaphragm/lung contact region of less than $2.2 \pm 2.1$ mm. This constitutes a first step toward patient-specific biomechanical modeling of the respiratory system to pilot lung and lung tumor motion for External Beam Radiation Therapy (EBRT).
{"title":"Patient-Specific Biomechanical Diaphragm-Ribs Respiratory Motion Model for Radiation Therapy","authors":"Hamid Ladjal;Michael Beuve;Behzad Shariat","doi":"10.1109/TMRB.2025.3560383","DOIUrl":"https://doi.org/10.1109/TMRB.2025.3560383","url":null,"abstract":"Respiratory-induced organ motion is a technical challenge for radiation therapy of lung cancer. Breathing is controlled by two independent muscle groups: the thoracic and diaphragm muscles. Modeling their action constitutes an important step toward a respiratory motion model. The amplitudes of the diaphragm forces and rib displacements are patient-specific and depend on the geometrical and physiological characteristics of the patient. This article presents a patient-specific biomechanical model (PSBM) of the diaphragm, together with rib kinematics. To determine the appropriate patient-specific diaphragm forces over a whole respiratory cycle, an inverse finite element (FE) analysis methodology was implemented to match the FE simulation results to the experimental results. Rib kinematics were extracted and computed directly from 4D Computed Tomography (CT) scan images. We investigated the effect of element type, finite deformation, and elasticity on accuracy and computation time. The results demonstrate that the proposed FE model, including rib kinematics, can accurately predict diaphragm motion with an average surface error in the diaphragm/lung contact region of less than <inline-formula> <tex-math>$2.2 \pm 2.1$ </tex-math></inline-formula> mm. This constitutes a first step toward patient-specific biomechanical modeling of the respiratory system to pilot lung and lung tumor motion for External Beam Radiation Therapy (EBRT).","PeriodicalId":73318,"journal":{"name":"IEEE transactions on medical robotics and bionics","volume":"7 2","pages":"802-813"},"PeriodicalIF":3.4,"publicationDate":"2025-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143949203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
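The inverse FE analysis in the abstract above fits diaphragm force amplitudes so that simulated displacements reproduce the 4D-CT-observed motion. The sketch below shows the general idea with a toy one-parameter surrogate in place of a real FE solver: sweep candidate force amplitudes and keep the one minimizing the least-squares displacement residual. The `simulate` surrogate, compliance values, and noise are all hypothetical.

```python
import numpy as np

def fit_diaphragm_force(simulate, u_observed, f_grid):
    """Pick the force amplitude whose simulated displacement field best
    matches the image-derived displacements (least-squares residual)."""
    residuals = [np.sum((simulate(f) - u_observed) ** 2) for f in f_grid]
    best = int(np.argmin(residuals))
    return f_grid[best], residuals[best]

# Toy surrogate standing in for the FE solver (hypothetical numbers):
# displacement at 4 surface nodes responds mildly nonlinearly to force.
compliance = np.array([0.8, 1.1, 0.9, 1.3])          # mm per N
simulate = lambda f: compliance * f + 0.02 * f**2
u_obs = simulate(5.0) + np.array([0.05, -0.03, 0.02, -0.01])  # noisy "CT" data

f_hat, err = fit_diaphragm_force(simulate, u_obs, np.linspace(0.0, 10.0, 201))
print(f"estimated force amplitude: {f_hat:.2f} N")
```

In a real inverse FE pipeline the grid sweep would be replaced by a gradient-based or derivative-free optimizer, with one full FE solve per candidate, but the matching criterion is the same.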
Pub Date : 2025-04-14DOI: 10.1109/TMRB.2025.3560385
Mahdi Nabipour;Gregory S. Sawicki;Massimo Sartori
Recent advancements in wearable exoskeletons for the human lower extremities have primarily focused on augmenting walking capacity, either by reducing metabolic cost or by providing joint torque support based on measured electromyography or predicted joint torques. However, less attention has been given to using robotic exoskeletons to control the mechanics of specific biological tissues, such as elastic tendons. Achieving closed-loop control over in-vivo musculotendon mechanics during movement could revolutionize injury prevention and personalized rehabilitation. Here, we introduce a framework that uses musculoskeletal modeling and nonlinear model predictive control (NMPC) to close the loop around tendon force in a simulation of cyclic force production of the human ankle plantarflexors acting in parallel with a powered exoskeleton. The proposed framework integrates a computationally efficient model comprising explicit closed-form ordinary differential equations governing the musculotendon and ankle-joint dynamics with parallel actuation. The model's computation time, in the microsecond range, allows prediction of future states for real-time closed-loop control. Compared to a predictive proportional-derivative controller, the NMPC-based framework more effectively maintained the Achilles tendon force within a predetermined threshold across varying levels of muscle excitation amplitude and frequency. Remarkably, the NMPC framework is robust to muscle excitation variations during cyclic motions, making it suitable for real-world applications.
{"title":"Predictive Control of Achilles Tendon Force During Cyclic Motions in a Simulated Musculoskeletal System With Parallel Actuation","authors":"Mahdi Nabipour;Gregory S. Sawicki;Massimo Sartori","doi":"10.1109/TMRB.2025.3560385","DOIUrl":"https://doi.org/10.1109/TMRB.2025.3560385","url":null,"abstract":"Recent advancements in wearable exoskeletons for the human lower extremities have primarily focused on augmenting walking capacity, either by reducing metabolic cost or by providing joint torque support based on measured electromyography or predicted joint torques. However, less attention has been given to using robotic exoskeletons to control the mechanics of specific biological tissues, such as elastic tendons. Achieving closed-loop control over in-vivo musculotendon mechanics during movement could revolutionize injury prevention and personalized rehabilitation. Here, we introduce a framework that uses musculoskeletal modeling and nonlinear model predictive control (NMPC) to close the loop around tendon force in a simulation of cyclic force production of the human ankle plantarflexors acting in parallel with a powered exoskeleton. The proposed framework integrates a computationally efficient model comprising explicit closed-form ordinary differential equations governing the musculotendon and ankle-joint dynamics with parallel actuation. The model's computation time, in the microsecond range, allows prediction of future states for real-time closed-loop control. Compared to a predictive proportional-derivative controller, the NMPC-based framework more effectively maintained the Achilles tendon force within a predetermined threshold across varying levels of muscle excitation amplitude and frequency. Remarkably, the NMPC framework is robust to muscle excitation variations during cyclic motions, making it suitable for real-world applications.","PeriodicalId":73318,"journal":{"name":"IEEE transactions on medical robotics and bionics","volume":"7 2","pages":"814-825"},"PeriodicalIF":3.4,"publicationDate":"2025-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143949215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
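The NMPC scheme in the abstract above repeatedly predicts tendon force over a short horizon and scales excitation so the prediction stays under a threshold. The paper's musculotendon ODEs and solver are not reproduced here; the sketch below illustrates the receding-horizon idea on a toy first-order force model, with all parameters (time constant, gain, 250 N cap, horizon length) invented for illustration.

```python
import numpy as np

def predict(force, u_seq, dt=0.001, tau=0.05, gain=400.0):
    """Toy first-order musculotendon surrogate: tendon force relaxes toward
    gain * excitation with time constant tau (hypothetical parameters)."""
    f, traj = force, []
    for u in u_seq:
        f = f + dt / tau * (gain * u - f)
        traj.append(f)
    return np.array(traj)

def nmpc_step(force, u_desired, f_max, horizon=20,
              candidates=np.linspace(0.0, 1.0, 21)):
    """Receding-horizon step: apply the largest excitation scaling whose
    predicted force trajectory stays below the threshold."""
    best = 0.0
    for alpha in candidates:
        traj = predict(force, alpha * u_desired * np.ones(horizon))
        if traj.max() <= f_max and alpha > best:
            best = alpha
    return best

# Cyclic excitation demand; the controller caps tendon force at 250 N.
f, applied = 0.0, []
for k in range(300):
    u_des = 0.5 + 0.5 * np.sin(2 * np.pi * k / 100)   # cyclic demand
    alpha = nmpc_step(f, u_des, f_max=250.0)
    f = predict(f, [alpha * u_des])[-1]               # advance one step
    applied.append(f)
print(f"peak tendon force: {max(applied):.1f} N")
```

Because the applied first step is exactly the first predicted step, any scaling that satisfies the horizon constraint also keeps the realized force under the cap; a real NMPC solver would replace the scalar grid search with a constrained trajectory optimization at each step.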