LEARN: Wearable Exoskeletons Based on Multimodal Edge Computing for Daily Life Assistance
Key Information
- Duration: 24 months
- Main ERC field: PE – Physical Sciences and Engineering
- ERC Subfields: PE7_10 Robotics; PE6_11 Machine learning, statistical data processing, and applications using signal processing (e.g., voice, image, video); LS7_12 Healthcare, including care for the aging population
- Keywords: Assistive technologies; Edge computing; Machine learning; Exoskeletons; Biometric signals; Artificial vision
Brief Description of the Proposal
LEARN focuses on assistive devices in the form of wearable robotic systems that can enhance or restore motor functions in healthy workers or in patients with motor disabilities. The main goal is to leverage machine learning (ML) as an enabling technology for developing the next generation of such assistive devices. To achieve this, the project relies heavily on interdisciplinary research. The three research units (RUs) involved in LEARN bring complementary expertise in the topics covered by the research project: machine learning, embedded systems, edge computing, artificial vision, robotic systems for human-machine interaction, and wearable robotics.
State of the Art
This section provides an overview of the state of the art in the key technologies relevant to the LEARN project. Progress and limitations of robotic exoskeletons for assistance and neurorehabilitation are discussed, with particular attention to rigid and actuated designs. The role of edge computing in enabling the on-device execution of complex algorithms, including artificial intelligence (AI), for exoskeleton control is also highlighted. Finally, deep learning (DL) techniques for affordance segmentation (AS) and for the classification of surface electromyography (sEMG) and biometric signals are examined, with emphasis on the challenges and opportunities they present for the development of wearable assistive devices.
Detailed Project Description
The project is structured into five milestones (M), each with specific objectives and related activities:
M1 – Artificial Vision for Affordance Detection on Edge Devices (Leader: UNIGE):
- Design a new RGB-D (RGB plus depth) AS system using lightweight DL architectures to improve performance compared to RGB-only approaches. The goal is to enable the assistive device to estimate object properties and automatically assume an appropriate grip, allowing semi-autonomous operation.
- Activities:
- A1.1 Integration of depth information into the AS system (months 1-18): Analyze state-of-the-art solutions for RGB-D AS, and explore and co-optimize the depth representation and the DL architecture using platform-aware neural architecture search (NAS); a minimal model sketch follows this milestone's deliverables.
- A1.2 Deployment of the AS system on the embedded system (months 3-18): Deploy the AS module on edge devices such as the NVIDIA Jetson TX2, Jetson Nano, and Google Coral, ensuring a lightweight and effective AS model.
- Deliverables:
- D.1.1 (M12): Design and development of the AS module (document)
- D.1.2 (M18): AS module prototype (prototype)
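As an illustration of the direction A1.1 and A1.2 point to, the minimal PyTorch sketch below fuses an RGB stream (lightweight MobileNetV3-Small backbone) with a shallow depth encoder for per-pixel affordance prediction, and exports the model to ONNX as a first step toward edge deployment (e.g., TensorRT on a Jetson). The late-fusion scheme, channel sizes, input resolution, and number of affordance classes are illustrative assumptions, not the project's final design.

```python
# Minimal RGB-D affordance segmentation sketch (fusion scheme, channel
# sizes, and class count are illustrative assumptions).
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v3_small

NUM_AFFORDANCES = 7  # hypothetical number of affordance classes


class RGBDAffordanceNet(nn.Module):
    def __init__(self, num_classes=NUM_AFFORDANCES):
        super().__init__()
        # Lightweight RGB encoder (MobileNetV3-Small backbone, 576-ch output).
        self.rgb_encoder = mobilenet_v3_small(weights=None).features
        # Small depth encoder: depth maps carry less texture, so a shallower
        # stack is assumed to suffice (5 stride-2 convs -> 1/32 resolution).
        self.depth_encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Late fusion of the two feature maps, then per-pixel prediction.
        self.head = nn.Sequential(
            nn.Conv2d(576 + 64, 128, 1), nn.ReLU(inplace=True),
            nn.Conv2d(128, num_classes, 1),
        )

    def forward(self, rgb, depth):
        f_rgb = self.rgb_encoder(rgb)    # (B, 576, H/32, W/32)
        f_d = self.depth_encoder(depth)  # (B, 64, H/32, W/32)
        logits = self.head(torch.cat([f_rgb, f_d], dim=1))
        # Upsample the logits back to the input resolution.
        return nn.functional.interpolate(
            logits, size=rgb.shape[-2:], mode="bilinear", align_corners=False)


if __name__ == "__main__":
    model = RGBDAffordanceNet().eval()
    rgb = torch.randn(1, 3, 224, 224)
    depth = torch.randn(1, 1, 224, 224)
    out = model(rgb, depth)  # (1, NUM_AFFORDANCES, 224, 224)
    # Export to ONNX as a first step toward TensorRT (Jetson) deployment.
    torch.onnx.export(model, (rgb, depth), "rgbd_as.onnx",
                      input_names=["rgb", "depth"],
                      output_names=["affordance"])
```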
M2 – Embedded ML for Classification of sEMG and Biometric Signals (Leader: UNIMERCATORUM):
- Develop a new low-power wearable control system to detect movement intention based on biometric signals, adapting to the user over time with Lifelong Machine Learning (LML). The goal is to improve control, operational awareness, and autonomy of assistive devices.
- Activities:
- A2.1 Development of the human-machine interface (HMI) based on sEMG and biometric signals (months 1-18): Collect data, design lightweight ML models, develop hardware prototypes, and deploy the ML models, including LML, on microcontrollers; a minimal classifier sketch follows this milestone's deliverables.
- A2.2 Integration of HMI with the wearable system (months 13-18): Integrate AI-based HMI with the wearable exoskeleton developed in M3.
- Deliverables:
- D.2.1 (M12): Design and development of AI-based HMI (document)
- D.2.2 (M18): AI-based HMI prototype (prototype)
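As an illustration of A2.1, the sketch below pairs a microcontroller-scale 1D CNN for windowed sEMG intention classification with an experience-replay update, one common lifelong-learning strategy for adapting to a user while limiting catastrophic forgetting. The electrode count, window length, gesture classes, and replay parameters are illustrative assumptions; the actual models and LML method will be defined during the activity.

```python
# Minimal sketch of a microcontroller-scale sEMG intention classifier
# with a replay-based lifelong-learning update (channel count, window
# length, and class set are illustrative assumptions).
import random
import torch
import torch.nn as nn

N_CHANNELS = 8   # hypothetical sEMG electrode count
WINDOW = 200     # hypothetical samples per window (e.g., 200 ms @ 1 kHz)
N_GESTURES = 5   # hypothetical movement-intention classes


class TinyEMGNet(nn.Module):
    """1D CNN small enough to be quantized and run on an MCU."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, N_GESTURES),
        )

    def forward(self, x):  # x: (B, N_CHANNELS, WINDOW)
        return self.net(x)


def lifelong_update(model, optimizer, new_x, new_y, replay, replay_cap=256):
    """One adaptation step: mix the new user sample with replayed old
    samples to limit catastrophic forgetting (experience replay)."""
    batch = [(new_x, new_y)] + random.sample(replay, min(8, len(replay)))
    xs = torch.stack([x for x, _ in batch])
    ys = torch.tensor([y for _, y in batch])
    loss = nn.functional.cross_entropy(model(xs), ys)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if len(replay) < replay_cap:
        replay.append((new_x, new_y))
    return loss.item()


if __name__ == "__main__":
    model = TinyEMGNet()
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    replay = []
    x = torch.randn(N_CHANNELS, WINDOW)  # one incoming labeled window
    print(lifelong_update(model, opt, x, 2, replay))
```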
M3 – Development of New AI-Enhanced Assistive Wearable Exoskeleton Systems (Leader: SSSA):
- Develop prototypes of advanced wearable exoskeletons for motor assistance, using AI to enhance control, operational awareness, and autonomy, improving usability and acceptance by the end user.
- Activities:
- A3.1 Development of wearable devices for detection and control (months 1-18): Redesign existing exoskeleton prototypes to integrate embedded sensors that feed data to the AI modules, improve usability, and incorporate additional features based on the results of M1 and M2.
- A3.2 System integration (months 13-18): Integrate the developed wearable exoskeleton with the advanced embedded sensing, AI processing, and control methods developed in M1 and M2; an illustrative integration loop follows this milestone's deliverables.
- A3.3 Experiments in relevant application scenarios (months 15-24): Experimentally validate and evaluate the integrated prototypes in a laboratory environment and in a relevant application scenario (e.g., with patients with motor disabilities), reaching TRL 5.
- Deliverables:
- D.3.1 (M12): Design and development of wearable exoskeletons (document)
- D.3.2 (M18): Integrated system prototype (exoskeleton and AI processing) (prototype)
- D.3.3 (M24): Experimental evaluation results (document)
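As an illustration of the system integration targeted by A3.2, the sketch below shows one plausible control loop that fuses the M1 affordance output with the M2 intention output to drive the exoskeleton grip. All module interfaces (vision, emg, exoskeleton) and the intention labels are hypothetical placeholders; the real APIs will be those of the M1 and M2 prototypes.

```python
# Illustrative integration loop for A3.2 (all module interfaces are
# hypothetical placeholders, not the project's actual APIs).
import time


def control_loop(vision, emg, exoskeleton, period_s=0.02):
    """Fuse affordance detection (M1) and intention detection (M2)
    into a grip command at a fixed control rate (assumed 50 Hz)."""
    while exoskeleton.is_active():
        t0 = time.monotonic()
        affordance = vision.current_affordance()  # e.g., "grasp-handle"
        intention = emg.current_intention()       # e.g., "close-hand"
        # Semi-autonomous policy: the user's intention triggers the
        # motion; the detected affordance selects the grip shape.
        if intention == "close-hand" and affordance is not None:
            exoskeleton.assume_grip(affordance)
        elif intention == "open-hand":
            exoskeleton.release()
        # Sleep out the remainder of the control period.
        time.sleep(max(0.0, period_s - (time.monotonic() - t0)))
```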
M4 – Dissemination and Exploitation (Leader: SSSA):
- Promote dissemination and interest in the methods developed in the target clinical and industrial application sectors (including through pilot tests and demonstrations) and communicate the results to the scientific community and the general public.
- Activities:
- A4.1 Internet and media dissemination (months 1-24): Create a web page, organize live events, disseminate results to the media.
- A4.2 Scientific dissemination (months 5-24): Publish articles in journals, organize workshops and special sessions at major conferences.
- A4.3 Strategic dissemination (months 13-24): Organize public awareness events such as webinars and workshops, in collaboration with the Italian competence center on collaborative robotics Artes 4.0, the Italian Institute of Robotics and Intelligent Machines, and European digital innovation hubs on robotics such as DIH-HERO.
- Deliverables:
- D.4.1 (M2): Project web page online
- D.4.2 (M12): Dissemination report no. 1
- D.4.3 (M24): Dissemination report no. 2
M5 – Project Management (Leader: UNIMERCATORUM):
- Manage the administrative aspects of the project, as well as meetings and communication among the RUs.
- Activities:
- A5.1 Administration and supervision (months 1-24): Prepare, monitor, and update the work plan; organize kickoff and technical meetings; manage communication among the RUs; define success criteria and strategies for resolving major technical issues; ensure high-quality results; periodically evaluate results; and deliver interim and final scientific and financial reports.
- Deliverables:
- D.5.1 (M12): Management and administration report no. 1
- D.5.2 (M24): Management and administration report no. 2
Detailed Description of the Project’s Impact
- Beyond the state of the art: The project aims to advance the state of the art in several research areas. In artificial vision, LEARN will develop innovative solutions for affordance detection on embedded systems with limited resources. Regarding the HMI, the project will introduce the use of Lifelong Machine Learning to adapt control systems to each user's individual biometric characteristics. In assistive device development, the project will integrate advanced AI-enabled control features into exoskeleton designs, focusing on comfort, usability, and reliability.
- Dissemination of project results: Project results will be disseminated through various channels, including scientific publications, conference participation, workshop organization, project web page creation, and media engagement. Priority will be given to disseminating results to young researchers and industrial stakeholders.
- Exploitation of project results: LEARN project results have the potential to be exploited in various sectors, including healthcare, manufacturing, and assistive robotics. The developed technology could lead to smarter, more efficient, and user-friendly assistive devices for people with motor disabilities, as well as improved robotic systems for industrial and service applications.
- Socioeconomic impact and compliance with EU programs: The LEARN project aligns with EU priorities on health, well-being, and social inclusion. By providing innovative solutions to improve the lives of people with motor disabilities, the project contributes to the objectives of the Horizon Europe Health cluster (Cluster 1) and the European Commission's 2020-2024 research and innovation strategy. Additionally, the project promotes innovation and competitiveness in European industry by supporting the development of advanced assistive technologies.
Bibliography
The bibliography lists the academic publications that informed the state of the art, methodology, and objectives of the LEARN project. The citations cover topics such as robotic exoskeletons, machine learning, artificial vision, biometric signal processing, and human-robot interaction.
BIBLIOGRAPHY
- Cappello, Leonardo, et al. “Assisting hand function after spinal cord injury with a fabric-based soft robotic glove.” Journal of Neuroengineering and Rehabilitation 15.1 (2018): 1-10.
- Lu, Zhiyuan, et al. “Robotic Hand–Assisted Training for Spinal Cord Injury Driven by Myoelectric Pattern Recognition: A Case Report.” American Journal of Physical Medicine & Rehabilitation 96.10 (2017): S146-S149.
- Ang, Kai Keng, et al. “A randomized controlled trial of EEG-based motor imagery brain-computer interface robotic rehabilitation for stroke.” Clinical EEG and Neuroscience 46.4 (2015): 310-320.
- Heo, Pilwon, et al. “Current hand exoskeleton technologies for rehabilitation and assistive engineering.” International Journal of Precision Engineering and Manufacturing 13.5 (2012): 807-824.
- Leonardis, Daniele, et al. “An EMG-controlled robotic hand exoskeleton for bilateral rehabilitation.” IEEE Transactions on Haptics 8.2 (2015): 140-151.
- Sarac, Mine, et al. “Design and kinematic optimization of a novel underactuated robotic hand exoskeleton.” Meccanica 52.3 (2017): 749-761.
- Chu, Chia-Ye, and Rita M. Patterson. “Soft robotic devices for hand rehabilitation and assistance: a narrative review.” Journal of Neuroengineering and Rehabilitation 15.1 (2018): 1-14.
- Alicea, Ryan, et al. “A soft, synergy-based robotic glove for grasping assistance.” Wearable Technologies 2 (2021).
- Kim, Dong Hyun, Yechan Lee, and Hyung-Soon Park. “Bioinspired high-degrees of freedom soft robotic glove for restoring versatile and comfortable manipulation.” Soft Robotics 9.4 (2022): 734-744.
- Bagneschi, Tommaso, et al. “Design and Characterization of Modular Soft Components for an Exoskeleton Glove with Improved Wearability.” Symposium on Robot Design, Dynamics and Control. Springer, Cham, 2022.
- Nguyen, Anh, et al. “Detecting object affordances with convolutional neural networks.” 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016.
- Zhong, Boxuan, He Huang, and Edgar Lobaton. “Reliable vision-based grasping target recognition for upper limb prostheses.” IEEE Transactions on Cybernetics (2020).
- Ragusa, Edoardo, et al. “Hardware-aware affordance detection for application in portable embedded systems.” IEEE Access 9 (2021): 123178-123193.
- Apicella, Tommaso, et al. “An Affordance Detection Pipeline for Resource-Constrained Devices.” 2021 28th IEEE International Conference on Electronics, Circuits, and Systems (ICECS). IEEE, 2021.
- Howard, Andrew, et al. “Searching for MobileNetV3.” Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019.
- Lin, Ji, et al. “Memory-efficient Patch-based Inference for Tiny Deep Learning.” Advances in Neural Information Processing Systems 34 (2021): 2346-2358.
- Benmeziane, Hadjer, et al. “A comprehensive survey on hardware-aware neural architecture search.” arXiv preprint arXiv:2101.09336 (2021).
- Taverne, Luke T., et al. “Video-based prediction of hand-grasp preshaping with application to prosthesis control.” 2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019.
- Krausz, Nili E., and Levi J. Hargrove. “A survey of teleceptive sensing for wearable assistive robotic devices.” Sensors 19.23 (2019): 5238.
- He, Yunan, et al. “Vision-based assistance for myoelectric hand control.” IEEE Access 8 (2020): 201956-201965.
- Loconsole, Claudio, et al. “An EMG-based approach for on-line predicted torque control in robotic-assisted rehabilitation.” 2014 IEEE Haptics Symposium (HAPTICS). IEEE, 2014.
- Loconsole, Claudio, et al. “A model-free technique based on computer vision and sEMG for classification in Parkinson’s disease by using computer-assisted handwriting analysis.” Pattern Recognition Letters 121 (2019): 28-36.
- Pradhan, Ashirbad, Jiayuan He, and Ning Jiang. “Open Access Dataset for Electromyography based Multicode Biometric Authentication.” arXiv preprint arXiv:2201.01051 (2022).
- Johnson, Shawn Swanson, and Elizabeth Mansfield. “Prosthetic training: upper limb.” Physical Medicine and Rehabilitation Clinics 25.1 (2014): 133-151.
- Sierotowicz, Marek, et al. “EMG-Driven Machine Learning Control of a Soft Glove for Grasping Assistance and Rehabilitation.” IEEE Robotics and Automation Letters 7.2 (2022): 1566-1573.
- Wimalasena, Lahiru N., et al. “Estimating muscle activation from EMG using deep learning-based dynamical systems models.” Journal of Neural Engineering 19.3 (2022): 036013.
- Parisi, German I., et al. “Continual lifelong learning with neural networks: A review.” Neural Networks 113 (2019): 54-71.
- Lin, Ji, et al. “On-Device Training Under 256KB Memory.” arXiv preprint arXiv:2206.15472 (2022).