An analysis of gait capture and simulation techniques for lower limb exoskeleton robots for stroke rehabilitation

Research Article
Open access

Yiquan Jin 1*
  • 1 Zhejiang University    
  • *corresponding author yiquan.20@intl.zju.edu.cn
Published on 25 September 2023 | https://doi.org/10.54254/2755-2721/10/20230120
ACE Vol.10
ISSN (Print): 2755-2721
ISSN (Online): 2755-273X
ISBN (Print): 978-1-83558-009-7
ISBN (Online): 978-1-83558-010-3

Abstract

Lower extremity rehabilitation-assisted exoskeleton robots bring together multiple disciplines, including biomechanics, control engineering, robotics, and computer science. Their main role is to help patients and rehabilitation therapists maintain or restore lower extremity mobility; sound research into human gait analysis is therefore the basis for building and improving such robots. To analyse and evaluate the benefits of modern technology for stroke patients, this paper summarizes the development of human gait capture and simulation technologies and classifies and evaluates the latest techniques in each area. Through a comprehensive review of research advances in these fields, the usefulness of lower extremity rehabilitation-assisted exoskeleton robots for stroke populations is assessed. The paper is informative for studying how a lower limb exoskeleton robot combined with gait capture and simulation technology can support rehabilitation training for stroke patients.

Keywords:

lower extremity rehabilitation-assisted exoskeleton robots, gait capture techniques, gait simulation techniques, stroke patients.

Jin, Y. (2023). An analysis of gait capture and simulation techniques for lower limb exoskeleton robots for stroke rehabilitation. Applied and Computational Engineering, 10, 11-16.

1. Introduction

From 1990 to 2019, the number of stroke cases increased significantly worldwide, and stroke became the third leading cause of death and disability combined [1]. The abnormal motor activation and altered muscle control caused by a stroke can leave patients with abnormal gait patterns, so timely and well-designed lower extremity rehabilitation is crucial for stroke patients. Compared with traditional manual assistance from rehabilitation therapists, using a lower extremity rehabilitation-assisted exoskeleton robot for self-rehabilitation can effectively alleviate the serious mismatch between the number of therapists and the number of patients and can substantially reduce the cost of rehabilitation. However, because of each patient's individual physical characteristics, such as height, age, and gender, errors of more than 5 degrees can arise in the sagittal kinematics of the patient's lower extremity [2]. Exoskeleton robots for lower extremity rehabilitation therefore still need more accurate and individualized human gait data to extend their range of application and improve their general safety.

Unlike traditional lower limb rehabilitation-assisted exoskeleton robots, which cannot exert real-time control based on the actual motion data of the person being rehabilitated, current robots can monitor the wearer's lower limb motion patterns in real time and self-adjust joint motion parameters, effectively reducing the risk of secondary injury caused by mismatched data. The comfort, naturalness, and stability of the exoskeleton are thereby gradually improved, optimizing the wearer's experience of human-robot interaction. Precise, personalized body data captured by modern motion capture technology serves as input to modern gait simulation, and the lower limb exoskeleton robot provides individualized rehabilitation training for the stroke patient based on the simulation output, helping the patient follow a more scientific and effective rehabilitation program.

2. Development of human gait capture technology

Lighter, more accurate wearable sensors and precise motion capture technology continue to raise the level of human gait capture, providing the equipment and data foundations for subsequent gait simulation.

2.1. Development of wearable multimode sensors

Current wearable sensors fall into three main categories: mountable sensors, fabric-based sensors, and skin-like sensors [3]. Wearable devices allow information extraction, aggregation, and close monitoring of the surroundings without affecting the wearer's motion. Wearable sensors usually offer excellent flexibility and stretchability, so they adapt to the various strains generated by human movement; their fast response and sensitivity to mechanical stimuli enable accurate capture of human activity signals.

Wearable strain sensors based on 3D textile structures, such as the MLS sensor, can operate in different modes to monitor biosignals or motion signals such as gait pressure [4]. 3D textile structures are commonly used as pressure absorbers; the MLS instead uses pre-strained monofilaments in the 3D textile structure as pressure transmitters, producing amplified strain in a PVDF film [4]. By attaching such a sensor to a shoe insole [4] or to the skin of a lower limb and monitoring the change in pressure during walking, human gait information can be captured in real time.
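At its simplest, pressure-based gait capture of this kind reduces to detecting threshold crossings in the insole pressure signal. The sketch below is illustrative only: the synthetic signal and the 50%-of-range threshold are assumptions, not parameters from [4].

```python
import numpy as np

def detect_heel_strikes(pressure, fs, threshold=None):
    """Detect heel-strike events as upward threshold crossings of an
    insole pressure signal sampled at fs Hz. The default threshold
    (midpoint of the signal range) is an illustrative choice."""
    pressure = np.asarray(pressure, dtype=float)
    if threshold is None:
        threshold = pressure.min() + 0.5 * (pressure.max() - pressure.min())
    above = pressure >= threshold
    # rising edges: below threshold at i-1, at-or-above threshold at i
    strikes = np.flatnonzero(~above[:-1] & above[1:]) + 1
    return strikes / fs  # event times in seconds

# synthetic pressure trace: one stance bump per second, 100 Hz, 3 s
fs = 100
t = np.arange(0, 3, 1 / fs)
signal = np.clip(np.sin(2 * np.pi * 1.0 * t), 0, None)
events = detect_heel_strikes(signal, fs)
```

The inter-event interval directly gives stride time, one of the basic gait parameters such a sensor can report.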

Skin-like sensors, such as multifunctional epidermal sensors, consist of electrodes that monitor muscle stimulation, electromyography (EMG) signals, and mechanical strain [3]. By attaching the sensor to the skin, the EMG signal and strain of the lower extremity are monitored while the subject walks, capturing both the biosignals and the kinetic information generated by gait.
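Turning such raw EMG electrode output into a usable activity signal typically starts with linear-envelope extraction. A minimal sketch, assuming a synthetic surface-EMG signal, with a moving-average smoother standing in for a properly designed low-pass filter:

```python
import numpy as np

def emg_envelope(emg, fs, win_ms=150):
    """Linear envelope: remove DC offset, full-wave rectify, then smooth
    with a moving average. The 150 ms window is an illustrative choice."""
    x = np.asarray(emg, dtype=float)
    rectified = np.abs(x - x.mean())
    n = max(1, int(fs * win_ms / 1000))
    kernel = np.ones(n) / n
    return np.convolve(rectified, kernel, mode="same")

# synthetic sEMG at 1 kHz: low baseline noise with one activation burst
rng = np.random.default_rng(0)
fs = 1000
raw = rng.normal(0.0, 0.05, 2000)
raw[500:1000] += rng.normal(0.0, 0.5, 500)  # simulated muscle activation
env = emg_envelope(raw, fs)
```

The envelope rises during the burst and stays near zero elsewhere, which is the feature a gait-phase detector would threshold or feed to a classifier.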

Unlike some older sensors, wearable multimode sensors are light and well-fitting enough that stroke patients can move without being encumbered by the sensor's size or mass, yielding more accurate gait capture information.

2.2. Development of high precision motion capture technology

Unlike traditional motion capture techniques, modern optical motion capture systems achieve high accuracy by detecting the 3D position of the subject with multiple synchronized cameras in a defined space [5]. Multi-camera systems that track reflective markers, such as the Vicon motion capture system, have gradually become the established gold standard for gait analysis [6].
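At the core of such multi-camera systems is triangulation: each calibrated camera observes a marker in 2D, and the 3D position is recovered by linear least squares. A minimal direct-linear-transform (DLT) sketch with two hypothetical toy cameras; the projection matrices and marker position are illustrative, not Vicon's actual calibration model.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2D observations."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # homogeneous solution: right singular vector of the smallest singular value
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

def proj(P, X):
    """Project a 3D point through a 3x4 camera matrix."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# two toy cameras looking down +z, the second shifted 1 m along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
marker = np.array([0.3, 0.2, 2.0])  # ground-truth marker position (m)
est = triangulate(P1, P2, proj(P1, marker), proj(P2, marker))
```

With noise-free observations the estimate matches the ground-truth marker; real systems solve the same least-squares problem over many cameras per marker.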

Although optical motion capture is highly accurate, it requires frequent calibration, is confined to a dedicated space, and carries significant cost. Single-camera motion capture technology can capture the gait of test subjects without these spatial and environmental constraints while retaining acceptable recognition and capture accuracy [5]. For example, the MonoEye system mounts an RGB camera with an ultra-wide fisheye lens on the tester's chest to capture the wearer's 3D body pose together with information about the surrounding environment, enabling multimodal motion capture [5]. A single RGB camera can also be combined with a convolutional neural network (CNN) to capture and match 2D and 3D pose features while the subject walks. In addition, markerless pose-estimation methods such as OpenPose, DeepPose, and VNect, together with depth cameras such as the Azure Kinect, are also being used for human gait capture [6].
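Once a camera pipeline of this kind yields 3D keypoints, the joint angles used in gait analysis follow from simple vector geometry. A sketch computing knee flexion from hypothetical hip, knee, and ankle positions (the coordinates are illustrative, not output from any of the systems named above):

```python
import numpy as np

def knee_flexion_deg(hip, knee, ankle):
    """Knee flexion from three 3D joint positions: 180 degrees minus the
    angle between the thigh and shank vectors, so full extension reads ~0."""
    thigh = np.asarray(hip, float) - np.asarray(knee, float)
    shank = np.asarray(ankle, float) - np.asarray(knee, float)
    cos_a = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    return 180.0 - np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# straight leg should read ~0 degrees; a right-angle bend ~90 degrees
straight = knee_flexion_deg([0, 1, 0], [0, 0.5, 0], [0, 0, 0])
bent = knee_flexion_deg([0, 1, 0], [0, 0.5, 0], [0.5, 0.5, 0])
```

Applied frame by frame, this produces the sagittal knee-angle curves whose between-patient differences motivate individualized gait data in the first place.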

High-precision optical motion capture systems reflect human gait data more accurately, enabling stroke patients to adjust their gait on the basis of better data. Portable RGB-camera and depth-camera systems enable telemedicine and remote rehabilitation, allowing stroke patients living in remote areas to adjust their gait with this technology [6].

3. Development of human gait simulation technology

The systematic upgrading of sensors and the continuous optimization of computer algorithms provide technical support for new human gait simulation technologies. These divide into three main areas: gait recognition using deep learning algorithms, wireless real-time monitoring sensor networks, and virtual reality systems for evaluating gait function.

3.1. Gait recognition using deep learning algorithms

For gait simulation, the commonly used machine learning techniques are convolutional neural networks (CNN), support vector machines (SVM), long short-term memory networks (LSTM), and gated recurrent units (GRU). Used alone, however, these models capture and predict human gait with large errors. Instead, the classification and simulation of human gait can be made more accurate by combining models: CNN with LSTM [7] [8], a multi-scale learning model (MSL) [9], a cuDNN-accelerated LSTM integrated with an improved RNN [10], a multilayer perceptron using nonlinear dynamics [11], and a two-layer feedforward network combining a walking-state (WS) classifier with a percent-of-gait-cycle (PGC) predictor [12] for simulating individual gait behavior.

The data inputs for these combined approaches fall mainly into neurophysiological signals such as EEG and EMG [8] [9] [12], 3D motion capture information [7] [10] [11], and kinetic data [12], and the approaches place heavy demands on the initial input data. EMG signals collected by sensors such as the Trigno system must be filtered to remove the interference from other electromagnetic sources picked up during recording [8]. The acceleration and angular velocity data collected by IMUs on various parts of the subject's lower extremities likewise need low-pass filtering, and the sensor data should be brought to a common scale through linear interpolation, normalization, and data segmentation, which effectively improves the accuracy of the subsequent deep neural networks. Within those networks, CNNs perform feature extraction and dimensionality reduction through convolutional and pooling layers [8], reducing the number of parameters and easing computational complexity. RNNs capture the temporal information in sequential data through recurrent layers and state transfer, modeling long-term dependencies and reducing model parameters and training time, but they suffer from vanishing and exploding gradients [7]. LSTMs effectively mitigate the vanishing-gradient problem of RNNs by introducing forget, input, and output gates [8].
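The IMU preprocessing chain described above (low-pass filtering, normalization, fixed-length segmentation) can be sketched as follows. The window lengths, and the moving-average smoother standing in for a designed low-pass filter, are illustrative assumptions rather than settings from the cited studies.

```python
import numpy as np

def preprocess_imu(signal, fs, smooth_ms=50, win_len=100, stride=50):
    """Illustrative preprocessing for one IMU channel: moving-average
    low-pass smoothing, z-score normalization, then sliding-window
    segmentation into fixed-length network inputs."""
    x = np.asarray(signal, dtype=float)
    n = max(1, int(fs * smooth_ms / 1000))
    x = np.convolve(x, np.ones(n) / n, mode="same")   # low-pass smoothing
    x = (x - x.mean()) / (x.std() + 1e-8)             # normalization
    starts = range(0, len(x) - win_len + 1, stride)   # segmentation
    return np.stack([x[s:s + win_len] for s in starts])

# synthetic gyroscope channel: 1 Hz gait rhythm plus sensor noise, 100 Hz
fs = 100
t = np.arange(0, 10, 1 / fs)
gyro = np.sin(2 * np.pi * 1.0 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)
windows = preprocess_imu(gyro, fs)
```

The resulting (windows, samples) array is the shape of input a CNN or LSTM gait classifier would consume, one window per training example.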

Thus, by sequentially combining multiple neural networks and fusing multimodal signals [9], combined networks achieve higher classification accuracy [9] and predictive relevance [11]. With personalized initial data from the wearer as input, the gait simulation of a lower limb rehabilitation-assisted exoskeleton robot becomes more adaptive and better matched to the individual.
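As a concrete illustration of such a sequential combination, the sketch below wires a 1D CNN front end into an LSTM in PyTorch. All channel counts and layer sizes are illustrative choices, not the architectures of the cited studies.

```python
import torch
import torch.nn as nn

class CNNLSTMGait(nn.Module):
    """Minimal CNN-LSTM gait classifier sketch: convolution and pooling
    extract local features from each sensor window, the LSTM models their
    temporal order, and a linear head scores gait classes."""
    def __init__(self, n_channels=6, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),  # halves the time axis
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):        # x: (batch, channels, time)
        f = self.features(x)     # (batch, 32, time/2)
        f = f.transpose(1, 2)    # LSTM expects (batch, time, features)
        _, (h, _) = self.lstm(f) # h: (layers, batch, 64), final hidden state
        return self.head(h[-1])  # (batch, n_classes) class scores

model = CNNLSTMGait()
logits = model(torch.randn(8, 6, 100))  # 8 windows, 6 IMU channels, 100 samples
```

Feeding the preprocessed sensor windows through this stack yields one class score vector per window, which a training loop would fit with a standard cross-entropy loss.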

3.2. Wireless real-time monitoring sensor network

The human sensor network is a system created by fusing multidisciplinary knowledge of biosensors, medical electronics, multi-sensor data fusion methods, and wireless communications [13]. Real-time body wireless sensor network systems measure body motion mainly through inertial measurement units (IMU), micro-electro-mechanical systems (MEMS), and other small devices built from microelectronic components in miniature packages. Compared with gait analysis based on high-precision cameras and optical motion systems, body sensor network (BSN) and wireless sensor network (WSN) systems are less constrained by complex algorithms and dedicated workspaces. Their small size, low cost, and unrestricted portability for dynamic monitoring make it possible to record gait fluctuations in everyday life [14].

The data input for BSN and WSN systems usually comes from sensor units consisting of a magnetometer, accelerometer, and gyroscope [13]. Because of gyroscope drift and the magnetic interference affecting magnetometers [13], the measured data cannot be used directly as input to the real-time sensor network. The most common solution is to process the collected data with a Kalman filter, eliminating errors caused by sensor noise [13] and allowing the output to converge [14]. The wireless sensor network receives detection data through aggregation nodes or relays it to sensor nodes over wireless links, while a gateway uploads the data using protocol conversions such as the XBee communication protocol. The management side then uses the sensor data as the basis for analyzing gait and making subsequent control decisions. When the sensor units are placed at suitable locations on the body and form a complete wireless body area network, continuous and timely monitoring of gait data, such as the motion of the hip, knee [13], and ankle [14] during walking, can be achieved, and the management side can complete the simulation of human gait and the corresponding decision making.
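A scalar Kalman filter of the kind used here can be sketched as follows, fusing a drifting gyroscope rate with a noisy accelerometer-derived joint angle. The noise variances, the 5 deg/s gyro bias, and the simulated knee-angle signal are all illustrative assumptions, not values from [13] or [14].

```python
import numpy as np

def kalman_angle(gyro_rate, acc_angle, dt, q=1e-4, r=1e-2):
    """Scalar Kalman filter: predict the angle by integrating the gyro
    rate, then correct with the accelerometer-derived angle measurement.
    q and r are illustrative process/measurement noise variances."""
    angle, p = acc_angle[0], 1.0
    out = []
    for w, z in zip(gyro_rate, acc_angle):
        angle += w * dt           # predict with gyro rate (drifts over time)
        p += q
        k = p / (p + r)           # Kalman gain
        angle += k * (z - angle)  # correct with accelerometer angle
        p *= (1 - k)
        out.append(angle)
    return np.array(out)

# simulated knee angle: biased gyro rate plus a noisy accelerometer angle
rng = np.random.default_rng(2)
dt, n = 0.01, 1000
true = 30 * np.sin(2 * np.pi * 1.0 * np.arange(n) * dt)  # degrees
rate = np.gradient(true, dt) + 5.0                       # +5 deg/s gyro bias
acc = true + rng.normal(0, 2.0, n)                       # noisy measurement
est = kalman_angle(rate, acc, dt)
```

The fused estimate tracks the true angle more closely than either source alone: pure gyro integration accumulates the bias, while the raw accelerometer angle is noisy.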

Wireless real-time monitoring makes it possible to continuously monitor and repeatedly validate the gait data of stroke patients with limited mobility and range of motion, making the lower extremity rehabilitation-assisted exoskeleton robot more responsive to the individual needs of stroke patients.

3.3. Development of virtual reality systems to evaluate gait function

Virtual reality (VR) is a computer-based technology that displays digital images simulating a real environment with which the user can interact. Through this interactivity and the ability to shape motor control, cognitive processes, and learning mechanisms via sensory feedback [15], human gait can be simulated well in a virtual environment.

Simulating human gait with VR requires building a virtual environment, and various devices and software can generate a corresponding 3D environment [16]. The real environment can be recorded with terrestrial laser scanning (TLS), which, after manual error correction, yields a corresponding 3D point cloud [17]. Gait data can be collected with a 3D motion capture system such as Vicon Motion Systems [15] or fed in through robot-assisted gait training (RAGT) with a programmable lower limb exoskeleton worn by the tester. A workstation connected to the motion capture system and the exoskeleton acquires the data and displays the corresponding virtual scene on screen, and the tester interacts with the virtual scene using a real object as a reference [18]. Based on the intervention data, the testers' gait and behavior are categorized and analyzed with scales such as the Functional Ambulation Category (FAC), Functional Independence Measure (FIM), Berg Balance Scale (BBS), and Trunk Control Test (TCT) [18] to understand their gait simulation characteristics and problems.

Thus, VR gait training provides intensive, variable therapy for stroke patients that can be adapted to each patient's abilities and data. With VR, patients can repeat different movements and practice problem solving under dual-task conditions [15]. Through the intervention of a therapist versed in both neurorehabilitation and VR, the most appropriate gait simulation adjustment program can be provided to each stroke patient, completing and reinforcing the corresponding gait adjustment learning [18].

4. Conclusion

The development of wearable multimode sensors and advances in high-precision motion capture devices and systems allow the gait data of stroke patients to be captured more accurately. Deep learning neural networks, wireless sensor networks, and VR simulation systems then use this accurate input to personalize gait simulation for each patient's physical characteristics. The lower extremity rehabilitation-assisted exoskeleton robot, in turn, personalizes the patient's rehabilitation movements based on the simulation data. By following the rehabilitation movements guided by the exoskeleton robot, the stroke patient adjusts his or her abnormal gait, achieving gait improvement and even rehabilitation. Analyzing and evaluating gait capture and simulation technologies clarifies the inner logic and working principles of lower limb rehabilitation-assisted exoskeleton robots, deepens understanding of how modern stroke patients can rehabilitate relatively safely and quickly, and contributes positively to the development of future rehabilitation procedures for stroke patients.


References

[1]. Feigin, V. L., Stark, B. A., Johnson, C. O., Roth, G. A., Bisignano, C., Abady, G. G., ... & Hamidi, S. (2021). Global, regional, and national burden of stroke and its risk factors, 1990–2019: a systematic analysis for the Global Burden of Disease Study 2019. The Lancet Neurology, 20(10), 795-820.

[2]. Moissenet, F., Leboeuf, F., & Armand, S. (2019). Lower limb sagittal gait kinematics can be predicted based on walking speed, gender, age and BMI. Scientific reports, 9(1), 9510.

[3]. Khoshmanesh, F., Thurgood, P., Pirogova, E., Nahavandi, S., & Baratchi, S. (2021). Wearable sensors: At the frontier of personalised health monitoring, smart prosthetics and assistive technologies. Biosensors and Bioelectronics, 176, 112946.

[4]. Ahn, S., Cho, Y., Park, S., Kim, J., Sun, J., Ahn, D., ... & Park, J. J. (2020). Wearable multimode sensors with amplified piezoelectricity due to the multi local strain using 3D textile structure for detecting human body signals. Nano Energy, 74, 104932.

[5]. Hwang, D. H., Aso, K., Yuan, Y., Kitani, K., & Koike, H. (2020, October). Monoeye: Multimodal human motion capture system using a single ultra-wide fisheye camera. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (pp. 98-111).

[6]. Albert, J. A., Owolabi, V., Gebel, A., Brahms, C. M., Granacher, U., & Arnrich, B. (2020). Evaluation of the pose tracking performance of the azure kinect and kinect v2 for gait analysis in comparison with a gold standard: A pilot study. Sensors, 20(18), 5104.

[7]. Semwal, V. B., Jain, R., Maheshwari, P., & Khatwani, S. (2023). Gait reference trajectory generation at different walking speeds using LSTM and CNN. Multimedia Tools and Applications, 1-19.

[8]. Zhu, M., Guan, X., Li, Z., He, L., Wang, Z., & Cai, K. (2022). sEMG-Based Lower Limb Motion Prediction Using CNN-LSTM with Improved PCA Optimization Algorithm. Journal of Bionic Engineering, 1-16.

[9]. Duan, F., Lv, Y., Sun, Z., & Li, J. (2022). Multi-scale Learning for Multimodal Neurophysiological Signals: Gait Pattern Classification as an Example. Neural Processing Letters, 54(3), 2455-2470.

[10]. Low, W. S., Goh, K. Y., Goh, S. K., Yeow, C. H., Lai, K. W., Goh, S. L., ... & Chan, C. K. (2022). Lower extremity kinematics walking speed classification using long short-term memory neural frameworks. Multimedia Tools and Applications, 1-16.

[11]. Guzelbulut, C., Shimono, S., Yonekura, K., & Suzuki, K. (2022). Detection of gait variations by using artificial neural networks. Biomedical engineering letters, 12(4), 369-379.

[12]. Park, T. G., & Kim, J. Y. (2022). Real-time prediction of walking state and percent of gait cycle for robotic prosthetic leg using artificial neural network. Intelligent Service Robotics, 15(4), 527-536.

[13]. Qiu, S., Wang, Z., Zhao, H., Liu, L., Li, J., Jiang, Y., & Fortino, G. (2018). Body sensor network-based robust gait analysis: Toward clinical and at home use. IEEE Sensors Journal, 19(19), 8393-8401.

[14]. Qiu, S., Liu, L., Wang, Z., Li, S., Zhao, H., Wang, J., ... & Tang, K. (2019). Body sensor network-based gait quality assessment for clinical decision-support via multi-sensor fusion. IEEE Access, 7, 59884-59894.

[15]. de Rooij, I. J., van de Port, I. G., Visser-Meily, J., & Meijer, J. W. G. (2019). Virtual reality gait training versus non-virtual reality gait training for improving participation in subacute stroke survivors: study protocol of the ViRTAS randomized controlled trial. Trials, 20(1), 1-10.

[16]. Murthy, A. S. D., Jagan, B. O. L., Rao, K. R., & Murty, P. S. (2022, December). A virtual reality research of Gait analysis in the medicine fields. In AIP Conference Proceedings (Vol. 2426, No. 1, p. 020040). AIP Publishing LLC.

[17]. Schalbetter, L., Wissen Hayek, U., Gutscher, F., & Grêt-Regamey, A. (2022). VR Landscapes for Therapy of Gait Insecurity. Journal of Digital Landscape Architecture, 7, 346-355.

[18]. Luque-Moreno, C., Kiper, P., Solís-Marcos, I., Agostini, M., Polli, A., Turolla, A., & Oliva-Pascual-Vaca, A. (2021). Virtual reality and physiotherapy in post-stroke functional re-education of the lower extremity: a controlled clinical trial on a new approach. Journal of personalized medicine, 11(11), 1210.



Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of the 2023 International Conference on Mechatronics and Smart Systems

ISBN: 978-1-83558-009-7 (Print) / 978-1-83558-010-3 (Online)
Editors: Alan Wang, Seyed Ghaffar
Conference website: https://2023.confmss.org/
Conference date: 24 June 2023
Series: Applied and Computational Engineering
Volume number: Vol.10
ISSN: 2755-2721 (Print) / 2755-273X (Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).
