CN118538362B - Somatosensory-based interactive virtual rehabilitation training method and system - Google Patents
- Publication number
- CN118538362B CN118538362B CN202410415119.9A CN202410415119A CN118538362B CN 118538362 B CN118538362 B CN 118538362B CN 202410415119 A CN202410415119 A CN 202410415119A CN 118538362 B CN118538362 B CN 118538362B
- Authority
- CN
- China
- Prior art keywords
- rehabilitation
- user
- module
- action
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/30—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biophysics (AREA)
- Multimedia (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Medical Informatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Software Systems (AREA)
- Databases & Information Systems (AREA)
- Human Computer Interaction (AREA)
- Data Mining & Analysis (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Mathematical Physics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Physical Education & Sports Medicine (AREA)
- Epidemiology (AREA)
- Primary Health Care (AREA)
- Public Health (AREA)
- Rehabilitation Tools (AREA)
Abstract
The invention discloses a somatosensory interactive virtual rehabilitation training method and system, belonging to the technical field of artificial intelligence and applied to a rehabilitation training system. Action information and voice information are collected while the user follows an NPC's guided actions; a simulation of the user's motion gesture is generated from the action information; a score for each rehabilitation action is calculated from the preset requirements and standards of the action together with the user's action and voice information, and corresponding voice and graphic feedback is given to the user according to that score; finally, the user's rehabilitation progress and effect are analyzed from the motion gesture simulation data and the action scores, and a rehabilitation report is output. Using virtual reality and human motion recognition technology, the system provides the user with an immersive rehabilitation training environment while monitoring and evaluating the user's rehabilitation actions in real time, giving feedback and advice, and improving the user's rehabilitation efficiency and experience.
Description
Technical Field
The invention relates to a somatosensory interactive virtual rehabilitation training method and system, and belongs to the technical field of artificial intelligence.
Background
Limb incoordination refers to abnormalities in the smoothness, speed, range, strength, and duration of limb movement, resulting in clumsy, inflexible, inaccurate, and uneven movements. Its causes may be related to injury or dysfunction of the central nervous system, the vestibular organs, proprioception, or vision. People with limb incoordination include patients with stroke, cerebral palsy, ataxia, and other movement and coordination disorders; their limb motor function is affected to varying degrees, seriously impairing daily life and working capacity. Rehabilitation of limb motor function requires long-term, high-intensity, and diversified training to promote adaptation and reorganization of the nervous system and to improve the sensitivity and coordination of the limbs.
At present, the methods for limb rehabilitation training mainly comprise the following steps:
(1) Traditional physical therapy: a professional physiotherapist guides and assists the patient one-to-one through movements such as limb stretching, fist clenching, and thumb-to-finger touches. Its advantage is that a personalized training plan can be formulated for the patient's specific situation; its drawbacks include low training efficiency, training intensity that is hard to control, a dull and repetitive training process, and training effects that are difficult to assess.
(2) Mechanical rehabilitation equipment: a mechanical structure with motor drive applies external force to the patient's limb so that it performs passive or active movements such as bending, straightening, and rotating. Its advantage is stable and controllable training intensity; its drawbacks include high equipment cost, complex structure, difficult maintenance, a single training mode, and training effects that are hard to guarantee.
(3) Virtual reality technology: a computer-generated virtual environment and interaction devices simulate real limb-movement scenarios such as picking, grasping, and throwing, stimulating the patient's interest and motivation and enhancing the fun and immersion of training. Its advantage is rich and varied training content; its drawbacks include high equipment cost, high technical requirements, training effects that are hard to quantify, and training intensity that is hard to adjust.
In summary, existing limb rehabilitation training methods all have limitations and cannot meet the limb rehabilitation needs of stroke patients. A new limb rehabilitation training system is therefore urgently needed that overcomes the defects of the prior art and provides an effective, convenient, intelligent, and personalized limb rehabilitation training method.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a somatosensory interactive virtual rehabilitation training method and system.
In order to solve the technical problems, the invention is realized by adopting the following technical scheme.
In one aspect, the invention provides a somatosensory interactive virtual rehabilitation training method, applied to a rehabilitation training system and comprising the following steps:
Collecting action information and voice information of a user as the user follows the guiding actions of an NPC;
Performing user motion gesture simulation according to the motion information;
Calculating the score of the rehabilitation action of the user according to the preset requirements and standards of the rehabilitation action and the action information and the voice information of the user, and providing corresponding voice and graphic feedback for the user according to the score of the rehabilitation action of the user;
And analyzing the rehabilitation progress and effect of the user according to the data of the user motion gesture simulation and the scores of the rehabilitation actions of the user, and outputting a rehabilitation report.
Further, the rehabilitation actions guided by the NPC adopt the eight-section brocade (Baduanjin) movements.
Further, collecting the action information of the user following the NPC's guiding actions and performing the user motion gesture simulation comprises:
Acquiring, via a Kinect device, the changes in distance and azimuth between any two of the user's skeletal joints, determining the human joint movement trajectory from these changes, and simulating the user's motion gesture from that trajectory. When the trajectory is determined it is smoothed: the joint position at the current moment is taken as the average of the joint's current coordinate and its coordinates over the previous N-1 sampling periods, and the window advances through the acquired coordinates in time order with a step length of N;
Receiving the user's voice information via the Kinect device; converting the source direction and distance of the voice into angles and delays by beamforming; segmenting the voice into a sequence of phonemes based on those angles and delays; mapping the phoneme sequence to a word sequence with a statistical or neural network model; and converting the word sequence into sentences or commands for interacting with the rehabilitation training system via a language model or semantic analysis model;
Analyzing the emotion and needs of the user from the sentences or commands.
Further, analyzing the user's rehabilitation progress and effect from the motion gesture simulation data and the rehabilitation action scores, and outputting a rehabilitation report, comprises the following steps:
Acquiring data of the motion gesture simulation of the user and scores of rehabilitation actions of the user by using a UDP protocol;
Identifying key rehabilitation indicators from the motion gesture simulation data and the rehabilitation action scores using deep learning and pattern recognition techniques;
generating a rehabilitation report containing personalized advice according to the key rehabilitation indexes;
And determining the optimal rehabilitation advice according to the rehabilitation report by using a deep learning model.
Further, the requirements and standards of the rehabilitation actions are self-adjusted according to the rehabilitation progress and individual differences of each user;
The self-adjusting process comprises adding BoxCollider components and the Kinect camera to the rehabilitation training system to detect whether the user's action is standard: each time the user touches an object carrying a BoxCollider, one point is added and the training advances; once the score falls below a set value, the current rehabilitation action is performed again.
In a second aspect, the present invention further provides a somatosensory interactive virtual rehabilitation training system, including:
the virtual rehabilitation scene construction module is used for constructing a virtual rehabilitation scene through Unity, wherein the virtual rehabilitation scene comprises a simulated rehabilitation treatment room and a plurality of virtual objects and tasks related to rehabilitation actions;
the human body action recognition module is used for recognizing limb actions of the user in the virtual rehabilitation scene through connecting Kinect equipment and transmitting real action information of the user to the main control module;
the rehabilitation action design module is used for designing rehabilitation actions based on eight-section brocade actions and transmitting the requirements and standards of the rehabilitation actions to the main control module;
the voice interaction module is used for receiving voice instructions and feedback of the user and transmitting voice information to the main control module;
the NPC guiding module comprises a virtual NPC character for guiding a user to perform rehabilitation actions, and the virtual NPC character is utilized to demonstrate through voice and actions according to the instruction of the main control module so as to guide the user how to correctly complete the rehabilitation actions;
The main control module is used for controlling the operation of the whole system, judging the rehabilitation effect of the user according to the action information, the voice information and the requirements and standards of rehabilitation actions of the user, controlling the change of virtual rehabilitation scenes, controlling the behavior of the NPC guiding module, controlling the display of the movement gesture simulation module, controlling the recording of the rehabilitation action data recording module and controlling the analysis of the data analysis module.
The system further comprises a motion gesture simulation module, controlled by the main control module and used to simulate and display the motion gesture corresponding to the user's limbs, so that the user can observe whether his or her actions are correct;
the motion gesture simulation module comprises a display screen and a graphic processor, wherein the display screen is used for displaying a three-dimensional model of a limb of a user, and the graphic processor is used for generating the three-dimensional model of the limb of the user according to action information of the user;
the rehabilitation evaluation module is used for receiving the action information, the voice information and the requirements and standards of rehabilitation actions of the user, which are transmitted by the main control module, and judging the rehabilitation effect of the user;
The rehabilitation evaluation module comprises a scoring algorithm and a feedback mechanism, wherein the scoring algorithm is used for calculating the score of the rehabilitation action of the user according to the action information, the voice information and the requirements and standards of the rehabilitation action of the user, and the feedback mechanism is used for providing corresponding voice and graphic feedback for the user according to the score of the rehabilitation action of the user.
The system further comprises a rehabilitation action data recording module for recording the rehabilitation action data completed by the user in the virtual rehabilitation scene, so that the rehabilitation effect can be tracked and analyzed over the long term;
The rehabilitation action data recording module comprises a data memory and a data transmitter, wherein the data memory is used for storing rehabilitation action data of a user, and the data transmitter is used for transmitting the rehabilitation action data of the user to the main control module or other equipment;
the data analysis module is used for analyzing the rehabilitation action data, generating a rehabilitation report and providing personalized rehabilitation advice;
The data analysis module comprises a data processor and a data display, wherein the data processor is used for analyzing the rehabilitation progress and effect of a user according to rehabilitation action data of the user by using a data mining and machine learning method to generate a rehabilitation report and a rehabilitation suggestion, and the data display is used for displaying the rehabilitation report and the rehabilitation suggestion to the user or a doctor in the form of a chart or text.
Further, the virtual rehabilitation scene construction module is further used for selecting or generating a virtual object and a task suitable for a user according to the requirements and standards of rehabilitation actions of the user, and adjusting the appearance and the character of the virtual NPC character;
the human body action recognition module is also used for calculating the limb flexibility and the force of the user according to the limb action of the user and transmitting the calculation result to the main control module;
the voice interaction module is also used for analyzing emotion and demand of the user according to voice information of the user and transmitting an analysis result to the main control module;
the NPC guiding module is also used for adjusting the voice and the action of the virtual NPC character according to the feedback and the score of the user.
Furthermore, the rehabilitation action design module is further used for automatically or manually adjusting the difficulty and order of the rehabilitation actions according to the user's rehabilitation needs and preferences, so as to improve the user's rehabilitation effect and experience.
The invention has the beneficial effects that:
The system is suitable for the whole-body limb rehabilitation training of people with various uncoordinated limbs, can provide an immersive rehabilitation training environment for users by utilizing a virtual reality technology and a human body action recognition technology, can monitor and evaluate the rehabilitation actions of the users in real time, gives feedback and advice, and increases the rehabilitation efficiency and experience of the users.
Drawings
FIG. 1 is a schematic diagram of the physical structure of the virtual rehabilitation system of the present invention;
FIG. 2 is a schematic block diagram of the virtual rehabilitation system of the present invention;
FIG. 3 is a flow chart of the processing of the main control module of the virtual rehabilitation system of the present invention;
FIG. 4 is a flow chart of the human motion recognition module and feedback module processes of the virtual rehabilitation system of the present invention;
FIG. 5 is a flow chart of the virtual person collision prediction and collision detection module process of the virtual rehabilitation system of the present invention;
FIG. 6 is a flowchart of the Kinect module process of the virtual rehabilitation system of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for more clearly illustrating the technical aspects of the present invention, and are not intended to limit the scope of the present invention.
Embodiment 1: this embodiment describes a somatosensory interactive virtual rehabilitation training method, as shown in FIGS. 1 and 2, applied to a rehabilitation training system and comprising the following steps:
Collecting action information and voice information of a user as the user follows the guiding actions of an NPC;
Performing user motion gesture simulation according to the motion information;
Calculating the score of the rehabilitation action of the user according to the preset requirements and standards of the rehabilitation action and the action information and the voice information of the user, and providing corresponding voice and graphic feedback for the user according to the score of the rehabilitation action of the user;
And analyzing the rehabilitation progress and effect of the user according to the data of the user motion gesture simulation and the scores of the rehabilitation actions of the user, and outputting a rehabilitation report.
The rehabilitation actions guided by the NPC adopt the eight-section brocade (Baduanjin) movements.
The step of collecting the action information of the user following the NPC's guiding actions and performing the user motion gesture simulation, as shown in FIG. 6, comprises:
Acquiring, via a Kinect device, the changes in distance and azimuth between any two of the user's skeletal joints, determining the human joint movement trajectory from these changes, and simulating the user's motion gesture from that trajectory. When the trajectory is determined it is smoothed: the joint position at the current moment is taken as the average of the joint's current coordinate and its coordinates over the previous N-1 sampling periods, and the window advances through the acquired coordinates in time order with a step length of N;
Receiving the user's voice information via the Kinect device; converting the source direction and distance of the voice into angles and delays by beamforming; segmenting the voice into a sequence of phonemes based on those angles and delays; mapping the phoneme sequence to a word sequence with a statistical or neural network model; and converting the word sequence into sentences or commands for interacting with the rehabilitation training system via a language model or semantic analysis model;
According to the sentences or commands, the emotion and needs of the user are analyzed (the emotion and needs can also be determined from preset options offered to the user for selection).
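The angle-and-delay step above rests on delay-and-sum beamforming, which the Kinect microphone array performs internally. The following is a minimal sketch of the underlying relationship only, not the Kinect SDK API: for a far-field source at a given angle, each microphone receives the wavefront with a delay proportional to its position along the array axis. All names here are illustrative assumptions.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def steering_delays(mic_positions_m, angle_deg):
    # For a far-field source at angle_deg from broadside, a microphone at
    # position x (metres along the array axis) receives the wavefront with
    # a relative delay of x*sin(theta)/c. Summing channels shifted by these
    # delays steers the array toward the speaker.
    theta = math.radians(angle_deg)
    return [x * math.sin(theta) / SPEED_OF_SOUND for x in mic_positions_m]
```

A source directly in front (0 degrees) yields zero delay on every microphone; off-axis sources yield delays that grow with microphone spacing.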
Analyzing the user's rehabilitation progress and effect from the motion gesture simulation data and the rehabilitation action scores, and outputting a rehabilitation report, comprises the following steps:
Acquiring data of the motion gesture simulation of the user and scores of rehabilitation actions of the user by using a UDP protocol;
Identifying key rehabilitation indicators from the motion gesture simulation data and the rehabilitation action scores using deep learning and pattern recognition techniques;
generating a rehabilitation report containing personalized advice according to the key rehabilitation indexes;
And determining the optimal rehabilitation advice according to the rehabilitation report by using a deep learning model.
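The patent does not specify the indicators or models used in the steps above. As a loose, purely illustrative stand-in for the indicator-extraction and report-generation steps (not the deep learning models the text names), one might derive simple indicators from a series of per-session scores; the names and thresholds below are assumptions.

```python
def key_indicators(scores):
    # Toy indicator extraction from per-session rehabilitation scores:
    # mean level, overall trend, and completion rate (assumed pass mark 60).
    n = len(scores)
    mean = sum(scores) / n
    trend = scores[-1] - scores[0]                  # change over the period
    completion = sum(s >= 60 for s in scores) / n   # fraction of passed sessions
    return {"mean": mean, "trend": trend, "completion": completion}

def rehabilitation_report(ind):
    # Hypothetical advice rule: progress to harder actions only when the
    # trend is positive and the mean score is already high.
    advice = ("increase difficulty" if ind["trend"] > 0 and ind["mean"] >= 80
              else "repeat current level")
    return f"mean={ind['mean']:.1f}, trend={ind['trend']:+.1f}, advice: {advice}"
```

In the actual system these indicators would instead come from the trained models, but the report structure (indicator summary plus personalized advice) is the same.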
The requirements and standards of the rehabilitation actions are self-adjusted according to the rehabilitation progress and individual differences of each user;
The self-adjusting process, as shown in fig. 5, comprises adding BoxCollider components and the Kinect camera to the rehabilitation training system to detect whether the user's action is standard: each time the user touches an object carrying a BoxCollider, one point is added and the training advances; once the score falls below a set value, the current rehabilitation action is performed again.
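In Unity this logic would live in an `OnTriggerEnter` callback on the BoxCollider targets; the following is a hedged, engine-free sketch of just the scoring rule described above. `run_action`, its parameters, and the pass threshold are illustrative assumptions, not the patent's implementation.

```python
def run_action(touch_events, pass_score):
    # One point per target touched (a target is an object carrying a
    # BoxCollider); the attempt passes only if the total reaches pass_score,
    # otherwise the current rehabilitation action is to be repeated.
    score = sum(1 for touched in touch_events if touched)
    return score, score >= pass_score

# Example: four targets, three touched, threshold of three -> passed.
score, passed = run_action([True, True, False, True], pass_score=3)
```

The self-adjustment then amounts to tuning `pass_score` (and target placement) per user based on rehabilitation progress.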
Embodiment 2, which is based on the same inventive concept as embodiment 1, as shown in fig. 1 and 2, introduces a somatosensory interactive virtual rehabilitation system, which includes:
The virtual rehabilitation scene construction module is used for selecting or generating virtual objects and tasks suitable for the user, and for adjusting the appearance and character of the virtual NPC, according to the requirements and standards of the user's rehabilitation actions. It is built with Unity's game engine and scene editor; the virtual scene constructed by the inventors comprises a number of virtual objects and tasks related to the rehabilitation actions, together with a virtual NPC character. The specific principle is as follows:
game engine-Unity is a cross-platform game development tool that provides a range of functions such as graphics rendering, physical simulation, audio processing, user interface, script programming, etc., for creating and running various types of games and applications. The Unity game engine is Component-based in that it abstracts each object (e.g., character, object, light, etc.) in the game into one game object (GameObject), and each game object can be attached with multiple components (components) for defining properties and behavior of the game object, such as position, rotation, scaling, collider, renderer, animation controller, etc. The relationship between a game object and a component can be described by the following formula:
GameObject=Transform+Component1+Component2+...
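The composition formula above can be illustrated with a minimal, engine-free sketch (this mirrors Unity's model but is not Unity code; `Renderer` here is a placeholder component class):

```python
class Transform:
    # Every Unity GameObject always carries exactly one Transform.
    def __init__(self, position=(0.0, 0.0, 0.0)):
        self.position = position

class Renderer:
    # Placeholder for any further component (collider, animator, ...).
    pass

class GameObject:
    def __init__(self, name):
        self.name = name
        self.components = [Transform()]  # the mandatory Transform

    def add_component(self, component):
        self.components.append(component)
        return component

    def get_component(self, cls):
        # Return the first attached component of the given type, else None.
        return next((c for c in self.components if isinstance(c, cls)), None)
```

Behaviour is thus attached to objects by composition rather than inheritance, which is why the formula reads GameObject = Transform + Component1 + Component2 + ...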
Scene editor: Unity provides a visual scene editor for creating and arranging game objects and for setting the environment and parameters of the game. The scene editor comprises the following parts:
Hierarchy view (Hierarchy View): displays and manages all game objects in the scene and the parent-child relationships between them.
Inspector view (Inspector View): displays and edits the properties and components of the selected game object.
Scene view (Scene View): views and manipulates game objects in three-dimensional space and sets the camera's angle and position.
Game view (Game View): previews and runs the game and shows its output and debug information.
Project view (Project View): displays and manages all resources in the project, such as models, textures, sound effects, and scripts.
Console view (Console View): displays and filters the game's logs, warnings, and error messages.
The human body motion recognition module is connected to the Kinect device and used to recognize the user's whole-body limb motion in the virtual rehabilitation scene and transmit the motion information to the main control module; it also calculates the user's limb flexibility and strength from the limb motion and transmits the result to the main control module. The module uses the Kinect device's depth detection, skeleton tracking, and motion trajectory generation functions, as shown in fig. 6. The specific principle is as follows:
Depth detection: the Kinect device comprises an infrared projector and an infrared camera. The projector casts a near-infrared pattern into the scene, forming random reflection spots called speckle. The principle is based on the interference and diffraction of light: when a beam of monochromatic light (e.g. near-infrared) strikes a rough surface such as a human body or an object, it is reflected by the irregular structure of the surface into many sub-waves, which interfere with one another to form fringes or spots of alternating brightness, i.e. the speckle. The shape and size of the speckle depend on the wavelength of the light source, the angle of illumination, the roughness of the reflective surface, and the distance. The speckle image can be described by the following formula:
where I(x, y) is the brightness of the speckle image, I0 is the intensity of the light source, λ is the wavelength of the light source, z is the average distance of the reflective surface, x and y are the coordinates in the speckle image, and φ is the initial phase.
Skeleton tracking: the infrared camera captures images of the speckle, and the phase-shift method (Phase Shift Method) is used to calculate the phase difference at each pixel, and from it the distance between each pixel and the camera plane, i.e. the depth value. The principle is based on the phase change of light: when a beam of light passes from one medium into another, its phase changes in a way that depends on the refractive indices of the two media and the angle of incidence. If the initial and final phases of the light are known, the phase difference can be calculated, and from it the light's propagation path and distance. To obtain this phase difference, the Kinect device uses the phase-shift method: the scene is illuminated in turn by four light sources of different phases, the depth camera captures the four corresponding speckle images, and the phase difference of each pixel is computed from the pixel values of the four images, yielding the depth value of each pixel. The formula of the phase-shift method is as follows:
φ = arctan((I4 − I2) / (I1 − I3))
where φ is the phase difference and I1, I2, I3, I4 are the pixel values of the speckle images captured by the depth camera at the four different phases. The depth value is then calculated as follows:
where d is the depth value, f is the focal length of the depth camera, λ is the wavelength of the infrared projector, and Δφ is the change in phase difference. With the depth values, the user can be separated from the background and different body parts identified, such as the head, hands, and feet. A machine learning method then determines the user's joint positions, such as shoulders, elbows, and knees, from the body-part information. Finally, a skeleton map of the user is constructed by connecting the joint positions. Kinect can track the skeletons of up to 6 users simultaneously, with 25 skeletal nodes per user.
Based on these twenty-five skeletal nodes, the changes in distance and azimuth between any two joint points are calculated to obtain the human joint movement trajectory.
Kinect can acquire scene depth information and from it calculate the spatial distance between the person and the camera. Let the distance between the person and the Kinect device be d; it is obtained from the following formula:
d = K·tan(H·dK + L) − O (4)
where dK is the raw depth value obtained from the Kinect device, K = 12.36 cm, H = 3.5×10⁻⁴ rad, L = 1.18 rad, and O = 3.7 cm.
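Formula (4) can be applied directly; the sketch below evaluates it with the constants given in the text (function name is illustrative):

```python
import math

# Constants from formula (4); units as given in the text (cm and rad).
K = 12.36    # cm
H = 3.5e-4   # rad per raw depth unit
L = 1.18     # rad
O = 3.7      # cm

def kinect_depth_cm(d_k):
    # d = K*tan(H*d_K + L) - O: physical distance in cm from the raw
    # Kinect depth reading d_K.
    return K * math.tan(H * d_k + L) - O
```

The mapping is monotonically increasing over the sensor's working range, so larger raw readings correspond to targets farther from the camera.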
Motion trajectory generation: the depth-image coordinates (xK, yK, zK) are converted to real-world coordinates (xr, yr, zr). The conversion formulas are as follows:
xr = (xK − w/2)·(zK + d′)·f
yr = (yK − h/2)·(zK + d′)·f
zr = zK (5)
where f = 0.0021, d′ = −10, and the Kinect resolution is w×h = 640×480. Using formula (4) and formula (5), the spatial coordinates M(xr1, yr1, zr1) and N(xr2, yr2, zr2) of any two joints of the human body are obtained, and the distance between the two joints is:
|MN| = √((xr1 − xr2)² + (yr1 − yr2)² + (zr1 − zr2)²) (6)
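The coordinate conversion and the Euclidean joint distance can be sketched as follows, using the constants f = 0.0021 and d′ = −10 from formula (5) (the per-axis form of (5) is reconstructed from those constants and the standard Kinect conversion; function names are illustrative):

```python
import math

F = 0.0021        # conversion constant f from formula (5)
D_PRIME = -10.0   # offset d' from formula (5)
W, H = 640, 480   # Kinect depth-image resolution w x h

def to_real(x_k, y_k, z_k):
    # Formula (5): map depth-image coordinates to real-world coordinates,
    # measured from the optical axis at the image centre.
    x_r = (x_k - W / 2) * (z_k + D_PRIME) * F
    y_r = (y_k - H / 2) * (z_k + D_PRIME) * F
    return (x_r, y_r, z_k)

def joint_distance(m, n):
    # Formula (6): Euclidean distance |MN| between two joint positions.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(m, n)))
```

A pixel at the image centre maps to a point on the optical axis, and any two converted joint positions feed directly into `joint_distance`.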
Owing to factors such as Kinect hardware error and jitter, the relative positions of the skeletal joint points can change greatly within a short time, leaving abnormal values in the data. When the system generates the motion trajectory of a skeletal joint point, it therefore screens out and deletes these abnormal values and smooths the trajectory. To achieve the noise-reduction effect, the joint position at the current moment is taken as the average of the joint's current coordinate and its coordinates over the previous N−1 sampling periods, and the window advances through the acquired coordinates in time order with a step length of N.
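The moving-average smoothing described above can be sketched as follows; at the start of the sequence, where a full window of N samples is not yet available, this sketch simply averages over the samples seen so far (an assumption the text does not spell out):

```python
def smooth_joint_track(track, n):
    # track: list of (x, y, z) joint positions in time order.
    # Each output position is the average of the current sample and the
    # previous n-1 samples, i.e. a causal moving average of window n.
    smoothed = []
    for i in range(len(track)):
        window = track[max(0, i - n + 1): i + 1]
        point = tuple(sum(coord) / len(window) for coord in zip(*window))
        smoothed.append(point)
    return smoothed
```

Averaging over the last N samples suppresses the short-time jitter in the raw joint coordinates at the cost of a small lag in the displayed trajectory.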
The rehabilitation action design module is used to design rehabilitation actions based on the traditional eight-section brocade (Baduanjin) movements and to transmit the requirements and standards of those actions to the main control module. It is also used to adjust, automatically or manually, the difficulty and sequence of the rehabilitation actions according to the user's rehabilitation needs and preferences, so as to improve the user's rehabilitation effect and experience. The module designs actions based on the movement principles of the eight-section brocade, aiming to achieve a similar rehabilitation effect within the rehabilitation system. The specific principles are as follows:
The eight-section brocade is a traditional qigong fitness method composed of eight simple movements, each with specific effects, such as regulating qi and blood, relaxing the tendons and activating the collaterals, and strengthening the body. It can strengthen heart and lung function, regulate the nervous system, dredge the channels and collaterals, build muscle strength, regulate visceral function, correct poor posture, and improve vision, and it has a certain effect in preventing and treating various diseases. The practice requires an upright body with the chest contained and qi sunk, coordinated with deep breathing, which deepens respiration, increases vital capacity, relaxes the nerves, relieves fatigue, and enhances mental activity. It promotes blood circulation in the viscera, enhances visceral metabolism, regulates the functions of the spleen and stomach, liver and gallbladder, and kidney and bladder, and helps prevent and treat dyspepsia, gastrointestinal disease, hepatitis, nephritis, and the like. By stretching the musculature of the waist and back, it enhances the stability and flexibility of the waist and back, helps correct poor postures such as shoulder adduction and a rounded back, and has a certain effect in preventing and treating scoliosis, cervical spondylosis, and lumbar spondylosis. By rotating and exercising the eyeballs, it increases the range of eye movement and strengthens the eye muscles, with certain effects in preventing and treating myopia, hyperopia, and presbyopia. The eight-section brocade movements are as follows:
First movement: holding up the hands to regulate the triple energizer (sanjiao). The hands are raised from both sides with the palms facing up and the fingers straightened, the arms stretching upward as far as possible while inhaling; the hands then lower from above with the palms facing down, the arms hanging naturally while exhaling. Repeat 8 times. This movement regulates the qi and blood circulation of the triple energizer meridian and promotes the functions of the digestive, respiratory, and urinary systems.
Second movement: drawing the bow to the left and right as if shooting a hawk. The hands are raised from both sides with the palms up and the arms stretched while inhaling; the left hand then forms a fist while the right palm stretches out to the left in an archery-like posture while exhaling; the right hand then forms a fist while the left palm stretches out to the right in an archery-like posture while inhaling. Repeat 4 times on each side. This movement regulates the qi and blood circulation of the lung meridian of hand-taiyin and the large intestine meridian of hand-yangming, and strengthens the respiratory and immune systems.
Third movement: raising a single arm to regulate the spleen and stomach. The hands are raised from both sides with the palms up and the arms stretched while inhaling; the left palm then turns up and extends upward while the right palm turns down and lowers, exhaling at the same time; then the right palm turns up and extends upward while the left palm turns down and lowers, exhaling at the same time. Repeat 4 times on each side. This movement regulates the qi and blood circulation of the spleen meridian of foot-taiyin and the stomach meridian of foot-yangming, and promotes the digestive and blood systems.
Fourth movement: looking backward to relieve the five strains and seven impairments. The hands are raised from both sides with the palms up and the arms stretched, the palms facing each other during inhalation; the hands then rotate backward while exhaling, and return to the original position while inhaling. Repeat 4 times. This movement regulates the qi and blood circulation of the heart meridian of hand-shaoyin and the small intestine meridian of hand-taiyang, and promotes the functions of the heart and nervous system.
Fifth movement: swaying the head and tail to expel heart fire. The hands are raised from both sides with the palms up and the arms stretched flat while inhaling; the palms then turn down and press downward while the body bends to the left and the head rotates to the right, exhaling at the same time, then the body returns to the original position while inhaling; the palms then press down again while the body bends to the right and the head rotates to the left, exhaling, before returning to the original position while inhaling. Repeat 4 times on each side. This movement regulates the qi and blood circulation of the kidney meridian of foot-shaoyin and the bladder meridian of foot-taiyang, and promotes the urinary and endocrine systems.
Sixth movement: holding the feet with both hands to strengthen the kidneys and waist. The hands are raised from both sides with the palms up and the arms stretched while inhaling; the palms then turn down and extend forward and downward as the body bends forward, the hands touching the toes as far as possible while exhaling, before returning to the original position while inhaling. Repeat 8 times. This movement regulates the qi and blood circulation of the liver meridian of foot-jueyin and the gallbladder meridian of foot-shaoyang, and promotes the functions of the liver-gallbladder and nervous systems.
Seventh movement: clenching the fists and glaring to increase strength. The hands are raised from both sides with the palms up and the arms stretched flat while inhaling; the hands then form fists, knuckles up, and draw in toward the front while the body leans back slightly and the eyes stare upward, exhaling at the same time, before returning to the original position while inhaling. Repeat 8 times. This movement regulates the qi and blood circulation of the pericardium meridian of hand-jueyin and the triple energizer meridian of hand-shaoyang, and promotes the heart and circulatory systems.
Eighth movement: seven bounces behind the back to eliminate a hundred diseases. The hands are raised from both sides with the palms up and the arms stretched while inhaling; the palms then turn down and extend outward as the body bends outward, the hands touching the lower back as far as possible while exhaling, before returning to the original position while inhaling. Repeat 8 times. This movement regulates the qi and blood circulation of the conception and governor vessels, and promotes the functions of the spinal column and nervous system.
The voice interaction module, with a built-in voice function, is used for voice interaction between the user and the system: it receives the user's voice commands and feedback and transmits the voice information to the main control module, as shown in figure 3. It analyses the user's emotion and needs (specifically, these may be determined from preset options offered to the user; for example, a scripted NPC dialogue asks how the rehabilitation actions went and offers four options: A, "Great, let's continue"; B, "It was alright"; C, "I feel the last few movements were not done properly"; D, "Bad, very bad") and transmits the analysis result to the main control module. The voice recognition module uses the voice recognition capability of the Kinect device: the Kinect contains a four-element linear microphone array that receives voice signals and locates the direction and distance of the sound source through beamforming. This principle is based on the propagation and interference of sound waves. When a sound source emits sound, it generates a series of pressure waves that propagate through the air at a certain velocity; when they reach the microphones, each microphone converts the sound waves into an electrical signal. Because the distances and angles between the sound source and the microphones differ, the arrival time and intensity of the sound waves at each microphone also differ, producing time differences and amplitude differences. If the positions of the microphones and the velocity of sound are known, the direction and distance of the sound source can be calculated from these time and amplitude differences.
In order to improve the accuracy and noise immunity of sound source localization, Kinect uses a beamforming technique, whose basic idea is to weight and phase-adjust the signals received by the microphone array so as to enhance signals from the target direction and suppress signals from other directions. The formula of the beamforming technique is as follows:
y(t) = Σ_{i=1}^{N} w_i·x_i(t − τ_i) (7)
Where y(t) is the output signal, w_i is the weight coefficient, x_i is the signal received by the i-th microphone, τ_i is the delay time for the signal to reach the i-th microphone, and N is the number of microphones. Through beamforming, the direction and distance of the sound source can be converted into angles and delays, and the sound signal is then converted into text by a speech recognition algorithm for the user's interaction with the system. The basic idea of the speech recognition algorithm is to divide the sound signal into a series of phonemes, map the phoneme sequence to a word sequence through a statistical model or a neural network model, and then convert the word sequence into sentences or commands through a language model or a semantic analysis model.
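The delay-and-sum idea of equation (7) can be sketched with integer-sample delays. This is a minimal illustration, not Kinect's actual implementation; the two microphone signals and delays below are invented to show a pulse adding coherently when the delays are aligned:

```python
def delay_and_sum(signals, delays, weights):
    """Equation (7): y(t) = sum_i w_i * x_i(t - tau_i), integer-sample delays.

    signals: list of per-microphone sample lists x_i
    delays:  integer sample delays tau_i
    weights: weight coefficients w_i
    """
    n_out = min(len(s) for s in signals)
    y = []
    for t in range(n_out):
        acc = 0.0
        for x, tau, w in zip(signals, delays, weights):
            if 0 <= t - tau < len(x):      # ignore samples before the recording
                acc += w * x[t - tau]
        y.append(acc)
    return y

# Two microphones hear the same unit pulse one sample apart; delaying the
# first channel by one sample aligns the pulses so they sum coherently.
mic0 = [0.0, 1.0, 0.0, 0.0]
mic1 = [0.0, 0.0, 1.0, 0.0]
y = delay_and_sum([mic0, mic1], delays=[1, 0], weights=[0.5, 0.5])
```

With the correct delays the pulse reaches the full amplitude 1.0; with mismatched delays the two half-amplitude pulses stay separate, which is how the array favours the target direction.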
The NPC guiding module contains a virtual NPC (Non-Player Character) that guides the user through the rehabilitation actions. Following the instructions of the main control module, it teaches the user how to complete the rehabilitation actions correctly through voice and action demonstration, and it adjusts the virtual NPC character's voice and actions according to the user's feedback and scores to suit the user's rehabilitation level and preferences. The module uses the Unity animation system and audio system; the specific principles are as follows:
Animation system: Unity provides a powerful animation system for creating and controlling the animations of game objects, such as a character's walking, jumping, and attacking. The animation system comprises the following parts:
Animation Clip: stores the animation data of a game object, such as position, rotation, and scaling; clips can be recorded in the scene editor or made in an external tool.
Animator Controller: manages the animation states and transitions of a game object, as well as the animation parameters and layers; it can be edited in the Animator Controller window.
Animator State Machine: defines the relationship between a game object's animation states and their transition conditions; it can be edited in the animator state machine window.
Animator Parameter: variables for controlling animation states and transitions, such as boolean, integer, and floating-point values; they can be set from a script.
Animator Layer: layers the animation state machine to blend animations of different body parts, such as the upper and lower body; layers can be set in the Animator Controller window.
Audio system: Unity provides a simple audio system for playing and controlling the sounds of game objects, such as a character's dialogue, narration, and instructions. The audio system comprises the following parts:
Audio Clip: stores the audio data of a game object, such as the audio file, volume, and pitch; clips can be produced in an external tool.
Audio Source: plays and controls the sounds of a game object, such as the play mode, loop mode, and sound effects; it can be edited in the Inspector view.
Audio Listener: receives and processes the sounds of game objects, such as volume, stereo, and reverberation; it can be edited in the Inspector view.
The main control module is used to control the operation of the whole system: it judges the user's rehabilitation effect from the user's action information, voice information, and the requirements and standards of the rehabilitation actions; controls the changes of the virtual rehabilitation scene; controls the behavior of the NPC guiding module; controls the display of the motion gesture simulation module; controls the recording of the rehabilitation action data recording module; and controls the analysis of the data analysis module. The module uses the Unity scripting system and event system; the specific principles are as follows:
Scripting system: Unity provides a flexible scripting system for writing and executing the logic of game objects, such as user input, game state, and game logic. The scripting system supports multiple programming languages, such as C#, JavaScript, and Boo, and comprises the following parts:
Script: stores the logic code of a game object, such as variables, functions, and classes; scripts can be written in an external tool.
Script Component: attaches a script to a game object to implement the object's logic; it can be edited in the Inspector view.
Script Lifecycle: defines the execution order and timing of a script, such as initialization, updating, and destruction; it is written in the script.
Event system: Unity provides a unified event system for handling and distributing the various events in a game, such as mouse, keyboard, and touch events. The event system comprises the following parts:
Event: represents a situation occurring in the game, such as a mouse click, a key press, or a touch movement; events can be created in a script.
Event Source: the game objects that generate events, such as the mouse, keyboard, and touch screen; they can be edited in the Inspector view.
Event Listener: the game objects that receive and process events, such as buttons, text, and images; they can be edited in the Inspector view.
Event Handler: defines the processing logic of events, such as click, drag, and collision events; it is written in a script.
The motion gesture simulation module, controlled by the main control module, simulates and displays the motion gesture corresponding to the user's fingers so that the user can observe whether their movements are correct. The module comprises a display screen and a graphics processor: the display screen displays a three-dimensional model of the user's fingers, and the graphics processor generates that three-dimensional model from the user's motion information. The module uses the Unity graphics rendering system; the specific principles are as follows:
Graphics rendering system: Unity provides an efficient graphics rendering system for creating and displaying three-dimensional graphics in a game, such as models, textures, lighting, and shadows. The graphics rendering system includes the following:
Model: stores the three-dimensional shape of a game object, such as vertices, faces, and normals; models can be created in an external tool.
Texture: stores the surface details of a game object, such as color, pattern, and gloss; textures can be produced in an external tool.
Material: defines the surface properties of a game object, such as color, texture, gloss, and transparency; materials can be edited in the Inspector view.
Shader: defines how a game object is rendered, such as its lighting, shadows, and reflections; shaders can be written in an external tool.
Light: simulates lighting effects in the game, such as the color, intensity, direction, and range of light; lights can be edited in the Inspector view.
Camera: captures and displays the images in the game, controlling the view angle, position, orientation, and clipping; cameras can be edited in the Inspector view.
The rehabilitation evaluation module receives the user's action information, voice information, and the requirements and standards of the rehabilitation actions transmitted by the main control module, and judges the user's rehabilitation effect. The module comprises a scoring algorithm and a feedback mechanism: the scoring algorithm calculates the score of the user's rehabilitation actions from the action information, voice information, and the requirements and standards of the rehabilitation actions, and the feedback mechanism provides corresponding voice and graphic feedback according to that score to encourage the user to improve. As shown in figures 3 and 4, the module uses the following principles:
Scoring algorithm: the scoring algorithm quantitatively evaluates the user's rehabilitation actions according to given standards and rules, and can be described by the following formula:
S = f(A, V, R) (8)
Wherein S is the score of the user's rehabilitation action, A is the user's action information, V is the user's voice information, R is the requirements and standards of the rehabilitation action, and f is the scoring function, which can be designed and adjusted for different rehabilitation actions and evaluation indexes.
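The patent does not fix the form of f in equation (8). One plausible sketch, assuming A and R are sequences of joint angles, V is a 0–1 voice-sentiment score, and the weights and the 30-degree tolerance are illustrative assumptions:

```python
def score_action(user_angles, ref_angles, voice_positivity, w_a=0.8, w_v=0.2):
    """Sketch of S = f(A, V, R).

    user_angles:      A, the user's joint angles (degrees)
    ref_angles:       R, the standard joint angles (degrees)
    voice_positivity: V, a 0-1 sentiment score from the voice module
    w_a, w_v:         illustrative weights, not from the patent
    """
    # Action term: mean absolute angular error, mapped to [0, 1] with an
    # assumed 30-degree tolerance.
    err = sum(abs(u - r) for u, r in zip(user_angles, ref_angles)) / len(ref_angles)
    action_term = max(0.0, 1.0 - err / 30.0)
    return 100.0 * (w_a * action_term + w_v * voice_positivity)

perfect = score_action([90.0, 45.0], [90.0, 45.0], voice_positivity=1.0)
```

A perfect angle match with fully positive voice feedback scores 100; any angular error or negative sentiment lowers the score smoothly.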
Feedback mechanism: a method of providing corresponding voice and graphic feedback to the user, according to the score of the user's rehabilitation actions, to encourage improvement; it can be described by the following formula:
F = g(S) (9)
Wherein F is the voice and graphic feedback given to the user, S is the score of the user's rehabilitation action, and g is the feedback function, which can be designed and adjusted for different feedback strategies and modes.
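A minimal sketch of the feedback function g in equation (9), mapping a score to a voice message and a display colour; the thresholds and messages are illustrative assumptions:

```python
def feedback(score: float):
    """Sketch of F = g(S): return (voice_message, display_colour)."""
    if score >= 90:
        return ("Excellent! Keep it up.", "green")
    if score >= 70:
        return ("Good job, stretch the arms a little further.", "yellow")
    return ("Let's try that movement again, slowly.", "red")

message, colour = feedback(95.0)
```

A real feedback strategy could smooth these thresholds or take the score trend into account; the piecewise form just makes the mechanism concrete.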
The rehabilitation action data recording module records the rehabilitation action data completed by the user in the virtual rehabilitation scene for long-term tracking and analysis of the rehabilitation effect. The module comprises a data memory and a data transmitter: the data memory stores the user's rehabilitation action data, and the data transmitter transmits that data to the main control module or other devices. The module uses the following principles:
Data memory: a device for storing the user's rehabilitation action data, such as a hard disk, flash memory, or cloud storage; the appropriate type and capacity can be chosen according to the storage requirements and performance. Its basic principle is to convert the user's rehabilitation action data into binary digital signals and store them electromagnetically or optically on a storage medium such as a magnetic disk, chip, or optical disc. The storage performed by the data memory can be described by the following formula:
D = h(E) (10)
Wherein D is the digital signal stored in the data memory, E is the user's rehabilitation action data, and h is the data conversion function, which can be designed and adjusted for different data formats and coding modes.
Data transmitter: a device for transmitting the user's rehabilitation action data to the main control module or other devices, such as a network card, Bluetooth, or Wi-Fi; the appropriate type and speed can be chosen according to the transmission requirements and performance. Its basic principle is to convert the user's rehabilitation action data into electromagnetic-wave or light-wave analog signals and transmit them to the target device by wired or wireless means, through media such as cables, optical fibers, or air. The transmission performed by the data transmitter can be described by the following formula:
S = g(E) (11)
Wherein S is the analog signal transmitted to the target device, E is the user's rehabilitation action data, and g is the signal conversion function, which can be designed and adjusted for different signal formats and modulation modes.
The data analysis module analyses the rehabilitation action data, generates a rehabilitation report, and provides personalized rehabilitation suggestions. It comprises a data processor and a data presenter: the data processor uses methods such as data mining and machine learning to analyse the user's rehabilitation progress and effect from the rehabilitation action data, generating the rehabilitation report and personalized suggestions, while the data presenter displays the report and suggestions to the user or doctor in chart or text form. The data processor may be, for example, a CPU, GPU, or FPGA, with the appropriate type and speed chosen according to the analysis requirements and performance. Its basic principle is to convert the user's rehabilitation action data into a computable data structure, such as an array, matrix, or vector, and then operate on that structure through algorithms and models, such as classification, clustering, regression, and prediction, to obtain analysis results of the user's rehabilitation progress and effect, such as a rehabilitation curve, rehabilitation indexes, and a rehabilitation evaluation. The analysis performed by the data processor can be described by the following formula:
R = m(E) (12)
Wherein R is the analysis result of the user's rehabilitation progress and effect, E is the user's rehabilitation action data, and m is the analysis function, which can be designed and adjusted for different analysis methods and models.
Data presenter: a device for presenting the analysis results of the user's rehabilitation progress and effect to the user or doctor, such as a display, printer, or projector; the appropriate type and resolution can be chosen according to the display requirements and effect. Its basic principle is to convert the analysis results of the user's rehabilitation progress and effect into intuitive graphics or text, such as line charts, bar charts, tables, and reports, and then display them optically or electronically on a display medium such as a screen, paper, or wall. The display performed by the data presenter can be described by the following formula:
V = n(R) (13)
Wherein V is the graphics or text of the user's rehabilitation progress and effect, R is the analysis result of the user's rehabilitation progress and effect, and n is the display function, which can be designed and adjusted for different display forms and styles.
The data analysis module can receive and analyse rehabilitation action data in real time and generate personalized rehabilitation reports and suggestions from the data. It comprises the following:
Real-time data mining and analysis module: receives rehabilitation data in real time over the UDP protocol and analyses it immediately, providing rapid feedback and early warning to optimize the rehabilitation plan.
Personalized rehabilitation report generation algorithm module: analyses key rehabilitation indexes with a machine learning algorithm and generates a rehabilitation report containing personalized suggestions to guide future rehabilitation plans.
Dynamic rehabilitation effect evaluation model: dynamically adjusts the evaluation standard according to real-time data to accommodate individual differences and changes in the rehabilitation stage, ensuring the accuracy and personalization of the evaluation results.
Rehabilitation process visualization module: displays the rehabilitation report and suggestions as intuitive charts and text, improving the readability of the rehabilitation data and enhancing the user's and doctor's understanding of the rehabilitation process.
Adaptive rehabilitation suggestion module: dynamically adjusts the rehabilitation suggestions according to the data in the rehabilitation report, ensuring the timeliness and individuality of the suggestions and optimizing the rehabilitation effect.
These innovations comprehensively apply data mining, machine learning, dynamic evaluation, and visualization technologies, aiming to provide the user with a comprehensive, accurate, and personalized rehabilitation experience. Through these techniques, the system helps users rehabilitate more effectively while providing doctors with valuable insight to support better rehabilitation decisions. The design and implementation of the data analysis module reflect the system's innovation and technical lead in the field of rehabilitation technology.
Real-time data mining and analysis module:
The core of this module is the application of real-time data mining: rehabilitation action data is analysed and processed immediately as it is transmitted to the system over the UDP protocol. This lets the data generated during rehabilitation be used actively for real-time feedback rather than merely recorded passively, providing data support for timely adjustment of the rehabilitation plan. Real-time analysis also helps identify potential rehabilitation problems, such as inaccurate action execution or abnormal rehabilitation progress, so that early warnings can be given at an early stage and a decline in rehabilitation effect can be avoided.
Mathematical formula:
P_real-time(d) = Σ_i a_i·d_i
Wherein P_real-time(d) represents the prediction from the real-time data, d_i are the components of the input data d, and a_i are the model parameters.
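The UDP receive-and-analyse-immediately loop described above can be sketched as follows. This is a minimal self-contained illustration: the JSON message format, field names, port choice, and the 45-degree warning threshold are all assumptions, not from the patent:

```python
import json
import socket

def serve_once(sock, warn_threshold=45.0):
    """Receive one rehabilitation sample over UDP and analyse it immediately.

    The sample is assumed to be a JSON datagram with hypothetical fields
    'elbow_angle' and 'target_angle'; a large deviation triggers an early
    warning, as described in the text.
    """
    data, _ = sock.recvfrom(4096)
    sample = json.loads(data)
    if abs(sample["elbow_angle"] - sample["target_angle"]) > warn_threshold:
        return "warning"
    return "ok"

# Loopback demonstration: bind a receiver on a free port and send one sample.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
port = recv.getsockname()[1]

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(json.dumps({"elbow_angle": 20.0, "target_angle": 90.0}).encode(),
            ("127.0.0.1", port))
status = serve_once(recv)
recv.close()
send.close()
```

In a running system `serve_once` would sit in a loop, pushing each result straight to the feedback and evaluation modules rather than returning a string.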
Personalized rehabilitation report generation algorithm module:
The innovation of this algorithm is its personalization. Through deep learning and pattern recognition techniques, the algorithm learns and recognizes key rehabilitation indexes from the user's rehabilitation data, such as muscle strength level, joint range of motion, and Brunnstrom stage, correlates these indexes with the rehabilitation effect, and generates a rehabilitation report containing personalized suggestions. Such a report is not merely a summary of past rehabilitation activity but a guide for future rehabilitation plans.
Mathematical formula:
R(u) = f(Σ_i w_i·x_i,u)
Wherein R(u) is the rehabilitation report of user u, x_i,u is the i-th rehabilitation index of user u, w_i is its weight, and f is a conversion function.
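A hedged sketch of the weighted-sum report formula above: the index names, weights, and suggestion thresholds are illustrative assumptions, and the conversion function f is realised here as a simple threshold lookup:

```python
def rehab_report(metrics, weights, suggestions):
    """Sketch of R(u) = f(sum_i w_i * x_{i,u}).

    metrics:     index name -> normalised 0-1 value (e.g. muscle strength,
                 joint range, Brunnstrom stage), all hypothetical names
    weights:     index name -> weight w_i
    suggestions: (threshold, advice) pairs, highest threshold first
    """
    score = sum(weights[k] * v for k, v in metrics.items())
    for threshold, text in suggestions:       # f: first matching suggestion
        if score >= threshold:
            return {"score": round(score, 3), "advice": text}
    return {"score": round(score, 3), "advice": "consult your therapist"}

report = rehab_report(
    {"muscle_strength": 0.6, "joint_range": 0.8, "brunnstrom": 0.5},
    {"muscle_strength": 0.4, "joint_range": 0.4, "brunnstrom": 0.2},
    [(0.7, "maintain current plan"), (0.4, "add targeted strength training")],
)
```

In the patent's design f would be a learned model rather than a lookup; the structure (weighted indexes in, personalized advice out) is the same.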
Dynamic rehabilitation effect evaluation model:
The innovation of this model is that it dynamically adjusts the evaluation standard according to real-time data: the evaluation model is not static but adjusts itself to each user's rehabilitation progress and individual differences. For example, for a user in early rehabilitation the model focuses more on the accuracy of the basic actions, while for a user in later rehabilitation it focuses more on the smoothness and coordination of the actions. This dynamic adjustment ensures the accuracy and personalization of the evaluation results.
Mathematical formula:
E(t) = g(Σ_j v_j·y_j,t)
Wherein E(t) is the rehabilitation effect evaluation at time t, y_j,t is the j-th evaluation index at time t, v_j is the index weight, and g is the evaluation function.
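The stage-dependent weighting described above can be made concrete with a small sketch; the stage names, index names, and weight values are illustrative assumptions:

```python
def evaluate(indicators, stage):
    """Sketch of E(t) = g(sum_j v_j * y_{j,t}) with stage-dependent weights.

    indicators: index name -> 0-1 value y_j at the current time
    stage:      'early' or 'late' (hypothetical stages); early rehabilitation
                weights accuracy most, late rehabilitation weights smoothness
                and coordination, as described in the text.
    """
    weights = {
        "early": {"accuracy": 0.7, "smoothness": 0.2, "coordination": 0.1},
        "late":  {"accuracy": 0.3, "smoothness": 0.4, "coordination": 0.3},
    }[stage]
    return sum(weights[j] * y for j, y in indicators.items())

same_performance = {"accuracy": 0.9, "smoothness": 0.4, "coordination": 0.5}
early_score = evaluate(same_performance, "early")
late_score = evaluate(same_performance, "late")
```

The same performance scores differently under the two weight sets, which is exactly the point: an accurate but stiff execution is rewarded early in rehabilitation and penalized later.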
Rehabilitation process visualization module:
The innovation of the rehabilitation progress visualization technique lies in its intuitiveness and interactivity. Through charts and text, it converts complex data into visual information that is easy to understand: the user and doctor can see the rehabilitation progress at a glance, such as the growth curve of muscle strength or the expanding trend of the joint range of motion. The charts can also be operated on interactively, for example zooming in to view details or filtering the data. This visualization not only improves the readability of the rehabilitation data but also strengthens the user's and doctor's understanding and control of the rehabilitation process.
Mathematical formula:
V(p) = h(p)
Wherein V(p) is the visual representation of the rehabilitation progress, p is the rehabilitation progress parameter, and h is the mapping function from the rehabilitation status to its visual form.
Adaptive rehabilitation suggestion module:
The innovation of the adaptive rehabilitation suggestion system is that it dynamically adjusts the rehabilitation suggestions based on the data in the rehabilitation report. The system uses a predictive model to determine the best rehabilitation suggestion by analysing key indexes in the report, such as muscle strength, joint mobility, and Brunnstrom stage. These suggestions are not static but change with the user's rehabilitation status, ensuring their timeliness and personalization. For example, if the user's muscle strength shows no clear improvement at some stage, the system may recommend adding specific muscle strength exercises; if joint mobility is improving well, it may recommend maintaining the current rehabilitation program.
Mathematical formula:
S(c) = Σ_l u_l·c_l,s
Wherein S(c) is the suggestion for rehabilitation state c, c_l,s is the l-th index in suggestion s, and u_l is the utility value of that index.
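The two worked examples in the paragraph above (stagnating muscle strength versus improving joint mobility) can be sketched as simple rules over consecutive reports; the index names and the 5% stagnation threshold are illustrative assumptions standing in for the predictive model:

```python
def adapt_suggestions(report_history):
    """Sketch of the adaptive suggestion rules described above.

    report_history: list of reports, each mapping hypothetical index names
    ('muscle_strength', 'joint_mobility', 0-1 normalised) to values. The
    last two reports are compared; an improvement below the assumed 0.05
    threshold counts as stagnation.
    """
    prev, curr = report_history[-2], report_history[-1]
    advice = []
    if curr["muscle_strength"] - prev["muscle_strength"] < 0.05:
        advice.append("add specific muscle-strength exercises")
    if curr["joint_mobility"] - prev["joint_mobility"] >= 0.05:
        advice.append("maintain the current rehabilitation program")
    return advice

history = [{"muscle_strength": 0.50, "joint_mobility": 0.40},
           {"muscle_strength": 0.52, "joint_mobility": 0.50}]
advice = adapt_suggestions(history)
```

Here muscle strength improved by only 0.02 while joint mobility improved by 0.10, so both of the patent's example adjustments fire at once.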
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that modifications and variations could be made by those skilled in the art without departing from the technical principles of the present invention, and such modifications and variations should also be regarded as being within the scope of the invention.
Claims (5)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410415119.9A CN118538362B (en) | 2024-04-08 | 2024-04-08 | Somatosensory-based interactive virtual rehabilitation training method and system |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN118538362A CN118538362A (en) | 2024-08-23 |
| CN118538362B true CN118538362B (en) | 2024-12-27 |
Family
ID=92392520
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202410415119.9A Active CN118538362B (en) | 2024-04-08 | 2024-04-08 | Somatosensory-based interactive virtual rehabilitation training method and system |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN118538362B (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110415783A (en) * | 2018-04-26 | 2019-11-05 | 北京新海樱科技有限公司 | A kind of Functional Activities of OT method of rehabilitation based on body-sensing |
| CN110890140A (en) * | 2019-11-25 | 2020-03-17 | 上海交通大学 | Virtual reality-based autism rehabilitation training and capability assessment system and method |
Family Cites Families (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105844100A (en) * | 2016-03-24 | 2016-08-10 | 乐视控股(北京)有限公司 | Method and system for carrying out rehabilitation training through television and somatosensory accessory |
| US11665284B2 (en) * | 2020-06-20 | 2023-05-30 | Science House LLC | Systems, methods, and apparatus for virtual meetings |
| CN113241150A (en) * | 2021-06-04 | 2021-08-10 | 华北科技学院(中国煤矿安全技术培训中心) | Rehabilitation training evaluation method and system in mixed reality environment |
| US20250094855A1 (en) * | 2021-07-21 | 2025-03-20 | University Of Washington | Optimal data-driven decision-making in multi-agent systems |
| CN113888934A (en) * | 2021-11-11 | 2022-01-04 | 上海市养志康复医院(上海市阳光康复中心) | Aphasia rehabilitation training system and training method based on VR visual and auditory guidance |
| CN114822760B (en) * | 2022-04-14 | 2024-11-29 | 深圳市铱硙医疗科技有限公司 | Rehabilitation system for brain trauma upper limb dyskinesia based on VR equipment |
| US20240016415A1 (en) * | 2022-07-15 | 2024-01-18 | Pes University | Method and system for conducting interactive rehabilitation sessions with continuous monitoring |
| CN115714000B (en) * | 2022-11-25 | 2023-08-11 | 中国人民解放军总医院第四医学中心 | Method and device for evaluating rehabilitation training |
| CN115985461A (en) * | 2022-12-08 | 2023-04-18 | 重庆邮电大学 | A Rehabilitation Training System Based on Virtual Reality |
| CN117672454B (en) * | 2023-12-02 | 2025-01-28 | 南方医科大学第三附属医院(广东省骨科研究院) | A virtual reality-based urinary control recovery training system and method for prostatectomy |
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110415783A (en) * | 2018-04-26 | 2019-11-05 | 北京新海樱科技有限公司 | A kind of Functional Activities of OT method of rehabilitation based on body-sensing |
| CN110890140A (en) * | 2019-11-25 | 2020-03-17 | 上海交通大学 | Virtual reality-based autism rehabilitation training and capability assessment system and method |
Also Published As
| Publication number | Publication date |
|---|---|
| CN118538362A (en) | 2024-08-23 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Gillies | Understanding the role of interactive machine learning in movement interaction design | |
| CN108463271B (en) | System and method for motor skill analysis and skill enhancement and prompting | |
| Shapiro | Building a character animation system | |
| Debarba et al. | On the plausibility of virtual body animation features in virtual reality | |
| Takacs | Special education and rehabilitation: teaching and healing with interactive graphics | |
| LEite et al. | Mani-pull-action: Hand-based digital puppetry | |
| CN118538362B (en) | Somatosensory-based interactive virtual rehabilitation training method and system | |
| Thalmann et al. | Virtual reality software and technology | |
| CN112133409A (en) | A virtual diagnosis and treatment system and method | |
| Ibrahim et al. | Sonification of 3D body movement using parameter mapping technique | |
| CA3187416A1 (en) | Methods and systems for communication and interaction using 3d human movement data | |
| Hernholm | A virtual reality pose estimation exercise game for post-stroke upper-limb motor function rehabilitation | |
| Zafer | Research on Current Sectoral Uses of Motion Capture (MoCap) Systems | |
| Burger et al. | Communication of musical expression by means of mobile robot gestures | |
| Tits | Expert gesture analysis through motion capture using statistical modeling and machine learning | |
| Ma et al. | Value evaluation of human motion simulation based on speech recognition control | |
| Rahman et al. | Experience Augmentation in Physical Therapy by Simulating Patient-Specific Walking Motions | |
| Irlitti et al. | Examining the role of volumetric segmentation on movement training in mixed reality | |
| Tripathi | A Study on the Field of XR Simulation Creation, Leveraging Game Engines to Develop a VR Hospital Framework | |
| Grillon | Simulating interactions with virtual characters for the treatment of social phobia. | |
| Landry et al. | A broad spectrum of sonic interactions at immersive interactive sonification platform (iISoP) | |
| Tao | Design and development of a telerehabilitation app | |
| Guşită et al. | A Novelty Real-Time Gesture Recognition Model for Air-Hand Piano Playing Using Mediapipe | |
| CN120852604A (en) | Meta-universe digital person generation method and system based on deep learning | |
| Yang | Animation VR Motion Simulation Evaluation Based on Somatosensory Simulation Control Algorithm |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| GR01 | Patent grant | | |