Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
In the embodiments of the present application, when processing is performed on data related to characteristics of a target object, such as attribute information or an attribute information set of the target object, the permission or consent of the target object is obtained first, and the collection, use, and processing of such data comply with relevant laws, regulations, and standards. The target object may be a user. In addition, when an embodiment of the application needs to acquire attribute information of the target object, the separate permission or separate consent of the target object is acquired through a pop-up window, a jump to a confirmation page, or the like, and only after the separate permission or separate consent of the target object is explicitly obtained is the target-object-related data necessary for the normal operation of the embodiment acquired.
In the present embodiments, the term "module" or "unit" refers to a computer program, or a part of a computer program, that has a predetermined function and works together with other relevant parts to achieve a predetermined objective, and it may be implemented in whole or in part by software, hardware (such as a processing circuit or a memory), or a combination thereof. Likewise, one processor (or multiple processors or memories) may be used to implement one or more modules or units. Furthermore, each module or unit may be part of an overall module or unit that incorporates the functionality of that module or unit.
In order to facilitate understanding of the technical solution provided by the embodiments of the present application, some key terms used in the embodiments of the present application are explained here:
Computer Vision (CV) is the science of studying how to make machines "see": it uses cameras and computers in place of human eyes to identify and measure targets, and further performs graphic processing so that the resulting images are better suited for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies the theories and techniques needed to build artificial intelligence systems that can acquire information from images or multidimensional data. Computer vision techniques typically include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D techniques, virtual reality, augmented reality, and simultaneous localization and mapping, among others, as well as common biometric recognition techniques such as face recognition and fingerprint recognition.
Artificial Intelligence (AI) refers to the theories, methods, techniques, and application systems that use digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines capable of reacting in a way similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines so that the machines have the capabilities of perception, reasoning, and decision-making. Artificial intelligence is a comprehensive discipline involving a wide range of fields, covering both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, pre-trained model technology, operation/interaction systems, and mechatronics. A pre-trained model, also called a large model or foundation model, can, after fine-tuning, be widely applied to downstream tasks in all major directions of artificial intelligence. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
A pose describes the position and orientation of an object in a specified coordinate system; a common example is the pose of a robot in a spatial coordinate system. The position refers to where the rigid body is located in space and can be represented by a 3×1 vector, namely the position of the origin of the rigid-body coordinate system in the base coordinate system. The orientation refers to how the rigid body is oriented in space and can be represented by a 3×3 rotation matrix, namely the orientation of the rigid-body coordinate system relative to the base coordinate system.
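As a simple illustration of this representation, the following minimal numpy sketch (not part of the application; names are illustrative) packs a 3×3 rotation matrix and a 3×1 position vector into a single 4×4 homogeneous transform describing a pose in the base coordinate system.

```python
import numpy as np

def make_pose(rotation: np.ndarray, position: np.ndarray) -> np.ndarray:
    """Combine a 3x3 rotation matrix and a 3x1 position vector into a
    4x4 homogeneous transform expressing a rigid body's pose in the base frame."""
    pose = np.eye(4)
    pose[:3, :3] = rotation           # orientation of the body frame
    pose[:3, 3] = position.ravel()    # position of the body-frame origin
    return pose

# Example: a body 1 m in front of the base, rotated 90 degrees about the z axis.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 0.0, 0.0])
T_base_body = make_pose(R, t)
```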
In daily life, auxiliary mobility devices are widely used in scenes such as daily rehabilitation to enhance balance, provide support, and improve stability while standing or walking. At present, the commonly used auxiliary mobility equipment is generally a crutch or another simple walking aid, and its degree of intelligence still needs to be improved.
To solve the above problems, embodiments of the present application provide a control method of a supporting robot, a supporting robot, and an electronic device, which can automatically execute supporting tasks and have a higher degree of intelligence.
Referring to fig. 1, fig. 1 is a schematic diagram of an alternative implementation environment provided by an embodiment of the present application. The implementation environment includes a control module of the supporting robot. The control module can be communicatively connected to sensors disposed on the supporting robot, and, after obtaining sensor data from these sensors, can control the supporting robot to operate based on the obtained sensor data.
The control module can control the supporting robot to move towards the target object based on the obtained sensor data, and can control the mechanical arm to move to a target joint position when the supporting robot has been controlled to move to a target moving position, where the target moving position and the target joint position change along with the relative position between the target object and the target sensor. The state of the target object can therefore be perceived, and the supporting task can be executed automatically according to that state. On this basis, when the control module perceives that an external force is applied to the tactile sensor while the mechanical arm is moving to the target joint position, or after the mechanical arm has moved to the target joint position, the control module can obtain the interaction force data detected by the tactile sensor and compliantly control the supporting robot based on the interaction force data, so that the supporting robot follows the posture of the target object and the supporting comfort is improved. The control method provided by the application can thus interact with the target object in diversified ways and has a high degree of intelligence.
In addition, the control module can communicate with a server, and the server can update the algorithm package corresponding to the control method on the control module in real time.
Fig. 2 is a flowchart of a control method of the supporting robot according to an embodiment of the present application. The method may be performed by the control module of the supporting robot alone, or by the control module of the supporting robot in conjunction with a server. In the embodiments of the present application, the control method is described by taking execution by the control module of the supporting robot as an example, and the method includes, but is not limited to, the following steps 201 to 202.
In step 201, in response to identifying that the target object has an intention of requiring support, the supporting robot is controlled to move towards the target object, and when the supporting robot moves to the target moving position, the mechanical arm is controlled to move to the target joint position.
In one possible implementation, the target object may refer to a user object that is perceived by the supporting robot and that has an intention of requiring support, for example an object within the computer vision range of the supporting robot, or an object bound to the terminal through biometric information; the intention of requiring support means that the target object expects to be assisted by the supporting robot. The supporting robot may perceive and identify the target object through the installed target sensor, and evaluate whether the perceived target object requires supporting assistance. The intention of requiring support may be presented by the target user in various forms, for example through spoken language, body posture, physical interaction, or signal instructions, the specific manner depending on the perception capability provided by the target sensor of the supporting robot.
In one possible implementation, the target moving position may refer to a location that the supporting robot needs to reach to complete the task of supporting the target object, such as a location near the target object where support is desired, or the target moving position may refer to a position of the supporting robot relative to the target object. The target joint position may refer to a particular pose or joint angle that the mechanical arm of the supporting robot needs to reach to complete the supporting task, or to a position of the mechanical arm relative to the target object. The target moving position and the target joint position change along with the change of the relative position between the target sensor and the target object. In other words, while the supporting robot moves towards the target object, the state of the target object is sensed in real time through the target sensor, the relative position between the target sensor and the target object is corrected, and the target moving position and the target joint position are adjusted in real time, so that the supporting task is automatically adjusted to better match the current state of the target object and the supporting comfort is improved.
In one possible implementation, while the supporting robot executes the supporting task for the target object, the state of the target object changes in real time, so the target moving position and the target joint position can be adjusted correspondingly according to the different intentions of the target object requiring support. For example, the supporting posture (namely the target moving position and the target joint position) required for supporting the target object while walking differs from that required for supporting the target object while standing up. By automatically sensing the state of the target object, the target moving position and the target joint position of the supporting robot are adjusted in real time so as to provide different interaction modes, movement characteristics, and assistance modes and to interact with the target object in diversified ways, thereby improving the degree of intelligence.
In one possible implementation, referring to fig. 3, fig. 3 is a schematic diagram of supporting a target object according to an embodiment of the present application. When the supporting robot needs to support the target object while walking, the target moving position of the supporting robot can be located at the side rear of the target object. During the supporting process, the state of the target object is sensed in real time and the target moving position is updated synchronously as the target object travels, so that the supporting robot keeps moving towards the target object, the relative position between the target sensor and the target object remains stable, and the supporting robot follows the target object while providing support. In addition, the target joint position can be adjusted in real time so that the assistance points of the mechanical arm stay at the waist, hip, and wrist of the target object; by imitating the human supporting posture, the walking direction of the target object is guided, the walking rhythm is maintained, and the supporting comfort is improved.
In one possible implementation, when the supporting robot needs to support the target object to stand up, the target moving position of the supporting robot may be located in front of the target object and fixed relative to the target object so as to help the target object maintain balance; that is, the supporting robot moves towards the target object and may remain stationary after reaching the target moving position. During the supporting process, the state of the target object is sensed in real time and the target joint position is adjusted so that the assistance points of the mechanical arm stay on the upper body of the target object, such as the waist and the arms, and the weight of the target object is supported, so that the target object changes from a sitting or squatting posture to a standing posture.
In one possible implementation, the target object's intention of requiring support may be determined by sensing the state of the target object, that is, the relative position between the target sensor and the target object, which may include the relative positions of the target object's center of gravity, joints, and trunk with respect to the target sensor. For example, if the center of gravity of the target object is at a low position relative to the target sensor and barely moves over consecutive frames, the target object may be considered to be attempting to stand, and the intention of requiring support can thus be determined; if the center of gravity of the target object keeps changing relative to the target sensor and the leg joint positions keep being displaced in a certain direction relative to the target sensor, the target object may be considered to be walking, and the intention of requiring support can likewise be determined.
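Purely as an illustration of the kind of heuristic described above (not a formula from the application; the window length and all thresholds are assumed values), the intention could be inferred from a short history of the target object's center-of-mass position measured relative to the target sensor:

```python
import numpy as np

def infer_support_intent(com_history, low_height=0.6, still_eps=0.02, drift_eps=0.10):
    """Infer whether the target object needs support from a short window of
    center-of-mass positions relative to the target sensor (x/y horizontal, z up).
    All thresholds are illustrative assumptions, not values from the application."""
    com = np.asarray(com_history, dtype=float)          # shape (N, 3), consecutive frames
    per_frame_motion = np.linalg.norm(np.diff(com, axis=0), axis=1)
    # Center of mass low and nearly static over the window: attempting to stand up.
    if com[-1, 2] < low_height and per_frame_motion.max() < still_eps:
        return "stand_up"
    # Center of mass keeps drifting in one horizontal direction: walking.
    if np.linalg.norm(com[-1, :2] - com[0, :2]) > drift_eps:
        return "walk"
    return "none"
```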
In step 202, when an external force is applied to the tactile sensor while the mechanical arm is moving to the target joint position, or after the mechanical arm has moved to the target joint position, compliant control is performed on the supporting robot.
In one possible implementation, when the tactile sensor is perceived to be subjected to an external force during or after the movement of the mechanical arm to the target joint position, the target object may be considered to be in contact with the supporting robot, so the supporting robot may be controlled, through compliant control, to respond appropriately to the movement and posture of the target object. For example, the supporting robot may be controlled to maintain a fixed output force regardless of external pressure, or to maintain a soft interaction with the target object by changing position and speed according to preset impedance characteristics with respect to the external force, or to generate feedback to a touch or contact while satisfying a specific posture. Specifically, the compliant control of the supporting robot may consist in adjusting parameters of the joints of the supporting robot (including upper limbs and lower limbs), for example the target joint position, the target moving position, and the motion speeds and accelerations of the joints, so as to reduce impacts or excessive pressure on the target object during the supporting process and improve the supporting comfort.
In one possible implementation, during the supporting process, the mechanical arm of the supporting robot can be compliantly controlled; specifically, parameters such as the torque, position, speed, and acceleration of each joint of the mechanical arm can be adjusted, changing the motion trajectory and position of the mechanical arm. For example, when an external force is perceived on the tactile sensor of the mechanical arm, the torque output of the corresponding joint can be adjusted in real time according to the position and direction of the external force, so as to maintain the posture of the target object or interact with the target object softly, while the dynamic impedance parameters of the corresponding joint are increased so that the motion of the mechanical arm is smoother and slower and complies with the posture of the target object. Besides compliant control of the mechanical arm, the whole body of the supporting robot can be made compliant: by adjusting the whole-body joints of the supporting robot to comply with the posture of the target object, the supporting robot can better imitate the limb-movement characteristics of humans and improve comfort during the supporting process. For example, when the supporting robot is a wheeled robot with a mechanical arm, the degrees of freedom of the omni-wheels (such as the forward distance, the backward distance, and the rotation angle of the omni-wheels) and the movement acceleration of the omni-wheels can be adjusted when an external force is applied to the tactile sensor of the mechanical arm, so as to match the compliant motion of the mechanical arm.
In one possible implementation, there are multiple types of target sensor, which may specifically include a vision sensor and a pose sensor. The pose sensor can detect the spatial position and orientation of the target object, and can also detect the spatial position and orientation of the supporting robot. When the pose sensor detects that the position of the supporting robot relative to the target object has changed, the target moving position of the supporting robot can be adjusted in real time to achieve or maintain the desired relative pose, so the target moving position changes along with the relative position between the target object and the pose sensor. The vision sensor can identify the target object and its posture; when the relative posture, that is, the relative position, between the target object and the vision sensor changes, the action position of the mechanical arm can be adjusted in real time in response to the posture change of the target object, so the target joint position follows the change of the relative position between the target object and the vision sensor.
In one possible implementation, at least one of the vision sensor and the pose sensor is provided in plurality, so that the measurement errors of individual sensors can compensate each other by integrating the data of multiple sensors, improving the accuracy and reliability of object perception. For example, the pose sensor may include an inertial measurement unit, a lidar, an ultrasonic sensor, and the like, so that the relative position between the target object and the pose sensor can be obtained by comprehensively considering the relative position between each pose sensor and the target object. Specifically, the sensor data of each pose sensor is collected at the same moment, noise filtering is performed on all sensor data, and all sensor data is then fused and analyzed by a pre-trained neural network model to determine the relative position between the target object and the pose sensor. Correspondingly, the vision sensor may include a depth camera, a stereo camera, an infrared camera, and the like; by integrating multiple vision sensors, the three-dimensional posture of the target object can be determined more accurately, and thus the relative position between the vision sensor and the target object can be determined.
In one possible implementation, the supporting robot may be provided with a vision sensor, such as a camera, through which the terminal may perceive and identify persons within the computer vision range and recognize the posture of the perceived target object. Image data of the environment where the supporting robot is currently located can be captured in real time through the vision sensor and analyzed to determine whether the main body of a target object is present. Specifically, human body recognition can be performed on the image data through a human posture recognition algorithm, such as the MediaPipe recognition framework, to determine the positions of human skeleton points, so that the key points of the target object can be detected from the image data. After the key points of the target object are determined, posture recognition can be performed on the target object to determine its human posture, and it is further judged whether the target object has an intention of requiring support, so that the supporting task can be executed. Human posture recognition algorithms with different response rates and complexities can be selected based on the sampling frequency of the vision sensor; for example, when the video stream frame rate of the vision sensor is 30 frames per second, a model of medium complexity can be selected to recognize the human posture, and the recognition frequency for each video node is about 28 Hz, achieving real-time human recognition on the current image data of the vision sensor. When it is detected that the target object does not have an intention of requiring support, the supporting robot can continue to execute the current supporting task, or keep performing posture recognition on the target object to judge whether an intention of requiring support exists.
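As a rough sketch of this per-frame skeleton-detection step, the following Python snippet (an illustration only; the camera index, confidence thresholds, and downstream handling are assumptions) uses the MediaPipe Pose solution at medium model complexity to extract shoulder and hip key points from a live video stream.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
# model_complexity=1 corresponds to the "medium complexity" model mentioned above.
pose = mp_pose.Pose(model_complexity=1,
                    min_detection_confidence=0.5,
                    min_tracking_confidence=0.5)

cap = cv2.VideoCapture(0)                     # vision sensor video stream (~30 fps assumed)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        lm = results.pose_landmarks.landmark
        # MediaPipe landmark indices: 11/12 left/right shoulder, 23/24 left/right hip.
        key_points = [(lm[i].x, lm[i].y, lm[i].visibility) for i in (11, 12, 23, 24)]
        # ...hand the key points to the posture / support-intention recognition logic...
cap.release()
pose.close()
```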
Referring to fig. 4, fig. 4 is a schematic diagram of identifying that a target object has an intention of requiring support. The terminal may be deployed with a pre-trained posture estimation network model. After image data captured by the vision sensor is obtained, feature extraction may be performed on the image data to obtain first image data, and the first image data is input into the posture estimation network model to obtain human body key points. The current posture of the target object is then determined according to the human body key points and matched against the to-be-supported posture associated with an intention of requiring support; if they match, the terminal may consider that the target object has an intention of requiring support. As shown in fig. 4, the to-be-supported posture may be a body posture such as the user squatting, walking, or making a specific gesture. If multiple user objects exist in the image data, the posture of each user object can be identified, and the user object whose current posture matches the to-be-supported posture is determined as the target object.
In addition, historical image data captured by the vision sensor at the previous moment can be obtained and feature extraction performed on it to obtain second image data. The first image data and the second image data are then concatenated to obtain third image data, which is input into the posture estimation network model to obtain target key points, and the estimated posture of the target object is determined according to the target key points. By combining the motion changes across consecutive frames, the posture of the target object can be obtained more accurately, and if the estimated posture matches the to-be-supported posture associated with an intention of requiring support, the terminal may consider that the target object has an intention of requiring support.
In one possible implementation, the supporting robot may further be provided with an acoustic sensor, such as a microphone, through which the terminal may perceive voice information uttered by the target object and perform speech recognition on it to determine whether the voice information expresses an intention of requiring support. For example, referring to fig. 5, fig. 5 is a schematic diagram, provided in an embodiment of the present application, of identifying that the target object has an intention of requiring support: when a user speaks a sentence expressing such an intention, such as "please help me walk to the room" or "I need support", the target object may be considered to have an intention of requiring support, and the supporting robot is controlled to perform the supporting task.
In a possible implementation, referring to fig. 6, fig. 6 is a schematic diagram of identifying that the target object has an intention of requiring support according to an embodiment of the present application. The terminal may perceive an externally applied interaction force through a tactile sensor disposed on the mechanical arm of the supporting robot. As shown in fig. 6, if the terminal perceives that an external force is applied to the tactile sensor while the supporting robot is not being controlled to perform a supporting task, the object with the smallest relative distance to the target sensor may be regarded as the target object and considered to have an intention of requiring support; alternatively, the terminal may call the vision sensor (if the supporting robot is also provided with a vision sensor such as a camera) to perceive and identify the posture of each object within the computer vision range, regard the object satisfying the to-be-supported posture as the target object, and consider that this target object has an intention of requiring support.
In one possible implementation, referring to fig. 7, fig. 7 is a schematic diagram of identifying that the target object has an intention of requiring support according to an embodiment of the present application. A help button may be disposed on the supporting robot, or a remote controller may be detachably disposed on it. The terminal may perceive the signal generated when the help button or the remote controller is triggered and consider that the target object triggering the help button or the remote controller has an intention of requiring support; alternatively, the terminal may call the vision sensor (if the supporting robot is also provided with a vision sensor such as a camera) to perceive and identify the posture of each object within the computer vision range, take the object satisfying the to-be-supported posture as the target object, and consider that this target object has an intention of requiring support.
In one possible implementation, sensor data related to the object is captured by the target sensor: for example, image data captured by the vision sensor and pose data detected by the pose sensor undergo feature extraction and concatenation to obtain multi-source sensor data. The multi-source sensor data is fed into a pre-trained deep learning model such as a convolutional neural network, or into a real-time object detection algorithm such as the YOLO (You Only Look Once) detector or the SSD (Single Shot MultiBox Detector), so that person objects in the sensor data (such as picture images) can be rapidly detected and their positions estimated. The posture of a person object can then be analyzed with a pose estimation model, such as the OpenPose human pose estimation algorithm or the AlphaPose estimation algorithm based on human key point detection, and the behavior pattern and action intention of the person object are analyzed to determine whether the person has an intention of requiring support. When an object with such an intention is identified, the optimal path for the supporting robot to reach the target moving position can be calculated through a dynamic path planning algorithm, and the supporting robot is controlled to move towards that object. The target sensor can perceive not only person objects but also obstacle objects; for example, the lidar and the vision sensor perceive the surroundings of the supporting robot, so during movement they can be used to perceive obstacles on the path and realize obstacle avoidance, while at the same time the target object is perceived in real time, the relative distance between the target object and the target sensor is corrected, and the target moving position is adjusted. After the supporting robot reaches the target moving position, the mechanical arm can be controlled to move to the human-body support point, namely the target joint position, while the triggering state of the tactile sensor on the mechanical arm and the relative distance between the target sensor and the target object are perceived, the state of the target object is judged, and the position and force of the mechanical arm are adjusted in real time to adapt to the motion and balance state of the target object.
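As one possible way to realize the person-detection step mentioned above (an illustrative sketch only; the ultralytics package, the yolov8n.pt weights, and the area-based distance proxy are assumptions, not choices made by the application), the detector could be invoked as follows.

```python
import numpy as np
from ultralytics import YOLO   # assumed third-party implementation of the YOLO detector

model = YOLO("yolov8n.pt")     # pretrained COCO model; class 0 is "person"

def detect_persons(frame):
    """Run single-shot person detection on one camera frame and return the
    pixel-space bounding boxes, with the presumably closest person first."""
    result = model(frame, verbose=False)[0]
    boxes = []
    for xyxy, cls in zip(result.boxes.xyxy.cpu().numpy(),
                         result.boxes.cls.cpu().numpy()):
        if int(cls) == 0:                              # keep only person detections
            boxes.append(xyxy)
    # A larger box area is used here as a crude proxy for a smaller relative distance.
    return sorted(boxes,
                  key=lambda b: (b[2] - b[0]) * (b[3] - b[1]),
                  reverse=True)
```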
The supporting robot and its control method can be applied to various human-robot interaction scenes, such as medical rehabilitation, daily care, support in public places, and rescue and disaster relief, and they have a high degree of intelligence. For example, in a hospital or rehabilitation center, the supporting robot can be controlled to help a patient or an elderly person walk, stand, sit, or change pose; because the state of the target object can be perceived, the supporting scheme can be adjusted in real time according to the state and needs of the target object, improving the supporting comfort, assisting the rehabilitation of the person, and relieving the workload of physical therapists. As another example, the supporting robot can be controlled to detect the activity state of an elderly person in real time, help the person perform various movements indoors or outdoors (such as going up and down stairs, standing, and walking), and provide timely assistance in emergencies (such as catching the person when falling).
In one possible implementation manner, in the process of controlling the robot to move towards the target object, current target pose data of the pose sensor can be obtained, the target pose data are transformed based on a preset first transformation matrix to obtain a target moving position, and then the robot is controlled to move towards the target object based on the target moving position.
In one possible implementation manner, the pose sensor may detect pose data of the human body of the target object, that is, the target pose data may be pose data of the human body measured by the pose sensor at the current moment of the target object, or may be calculated based on the measured pose data of the human body. The preset first transformation matrix may be a pose transformation matrix transformed from human body space coordinates to support robot space coordinates, and is used for representing a pose transformation relation between a support robot target moving position and a target object position, so that a desired position of the support robot relative to the target object, namely, a target moving position, is obtained through transformation of the target object space coordinates, and the support robot is controlled to move to a target moving position near the target object, so as to perform support tasks of the target object.
In one possible implementation, a plurality of first candidate transformation matrices may be preset to adapt to different postures or different support requirements of the target object; for example, different transformation matrices may be selected according to the movement mode and posture of the target object to determine the corresponding target moving position of the supporting robot, so as to ensure the safety and comfort of the support. After the target moving position is determined, while the supporting robot is controlled to move towards the target object based on the target moving position, the pose data of the target object can be detected in real time and the target moving position of the supporting robot updated in real time based on the first transformation matrix; alternatively, the pose data of the target object can be detected in real time, the movement mode and posture of the target object judged, and the first transformation matrix updated in real time to adjust the target moving position of the supporting robot.
Referring to fig. 8, fig. 8 is a schematic diagram of a positional transformation relationship according to an embodiment of the present application. The position and orientation of the target object can be extracted from the target pose data, such as the position coordinates (x_h, y_h, 1) of the target object relative to the supporting robot and the angle θ_h of the target object relative to the supporting robot. Assuming that the support requirement of the target object is assisted walking, a first transformation matrix corresponding to assisted walking can be determined from the plurality of first candidate transformation matrices, so that the position of the target object is transformed based on the preset first transformation matrix [0.92, 0, 1]^T to obtain the target moving position of the supporting robot; the supporting robot can then be controlled to move from its original position towards the target object and to the target moving position, and the supporting task of the target object is executed. Specifically, the target moving position (x_base, y_base, 1) is calculated as:

x_base = x_h + 0.92·cos(θ_h),  y_base = y_h + 0.92·sin(θ_h)

that is, the offset [0.92, 0, 1]^T is expressed in the robot frame through the planar pose (x_h, y_h, θ_h) of the target object.
It can therefore be seen that the target moving position of the supporting robot is related to the position of the target object relative to the supporting robot. As the supporting robot keeps moving, the target pose data detected by the pose sensor keeps changing, so the determined target moving position changes correspondingly; that is, the target moving position of the supporting robot changes along with the relative position between the target object and the pose sensor. It should be noted that the target moving position may be determined in the spatial coordinate system of the supporting robot, so although the target moving position keeps changing in that frame, the target moving position at each moment may be relatively fixed with respect to the world coordinate system or the coordinate system of the target object.
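For illustration, the position transformation described with fig. 8 can be written as a small numpy routine; the homogeneous composition used here is a reconstruction from the quantities named in the text ((x_h, y_h, θ_h) and the 0.92 m offset), not a verbatim formula from the application.

```python
import numpy as np

def target_moving_position(x_h, y_h, theta_h, offset=np.array([0.92, 0.0, 1.0])):
    """Map the target object's planar pose relative to the supporting robot to the
    desired target moving position (x_base, y_base) in the robot frame by expressing
    the first-transformation offset in that frame."""
    T_robot_object = np.array([[np.cos(theta_h), -np.sin(theta_h), x_h],
                               [np.sin(theta_h),  np.cos(theta_h), y_h],
                               [0.0,              0.0,             1.0]])
    x_base, y_base, _ = T_robot_object @ offset
    return x_base, y_base

# Example: target object 1.5 m ahead, 0.3 m to the left, facing 30 degrees away.
print(target_moving_position(1.5, 0.3, np.deg2rad(30.0)))
```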
In a possible implementation, the target pose data may be obtained by integrating pose data measured by the pose sensor at multiple moments. Specifically, the first pose data predicted by the pose sensor at the previous moment and the corresponding first covariance matrix may be obtained; the current second pose data is predicted based on the first pose data, and the current second covariance matrix is predicted based on the first covariance matrix. The measured pose data currently obtained by the pose sensor is then acquired, a target gain is determined based on the measured pose data and the second covariance matrix, and the second pose data is corrected based on the target gain to obtain the current target pose data of the pose sensor.
In one possible implementation, because the pose sensor has measurement errors, a target object (such as a person) is easily confused with a non-target object (such as an obstacle), so that the supporting task cannot be performed; the human pose data directly measured by the pose sensor therefore needs to be corrected to obtain high-accuracy target pose data. Referring to fig. 9, fig. 9 is a schematic flow chart of data processing performed on the pose data measured by the pose sensor according to an embodiment of the present application. As shown in fig. 9, the pose data is corrected as follows: first, prediction is performed using the pose data of the previous moment, including state prediction of the pose data and prediction of the covariance matrix corresponding to the predicted state quantity; then, the predicted state value is corrected using the measurement at the current moment, including calculating a target gain, correcting the predicted state value based on the target gain, and finally updating the previously predicted covariance matrix. The human pose data may include the angles and directions of the joints of the human body, the three-dimensional coordinate positions of the human key points, and the like. Specifically, the human pose data detected by the pose sensor can be understood as the motion state of the target object, including the position and velocity of the target object, namely the state x = [p, v]^T, where p is the position and v is the velocity; the first pose data is the pose data obtained by prediction from the pose data measured by the pose sensor at the previous moment. Because the motion state of the target object is not affected by external control inputs, the external control vector and the external control input matrix in the state equation of the correction system can be ignored, so that the second pose data predicted for the current moment can be obtained by multiplying the first pose data predicted at the previous moment by the state transition matrix, that is, the state of the target object at the current moment is predicted. The state transition matrix describes how the correction system changes from the previous moment to the current moment; specifically, with the state consisting of position and velocity, the state transition matrix is:

A = [ 1  Δt ]
    [ 0   1 ]

where Δt is denoted as the time interval of the correction system, representing the time difference from the previous moment to the current moment and accounting for the change of the state variables caused by this interval.
According to the state equation of the correction system with the influence of external control input ignored, the calculation formula of the second pose data can be obtained as follows:

x̂_k = A·x_{k-1} + ω_k

where x̂_k is denoted as the second pose data predicted at the current time, x_{k-1} is denoted as the first pose data predicted at the previous time, A is the state transition matrix, and ω_k is denoted as the process noise of the correction system, which satisfies a Gaussian distribution.
In one possible implementation, the first covariance matrix is a matrix that measures the uncertainty of the first pose data predicted by the pose sensor at the previous moment, and it represents the uncertainty of the state estimate of the correction system at the previous moment. The first covariance matrix is an expression of the error distribution of the first pose data: its diagonal elements represent the variances of the individual state variables of the first pose data (namely the elements of the first pose data), and its off-diagonal elements represent the covariances between different state variables, reflecting the correlation between them. Based on the correction system, the second covariance matrix corresponding to the second pose data at the current moment can be obtained by multiplying the state transition matrix with the first covariance matrix. Specifically, the calculation formula of the second covariance matrix is as follows:

P̂_k = A·P_{k-1}·A^T + Q

where P̂_k is denoted as the second covariance matrix corresponding to the second pose data x̂_k, P_{k-1} is denoted as the first covariance matrix corresponding to the first pose data x_{k-1}, and Q is denoted as the noise covariance matrix corresponding to the process noise ω_k of the correction system.
In one possible implementation, the target gain is used to combine the predicted state (namely the second pose data) with the observed quantity (namely the measured pose data) measured by the pose sensor so as to optimize the accuracy of the prediction, balancing between the predicted second pose data and the measured pose data, and the second pose data is corrected accordingly to obtain the target pose data. Specifically, a feedback gain matrix is calculated from the second covariance matrix and a preset observation matrix of the correction system, and the target gain is then calculated from the measured pose data and the feedback gain matrix. The calculation formula of the feedback gain matrix is as follows:

K_k = P̂_k·H^T·(H·P̂_k·H^T + R)^{-1}

where K_k is denoted as the feedback gain matrix, H is denoted as the observation matrix, and R is denoted as the observation noise matrix. Specifically, since only the position components of the state are observed, the observation matrix may be denoted as:

H = [ I  0 ]

where I acts on the position components of the state and 0 on the velocity components.
After the feedback gain matrix is obtained, the measured pose data and the second pose data can be weighted through the feedback gain matrix to obtain the target gain, and the specific calculation formula is as follows:

g_k = K_k·(z_k − H·x̂_k)

where g_k is denoted as the target gain and z_k is denoted as the measured pose data.
The measured pose data can be obtained by low-pass correction of the initial pose data directly measured by the pose sensor. Specifically, the correction calculation formula of the measured pose data is as follows:

z_k = α·ẑ_k + (1 − α)·z_{k-1}

where α is denoted as the correction coefficient, representing the low-pass correction strength, ẑ_k is denoted as the second initial pose data directly measured by the pose sensor at the current moment, and z_{k-1} is denoted as the historical measured pose data obtained by performing low-pass correction on the first initial pose data directly measured by the pose sensor at the previous moment.
Then, the second pose data is corrected using the target gain to obtain the current target pose data of the pose sensor; the specific correction formula of the target pose data x_k is as follows:

x_k = x̂_k + g_k = x̂_k + K_k·(z_k − H·x̂_k)
After the second pose data of the predicted state is corrected with the target gain, the second covariance matrix corresponding to the second pose data may be updated; the specific matrix update formula is as follows:

P_k = (I − K_k·H)·P̂_k

where P_k is denoted as the updated second covariance matrix; the predicted second covariance matrix P̂_k is updated through the feedback gain matrix K_k and the observation matrix H of the correction system, so that the target pose data at the next moment can be predicted using the updated second covariance matrix P_k.
In one possible implementation, the relationship between the predicted state and the observed quantity in the correction system may be represented by an observation equation, and in particular, the observation equation may be represented by the following formula:
z_k = H·x_k + v_k (9)

where v_k is denoted as the observation noise of the correction system, which satisfies a Gaussian distribution.
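The prediction-correction procedure described above amounts to a small constant-velocity Kalman-style filter. The following sketch condenses it into code purely for illustration; the per-axis [position, velocity] state layout, the noise magnitudes, and the class interface are assumptions rather than values from the application.

```python
import numpy as np

class PoseCorrector:
    """Constant-velocity filter over [position, velocity] per axis, with the
    low-pass pre-filter of the measured pose described above. Noise magnitudes
    and the state layout are illustrative assumptions."""

    def __init__(self, dt=0.05, q=1e-3, r=1e-2, alpha=0.6, dim=3):
        n = 2 * dim
        self.A = np.eye(n)
        self.A[:dim, dim:] = dt * np.eye(dim)                       # state transition with step dt
        self.H = np.hstack([np.eye(dim), np.zeros((dim, dim))])     # observe position only
        self.Q = q * np.eye(n)                                      # process noise covariance
        self.R = r * np.eye(dim)                                    # observation noise covariance
        self.alpha = alpha                                          # low-pass correction strength
        self.x = np.zeros(n)                                        # first pose data (previous estimate)
        self.P = np.eye(n)                                          # first covariance matrix
        self.z_prev = None                                          # previous filtered measurement

    def step(self, z_raw):
        z_raw = np.asarray(z_raw, dtype=float)
        # Low-pass correction of the raw measurement.
        z = z_raw if self.z_prev is None else self.alpha * z_raw + (1 - self.alpha) * self.z_prev
        self.z_prev = z
        # Prediction: second pose data and second covariance matrix.
        x_pred = self.A @ self.x
        P_pred = self.A @ self.P @ self.A.T + self.Q
        # Feedback gain matrix and correction with the target gain.
        K = P_pred @ self.H.T @ np.linalg.inv(self.H @ P_pred @ self.H.T + self.R)
        self.x = x_pred + K @ (z - self.H @ x_pred)
        self.P = (np.eye(len(self.x)) - K @ self.H) @ P_pred
        return self.x[:len(z)]                                      # corrected target pose data
```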
In a possible implementation, in the process of controlling the mechanical arm to move to the target joint position, the current image data of the vision sensor can first be acquired and the key point positions of a plurality of key points of the target object in the image data determined. The coordinate system of the key point positions is then converted to the coordinate system of the supporting robot, the center point position of the plurality of key points is determined based on the plurality of converted key point positions, the center point position is transformed based on a preset second transformation matrix to obtain the target joint position, and the mechanical arm is controlled to move based on the target joint position.
In one possible implementation, the target object may have a plurality of key points; specifically, the key points of the target object may include salient feature points of the human body such as the eyes, ears, shoulders, waist, elbow joints, wrists, hip joints, knee joints, and ankles. When key point detection is performed on the image data, the key points may be determined from the plurality of joint points based on the main body part of the target object displayed in the image data; for example, if the main body part displayed is the upper limb part of the target object, joint points such as the shoulders and waist may be selected as key points, and if the main body part displayed is the lower limb part, joint points such as the hip joints and knee joints may be selected as key points. Alternatively, when key point detection is performed on the image data, the key points can be determined from the plurality of joint points based on the support requirement of the target object; for example, for one support requirement joint points such as the shoulders, elbows, and wrists can be selected as key points, while for another support requirement joint points such as the waist, hips, and elbows can be selected as key points.
In one possible implementation, the key points of the target object may be fixed nodes on the human body. When the plurality of fixed nodes of the target object cannot all be detected in the image data, the target moving position is redetermined based on the relative position between the pose sensor and the target object, and after the supporting robot is controlled to reach the new target moving position, the image data of the vision sensor is acquired again for analysis. For example, the key points of the target object may be the four key points of the left shoulder, right shoulder, left hip, and right hip. When these four key points cannot all be detected in the image data captured by the vision sensor, the supporting robot can be considered unable to perceive the target object accurately, so that perception errors easily occur, and at the same time the relative position between the supporting robot and the target object (for example too close or too far) is considered unsuitable for safely executing the task. The target moving position is therefore redetermined from the relative position between the pose sensor and the target object, the supporting robot is controlled to move to the updated target moving position, and the current image data is then captured again by the vision sensor, until the four key points of the left shoulder, right shoulder, left hip, and right hip of the target object can all be detected in the new image data.
In one possible implementation, a key point position may refer to the coordinate point of each key point in the image data under the pixel coordinate system, where the pixel coordinate system may take the upper-left corner of the image as its origin, the horizontal rightward direction as the positive direction of the abscissa axis, and the vertical downward direction as the positive direction of the ordinate axis. After the image data is subjected to image recognition to determine the plurality of key points, the positions of the key points in the image data, namely the two-dimensional coordinate points under the pixel coordinate system, can be determined through image registration. The coordinate system of the key points, namely the pixel coordinate system, is then converted to the coordinate system of the supporting robot, and the converted key point positions are represented as the three-dimensional coordinate points of the key points in the coordinate system of the supporting robot. Next, the arithmetic mean of the plurality of converted key point positions is calculated to obtain the center point position of the plurality of key points, namely the coordinates of their center point. The center point position is then transformed through a preset second transformation matrix to obtain the target joint position, where the second transformation matrix may refer to a transformation matrix from the center point position to the support position required by the mechanical arm; that is, the center point position serves as the reference point of the support position expected by the supporting robot for determining the target joint position of the mechanical arm.
In one possible implementation, referring to fig. 10, fig. 10 is a schematic diagram of a center point position according to an embodiment of the present application. The image data of the environment where the supporting robot is currently located is captured by the vision sensor. As shown in fig. 10, due to the limitation of the capturing angle of the vision sensor, if the captured image data is required to contain the main body of the target object, the relative distance between the vision sensor and the target object needs to be larger than the minimum capturing distance of the vision sensor. For example, if the vision sensor is a depth camera with a minimum capturing distance of 50 cm, the main body of the target object is displayed in the captured image data only when the relative distance between the depth camera and the target object reaches 50 cm; otherwise, the relative position between the supporting robot and the target object needs to be adjusted and the image data re-captured. If the main body of the target object is displayed in the image data, key point detection of the target object may be performed on the image data, and the relative position between the target object and the vision sensor, as well as their distance, may be determined through the four key points of the left shoulder (marked point A in fig. 10), right shoulder (marked point B in fig. 10), left hip (marked point C in fig. 10), and right hip (marked point D in fig. 10). Next, the center point position of these four key points is calculated (marked point E in fig. 10); for the target object shown in fig. 10, the center point is located at the waist of the target object, so the waist can be taken as the reference point of the desired support position of the mechanical arm, namely the reference point of the target joint position, and the center point position is transformed by the second transformation matrix to obtain the support positions located at both sides of the waist as the target joint positions (marked point F in fig. 10).
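For illustration only, once the four key points have been converted into the robot's coordinate system, the center point E and the two waist-side support targets F of fig. 10 could be approximated as follows; the lateral offset value stands in for the second transformation matrix and is an assumption.

```python
import numpy as np

def support_targets(left_shoulder, right_shoulder, left_hip, right_hip, lateral_offset=0.18):
    """Average the four converted key points to get the waist-level center point (E),
    then offset it to both sides along the hip line to approximate the two support
    positions (F). The 0.18 m offset is illustrative only."""
    pts = np.vstack([left_shoulder, right_shoulder, left_hip, right_hip])  # (4, 3) in robot frame
    center = pts.mean(axis=0)
    hip_axis = np.asarray(right_hip, dtype=float) - np.asarray(left_hip, dtype=float)
    hip_axis /= np.linalg.norm(hip_axis)
    return center - lateral_offset * hip_axis, center + lateral_offset * hip_axis
```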
In one possible implementation, the vision sensor is calibrated to obtain the intrinsic matrix of the vision sensor; then the installation position of the vision sensor on the supporting robot is determined, the extrinsic matrix of the vision sensor is determined according to the installation position, and the coordinate system of the key point positions is converted to the coordinate system of the target moving position based on the intrinsic matrix and the extrinsic matrix.
In one possible implementation, the intrinsic matrix of the vision sensor describes its internal optical characteristics, including the focal length and the imaging center (optical center). The focal length of the vision sensor can be represented by the x-axis focal length and the y-axis focal length in pixels, and the optical center can be represented by its projection position on the image plane. After parameter calibration of the vision sensor, the intrinsic matrix of the vision sensor may be obtained, and it may be represented as:

K_in = [ f_x   0   c_x ]
       [  0   f_y  c_y ]
       [  0    0    1  ]

where f_x is denoted as the x-axis focal length, f_y is denoted as the y-axis focal length, and (c_x, c_y) is denoted as the optical center position.
In one possible implementation, the extrinsic matrix of the vision sensor defines the spatial relationship between the coordinate system of the vision sensor and the coordinate system of the supporting robot (base). The extrinsic matrix includes a rotation matrix, typically a 3×3 matrix representing the rotation of the vision sensor with respect to the coordinate system of the supporting robot (base), and a translation matrix, typically a 3×1 vector representing the position of the vision sensor relative to the supporting robot (base). The extrinsic matrix may be obtained, for example, according to the installation position of the vision sensor on the supporting robot, and may be expressed as:

T_ex = [ R  T ]
       [ 0  1 ]

where R is denoted as the rotation matrix and T is denoted as the translation matrix.
In one possible implementation, converting the coordinate system of the key points to the coordinate system of the target moving position can be done by first converting the key point positions from the pixel coordinate system to the coordinate system of the vision sensor based on the intrinsic matrix, and then converting the coordinate system of the vision sensor to the coordinate system of the supporting robot (base) based on the extrinsic matrix. The specific calculation process is as follows:

Z_c·[u, v, 1]^T = K_in·[X_c, Y_c, Z_c]^T
[X_B, Y_B, Z_B, 1]^T = T_ex·[X_c, Y_c, Z_c, 1]^T

where (u, v) denotes coordinates in the pixel coordinate system, namely the coordinate system of the key points; (X_c, Y_c, Z_c) denotes coordinates in the coordinate system of the vision sensor, whose origin is at the optical center of the vision sensor, whose Z axis is parallel to the optical axis of the vision sensor, and where Z_c is along the shooting direction of the vision sensor; and (X_B, Y_B, Z_B) denotes coordinates in the coordinate system of the supporting robot (base).
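The two-step conversion above can be sketched in a few lines of numpy; the calibration values used below are placeholders (assumptions for illustration), and the depth Z_c is assumed to come from the depth camera.

```python
import numpy as np

def pixel_to_base(u, v, depth, K, R, T):
    """Convert a key point from pixel coordinates to the coordinate system of the
    supporting robot base. `depth` is Z_c from the depth camera; K is the 3x3
    intrinsic matrix, and R/T are the extrinsic rotation (3x3) and translation (3x1)."""
    pixel_h = np.array([u, v, 1.0])
    p_cam = depth * np.linalg.inv(K) @ pixel_h      # (X_c, Y_c, Z_c) in the camera frame
    p_base = R @ p_cam + T.ravel()                   # transform into the base frame
    return p_base

# Placeholder calibration values (assumptions, for illustration only).
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
T = np.array([0.05, 0.0, 1.10])
p = pixel_to_base(400, 260, depth=1.2, K=K, R=R, T=T)
```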
In one possible implementation, the kinematics of the supporting robot are modeled based on the joint variables of the supporting robot (such as displacement, velocity, acceleration, and position), and a dynamic model of the supporting robot can be obtained that describes the relationship between the joint moments of the supporting robot and its joint accelerations. External force data are then acquired through the tactile sensor disposed on the mechanical arm of the supporting robot, and the external force information at the stress point is converted, through a transformation matrix, into the coordinate system of the corresponding joint of the supporting robot, yielding the moment that the joint needs to generate in response to the external force, namely the target joint moment, so that the supporting robot can generate sufficient force to resist the external force while ensuring motion compliance and supporting stability. The target joint acceleration of the supporting robot is then determined based on the obtained target joint moment and the established dynamic model, and the supporting robot is compliantly controlled based on the target joint acceleration.
In one possible implementation, the mass coefficient may be determined based on the mass of the supporting robot, the joint positions of the supporting robot, and the joint velocities of the supporting robot; the friction coefficient may be determined based on the friction force experienced by the supporting robot during motion and the joint positions of the supporting robot; and the gravity coefficient may be determined based on the gravitational acceleration of the supporting robot and its joint positions. A first product between the mass coefficient and the joint acceleration of the supporting robot is determined, and the supporting robot is dynamically modeled based on the sum of the first product, the friction term, and the gravity coefficient to obtain the dynamic model of the supporting robot.
In one possible implementation, the mass coefficient may represent the inertial property of each joint of the robot and may be understood as the mass matrix in the robot dynamics model, used for describing the mass distribution of each joint of the robot. Specifically, the mass coefficient of each joint of the robot may be determined by the mass, the joint position, and the joint velocity of the corresponding joint, wherein the joint position of the robot may be determined by the position encoder of each joint, and the mass coefficient may vary with the joint position. The friction coefficient can be understood as the friction matrix in the dynamic model, indicating the friction force applied to each joint when the robot provides support; since the directions and magnitudes of the friction forces on the joints differ at different joint positions, the friction coefficients corresponding to the joints at different joint positions also differ. The gravity coefficient can be understood as the gravity matrix in the dynamic model and is used for representing the influence of gravity on each joint of the robot; the different directions of gravity applied to the joints at different joint positions influence the posture transformation of each joint. The first product is obtained by multiplying the mass coefficient by the joint acceleration of the corresponding joint and can represent the inertial force caused by the joint acceleration, namely the inertial force that each joint needs to counteract when it acts, so as to ensure that the joint responds in time when subjected to an external force. The robot is dynamically modeled through the sum of the first product, the friction coefficient, and the gravity coefficient, and a specific formula of the obtained dynamic model can be shown as follows:
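Under the definitions above, a plausible form of this model (the symbol F(q) for the friction coefficient matrix is introduced here only for illustration) is:

$$
\tau = M(q)\,\ddot{q} + F(q)\,\dot{q} + g(q)
$$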
where M(q) denotes the mass coefficient of the joint at joint position q, M(q)q̈ denotes the first product, q̈ denotes the joint acceleration of the joint at joint position q, q̇ denotes the joint velocity of the joint at joint position q, F(q) denotes the friction coefficient of the joint at joint position q, g(q) denotes the gravity coefficient of the joint at joint position q, and τ denotes the joint moment of the joint at joint position q.
Then, based on the dynamics model, the target joint acceleration generated under the action of external force can be calculated, and a specific calculation formula of the target joint acceleration can be shown as follows:
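Rearranging the dynamic model sketched above for the acceleration gives one plausible form, with the subscript d marking the target quantities (an illustrative naming choice, not taken from the original):

$$
\ddot{q}_d = M(q)^{-1}\left(\tau_d - F(q)\,\dot{q} - g(q)\right)
$$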
After the target joint acceleration is obtained, an adaptive control strategy can be generated through a proportional-integral-derivative controller, namely a PID controller, and the torque output of the corresponding joint is adjusted to respond to the external force, so that the movement of the robot is controlled based on the target joint acceleration.
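As a concrete illustration of this step, the following Python sketch computes the target joint acceleration from the target joint moment and the dynamic model, then uses a simple PID loop to adjust the joint torque command; all function and variable names (target_joint_acceleration, JointPID, the gains, the placeholder matrices) are assumptions for illustration, not the application's actual implementation.

```python
import numpy as np

def target_joint_acceleration(M, F, g, tau_target, dq):
    """Solve the dynamic model  tau = M(q)*ddq + F(q)*dq + g(q)  for ddq."""
    return np.linalg.solve(M, tau_target - F @ dq - g)

class JointPID:
    """Per-joint PID that turns an acceleration error into a torque adjustment."""
    def __init__(self, kp, ki, kd, n_joints):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = np.zeros(n_joints)
        self.prev_err = np.zeros(n_joints)

    def step(self, ddq_target, ddq_measured, dt):
        err = ddq_target - ddq_measured
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Example usage with placeholder values for a 6-joint arm
n = 6
M = np.eye(n)            # mass matrix M(q), placeholder
F = 0.1 * np.eye(n)      # friction coefficient matrix, placeholder
g = np.zeros(n)          # gravity term g(q), placeholder
tau_target = np.ones(n)  # target joint moment mapped from the tactile sensor
dq = np.zeros(n)         # current joint velocities

ddq_target = target_joint_acceleration(M, F, g, tau_target, dq)
pid = JointPID(kp=2.0, ki=0.1, kd=0.05, n_joints=n)
tau_adjust = pid.step(ddq_target, ddq_measured=np.zeros(n), dt=0.01)
```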
In one possible implementation, the haptic sensor includes a plurality of haptic units. A third transformation matrix for transforming the coordinate system in which the supporting robot is located into the coordinate system in which the joints of the supporting robot are located is determined, and a fourth transformation matrix for transforming the coordinate system in which the joints of the supporting robot are located into the coordinate system in which the haptic units are located is determined. Then, a conversion function between the coordinate system of each haptic unit and the coordinate system of the supporting robot is determined according to the third transformation matrix and the fourth transformation matrix, and the differentiation of the conversion function with respect to the joint positions of the supporting robot is determined to obtain the jacobian matrix corresponding to each haptic unit. Then, a second product between the external force data acquired by each haptic unit and the corresponding jacobian matrix is determined, and the target joint moment is obtained based on the sum of the second products.
In one possible implementation, it is assumed that the supporting robot needs to perform a supporting task, where the task space of the supporting task and the joint space form a nonlinear relationship: the coordinate points of the target joint positions required for performing the supporting task are located in the coordinate system where the supporting robot is located, while the coordinate points of the external force data received during the supporting task are located in the coordinate systems where the haptic units are located, so compliance control cannot be achieved by directly adjusting the target joint positions with the external force data. Therefore, a conversion function needs to be determined from the third transformation matrix and the fourth transformation matrix; the conversion function describes how to convert the coordinate system where each haptic unit is located into the coordinate system where the supporting robot is located. A differential operation is then performed on the conversion function with respect to the joint positions of the supporting robot, so that the two different coordinate systems can be associated and the jacobian matrix of the corresponding joint can be obtained, where the jacobian matrix of the corresponding joint may be represented by the following formula:
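Following the description in the next sentence (the partial derivative of the conversion function with respect to the joint position), a sketch of the formula is:

$$
J(q) = \frac{\partial f(q)}{\partial q}
$$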
wherein f denotes the conversion function and J(q) denotes the jacobian matrix of the joint at joint position q; the jacobian matrix of the corresponding joint is obtained by taking the partial derivative of the conversion function f with respect to the joint position q.
The differentiation of the conversion function with respect to the joint positions of the supporting robot is taken as the jacobian matrix of each haptic unit, so as to represent the influence of the magnitude and direction of the external force received by each haptic unit on different joint positions. Specifically, assuming that the direction of the rotation axis of each joint i is represented by a unit vector z_ri and the position of the origin of the coordinate system of each joint is represented by a vector p_i, the jacobian matrix J_i(q) of joint i can be represented as:
where (x_b, y_b) denotes the coordinate system where the supporting robot is located, q_k denotes the k-th joint with 1 ≤ k ≤ n, and n denotes the total number of joints of the supporting robot, namely the number of joints of the mechanical arm plus the number of joints of the base.
Because each joint of the robot rotates around the z-axis, the pose transformation matrix T_i of each joint i may be:
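One plausible form, using the standard homogeneous transform for a rotation about the z-axis, with (x_i, y_i, z_i) standing for an assumed translation of the joint origin, is:

$$
T_i = \begin{bmatrix}
\cos q_i & -\sin q_i & 0 & x_i \\
\sin q_i & \cos q_i & 0 & y_i \\
0 & 0 & 1 & z_i \\
0 & 0 & 0 & 1
\end{bmatrix}
$$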
The pose transformation matrix of the joint corresponding to each haptic unit is substituted into the jacobian matrix of the corresponding joint to obtain the jacobian matrix corresponding to each haptic unit. Then, the external force data acquired by each haptic unit is multiplied by the corresponding jacobian matrix to obtain a second product; that is, the jacobian matrix of each haptic unit is used to convert the external force data received by the corresponding haptic unit onto the whole-body joints, so the second product can represent the force transmitted by each haptic unit to the whole-body joints. Further, the sum of the second products can represent the force applied by the outside (namely, the target object) when interacting with the mechanical arm during the supporting process, which is thus mapped to the target joint moment, where the calculation formula of the target joint moment τ can be shown as follows:
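A sketch consistent with this description, with J_{i,j}(q) and F_{i,j} used as illustrative names for the jacobian matrix and the external force data of haptic unit j on joint i (the transpose reflects the standard force-to-torque mapping and is an assumption here), is:

$$
\tau = \sum_{i}\sum_{j} J_{i,j}(q)^{\top} F_{i,j}
$$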
where J_{i,j}(q) denotes the jacobian matrix of haptic unit j on robot arm joint i, and F_{i,j} denotes the external force data of haptic unit j on robot arm joint i; in particular, since the external force f_z is applied along the z-axis direction, F_{i,j} can be expressed in the following form:
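Under the stated assumption that only the z-axis component f_z acts, and treating the external force data as a three-dimensional force vector (an assumption made here for illustration), a sketch is:

$$
F_{i,j} = \begin{bmatrix} 0 \\ 0 \\ f_z \end{bmatrix}
$$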
In one possible implementation, the joints of the supporting robot include the sub-arms and joint components of the mechanical arm as well as the base joints of the supporting robot, so that compliance control can be performed separately on the joint moment and joint acceleration of each sub-arm and joint component of the mechanical arm. When the supporting robot is a wheeled robot, the base joints of the supporting robot may include omni-wheels installed on the base, where the joint variables and joint moment of an omni-wheel may be represented by quantities such as the rotation angle of the omni-wheel, the angular velocity and angular acceleration of the omni-wheel, and the torque of the motor driving the omni-wheel; compliance control can then be realized by adjusting the rotation direction, rotation speed, rotation acceleration, and the like of the omni-wheels so as to comply with the movement trend of the supported object. When the supporting robot is a legged robot, the base joints of the supporting robot may include ankle joints, knee joints, hip joints, and the like, and compliance control can be realized by controlling the joint moments and joint accelerations of the ankle joints, knee joints, and hip joints respectively.
In one possible implementation, to determine the fourth transformation matrix for transforming the coordinate system in which the joints of the supporting robot are located into the coordinate system in which the haptic units are located, each haptic unit of the haptic sensor needs to be kinematically modeled with respect to its position on the mechanical arm, establishing a relationship between the haptic units and the kinematic chain of the mechanical arm. Each joint of the mechanical arm can be regarded approximately as a cylinder, so the plurality of haptic units of the haptic sensor can be regarded as being distributed cylindrically on the surface of the mechanical arm, with the first distance between any two adjacent haptic units being equal, where the first distance is the distance between two adjacent haptic units along the length direction of the mechanical arm linkage. Meanwhile, the orientation of the coordinate system where a haptic unit is located relative to the coordinate system where the supporting robot is located is the same as the orientation of the coordinate system where the joint corresponding to that haptic unit is located relative to the coordinate system where the supporting robot is located. Therefore, the distance between each haptic unit and the starting end of the joint where it is located can be determined based on the first distance corresponding to the haptic unit, and the fourth transformation matrix for transforming the coordinate system where the joints of the supporting robot are located to the coordinate system where the haptic units are located can then be determined.
Referring to fig. 11, fig. 11 is a schematic diagram illustrating the transformation of the coordinate system of a joint of the supporting robot to the coordinate system of a haptic unit according to an embodiment of the present application. Assume that the first distance is 0.015 m. As shown in fig. 11, the mechanical arm may be regarded as a cylinder. Since the first distance between any two adjacent haptic units is equal, the plurality of haptic units cylindrically distributed on the surface of the mechanical arm may equivalently be divided along the height of the cylinder to obtain a plurality of circles, where the distance between two adjacent circles is fixed at 0.015 m and the haptic units on the same circle may be modeled as the same unit. Therefore, based on the first distance of a haptic unit and the position of the haptic unit (the circle where it is located) on the mechanical arm (the cylinder), the conversion relationship between the haptic unit and the joint where it is located may be obtained, and the fourth transformation matrix may then be obtained, where the fourth transformation matrix may be expressed as:
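One plausible form, assuming a pure translation of 0.015 × j along the joint's z-axis with the orientation unchanged (the choice of axis is an assumption made here for illustration), is:

$$
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0.015 \times j \\
0 & 0 & 0 & 1
\end{bmatrix}
$$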
As shown in FIG. 11, t_i can be understood as the origin of the coordinate system of the joint i where the haptic unit is located, the origin of the coordinate system of haptic unit j on joint i lies at a distance of (0.015 × j) from the origin of the coordinate system of the joint i where it is located, and the resulting matrix is the fourth transformation matrix for transforming the coordinate system in which the joints of the supporting robot are located into the coordinate system in which the haptic units are located.
In one possible implementation, referring to fig. 12, fig. 12 is a schematic flow chart of compliance control provided by an embodiment of the present application. Whole-body compliance control based on the touch sensor can be realized as follows: first, each haptic unit of the touch sensor is kinematically modeled with respect to its position on the mechanical arm; then, dynamic modeling is performed on the supporting robot to construct the dynamic model; the jacobian matrix of each haptic unit is then calculated based on the haptic-unit modeling and the dynamic model of the supporting robot, so that the external force data received by the touch sensor can be mapped, through the jacobian matrix conversion, to the force conditions of the whole-body joints of the mechanical arm and the supporting robot, and the joint accelerations corresponding to the whole-body joints of the supporting robot can be obtained for compliance control.
In one possible implementation, based on the perceived supporting requirement of the target object, the supporting robot can be controlled to follow the target object to execute the supporting task. For example, when the intention of the target object to walk with support is perceived, the supporting robot can update the target moving position in real time based on the relative distance between the pose sensor and the target object and move toward the target object based on the updated target moving position, so that the supporting robot follows the target object while executing the supporting task. In the process of the supporting robot moving along with the target object, obstacles can be detected by the target sensor. When an obstacle is identified within a preset angle range, the outline of the obstacle can be determined by performing feature extraction on the data obtained by the target sensor, and the target size of the obstacle can then be calculated. Specifically, environmental data of the surrounding environment can be obtained by a pose sensor (such as a laser radar), the environmental data can be subjected to point cloud segmentation to identify key features such as obstacles and the ground, and the segmented point cloud data are then clustered to form obstacle clusters, from which the size of the obstacle can be further analyzed. Alternatively, a depth image of the current environment can be obtained by a vision sensor (such as a depth camera), the distance data of the obstacle can be determined by calculating the depth of the depth image, and the distance data can then be recognized and estimated by a pre-trained neural network model. More accurate environmental data can also be obtained by combining the data of multiple sensors; for example, the data obtained by the pose sensor and the vision sensor can be fed into a filter such as a Kalman filter or a particle filter, and the multi-source data can be fused to determine the size of the obstacle. Then, the target size of the obstacle is compared with a preset size range. If the target size of the obstacle exceeds the preset size range, it indicates that the obstacle may interfere with the moving path of the supporting robot and affect the safety of supporting, and therefore the supporting robot needs to be controlled to avoid the obstacle.
In one possible implementation, the second distance between the supporting robot and the obstacle can be detected through target sensors such as a pose sensor and/or a vision sensor arranged on the supporting robot. When the second distance is smaller than or equal to a preset distance threshold, the obstacle can be considered to affect the moving path of the supporting robot and the target object, so the supporting robot can be controlled to avoid the obstacle, and at the same time the mechanical arm of the supporting robot can be controlled to provide information feedback to the target object (such as applying an external force, issuing an audible prompt, and the like) to guide the target object to avoid the obstacle.
In one possible implementation, referring to fig. 13, fig. 13 is a schematic diagram of the supporting robot avoiding an obstacle according to an embodiment of the present application. When the obstacle is considered to affect the moving path of the supporting robot, a virtual space can be created, and the sensor data perceived by the target sensor of the supporting robot is mapped into the virtual space, so that the obstacle position of the obstacle and the end position of the moving path of the supporting robot can be represented in the virtual space, where the obstacle and the end position can each be modeled as a sphere. The first force vector of the obstacle, i.e. the repulsive force of the obstacle, can be determined based on the obstacle position and the obstacle mass corresponding to the obstacle; the first force vector is used for guiding the supporting robot to move away from the obstacle position. The second force vector of the end position is then determined based on the end position and its corresponding preset mass; the second force vector is used for guiding the supporting robot to move toward the end position. The first force vector decreases with increasing second distance, and the second force vector decreases with increasing distance between the current position of the supporting robot and the end position. Then, the first force vector and the second force vector are combined to obtain a target force vector; specifically, the two vectors can be added or weighted to obtain the target force vector, and the speed and direction of the supporting robot can be controlled according to the target force vector, so that the supporting robot can move to the set end position while avoiding the obstacle.
When the second distances between a plurality of obstacles and the supporting robot are all smaller than or equal to the preset distance threshold, the obstacle positions and obstacle masses corresponding to the plurality of obstacles can be determined in the virtual space, the first force vector corresponding to each obstacle is obtained respectively, and then all the first force vectors and the second force vector can be combined to obtain the target force vector.
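As an illustration of how the first and second force vectors might be combined, the following Python sketch implements a simple artificial-potential-field step; the specific force laws (inverse-square repulsion scaled by the obstacle mass, attraction scaled by a preset mass) and all names are assumptions for illustration rather than the application's actual formulas.

```python
import numpy as np

def repulsive_force(robot_pos, obstacle_pos, obstacle_mass, eps=1e-6):
    """First force vector: pushes the robot away; weakens as distance grows."""
    diff = robot_pos - obstacle_pos
    dist = np.linalg.norm(diff) + eps
    return obstacle_mass * diff / dist**3          # points away from the obstacle

def attractive_force(robot_pos, end_pos, preset_mass, eps=1e-6):
    """Second force vector: pulls the robot toward the set end position."""
    diff = end_pos - robot_pos
    dist = np.linalg.norm(diff) + eps
    return preset_mass * diff / dist**3            # weakens with distance, per the text

def target_force(robot_pos, obstacles, end_pos, preset_mass, w_rep=1.0, w_att=1.0):
    """Weighted synthesis of all first force vectors and the second force vector."""
    total = w_att * attractive_force(robot_pos, end_pos, preset_mass)
    for obs_pos, obs_mass in obstacles:
        total += w_rep * repulsive_force(robot_pos, obs_pos, obs_mass)
    return total

# Example: one obstacle between the robot and the end position
robot = np.array([0.0, 0.0])
obstacles = [(np.array([1.0, 0.2]), 2.0)]
goal = np.array([3.0, 0.0])
print(target_force(robot, obstacles, goal, preset_mass=1.5))
```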
In one possible implementation, during the process of supporting the target object, the target sensor can be used to detect changes in the surrounding environment of the supporting robot in real time, so as to respond to sudden events such as environmental changes and ensure the safety of the user. For example, when the supporting robot supports the target object across a zebra crossing, the supporting robot can perceive and identify dynamic obstacles such as vehicles and pedestrians and static obstacles such as road shoulders and flower beds in the surrounding environment, and guide the target object to adjust the walking speed, pause walking, or change the walking path so as to avoid a potential collision.
In one possible implementation, the touch sensor includes a plurality of haptic units, which can be uniformly distributed on the surface of the mechanical arm. The touch sensor can collect external force data in real time, and, based on the detection principles of different touch sensors, whether the touch sensor is triggered can be judged through resistance change, capacitance change, the piezoelectric effect, and the like. When an external force is applied to the touch sensor, the electric signals of the haptic units change, and these signals can be converted into identifiable external force data; meanwhile, the haptic units can be analyzed to determine the activation quantity of the haptic units. Specifically, whether a haptic unit is in the activated state can be judged by setting a sensitivity or activation threshold: when the applied external force exceeds the sensitivity or activation threshold of a haptic unit, that haptic unit is considered to be in the activated state. When an external force is applied to the touch sensor, external force data can be continuously acquired through the touch sensor. When the detected external force data indicates that the currently applied external force is greater than or equal to an external force threshold and the activation quantity is greater than a preset quantity threshold, it can be considered that the target object is currently performing large-area force interaction with the mechanical arm, so the target object can be considered to be supported by the supporting robot at present, or an emergency such as the center of gravity of the target object becoming unstable may be occurring. Therefore, compliance control can be performed on the supporting robot, and the target joint position and target joint acceleration of the mechanical arm can be adjusted to conform to the posture of the target object.
In one possible implementation, when the external force data indicates that the external force applied to the touch sensor is greater than or equal to the external force threshold but the activation quantity is smaller than the preset quantity threshold, it can be considered that the target object is interacting with the mechanical arm but the stability of the support is low at this time; if the current posture of the supporting robot were changed, the supporting balance of the target object would easily be lost. Therefore, the supporting robot can be controlled to move to the target moving position, or the mechanical arm can be controlled to move to the target joint position, according to a preset control strategy, without performing compliance control on the supporting robot.
In one possible implementation, when the external force data indicates that the variation of the external force on the touch sensor within a first preset duration is greater than or equal to a first external force variation threshold, and the variation of the external force within a second preset duration after the first preset duration is smaller than a second external force variation threshold, it can be considered that the target object has established force interaction with the mechanical arm and that the current supporting posture of the supporting robot and the posture of the target object are stable, so compliance control can be performed on the supporting robot.
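The decision logic described in the preceding paragraphs can be summarized with the following Python sketch; the threshold values and all names are placeholders chosen for illustration, not values from the application.

```python
from dataclasses import dataclass

@dataclass
class TactileFrame:
    force: float            # magnitude of the external force currently sensed
    activated_units: int    # number of haptic units above their activation threshold

FORCE_THRESHOLD = 10.0      # external force threshold (placeholder value)
COUNT_THRESHOLD = 8         # preset quantity threshold (placeholder value)

def control_mode(frame: TactileFrame) -> str:
    """Choose between compliance control and the preset control strategy."""
    if frame.force >= FORCE_THRESHOLD and frame.activated_units > COUNT_THRESHOLD:
        # Large-area force interaction: the target object is leaning on the arm,
        # so comply with the target object's posture.
        return "compliance_control"
    if frame.force >= FORCE_THRESHOLD:
        # Strong but localized contact: support is not yet stable, keep the
        # preset trajectory toward the target moving/joint position.
        return "preset_control"
    return "preset_control"

print(control_mode(TactileFrame(force=12.5, activated_units=11)))  # compliance_control
```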
The following describes in detail a control method of the robot according to the embodiment of the present application.
Referring to fig. 14, fig. 14 is an optional overall flowchart of a control method provided in an embodiment of the present application, where the control method may be performed by a terminal, and the control method includes, but is not limited to, the following steps 1401 to 1410:
Step 1401, in response to identifying that the target object has an intention to be supported, acquiring first pose data and a first covariance matrix, wherein the first pose data is obtained by prediction from the pose sensor at the previous moment (a sketch of steps 1401 to 1404 is given after step 1410).
Step 1402, predicting current second pose data based on the first pose data, and predicting current second covariance matrix based on the first covariance matrix.
Step 1403, acquiring measurement pose data acquired by a pose sensor currently, and determining a target gain based on the measurement pose data and a second covariance matrix.
Step 1404, correcting the second pose data based on the target gain to obtain current target pose data of the pose sensor.
And 1405, transforming the target pose data based on a preset first transformation matrix to obtain a target moving position.
Step 1406, controlling movement of the robot to the target object based on the target movement location.
Step 1407, when the robot is moved to the target moving position, current image data of the vision sensor is acquired, and key point positions of a plurality of key points of the target object in the image data are determined.
And 1408, converting the coordinate system where the key points are located to the coordinate system where the robot is located, and determining the central point positions of the key points based on the converted key point positions.
And 1409, transforming the center point position based on a preset second transformation matrix to obtain a target joint position, and controlling the mechanical arm to act based on the target joint position.
In this step, the target movement position and the target joint position both vary following the relative position between the target object and the target sensor;
and 1410, performing compliance control on the support robot when the external force is applied to the touch sensor during or after the mechanical arm moves to the target joint position.
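Steps 1401 to 1404 follow the familiar predict–correct pattern. The following Python sketch illustrates one way such a pose update could look, using a constant-position model; the class and attribute names (PoseFilter, F, H, Q, R) and all numeric values are assumptions for illustration rather than the application's actual implementation.

```python
import numpy as np

class PoseFilter:
    """Minimal predict/correct loop over 2-D pose data (x, y)."""
    def __init__(self, pose, cov):
        self.pose = pose            # first pose data (previous estimate)
        self.cov = cov              # first covariance matrix
        self.F = np.eye(2)          # state transition (constant-position model)
        self.H = np.eye(2)          # measurement model
        self.Q = 0.01 * np.eye(2)   # process noise
        self.R = 0.10 * np.eye(2)   # measurement noise

    def predict(self):
        # Steps 1401-1402: predict second pose data and second covariance matrix
        self.pose = self.F @ self.pose
        self.cov = self.F @ self.cov @ self.F.T + self.Q
        return self.pose, self.cov

    def correct(self, measured_pose):
        # Step 1403: target gain from the measurement and the second covariance
        S = self.H @ self.cov @ self.H.T + self.R
        K = self.cov @ self.H.T @ np.linalg.inv(S)
        # Step 1404: correct the second pose data to get the target pose data
        self.pose = self.pose + K @ (measured_pose - self.H @ self.pose)
        self.cov = (np.eye(2) - K @ self.H) @ self.cov
        return self.pose

f = PoseFilter(pose=np.array([0.0, 0.0]), cov=np.eye(2))
f.predict()
target_pose = f.correct(measured_pose=np.array([0.4, -0.1]))
```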
Referring to fig. 15, fig. 15 is an optional overall flowchart of a control method provided in an embodiment of the present application, where the control method may be performed by a terminal, and the control method includes, but is not limited to, the following steps 1501 to 1512:
Step 1501, controlling the support robot to move toward the target object in response to identifying that the target object has an intention to support.
Step 1502, when the robot is moved to the target movement position, the robot arm is controlled to move to the target joint position.
In this step, the target movement position and the target joint position both vary following the relative position between the target object and the target sensor.
In step 1503, when the haptic sensor is applied with an external force during or after the movement of the robot arm to the target joint position, a mass coefficient is determined according to the mass of the robot, the joint position of the robot, and the joint speed of the robot.
Step 1504, determining friction coefficient according to the friction force of the robot and the joint position of the robot.
Step 1505, determining the gravity coefficient according to the gravity acceleration of the supporting robot and the joint position of the supporting robot.
Step 1506, determining a first product between the mass coefficient and the joint acceleration of the robot, and performing dynamic modeling on the robot according to the sum of the first product, the friction coefficient, and the gravity coefficient to obtain a dynamic model of the robot.
In this step, the kinetic model is used to indicate the relationship between the joint moment of the robot and the joint acceleration of the robot.
Step 1507, obtaining the current external force data of the touch sensor.
Step 1508, determining a third transformation matrix for transforming the coordinate system in which the robot is located into the coordinate system in which the joints of the robot are located.
Step 1509, determining a fourth transformation matrix for transforming the coordinate system in which the joint of the robot is located to the coordinate system in which the haptic unit is located, based on the first distance corresponding to the haptic unit.
In this step, the plurality of haptic units are cylindrically distributed on the robot arm, and the first distances between any two adjacent haptic units are equal.
And 1510, determining a conversion function between the coordinate system where each haptic unit is located and the coordinate system where the supporting robot is located according to the third transformation matrix and the fourth transformation matrix, and determining the differentiation of the conversion function on the joint position of the supporting robot to obtain the jacobian matrix corresponding to each haptic unit.
Step 1511, determining a second product between the external force data acquired by each haptic unit and the corresponding jacobian matrix, and obtaining the target joint moment based on the sum of the second products.
And 1512, determining the target joint acceleration of the robot based on the target joint moment and the dynamics model, and performing flexible control on the robot based on the target joint acceleration.
Referring to fig. 16, fig. 16 is an optional overall flowchart of a control method provided in an embodiment of the present application, where the control method may be performed by a terminal, and the control method includes, but is not limited to, the following steps 1601 to 1609:
step 1601, in response to identifying that the target object has an intent to support, controlling the support robot to move toward the target object.
Step 1602, detecting a target size of the obstacle when the obstacle is identified within a predetermined angular range during the course of the robot following the target object.
Step 1603, detecting a second distance between the robot and the obstacle when the target size is outside of the predetermined size range.
Step 1604, constructing a virtual space where the robot is located when the second distance is less than or equal to a preset distance threshold.
Step 1605, determining an obstacle position of the obstacle in the virtual space and a set end position of the robot.
Step 1606, determining a first force vector for the obstacle based on the obstacle position and the obstacle mass for the obstacle, and determining a second force vector for the end position based on the end position and a preset mass for the end position.
Step 1607, synthesizing the first force vector and the second force vector to obtain a target force vector, and controlling the robot to avoid the obstacle according to the target force vector.
Step 1608, when the robot is moved to the target movement position, controlling the mechanical arm to move to the target joint position.
In this step, the target movement position and the target joint position both vary following the relative position between the target object and the target sensor.
Step 1609, performing compliant control on the robot when the haptic sensor is applied with external force during or after the movement of the manipulator to the target joint position.
It will be appreciated that, although the steps in the flowcharts described above are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated in the present embodiment, the order of execution of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts described above may include multiple steps or stages that are not necessarily performed at the same time but may be performed at different times, and the order of execution of these steps or stages is not necessarily sequential; they may be performed in turn or alternately with at least a portion of the steps or stages in other steps.
Referring to fig. 17, fig. 17 is a schematic diagram of an alternative configuration of a supporting robot 1700 provided in an embodiment of the present application, the supporting robot 1700 comprising:
A robot arm 1701 provided with a tactile sensor;
a target sensor 1702 for object perception;
A control module 1703 for controlling the supporting robot 1700 to move to the target object in response to recognizing that the target object has an intention to be supported, controlling the mechanical arm 1701 to move to the target joint position when the supporting robot 1700 moves to the target movement position, and performing compliance control on the supporting robot 1700 when an external force is applied to the tactile sensor during or after the movement of the mechanical arm 1701 to the target joint position, wherein the target movement position and the target joint position both vary following the relative position between the target object and the target sensor 1702.
In one possible implementation, the control module 1703 is further configured to:
Acquiring current target pose data of a pose sensor, and transforming the target pose data based on a preset first transformation matrix to obtain a target moving position;
support robot 1700 is controlled to move toward the target object based on the target movement position.
In one possible implementation, the control module 1703 is further configured to:
acquiring first pose data and a first covariance matrix, which are obtained by predicting a pose sensor at the previous moment, predicting current second pose data based on the first pose data, and predicting current second covariance matrix based on the first covariance matrix;
acquiring measurement pose data acquired by a pose sensor currently, and determining a target gain based on the measurement pose data and a second covariance matrix;
and correcting the second pose data based on the target gain to obtain the current target pose data of the pose sensor.
In one possible implementation, the control module 1703 is further configured to:
Acquiring current image data of a vision sensor, and determining key point positions of a plurality of key points of a target object in the image data;
converting the coordinate system where the key points are located to the coordinate system where the supporting robot 1700 is located, and determining the central point positions of the key points based on the converted key point positions;
And transforming the center point position based on a preset second transformation matrix to obtain a target joint position, and controlling the mechanical arm 1701 to act based on the target joint position.
In one possible implementation, the control module 1703 is further configured to:
calibrating parameters of the vision sensor to obtain an internal reference matrix of the vision sensor;
determining the installation position of the vision sensor in the support robot 1700, and determining an external parameter matrix of the vision sensor according to the installation position;
and converting the coordinate system where the key point is located into the coordinate system where the target moving position is located based on the internal reference matrix and the external reference matrix.
In one possible implementation, the control module 1703 is further configured to:
And carrying out posture recognition on the target object according to each key point, wherein the posture recognition result includes that the target object has an intention of requiring support or that the target object does not have an intention of requiring support.
In one possible implementation, the control module 1703 is further configured to:
Performing dynamic modeling on the supporting robot 1700 to obtain a dynamic model of the supporting robot 1700, wherein the dynamic model is used for indicating the relation between the joint moment of the supporting robot 1700 and the joint acceleration of the supporting robot 1700;
acquiring current external force data of the touch sensor, and converting the external force data to obtain a target joint moment;
and determining the target joint acceleration of the robot 1700 according to the target joint moment and the dynamics model, and performing flexible control on the robot 1700 based on the target joint acceleration.
In one possible implementation, the control module 1703 is further configured to:
determining a mass coefficient according to the mass of the support robot 1700, the joint position of the support robot 1700, and the joint speed of the support robot 1700;
Determining a friction coefficient according to the friction force of the supporting robot 1700 during the action and the joint position of the supporting robot 1700;
Determining a gravity coefficient according to the gravity acceleration of the supporting robot 1700 and the joint position of the supporting robot 1700;
A first product between the mass coefficient and joint acceleration of the support robot 1700 is determined, and the support robot 1700 is kinematically modeled based on a sum of the first product, the friction coefficient, and the gravity coefficient to obtain a kinetic model of the support robot 1700.
In one possible implementation, the control module 1703 is further configured to:
Determining a third transformation matrix for converting the coordinate system in which the supporting robot 1700 is located into the coordinate system in which the joints of the supporting robot 1700 are located, and determining a fourth transformation matrix for converting the coordinate system in which the joints of the supporting robot 1700 are located into the coordinate system in which the haptic units are located;
According to the third transformation matrix and the fourth transformation matrix, determining a conversion function between a coordinate system where each haptic unit is located and a coordinate system where the supporting robot 1700 is located, and determining differentiation of the conversion function on the joint position of the supporting robot 1700 to obtain a jacobian matrix corresponding to each haptic unit;
and determining a second product between the external force data acquired by each haptic unit and the corresponding jacobian matrix, and obtaining the target joint moment based on the sum of the second products.
In one possible implementation, the plurality of haptic units are cylindrically distributed on the mechanical arm 1701, and the first distance between any two adjacent haptic units is equal, and the control module 1703 is further configured to:
a fourth transformation matrix for transforming the coordinate system in which the joints of the robotics 1700 are located to the coordinate system in which the haptic elements are located is determined based on the first distances corresponding to the haptic elements.
In one possible implementation, the control module 1703 is further configured to:
detecting a target size of an obstacle when the obstacle is recognized within a preset angle range in the process of following the target object by the robot 1700;
when the target size is outside the preset size range, robot 1700 is controlled to avoid the obstacle.
In one possible implementation, the control module 1703 is further configured to:
detecting a second distance between the supporting robot 1700 and the obstacle, and constructing a virtual space where the supporting robot 1700 is located when the second distance is smaller than or equal to a preset distance threshold;
determining an obstacle position of an obstacle in the virtual space and an end position set by the robot 1700;
Determining a first force vector of the obstacle according to the obstacle position and the obstacle mass of the obstacle, and determining a second force vector of the end position according to the end position and the preset mass of the end position;
The first force vector and the second force vector are combined to obtain a target force vector, and the support robot 1700 is controlled to avoid the obstacle according to the target force vector.
In one possible implementation, the tactile sensor is provided with a plurality of tactile units, and the control module 1703 is further configured to:
when external force is applied to the touch sensor, current external force data of the touch sensor are obtained, and the activation quantity of the touch units is determined;
When the external force data indicates that the external force applied to the touch sensor is greater than or equal to the external force threshold and the activation number is greater than or equal to a preset number threshold, compliance control is performed on the support robot 1700.
In one possible implementation, referring to fig. 18, fig. 18 is a schematic diagram of an alternative configuration of a supporting robot 1700 provided in accordance with an embodiment of the present application. As shown in fig. 18, the supporting robot 1700 includes a main trunk 1801, and the mechanical arm 1701 includes a first sub-arm 1802 and a second sub-arm 1803; one end of the first sub-arm 1802 is movably connected to the main trunk 1801, the other end of the first sub-arm 1802 is movably connected to the second sub-arm 1803, and the first sub-arm 1802 and the second sub-arm 1803 are each provided with a touch sensor. In other words, the first sub-arm 1802 can move relative to the main trunk 1801, and the second sub-arm 1803 can move relative to the first sub-arm 1802; the first sub-arm 1802 can serve as the main part of the mechanical arm 1701 and bear a larger force relative to the second sub-arm 1803 so as to support and assist the user, while the second sub-arm 1803 can serve as an extension of the first sub-arm 1802 to perform more complex and fine actions, such as fine-tuning the supporting force or direction. In addition, the link length of the second sub-arm 1803 may be less than the link length of the first sub-arm 1802, thereby reducing the motion inertia of the second sub-arm 1803 for more accurate motion control. Therefore, through the multistage movable connection of the first sub-arm 1802 and the second sub-arm 1803, a wider movement range of the mechanical arm 1701 can be provided, the functions of human limbs can be better simulated, the movement of the target object can be assisted, and support and protection can be provided during assistance. It should be noted that touch sensors may be respectively disposed on the first sub-arm 1802 and the second sub-arm 1803, so that more comprehensive external force data can be obtained and the posture of the target object can be recognized more accurately, so as to perform compliance control on the supporting robot 1700.
In one possible implementation, the first sub-arm 1802 has a first joint member 1804 and a second joint member 1805 connected to both ends thereof, the first joint member 1804 is connected to the third joint member 1806 through a first connecting shaft, the third joint member 1806 is connected to the main trunk 1801 through a second connecting shaft, one end of the second sub-arm 1803 is connected to the second joint member 1805 through a third connecting shaft, the other end of the second sub-arm 1803 is connected to the fourth joint member 1807 through a fourth connecting shaft, the fourth joint member 1807 is connected to the fifth joint member 1808 through a fifth connecting shaft, the fifth joint member 1808 is connected to the sixth joint member 1809 through a sixth connecting shaft, and the sixth joint member 1809 is connected to the end effector 1810. As shown in fig. 18, arm 1701 of support robot 1700 may include 6 joint components to achieve 6 degrees of freedom, wherein third joint component 1806 may rotate about a second connection axis relative to main torso 1801, and first joint component 1804 may rotate about a first connection axis relative to third joint component 1806, such that first sub-arm 1802 and second joint component 1805 may move in synchronization relative to third joint component 1806. While the second sub-arm 1803 may rotate about a third connection axis relative to the second joint member 1805, the fourth joint member 1807 may rotate about a fourth connection axis relative to the second sub-arm 1803, the fifth joint member 1808 may rotate about a fifth connection axis relative to the fourth joint member 1807, and the sixth joint member 1809 may rotate about a sixth connection axis relative to the fifth joint member 1808.
In one possible implementation, the sixth joint component 1809 is coupled to an end effector 1810, and the end effector 1810 is removably mounted to the sixth joint component 1809. The end effector 1810 may be a robotic gripper to help the user carry items such as a weight, a backpack, or a crutch; the end effector 1810 may be a safety restraint device, such as a safety belt, to improve stability and safety during movement of the supported object; the end effector 1810 may be a support handle or armrest to facilitate grasping by the target object and provide stable support; or the end effector 1810 may be a task controller for setting supporting tasks. The task controller may be communicatively coupled to the control module 1703, and the control module 1703 may control the supporting robot 1700 based on the supporting task triggered and set by the task controller. For example, if the original supporting task is to support the target object while walking, then when the target object reaches a designated position (such as next to a seat), the task controller of the end effector 1810 may be triggered to reset the supporting task, so that the control module 1703 modifies the supporting task and controls the supporting robot 1700 to support the target object while sitting down on the seat.
In addition, the supporting robot 1700 may be provided with a plurality of mechanical arms 1701. As shown in fig. 18, the supporting robot 1700 may be provided with two mechanical arms 1701, and in order to reduce collisions between the plurality of mechanical arms 1701 and achieve adjustment in different directions, the mechanical arms 1701 may be respectively mounted on the two sides of the main trunk 1801.
As shown in fig. 18, the robot 1700 further includes a vision sensor 1811, a pose sensor, and a base 1812 for movement, the vision sensor 1811 may be installed in the front side of the main trunk 1801, and the vision sensor 1811 may be located between the two side arms 1701, so that a wider angle of view can be obtained to capture image data to recognize the pose of the human body. The pose sensors may be mounted on the front side of the base 1812 to sense the current environmental conditions of the robot 1700, such as the position of the target object and the position of the obstacle, and when the number of the pose sensors is plural, the pose sensors may be respectively disposed on the peripheral side walls of the base 1812, so as to improve the sensing range and obtain more accurate environmental data.
Fig. 19 is a schematic view of another perspective of a supporting robot 1700 according to an embodiment of the present application. A pose sensor, a control cabinet body 1901, an antenna 1902, a base display screen 1905 for interactively displaying the state of the base 1812, and a control display screen 1906 for interactively displaying the state of the mechanical arm 1701 can be arranged in the base 1812, wherein the base display screen 1905 and the control display screen 1906 are both arranged on the outer side of the control cabinet body 1901, facilitating interaction with the user. The pose sensor may include a three-dimensional lidar 1903 for detecting obstacles and a two-dimensional lidar 1904 for detecting the target object position and the obstacle position; the two-dimensional lidar 1904 may be mounted on a side wall of the base 1812, the three-dimensional lidar 1903 may be mounted above the control cabinet body 1901, and the antenna 1902 may be mounted on the side of the base 1812 near the control cabinet body 1901. As shown in fig. 19, the base 1812 may be a mecanum mobile platform, and the joints of the lower limb portion of the supporting robot 1700 may be the articulating mechanism of the omni-wheels 1813 and the base 1812. The main trunk 1801 may be located at the front side of the control cabinet body 1901; since the weight of the mechanical arm 1701 and the main trunk 1801 shifts the center of gravity of the supporting robot 1700 forward, the control cabinet body 1901 may be placed at the rear side of the main trunk 1801, which is equivalent to adding a counterweight to the base 1812, so as to balance the center of gravity of the supporting robot 1700 and improve the stability of its movement.
As shown in fig. 20, fig. 20 is a schematic diagram illustrating an internal structure of a control cabinet 1901 of a robot 1700 according to an embodiment of the present application. The robot 1700 further includes a switch 2001, a battery 2002 for supplying power, and an inverter 2003 for converting the voltage of the battery 2002, which are connected to the antenna 1902 and the control module 1703, respectively, and the control module 1703, the switch 2001, the battery 2002, and the inverter 2003 are installed in the control cabinet 1901, and the control module 1703 may interact with a server through the switch 2001 and the antenna 1902, for example, upload sensor data to the server, or download an update algorithm program from the server.
As shown in fig. 21, fig. 21 is a schematic diagram illustrating the structural connection of a supporting robot 1700 according to an embodiment of the present application. The battery 2002 is connected to the inverter 2003 so that the 48V voltage of the battery 2002 is converted into a 220V AC voltage to power the electric loads of the supporting robot 1700. The control module 1703 may be communicatively connected with the touch sensor, the pose sensor (including the three-dimensional lidar 1903 and the two-dimensional lidar 1904), the vision sensor 1811, the mechanical arm control cabinet 2101, the base display screen 1905, and the control display screen 1906, so that the control module 1703 may acquire the data of each sensor and the interactive data of each display screen; the control module 1703 may then generate control instructions based on the acquired data and send them to the mechanical arm control cabinet 2101, so as to control the mechanical arm 1701 and the base 1812 through the mechanical arm control cabinet 2101. Specifically, the control module 1703 may establish connections with the components using different data transmission modes; for example, the control module 1703 may perform data transmission with the mechanical arm 1701 and the base 1812 through the TCP/IP protocol, may perform data transmission with the touch sensor and the vision sensor 1811 through the USB 3.0 transmission protocol, and may further perform data transmission with the pose sensor, the base display screen 1905, and the control display screen 1906 through the HDMI transmission protocol.
The control module 1703 may be an industrial personal computer configured to perform the control method of the foregoing embodiments: in response to identifying that the target object has an intention to be supported, the supporting robot is controlled to move to the target object; when the supporting robot moves to the target moving position, the mechanical arm is controlled to move to the target joint position, where the target moving position and the target joint position both vary following the relative position between the target object and the target sensor, so that the state of the target object can be sensed and the supporting task can be automatically performed according to that state. On this basis, when an external force is applied to the haptic sensor during or after the movement of the mechanical arm to the target joint position, compliance control is performed on the supporting robot so that the supporting robot can conform to the posture of the target object, thereby improving the comfort of support.
The embodiment of the application also provides electronic equipment, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the control method of each embodiment when executing the computer program.
The embodiments of the present application also provide a computer-readable storage medium storing a computer program for executing the control method of the foregoing embodiments.
Embodiments of the present application also provide a computer program product comprising a computer program stored on a computer readable storage medium. A processor of the computer device reads the computer program from the computer-readable storage medium, and the processor executes the computer program so that the computer device executes the control method described above.
The terms "first," "second," "third," "fourth," and the like in the description of the application and in the above figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate to describe embodiments of the application such as capable of being practiced otherwise than as shown or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one (item)" means one or more, and "a plurality" means two or more. "and/or" is used to describe an association relationship of an associated object, and indicates that three relationships may exist, for example, "a and/or B" may indicate that only a exists, only B exists, and three cases of a and B exist simultaneously, where a and B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one of a, b or c may represent a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
It should be understood that in the description of the embodiments of the present application, plural (or multiple) means two or more, and that "greater than," "less than," "exceeding," and the like are understood to exclude the stated number, while "above," "below," "within," and the like are understood to include the stated number.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application. The storage medium includes various media capable of storing program codes, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should also be appreciated that the various embodiments provided by the embodiments of the present application may be arbitrarily combined to achieve different technical effects.
While the preferred embodiment of the present application has been described in detail, the present application is not limited to the above embodiments, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit and scope of the present application, and these equivalent modifications or substitutions are included in the scope of the present application as defined in the appended claims.