CN112595323B - Robot and mapping method and device thereof - Google Patents
Robot and mapping method and device thereof
- Publication number
- CN112595323B CN112595323B CN202011424296.1A CN202011424296A CN112595323B CN 112595323 B CN112595323 B CN 112595323B CN 202011424296 A CN202011424296 A CN 202011424296A CN 112595323 B CN112595323 B CN 112595323B
- Authority
- CN
- China
- Prior art keywords
- key frame
- robot
- point cloud
- frame
- transformation matrix
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
- Manipulator (AREA)
Abstract
The application belongs to the field of robots and provides a mapping method for a robot. The method comprises: acquiring an initial value position of the robot; searching a first preset number of key frames in a pre-stored key frame library according to the initial value position; determining a reference point cloud corresponding to each searched key frame; matching the characteristic point cloud corresponding to the current frame acquired by the robot with the reference point clouds, and determining a target key frame corresponding to the current frame; and perfecting the established map corresponding to the pre-stored key frame library according to the target key frame. Therefore, simultaneous localization and mapping need not be completed in a single pass: a completed map can be extended incrementally without restarting the mapping from scratch, which improves the mapping efficiency of the robot.
Description
Technical Field
The application belongs to the field of robots, and particularly relates to a robot and a mapping method and device thereof.
Background
With the progress of technology, commercial service robots are becoming more and more popular. Meanwhile, the cost of multi-line lidar sensors for sensing the external environment has gradually decreased, and more and more commercial service robots are equipped with them, so the technology of simultaneous localization and mapping (SLAM) for robots has also improved and developed rapidly.
Currently, mapping by SLAM technology is usually completed in one pass. If the robot encounters an abnormal condition during mapping and is stopped, or the map needs to be extended later, the mapping has to be restarted from scratch, which is not conducive to improving the robot's mapping efficiency.
Disclosure of Invention
In view of the above, the embodiments of the application provide a robot and a mapping method and device thereof, to solve the problem in the prior art that robot mapping must be completed in one pass: if mapping is interrupted, it has to be restarted from scratch, which is not conducive to improving the robot's mapping efficiency.
A first aspect of an embodiment of the present application provides a mapping method for a robot, where the method includes:
acquiring an initial value position of a robot;
searching a first preset number of key frames in a pre-stored key frame library according to the initial value position of the robot, and determining a reference point cloud corresponding to the searched key frames;
matching a characteristic point cloud corresponding to a current frame acquired by the robot with the reference point cloud, and determining a target key frame corresponding to the current frame;
and perfecting the built map corresponding to the pre-stored key frame library according to the target key frame corresponding to the current frame.
With reference to the first aspect, in a first possible implementation manner of the first aspect, according to an initial value position of the robot, searching a first predetermined number of key frames in a pre-stored key frame library, including:
acquiring the distance between each prestored key frame and the initial value position;
and sorting the key frames by distance to obtain a key frame sequence, and selecting the first preset number of key frames with the smallest distances in the sequence.
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, determining a reference point cloud corresponding to the searched keyframe includes:
in the key frame sequence, taking the i-th key frame as the center and a second preset number w as the half-window length, intercepting 2w+1 key frames from the key frame sequence, wherein i is a positive integer smaller than the first preset number, and w is an integer greater than or equal to 0;
and constructing a reference point cloud of the i-th key frame according to the point clouds of the intercepted 2w+1 key frames.
With reference to the first aspect, in a third possible implementation manner of the first aspect, matching a feature point cloud corresponding to a current frame acquired by a robot with the reference point cloud, and determining a target key frame corresponding to the current frame includes:
determining a pose transformation matrix corresponding to the current frame of the robot;
transforming, according to the pose transformation matrix, the characteristic point cloud corresponding to the current frame into a point cloud to be registered;
matching the point cloud to be registered with the reference point cloud corresponding to each searched key frame, and determining the matching degree of the current frame with each key frame;
and selecting the key frame with the best matching degree as the target key frame corresponding to the current frame.
With reference to the third possible implementation manner of the first aspect, in a fourth possible implementation manner of the first aspect, before selecting a key frame with the best matching degree as the target key frame corresponding to the current frame, the method further includes:
comparing the best matching degree with a preset matching degree threshold;
if the best matching degree is greater than or equal to the preset matching degree threshold, selecting the key frame with the best matching degree as the target key frame corresponding to the current frame;
and if the best matching degree is smaller than the preset matching degree threshold, redetermining the initial value position.
With reference to the third possible implementation manner of the first aspect, in a fifth possible implementation manner of the first aspect, the matching degree between the current frame and each key frame is the minimum mean square error between the point cloud to be registered corresponding to the current frame and the reference point cloud corresponding to that key frame.
With reference to the first aspect, in a sixth possible implementation manner of the first aspect, perfecting an established map corresponding to a pre-stored key frame library according to a target key frame corresponding to a current frame, includes:
determining a fourth pose transformation matrix of the current odometer coordinate system in the corrected odometer coordinate system according to a second pose transformation matrix of the odometer coordinate system of the target key frame in the map coordinate system, a first pose transformation matrix of the laser radar coordinates at the current moment in the map coordinate system, and a third pose transformation matrix of the laser radar coordinates at the current moment in the odometer coordinate system before correction;
determining a fifth pose transformation matrix of the current laser radar coordinate system relative to the corrected odometer coordinate system according to the third pose transformation matrix and the fourth pose transformation matrix;
and perfecting the built map according to the position of the target key frame corresponding to the current frame in the map coordinate system and the first pose transformation matrix.
A second aspect of an embodiment of the present application provides a mapping apparatus for a robot, the apparatus including:
an initial value position acquisition unit, used for acquiring the initial value position of the robot;
a reference point cloud determining unit, used for searching a first preset number of key frames in a pre-stored key frame library according to the initial value position of the robot and determining a reference point cloud corresponding to the searched key frames;
a target key frame matching unit, used for matching the characteristic point cloud corresponding to the current frame acquired by the robot with the reference point cloud and determining a target key frame corresponding to the current frame;
and a map perfecting unit, used for perfecting the built map corresponding to the pre-stored key frame library according to the target key frame corresponding to the current frame.
A third aspect of an embodiment of the application provides a robot comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method according to any one of the first aspects when executing the computer program.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method according to any of the first aspects.
Compared with the prior art, the embodiments of the application store previously acquired key frames in a key frame library; when incremental mapping of the robot's map is needed, a preset number of key frames are searched according to the determined initial value position of the robot, the target key frame corresponding to the current frame acquired by the robot is determined by point cloud matching, and the established map corresponding to the key frame library is perfected according to the determined target key frame. Therefore, simultaneous localization and mapping need not be completed in one pass: a completed map can be extended incrementally without restarting the mapping from scratch, which improves the mapping efficiency of the robot.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic implementation flow diagram of a mapping method of a robot according to an embodiment of the present application;
fig. 2 is a schematic implementation flow diagram of a reference point cloud determining method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an implementation flow of a target key frame selection method according to an embodiment of the present application;
Fig. 4 is a schematic diagram of a drawing device of a robot according to an embodiment of the present application;
fig. 5 is a schematic view of a robot according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to illustrate the technical scheme of the application, the following description is made by specific examples.
Currently, in the simultaneous localization and mapping operation of a robot, the map of a scene generally needs to be constructed in one pass. After map construction is completed, a map containing sufficient static environment information is obtained. However, if an interruption occurs during map construction, for example the robot shuts down abnormally, the robot's pose will have changed by the time it restarts, and the map construction has to begin again. Likewise, when the map needs to be further expanded or locally updated, it has to be rebuilt from scratch, which is not conducive to improving the robot's mapping efficiency.
To address this problem, the application provides a mapping method for a robot which, as shown in fig. 1, comprises the following steps:
in S101, an initial value position of the robot is acquired.
Specifically, the initial value position in the embodiment of the application is the estimated initial position of the robot at the current moment when the robot performs incremental mapping on the map.
The initial value position may be a location on the map specified by a user and received by the robot. For example, the robot may receive a user-designated location Pi on the map. To improve the precision of the robot's incremental mapping, the distance between the designated position Pi and the real position Pt of the robot needs to be smaller than a preset distance threshold T.
In a possible implementation, the initial position of the robot may also be the position of the robot determined by the positioning module of the robot. The positioning module may include, but is not limited to, base station positioning, and the like.
In S102, according to the initial value position of the robot, a first predetermined number of key frames are searched in a pre-stored key frame library, and a reference point cloud corresponding to the searched key frames is determined.
There may be a difference between the initial position and the real position of the robot. In order to accurately acquire the matched key frame of the current position of the robot in the key frame library, the matched key frame can be searched in a point cloud matching mode. The process of point cloud matching may be as shown in fig. 2, including:
in S201, a distance between a prestored key frame and the initial value position is acquired.
The key frame library stores the data of key frames acquired before the current simultaneous localization and mapping (SLAM) session, including the position corresponding to each key frame, the point cloud information corresponding to each key frame, the transformation matrix information corresponding to each key frame, and the like. The transformation matrix information may include transformation matrices between the laser radar coordinate system, the odometer coordinate system, the map coordinate system, and so on.
According to the position information of the key frames stored in the key frame library, combined with the initial value position of the robot, the distance between the robot and the position point of each key frame in the library can be estimated.
In S202, the key frames are ordered according to the distances to obtain a key frame sequence, and a first predetermined number of key frames with smaller distances in the sequence are selected.
The key frames are sorted by the estimated distances to obtain a key frame sequence. Because a key frame whose position point is close to the initial value position is more likely to match the current frame of the robot, the application selects the first preset number of key frames with the smallest distances for point cloud matching against the current frame.
Since the initial value position deviates from the true position by some distance, the first predetermined number of key frames may be determined according to the preset distance threshold.
Of course, the manner of selecting the first predetermined number of key frames is not limited to comparing distances from the initial value position; the predetermined number of key frames may also be determined by demarcating other position ranges. A sketch of the distance-based search follows.
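As a minimal sketch of steps S201-S202 (the NumPy representation, the function name, and the array layout are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def search_keyframes(keyframe_positions, initial_position, k):
    """Sort the pre-stored key frames by distance to the initial value
    position (S201) and return the indices of the k closest ones (S202)."""
    dists = np.linalg.norm(
        np.asarray(keyframe_positions) - np.asarray(initial_position), axis=1
    )
    sequence = np.argsort(dists)  # key frame sequence ordered by distance
    return sequence[:k], dists[sequence[:k]]
```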
In S203, in the key frame sequence, with the i-th key frame as the center and the second predetermined number w as the half-window length, 2w+1 key frames are intercepted from the key frame sequence, where i is a positive integer smaller than the first predetermined number, and w is an integer greater than or equal to 0.
In S204, a reference point cloud of the ith key frame is formed according to the point clouds of the 2w+1 key frames.
In this implementation, the key frame to be matched is taken as the center and the second preset number w as the interception radius; 2w+1 key frames are cut out of the key frame sequence, and the point clouds of these surrounding key frames are superimposed to obtain the reference point cloud, which improves registration efficiency.
Alternatively, in a possible implementation, the reference point cloud corresponding to a key frame may be obtained through other key frame selection schemes; for example, the point cloud of the single key frame itself may be used as the reference point cloud. A sketch of the windowed scheme follows.
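A sketch of S203-S204 under the same illustrative assumptions (the names are hypothetical, and clipping the window at the ends of the sequence is an assumed detail the patent does not specify):

```python
import numpy as np

def reference_point_cloud(sequence, point_clouds, i, w):
    """Superimpose the point clouds of the 2w+1 key frames centered on the
    i-th entry of the distance-sorted key frame sequence (S203-S204).

    point_clouds[j] is an (M_j, 3) array for key frame j, assumed to be
    expressed in map coordinates; the window is clipped at the sequence ends.
    """
    lo = max(0, i - w)
    hi = min(len(sequence), i + w + 1)
    return np.concatenate(
        [point_clouds[sequence[j]] for j in range(lo, hi)], axis=0
    )
```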
In S103, the characteristic point cloud corresponding to the current frame acquired by the robot is matched with the reference point cloud, and a target key frame corresponding to the current frame is determined.
The target key frame is a key frame corresponding to the current frame in a key frame library.
To compare the current frame acquired by the robot at the current moment with the pre-stored key frames, the current frame must first be brought into the same coordinate system as the stored key frames. In the embodiment of the application, the point cloud of the current frame acquired by the robot is transformed into the map coordinate system to form the point cloud to be registered, which is then matched against the reference point clouds stored in the key frame library (also expressed in map coordinates), improving matching accuracy and efficiency. The specific matching process may be as shown in fig. 3, including:
in S301, a pose transformation matrix corresponding to the current frame of the robot is determined.
According to the pose information of the robot, the pose transformation matrix corresponding to the moment the robot acquired the current frame can be obtained; the change in the robot's pose can be determined through this pose transformation matrix.
In S302, according to the pose transformation matrix, the feature point cloud corresponding to the current frame is transformed into a point cloud to be registered.
According to the determined pose transformation matrix of the robot, the characteristic point cloud of the current frame acquired by the robot can be transformed to obtain the characteristic point cloud corresponding to the map coordinate system, namely the point cloud to be registered.
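As an illustrative sketch of S301-S302 (assuming the pose transformation matrix is a 4x4 homogeneous matrix mapping current-frame coordinates into the map frame; the function name is hypothetical):

```python
import numpy as np

def to_cloud_to_be_registered(feature_cloud, T_pose):
    """Transform an (N, 3) feature point cloud of the current frame into
    map coordinates using a 4x4 homogeneous pose matrix."""
    homogeneous = np.hstack([feature_cloud, np.ones((len(feature_cloud), 1))])
    return (T_pose @ homogeneous.T).T[:, :3]
```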
In S303, the point cloud to be registered is matched with the reference point cloud corresponding to each searched key frame, and the matching degree of the current frame and each key frame is determined.
When the point cloud to be registered is matched with the reference point cloud corresponding to each searched key frame, the mean square error between the two clouds can be calculated through a nearest-neighbor iterative algorithm (such as iterative closest point). The mean square error reflects the matching degree between the point cloud to be registered and the reference point cloud: the higher the matching degree, the smaller the mean square error.
In S304, the key frame with the best matching degree is selected as the target key frame corresponding to the current frame.
The matching degree is calculated one by one between the point cloud to be registered and the reference point cloud of each key frame found from the initial value position; the key frame with the highest matching degree, i.e., the smallest mean square error, is taken as the target key frame.
In a possible implementation of the application, the method may further include comparing the best matching degree with a preset matching degree threshold. This screening avoids the situation where the initial value position is far from the actual position and the found key frame is not an accurate target key frame; in that case, the initial value position can be redetermined and an accurate target key frame searched for again. If the best matching degree is greater than or equal to the preset matching degree threshold, for example if the mean square error is smaller than a preset error threshold, the key frame with the best matching degree can be used as the target key frame corresponding to the current frame. A sketch of this selection and screening follows.
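A minimal sketch of S303-S304 plus the threshold screening. The patent's nearest-neighbor iterative matching alternates association and alignment; for brevity this sketch performs only a single nearest-neighbor association with SciPy's KD-tree, and all names and thresholds are illustrative assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_square_error(cloud_to_register, reference_cloud):
    """Mean square nearest-neighbour distance between the two clouds."""
    dists, _ = cKDTree(reference_cloud).query(cloud_to_register)
    return float(np.mean(dists ** 2))

def select_target_keyframe(cloud_to_register, reference_clouds, error_threshold):
    """Pick the key frame with the smallest MSE; return None if even the
    best error exceeds the threshold, so that the initial value position
    can be redetermined."""
    errors = [mean_square_error(cloud_to_register, ref)
              for ref in reference_clouds]
    best = int(np.argmin(errors))
    return (best, errors[best]) if errors[best] < error_threshold else None
```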
In S104, the built map corresponding to the pre-stored key frame library is perfected according to the target key frame corresponding to the current frame.
After the target key frame corresponding to the current frame is determined, a loop-closure factor can be established from the index of the current key frame and the index of the matched historical key frame. The position of the robot in the built map can then be determined, realizing local positioning of the robot.
In the embodiment of the application, after a second startup, the robot's current odometer data does not agree with the odometer data of the previous mapping session; yet during mapping, the poses from both sets of odometer data must be fused and optimized together, so the odometer coordinate systems of the two mapping sessions must be corrected into agreement.
Assume the current odometer coordinate system is odomnew and the odometer coordinate system of the previous mapping is odomold. The fourth pose transformation matrix T_odomold_2_odomnew, which expresses the current odometer coordinate system odomnew in the previous odometer coordinate system odomold, may be written as:

T_odomold_2_odomnew = (T_map_2_odomold)^(-1) * T_map_2_lidar * (T_odomnew_2_lidar)^(-1)

where T_map_2_odomold is the second pose transformation matrix (the odometer coordinate system of the target key frame expressed in the map coordinate system), T_map_2_lidar is the first pose transformation matrix (the laser radar coordinates at the current moment expressed in the map coordinate system), and T_odomnew_2_lidar is the third pose transformation matrix (the laser radar coordinates at the current moment expressed in the odometer coordinate system before correction).
After the fourth pose transformation matrix T_odomold_2_odomnew has been determined, the fifth pose transformation matrix T_odomold_2_lidar of the current laser radar coordinate system with respect to the corrected odometer coordinate system may be determined according to the formula T_odomold_2_lidar = T_odomold_2_odomnew * T_odomnew_2_lidar.
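The correction can be written directly as matrix products; a sketch, assuming each T_a_2_b is a 4x4 homogeneous matrix read as the transform taking coordinates in frame b into frame a, consistent with the formulas above:

```python
import numpy as np

def correct_odometry(T_map_2_odomold, T_map_2_lidar, T_odomnew_2_lidar):
    """Compute the fourth and fifth pose transformation matrices."""
    inv = np.linalg.inv
    # Fourth matrix: relates the current odometer frame to the previous one.
    T_odomold_2_odomnew = (
        inv(T_map_2_odomold) @ T_map_2_lidar @ inv(T_odomnew_2_lidar)
    )
    # Fifth matrix: current lidar pose in the corrected odometer frame.
    T_odomold_2_lidar = T_odomold_2_odomnew @ T_odomnew_2_lidar
    return T_odomold_2_odomnew, T_odomold_2_lidar
```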
With the corrected odometer coordinate system, the robot can optimize its pose against the odometer coordinate system of the previous mapping session and combine this with its local positioning information, so incremental mapping is completed more effectively. The SLAM process is thus no longer limited to one-pass completion: a second mapping session does not have to start from scratch, and the previous mapping data and the built map can be effectively combined for incremental mapping, i.e., expanding or updating the previous map, which effectively improves the mapping efficiency of the robot.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and does not limit the implementation of the embodiments of the present application.
Fig. 4 is a schematic diagram of a mapping device of a robot according to an embodiment of the present application, as shown in fig. 4, where the device includes:
An initial value position obtaining unit 401, configured to obtain an initial value position of the robot;
a reference point cloud determining unit 402, configured to search a first predetermined number of key frames in a pre-stored key frame library according to an initial value position of the robot, and determine a reference point cloud corresponding to the searched key frames;
a target key frame matching unit 403, configured to match a feature point cloud corresponding to a current frame acquired by a robot with the reference point cloud, and determine a target key frame corresponding to the current frame;
And the map perfecting unit 404 is used for perfecting the built map corresponding to the pre-stored key frame library according to the target key frame corresponding to the current frame.
The robot mapping apparatus shown in fig. 4 corresponds to the robot mapping method shown in fig. 1.
Fig. 5 is a schematic view of a robot according to an embodiment of the present application. As shown in fig. 5, the robot 5 of this embodiment comprises a processor 50, a memory 51, and a computer program 52 (e.g. a mapping program of the robot) stored in the memory 51 and executable on the processor 50. The processor 50, when executing the computer program 52, implements the steps of the mapping method embodiments described above; alternatively, the processor 50, when executing the computer program 52, implements the functions of the modules/units in the apparatus embodiments described above.
By way of example, the computer program 52 may be partitioned into one or more modules/units that are stored in the memory 51 and executed by the processor 50 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing a specific function for describing the execution of the computer program 52 in the robot 5.
The robot may include, but is not limited to, a processor 50, a memory 51. It will be appreciated by those skilled in the art that fig. 5 is merely an example of a robot 5 and is not meant to be limiting of the robot 5, and may include more or fewer components than shown, or may combine certain components, or different components, e.g., the robot may also include input and output devices, network access devices, buses, etc.
The processor 50 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 51 may be an internal storage unit of the robot 5, such as a hard disk or memory of the robot 5. The memory 51 may also be an external storage device of the robot 5, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the robot 5. Further, the memory 51 may include both an internal storage unit and an external storage device of the robot 5. The memory 51 is used for storing the computer program and other programs and data required by the robot, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts not detailed or described in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on this understanding, the present application may also implement all or part of the procedures in the methods of the above embodiments through a computer program, which may be stored in a computer-readable storage medium; when executed by a processor, the computer program implements the steps of the respective method embodiments. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, and so on. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
The foregoing embodiments are merely illustrative of the technical solutions of the present application and are not restrictive. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.
Claims (8)
1. A method of mapping a robot, the method comprising:
acquiring an initial value position of a robot;
searching a first preset number of key frames in a pre-stored key frame library according to the initial value position of the robot, and determining a reference point cloud corresponding to the searched key frames;
matching a characteristic point cloud corresponding to a current frame acquired by the robot with the reference point cloud, and determining a target key frame corresponding to the current frame;
and perfecting an established map corresponding to the pre-stored key frame library according to the target key frame corresponding to the current frame;
wherein searching a first preset number of key frames in a pre-stored key frame library according to the initial value position of the robot comprises:
acquiring the distance between each prestored key frame and the initial value position;
and sorting the key frames by distance to obtain a key frame sequence, and selecting the first preset number of key frames with the smallest distances in the sequence;
and wherein perfecting the established map corresponding to the pre-stored key frame library according to the target key frame corresponding to the current frame comprises:
determining a fourth pose transformation matrix of the current odometer coordinate system in the corrected odometer coordinate system according to a second pose transformation matrix of the odometer coordinate system of the target key frame in the map coordinate system, a first pose transformation matrix of the laser radar coordinates at the current moment in the map coordinate system, and a third pose transformation matrix of the laser radar coordinates at the current moment in the odometer coordinate system before correction;
determining a fifth pose transformation matrix of the current laser radar coordinate system relative to the corrected odometer coordinate system according to the third pose transformation matrix and the fourth pose transformation matrix;
and perfecting the built map according to the position of the target key frame corresponding to the current frame in the map coordinate system and the first pose transformation matrix.
2. The method of claim 1, wherein determining the reference point cloud corresponding to the found keyframe comprises:
in the key frame sequence, taking the i-th key frame as the center and a second preset number w as the half-window length, intercepting 2w+1 key frames from the key frame sequence, wherein i is a positive integer smaller than the first preset number, and w is an integer greater than or equal to 0;
and constructing a reference point cloud of the i-th key frame according to the point clouds of the intercepted 2w+1 key frames.
3. The method of claim 1, wherein matching the feature point cloud corresponding to the current frame acquired by the robot with the reference point cloud, and determining the target key frame corresponding to the current frame, comprises:
determining a pose transformation matrix corresponding to the current frame of the robot;
transforming, according to the pose transformation matrix, the characteristic point cloud corresponding to the current frame into a point cloud to be registered;
matching the point cloud to be registered with the reference point cloud corresponding to each searched key frame, and determining the matching degree of the current frame with each key frame;
and selecting the key frame with the best matching degree as the target key frame corresponding to the current frame.
4. The method of claim 3, wherein prior to selecting the key frame with the best match as the target key frame for the current frame, the method further comprises:
comparing the best matching degree with a preset matching degree threshold;
if the best matching degree is greater than or equal to the preset matching degree threshold, selecting the key frame with the best matching degree as the target key frame corresponding to the current frame;
and if the best matching degree is smaller than the preset matching degree threshold, redetermining the initial value position.
5. A method according to claim 3, wherein the matching degree between the current frame and each key frame is the minimum mean square error between the point cloud to be registered corresponding to the current frame and the reference point cloud corresponding to that key frame.
6. A mapping apparatus for a robot, the apparatus comprising:
an initial value position acquisition unit, used for acquiring the initial value position of the robot;
a reference point cloud determining unit, used for searching a first preset number of key frames in a pre-stored key frame library according to the initial value position of the robot and determining a reference point cloud corresponding to the searched key frames;
a target key frame matching unit, used for matching the characteristic point cloud corresponding to the current frame acquired by the robot with the reference point cloud and determining a target key frame corresponding to the current frame;
and a map perfecting unit, used for perfecting the built map corresponding to the pre-stored key frame library according to the target key frame corresponding to the current frame;
wherein the reference point cloud determining unit is further used for acquiring the distance between each prestored key frame and the initial value position, sorting the key frames by distance to obtain a key frame sequence, and selecting the first preset number of key frames with the smallest distances in the sequence;
and wherein the map perfecting unit is used for determining a fourth pose transformation matrix of the current odometer coordinate system in the corrected odometer coordinate system according to a second pose transformation matrix of the odometer coordinate system of the target key frame in the map coordinate system, a first pose transformation matrix of the laser radar coordinates at the current moment in the map coordinate system, and a third pose transformation matrix of the laser radar coordinates at the current moment in the odometer coordinate system before correction; determining a fifth pose transformation matrix of the current laser radar coordinate system relative to the corrected odometer coordinate system according to the third pose transformation matrix and the fourth pose transformation matrix; and perfecting the built map according to the position of the target key frame corresponding to the current frame in the map coordinate system and the first pose transformation matrix.
7. A robot comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 5 when the computer program is executed.
8. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 5.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011424296.1A CN112595323B (en) | 2020-12-08 | 2020-12-08 | Robot and mapping method and device thereof |
| PCT/CN2020/140416 WO2022121018A1 (en) | 2020-12-08 | 2020-12-28 | Robot, and mapping method and apparatus therefor |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011424296.1A CN112595323B (en) | 2020-12-08 | 2020-12-08 | Robot and mapping method and device thereof |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN112595323A CN112595323A (en) | 2021-04-02 |
| CN112595323B true CN112595323B (en) | 2024-12-17 |
Family
ID=75188851
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202011424296.1A Active CN112595323B (en) | 2020-12-08 | 2020-12-08 | Robot and mapping method and device thereof |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN112595323B (en) |
| WO (1) | WO2022121018A1 (en) |
Families Citing this family (21)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113375657B (en) * | 2021-04-26 | 2025-03-28 | 原力聚合(重庆)机器人科技有限公司 | Method, device and electronic device for updating electronic map |
| CN113674351B (en) * | 2021-07-27 | 2023-08-08 | 追觅创新科技(苏州)有限公司 | Drawing construction method of robot and robot |
| CN113624222B (en) * | 2021-07-30 | 2024-09-13 | 深圳市优必选科技股份有限公司 | Map updating method, robot and readable storage medium |
| CN113899357B (en) * | 2021-09-29 | 2023-10-31 | 北京易航远智科技有限公司 | Incremental mapping method and device for visual SLAM, robot and readable storage medium |
| CN114136316B (en) * | 2021-12-01 | 2024-08-02 | 珠海一微半导体股份有限公司 | Inertial navigation error elimination method based on point cloud characteristic points, chip and robot |
| CN114279456B (en) * | 2021-12-06 | 2024-04-30 | 纵目科技(上海)股份有限公司 | Picture construction/vehicle positioning method, system, terminal and computer storage medium |
| CN115191871B (en) * | 2022-06-07 | 2024-05-28 | 深圳市倍思科技有限公司 | A method, device, cleaning robot and storage medium for data time synchronization |
| CN115390085B (en) * | 2022-07-28 | 2024-07-30 | 广州小马智行科技有限公司 | Positioning method and device based on laser radar, computer equipment and storage medium |
| CN115630185B (en) * | 2022-09-23 | 2024-02-02 | 深圳市云洲创新科技有限公司 | Repositioning method, water surface aircraft and storage medium |
| CN115909796A (en) * | 2022-09-29 | 2023-04-04 | 上海中海庭数智技术有限公司 | Environmental traffic target positioning method, system, medium and equipment |
| CN115308771B (en) * | 2022-10-12 | 2023-03-14 | 深圳市速腾聚创科技有限公司 | Obstacle detection method and apparatus, medium, and electronic device |
| CN116222537A (en) * | 2022-12-21 | 2023-06-06 | 深圳市云鼠科技开发有限公司 | Filtering method, device, terminal equipment and storage medium for wrong key frames |
| CN116105751B (en) * | 2022-12-23 | 2025-08-08 | 达闼科技(北京)有限公司 | Map alignment method, device, equipment and storage medium |
| CN116105752B (en) * | 2023-01-31 | 2025-12-05 | 中国科学院空天信息创新研究院 | Ranging-assisted multi-ground robot collaborative mapping methods, devices and equipment |
| CN115984504B (en) * | 2023-03-21 | 2023-07-04 | 上海仙工智能科技有限公司 | A method, system, and storage medium for automatically updating maps |
| CN116592869B (en) * | 2023-04-13 | 2024-05-03 | 广州汽车集团股份有限公司 | Map updating method and device, electronic equipment and storage medium |
| CN116626700A (en) * | 2023-05-19 | 2023-08-22 | 浙江华睿科技股份有限公司 | Robot positioning method and device, electronic equipment and storage medium |
| CN116862958B (en) * | 2023-06-09 | 2025-10-21 | 珠海云洲智能科技股份有限公司 | Point cloud registration method, device, electronic device and readable storage medium |
| CN116481517B (en) * | 2023-06-25 | 2023-10-13 | 深圳市普渡科技有限公司 | Extended mapping method, device, computer equipment and storage medium |
| CN119164393B (en) * | 2024-08-08 | 2025-11-21 | 浙江智鼎机器人有限公司 | Robot positioning method, robot, storage medium, apparatus, and program product |
| CN119533358A (en) * | 2024-11-15 | 2025-02-28 | 广州瑞松北斗汽车装备有限公司 | A calibration method and device for three-dimensional coordinate measurement |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111311684A (en) * | 2020-04-01 | 2020-06-19 | 亮风台(上海)信息科技有限公司 | Method and equipment for initializing SLAM |
Family Cites Families (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20180087827A (en) * | 2017-01-25 | 2018-08-02 | 한국과학기술연구원 | SLAM method and apparatus robust to wireless environment change |
| US11187536B2 (en) * | 2018-01-12 | 2021-11-30 | The Trustees Of The University Of Pennsylvania | Probabilistic data association for simultaneous localization and mapping |
| CN109141393B (en) * | 2018-07-02 | 2020-12-08 | 北京百度网讯科技有限公司 | Relocation method, device and storage medium |
| CN111060101B (en) * | 2018-10-16 | 2022-06-28 | 深圳市优必选科技有限公司 | Vision-assisted distance SLAM method and device, robot |
| CN109579847B (en) * | 2018-12-13 | 2022-08-16 | 歌尔股份有限公司 | Method and device for extracting key frame in synchronous positioning and map construction and intelligent equipment |
| CN110561416B (en) * | 2019-08-01 | 2021-03-02 | 深圳市银星智能科技股份有限公司 | Laser radar repositioning method and robot |
| CN110307838B (en) * | 2019-08-26 | 2019-12-10 | 深圳市优必选科技股份有限公司 | Robot repositioning method and device, computer-readable storage medium and robot |
| CN110866496B (en) * | 2019-11-14 | 2023-04-07 | 合肥工业大学 | Robot positioning and mapping method and device based on depth image |
| CN111161347B (en) * | 2020-04-01 | 2020-09-29 | 亮风台(上海)信息科技有限公司 | Method and equipment for initializing SLAM |
| CN111795687B (en) * | 2020-06-29 | 2022-08-05 | 深圳市优必选科技股份有限公司 | Robot map updating method and device, readable storage medium and robot |
- 2020-12-08: CN application CN202011424296.1A granted as patent CN112595323B (Active)
- 2020-12-28: WO application PCT/CN2020/140416 published as WO2022121018A1 (Ceased)
Patent Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111311684A (en) * | 2020-04-01 | 2020-06-19 | 亮风台(上海)信息科技有限公司 | Method and equipment for initializing SLAM |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2022121018A1 (en) | 2022-06-16 |
| CN112595323A (en) | 2021-04-02 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN112595323B (en) | Robot and mapping method and device thereof | |
| CN110561423B (en) | Pose transformation method, robot and storage medium | |
| CN110307838B (en) | Robot repositioning method and device, computer-readable storage medium and robot | |
| US20200116498A1 (en) | Visual assisted distance-based slam method and mobile robot using the same | |
| CN112179330A (en) | Pose determination method and device of mobile equipment | |
| CN111813101A (en) | Robot path planning method and device, terminal equipment and storage medium | |
| EP3505868A1 (en) | Method and apparatus for adjusting point cloud data acquisition trajectory, and computer readable medium | |
| WO2024197815A1 (en) | Engineering machinery mapping method and device, and readable storage medium | |
| CN110991575A (en) | Asset checking method and device, readable storage medium and robot | |
| CN109074657A (en) | Target tracking method and device, electronic equipment and readable storage medium | |
| CN112946612B (en) | External parameter calibration method and device, electronic equipment and storage medium | |
| CN110969649A (en) | Matching evaluation method, medium, terminal and device of laser point cloud and map | |
| CN109558471A (en) | Update method, device, storage medium and the system of grating map | |
| CN115265489B (en) | Method and system for surveying and mapping territory monitored by dynamic remote sensing | |
| CN110796095A (en) | Instrument template establishing method, terminal equipment and computer storage medium | |
| CN115902843A (en) | A multi-laser radar calibration method, device and electronic equipment | |
| US11034028B2 (en) | Pose determining method for mobile robot and apparatus and mobile robot thereof | |
| CN111708357B (en) | Cleaning end condition identification method and device and sweeping robot | |
| CN115406452A (en) | Real-time positioning and mapping method, device and terminal equipment | |
| WO2023005020A1 (en) | Reflector localization method, robot and computer-readable storage medium | |
| WO2022135372A1 (en) | Map construction method, and robot | |
| CN113297259B (en) | Robot and environment map construction method and device thereof | |
| CN111460237B (en) | Data query method and device, readable storage medium and electronic equipment | |
| CN112212851B (en) | Pose determination method and device, storage medium and mobile robot | |
| US12253615B2 (en) | Beacon map construction method, device, and computer-readable storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |