CN115890654B - Depth camera automatic calibration algorithm based on three-dimensional feature points - Google Patents
- Publication number
- CN115890654B CN115890654B CN202211225481.7A CN202211225481A CN115890654B CN 115890654 B CN115890654 B CN 115890654B CN 202211225481 A CN202211225481 A CN 202211225481A CN 115890654 B CN115890654 B CN 115890654B
- Authority
- CN
- China
- Prior art keywords
- image
- mechanical arm
- pose
- error
- tail end
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000004422 calculation algorithm Methods 0.000 title claims abstract description 32
- 238000000034 method Methods 0.000 claims abstract description 34
- 230000008569 process Effects 0.000 claims abstract description 11
- 238000006073 displacement reaction Methods 0.000 claims abstract description 5
- 230000003287 optical effect Effects 0.000 claims description 17
- 238000004590 computer program Methods 0.000 claims description 13
- 239000011159 matrix material Substances 0.000 claims description 12
- 230000000007 visual effect Effects 0.000 claims description 12
- 230000004069 differentiation Effects 0.000 claims description 3
- 238000010606 normalization Methods 0.000 claims description 3
- 230000008901 benefit Effects 0.000 abstract description 2
- 239000000284 extract Substances 0.000 abstract description 2
- 230000009466 transformation Effects 0.000 description 19
- 238000004364 calculation method Methods 0.000 description 7
- 238000009825 accumulation Methods 0.000 description 4
- 238000010586 diagram Methods 0.000 description 4
- 238000006243 chemical reaction Methods 0.000 description 3
- 230000000694 effects Effects 0.000 description 3
- 238000005516 engineering process Methods 0.000 description 3
- 238000010801 machine learning Methods 0.000 description 2
- 230000004075 alteration Effects 0.000 description 1
- 230000006835 compression Effects 0.000 description 1
- 238000007906 compression Methods 0.000 description 1
- 230000006870 function Effects 0.000 description 1
- 238000003384 imaging method Methods 0.000 description 1
- 238000009434 installation Methods 0.000 description 1
- 238000011084 recovery Methods 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a depth camera automatic calibration algorithm based on three-dimensional feature points. The algorithm comprises four steps: model establishment, initial calibration pose generation, pose alignment, and kinematic parameter identification. During calibration, the acquired images are analyzed through image displacement to identify the shake-blur information of the images. This shake-blur information, together with the inverse-kinematics processing information and the joint position controller feedback, is used to control and adjust the joint position controller. The calibration algorithm thereby obtains a functional relation between the error sources and the end-effector error, so that the errors of the mechanical arm can be compensated and made to converge quickly.
Description
Technical Field
The invention relates to the technical field of industrial robot calibration, in particular to an automatic calibration algorithm for a depth camera based on three-dimensional feature points.
Background
In robot calibration algorithms, high-precision measuring equipment is used to measure the actual pose of the robot end effector; however, such instruments are very expensive, the calibration process is complex, and the technical requirements for installation, debugging and measurement are high. Vision-based calibration methods generally use a camera as the measuring tool for the actual end pose, and errors in the camera's vision measurements and camera parameters have a great influence on the kinematic parameter calibration result. To avoid high-precision measuring equipment, it is very necessary to use a low-cost, easy-to-operate calibration method, reduce the influence of the measuring equipment on the calibration result, and improve the absolute positioning accuracy of the robot in each application scenario. The prior art, for example the Chinese patent application with publication number CN108789404A, discloses a vision-based kinematic parameter calibration method for serial robots: the camera optical axis is used as a virtual straight-line constraint, and a kinematic error model based on this constraint is established; a fixed point on a calibration plate fixed at the robot end is selected as the feature point, and an image-based visual control method moves the mechanical arm so that the feature point reaches the camera optical axis; the nominal position of the feature point is calculated by forward kinematics from the robot joint angles, and the alignment error matrix is computed; the kinematic parameter errors are estimated by an iterative least-squares algorithm, and the actual kinematic parameters are calculated from the nominal ones.
That invention uses the camera optical axis as a virtual constraint and can complete calibration with only the robot joint angle data. It is low-cost and easy to operate, requires no expensive high-precision measuring equipment, is universal for serial robot calibration, and can be widely applied in industrial, space and underwater environments to improve the absolute positioning accuracy of the mechanical arm.
Disclosure of Invention
The invention aims to provide an automatic calibration algorithm for a depth camera based on three-dimensional feature points, in which the calibration algorithm obtains a functional relation between the error sources and the end error, so that the errors of the mechanical arm can be compensated and made to converge quickly.
An automatic calibration algorithm for a depth camera based on three-dimensional feature points comprises the following steps:
1) Model establishment: a kinematic error model based on the straight-line constraint is obtained from the kinematic model of the mechanical arm; the model describes the relation between the alignment error of the arm-end pose and the kinematic parameter errors of the mechanical arm;
2) Initial calibration pose generation: to make the calibration pose feasible, four constraint conditions that the initial calibration pose of the mechanical arm must satisfy are constructed, and the initial calibration pose is determined under these constraints;
3) Pose alignment: an image-based visual control method controls the mechanical arm end to move automatically from the initial calibration pose and reach, one after another, a plurality of positions on the camera optical axis, so that the straight-line virtual constraint is satisfied;
4) Kinematic parameter identification: the alignment error in the error model is calculated from the joint angles at which the mechanical arm end satisfies the straight-line constraint, and the kinematic parameter errors are then identified with the LM algorithm. In the image-based visual control method, the acquired image is analyzed through image displacement during calibration, and the shake-blur information of the image is identified; the shake-blur information, the inverse-kinematics processing information and the joint position controller feedback are used together to control and adjust the joint position controller. The LM algorithm is the Levenberg-Marquardt algorithm. The alignment error refers to the alignment error between the nominal poses of the feature point at different positions on the straight-line constraint and the constraint line itself.
In order to further optimize the technical scheme, the measures adopted further comprise:
the shake blur information of the image includes a shake blur direction and a shake blur scale of the image. And obtaining and generating shake blur due to the influence of accumulated errors of a connecting structure to a final caused close stop position in the process of each pose transformation calibration through calculation of the camera shake blur direction and the scale, and correcting the shake blur after machine learning, so that convergence of the errors is gradually achieved according to the accumulated errors, and the errors are reduced.
A two-dimensional Fourier transform is applied to the actually captured shake-blurred image; the transformed values are normalized and binarized, and the shake-blur direction is obtained with an MRT algorithm based on the Radon transform. After the Fourier and Radon transforms, the shake-blur direction of the image is available for the subsequent geometric projection of the vector, balancing error accumulation against motion deviation.
The shake-blurred image is rotated and cropped, first-order differentiation is performed in the horizontal direction, the autocorrelation of the image along the blur direction is computed, and the shake-blur scale is obtained with a PDA algorithm that determines the blur length from the distance between the minimum of the autocorrelation curve and the origin.
Through the pose-adjustment strategy of the mechanical arm, the image feature deviation is converted into the pose deviation DP_f^b of the mechanical arm end, realizing the conversion from image space to the Cartesian space of the arm end. The shake-blur direction and the shake-blur scale are projected onto an image-coordinate vector (h_x, h_y) used to correct the matrix. The projection of the image-coordinate vector is a simple geometric transformation; after projection it enters each pose-deviation calculation, gradually participates in the adjustment of the joint position controller, and is automatically fed back into the kinematic process. Through accumulated iteration loops, the error convergence speed and the compensation effect are improved.
The pose deviation DP_f^b of the mechanical arm end is obtained as follows:
where T_c^b is the transformation matrix from the camera coordinate system to the mechanical arm base coordinate system, DP_f^c is the position deviation of the feature point from the optical axis in the camera coordinate system, u_0, v_0, k_x and k_y are the camera intrinsic parameters obtained through camera calibration, and R_c^f is the transformation matrix from the optical-axis coordinates to the camera coordinates.
Alternatively, the pose deviation DP_f^b of the mechanical arm end is obtained as follows:
where T_c^b is the transformation matrix from the camera coordinate system to the mechanical arm base coordinate system.
The invention also discloses a computer device, which comprises one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions, which when executed by the apparatus, cause the apparatus to perform the methods described above.
The invention also discloses a computer storage medium storing one or more computer programs which, when executed, perform the above-described methods.
The invention adopts the four steps of model establishment, initial calibration pose generation, pose alignment and kinematic parameter identification; during calibration the acquired images are analyzed through image displacement and the shake-blur information of the images is identified. The shake-blur information, together with the inverse-kinematics processing information and the joint position controller feedback, is used to control and adjust the joint position controller, achieving the advantage that the calibration algorithm obtains a functional relation between the error sources and the end error, so that the errors of the mechanical arm can be compensated and made to converge quickly.
Drawings
FIG. 1 is a schematic diagram of a kinematic parameter calibration method based on linear virtual constraint according to an embodiment of the invention;
FIG. 2 is a schematic flow chart of a feature point alignment algorithm based on visual control according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a calibration method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a blurred image of a calibration plate according to an embodiment of the present invention;
fig. 5 is a schematic diagram of alignment error convergence data according to an embodiment of the invention.
Detailed Description
The invention is described in further detail below in connection with the following examples.
Examples: an automatic calibration algorithm of a depth camera based on three-dimensional feature points comprises,
1) Model establishment: a kinematic error model based on the straight-line constraint is obtained from the kinematic model of the mechanical arm; the model describes the relation between the alignment error of the arm-end pose and the kinematic parameter errors of the mechanical arm;
2) Initial calibration pose generation: to make the calibration pose feasible, four constraint conditions that the initial calibration pose of the mechanical arm must satisfy are constructed, and the initial calibration pose is determined under these constraints;
3) Pose alignment: an image-based visual control method controls the mechanical arm end to move automatically from the initial calibration pose and reach, one after another, a plurality of positions on the camera optical axis, so that the straight-line virtual constraint is satisfied;
4) Kinematic parameter identification: the alignment error in the error model is calculated from the joint angles at which the mechanical arm end satisfies the straight-line constraint, and the kinematic parameter errors are then identified with the LM algorithm. In the image-based visual control method, the acquired image is analyzed through image displacement during calibration, and the shake-blur information of the image is identified; the shake-blur information, the inverse-kinematics processing information and the joint position controller feedback are used together to control and adjust the joint position controller. The LM algorithm is the Levenberg-Marquardt algorithm. The alignment error refers to the alignment error between the nominal poses of the feature point at different positions on the straight-line constraint and the constraint line itself.
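The identification step can be illustrated with a minimal sketch. A toy two-link planar arm stands in for the real kinematic model, the unknowns are the link-length errors, and the residual is the distance of each nominal end position from the straight-line constraint; all names and the line y = 0.3 are illustrative assumptions, not the patent's actual model.

```python
import numpy as np
from scipy.optimize import least_squares

L_NOM = np.array([1.0, 0.8])   # nominal link lengths (assumed toy model)
Y_LINE = 0.3                   # virtual straight-line constraint: y = 0.3

def fk_y(q, dl):
    """y-coordinate of the arm end for joint angles q = (q1, q2)."""
    l = L_NOM + dl
    return l[0] * np.sin(q[0]) + l[1] * np.sin(q[0] + q[1])

def alignment_error(dl, joint_angles):
    """Alignment error: distance of each nominal pose from the line."""
    return np.array([fk_y(q, dl) - Y_LINE for q in joint_angles])

# Simulated pose-alignment step: joint angles at which the *true* arm,
# whose lengths carry the errors dl_true, actually sits on the line.
dl_true = np.array([0.02, -0.01])
l_true = L_NOM + dl_true
joint_angles = []
for q1 in np.linspace(-0.4, 0.4, 8):
    q2 = np.arcsin((Y_LINE - l_true[0] * np.sin(q1)) / l_true[1]) - q1
    joint_angles.append((q1, q2))

# Identification: Levenberg-Marquardt starting from the nominal model.
sol = least_squares(alignment_error, x0=np.zeros(2),
                    args=(joint_angles,), method='lm')
print(sol.x)   # recovers dl_true
```

Because the residual is linear in the link lengths and the eight constraint equations are independent, the LM iteration converges to the true parameter errors.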
The shake-blur information of the image includes the shake-blur direction and the shake-blur scale of the image. By calculating the camera shake-blur direction and scale, the shake blur generated at each final stopping position during each pose-transformation calibration, caused by the accumulated errors of the connecting structure, is obtained and then corrected after machine learning, so that the errors gradually converge and are reduced according to the accumulated error. For brevity, this embodiment only describes the calculation of the shake-blur information briefly; detailed ways of obtaining and further deriving it may be found in the literature on shake compensation technology in image imaging, 2010. The method uses the shake-blur information in the operation of the joint position controller and then feeds back to the inverse-kinematics operation step, producing the technical effect of convergence of the calibration precision. For the architecture and configuration of the basic control block diagram, reference may be made to the description and the corresponding papers cited in the background art, which are not repeated here.
A two-dimensional Fourier transform is applied to the actually captured shake-blurred image; the transformed values are normalized and binarized, and the shake-blur direction is obtained with an MRT algorithm based on the Radon transform. After the Fourier and Radon transforms, the shake-blur direction of the image is available for the subsequent geometric projection of the vector, balancing error accumulation against motion deviation.
The shake-blurred image is rotated and cropped, first-order differentiation is performed in the horizontal direction, the autocorrelation of the image along the blur direction is computed, and the shake-blur scale is obtained with a PDA algorithm that determines the blur length from the distance between the minimum of the autocorrelation curve and the origin.
Through the pose-adjustment strategy of the mechanical arm, the image feature deviation is converted into the pose deviation DP_f^b of the mechanical arm end, realizing the conversion from image space to the Cartesian space of the arm end. The shake-blur direction and the shake-blur scale are projected onto an image-coordinate vector (h_x, h_y) used to correct the matrix. The projection of the image-coordinate vector is a simple geometric transformation; after projection it enters each pose-deviation calculation, gradually participates in the adjustment of the joint position controller, and the motion process is automatically fed back. Through accumulated iteration loops, the error convergence speed and the compensation effect are improved.
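The "simple geometric transformation" projecting the blur direction and scale onto the image-coordinate vector (h_x, h_y) can be sketched as a polar-to-Cartesian conversion; the function name is an assumption for illustration.

```python
import math

def blur_to_image_vector(theta_deg, scale_px):
    """Project the shake-blur direction (degrees) and scale (pixels)
    onto the image-coordinate correction vector (h_x, h_y)."""
    theta = math.radians(theta_deg)
    return scale_px * math.cos(theta), scale_px * math.sin(theta)

hx, hy = blur_to_image_vector(30.0, 12.0)   # a 12-pixel blur at 30 degrees
```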
Preferably, the pose deviation DP_f^b of the mechanical arm end is obtained as follows:
where T_c^b is the transformation matrix from the camera coordinate system to the mechanical arm base coordinate system, DP_f^c is the position deviation of the feature point from the optical axis in the camera coordinate system, u_0, v_0, k_x and k_y are the camera intrinsic parameters obtained through camera calibration, and R_c^f is the transformation matrix from the optical-axis coordinates to the camera coordinates.
If a simple camera configuration is adopted, the pose deviation DP_f^b of the mechanical arm end is obtained as follows:
where T_c^b is the transformation matrix from the camera coordinate system to the mechanical arm base coordinate system.
The model is built using the modified DH method, and the model is then corrected for errors through kinematic calibration.
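A modified-DH model chains one homogeneous transform per link. The following sketch uses the common Craig-style modified DH convention, Rot_x(alpha) Trans_x(a) Rot_z(theta) Trans_z(d); the exact parameterization of the patent's improved DH method is not given, so this is an illustrative assumption.

```python
import numpy as np

def mdh_transform(alpha, a, theta, d):
    """Single-link transform, modified (Craig) DH convention:
    Rot_x(alpha) * Trans_x(a) * Rot_z(theta) * Trans_z(d)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct,      -st,      0.0,  a],
        [st * ca,  ct * ca, -sa, -sa * d],
        [st * sa,  ct * sa,  ca,  ca * d],
        [0.0,      0.0,     0.0,  1.0],
    ])

def forward_kinematics(params, thetas):
    """Chain the link transforms; params rows are (alpha, a, d, theta_offset)."""
    T = np.eye(4)
    for (alpha, a, d, off), th in zip(params, thetas):
        T = T @ mdh_transform(alpha, a, th + off, d)
    return T

# Two-link demo: rotate 90 deg about z, then translate 1 along the new x.
T_end = forward_kinematics([(0.0, 0.0, 0.0, 0.0),
                            (0.0, 1.0, 0.0, 0.0)], [np.pi / 2, 0.0])
```

The calibration step perturbs the (alpha, a, d, theta_offset) parameters by the identified errors before re-evaluating this chain.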
The initial target pose generation step comprises working space constraint, singularity constraint, feature point visibility constraint and visual control resolution constraint.
The actual pose alignment takes as input the information generated from the initial calibration pose; precision information is obtained by the visual control method, and the desired pose of the mechanical arm is iterated until the required precision is reached.
The result of aligning the kinematic calibration model with the actual pose is input to the LM algorithm to identify the parameter errors. The four constraint conditions that the initial calibration pose of the mechanical arm must satisfy are the workspace constraint, the singularity constraint, the feature-point visibility constraint and the visual-control resolution constraint.
The feature-point alignment algorithm based on visual control consists of an inner mechanical-arm control loop and an outer image-feature control loop. In the outer loop, the desired image feature is that the image coordinates (u_f, v_f) of the feature point coincide with the image coordinates (u_0, v_0) of the optical-axis center point. The camera acquires the image information of the current scene as visual feedback, the image coordinates of the feature point and of the optical-axis center are calculated, and the image-coordinate difference (u_f - u_0, v_f - v_0), i.e. the deviation between the desired and current image features, is converted through the pose-adjustment strategy of the mechanical arm into the pose deviation DP_f^b of the mechanical arm end, realizing the conversion from image space to the Cartesian space of the arm end. The shake-blur direction and the shake-blur scale are projected onto the image-coordinate vector (h_x, h_y) used to correct the matrix.
Here T_c^b is the transformation matrix from the camera coordinate system to the mechanical arm base coordinate system, DP_f^c is the position deviation of the feature point from the optical axis in the camera coordinate system, and u_0, v_0, k_x and k_y are the camera intrinsic parameters obtained through camera calibration.
The converted pose deviation DP_f^b of the mechanical arm end is fed back to the inner control loop to obtain the desired pose of the arm end; the desired joint angle values are calculated through inverse kinematics, and the joint position controller drives the mechanical arm to the desired pose. When the image deviation between the feature point and the camera optical axis is almost zero, the feature point is considered coincident with the optical axis; the mechanical arm stops moving, and the joint angles of the mechanical arm at that moment are recorded.
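The outer image-feature loop described above can be sketched as follows. Every callback name (feature detector, inverse kinematics, joint controller) is an assumption standing in for the real camera and robot interfaces; the demo closes the loop with a toy linear "camera".

```python
import numpy as np

def servo_to_optical_axis(get_feature_uv, uv0, image_jacobian_inv,
                          solve_ik, move_joints, tol_px=0.5, max_iter=100):
    """Outer image-feature control loop of the alignment algorithm."""
    for _ in range(max_iter):
        u, v = get_feature_uv()                    # visual feedback
        err = np.array([u - uv0[0], v - uv0[1]])   # (u_f - u_0, v_f - v_0)
        if np.linalg.norm(err) < tol_px:
            return True                            # aligned: record joint angles
        dp = image_jacobian_inv @ err              # image space -> Cartesian deviation
        move_joints(solve_ik(dp))                  # inner loop: IK + joint controller
    return False

# Toy closed loop: feature position is a linear function of a hidden
# end-effector offset; it converges in a single correction here.
state = {"p": np.array([5.0, -3.0])}
K = np.array([[2.0, 0.0], [0.0, 2.0]])             # assumed image Jacobian (px/unit)
uv0 = (320.0, 240.0)                               # optical-axis center (u_0, v_0)
aligned = servo_to_optical_axis(
    get_feature_uv=lambda: tuple(K @ state["p"] + np.array(uv0)),
    uv0=uv0,
    image_jacobian_inv=np.linalg.inv(K),
    solve_ik=lambda dp: dp,                        # toy inverse kinematics
    move_joints=lambda dq: state.__setitem__("p", state["p"] - dq),
)
```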
The shake-blur direction is calculated with the following steps:
1) Compute the two-dimensional Fourier transform of the blurred image
G(u,v) = fft2(g)
2) Compress the dynamic range of the Fourier transform values
D(u,v) = log[1 + |G(u,v)|]
3) Circularly shift the compression result so that the low-frequency components correspond to the center of the spectrogram
C(u,v) = fftshift[D(u,v)]
4) Normalize C(u,v) to obtain E(u,v);
5) Binarize E(u,v) to obtain H(u,v);
6) Perform the Radon transform on H(u,v); taking the maximum of the Radon transform at each angle in the range 0-180 degrees forms the MRT curve, and the angle corresponding to its peak determines the shake-blur direction.
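The six steps above can be sketched as follows. The Radon projections are approximated by rotating the binarized spectrum and summing columns; the binarization threshold and interpolation order are assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

def blur_direction_mrt(img):
    """Estimate the shake-blur direction (degrees, 0-180) via
    FFT -> log compression -> fftshift -> normalization ->
    binarization -> Radon transform -> MRT peak."""
    G = np.fft.fft2(img)                       # 1) two-dimensional Fourier transform
    D = np.log1p(np.abs(G))                    # 2) compress dynamic range
    C = np.fft.fftshift(D)                     # 3) low frequencies to the center
    E = (C - C.min()) / (C.max() - C.min())    # 4) normalize to [0, 1]
    H = (E > 0.5).astype(float)                # 5) binarize (threshold assumed)
    angles = np.arange(0.0, 180.0, 1.0)
    # 6) Radon projection at each angle; MRT keeps each projection's maximum
    mrt = [rotate(H, ang, reshape=False, order=1).sum(axis=0).max()
           for ang in angles]
    return angles[int(np.argmax(mrt))]

rng = np.random.default_rng(0)
theta = blur_direction_mrt(rng.standard_normal((64, 64)))
```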
The shake-blur scale is calculated with the following steps:
1) To reduce the amount of computation, convert the color image data into grayscale image data;
2) Rotate and crop the shake-blurred image according to the identified blur direction, reducing the two-dimensional problem to a one-dimensional one;
3) Compute the directional differential of the rotated and cropped image in the horizontal-right direction to obtain the differential image;
4) Compute the row-pixel autocorrelation of the differential image in the horizontal-right direction to obtain the autocorrelation image;
5) Sum the autocorrelation images in the column direction to obtain the directional differential autocorrelation of the whole image; the direction is always kept horizontal-right;
6) Draw the directional differential autocorrelation curve; owing to the motion blur, an inverse peak appears at a certain distance from the zero point;
7) When the curve exhibits several inverse peaks, determine the shake-blur scale in combination with the actual shake condition of the shake-blurred image.
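The scale-estimation steps above can be sketched as follows. The crop margins are assumptions, and the demo synthesizes a 9-pixel horizontal box blur on a smooth random image so the inverse peak is visible at the blur length.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, rotate

def blur_scale_pda(img, blur_dir_deg):
    """Estimate the shake-blur scale (pixels): rotate the blur to
    horizontal, crop, take the horizontal first-order differential,
    autocorrelate the rows, sum in the column direction, and read off
    the lag of the curve's minimum (the inverse peak)."""
    g = rotate(img.astype(float), -blur_dir_deg, reshape=False, order=1)
    h, w = g.shape
    g = g[h // 4: 3 * h // 4, w // 4: 3 * w // 4]   # crop rotation artefacts
    d = np.diff(g, axis=1)                          # horizontal first-order differential
    n = d.shape[1]
    acf = np.array([(d[:, :n - k] * d[:, k:]).sum() for k in range(n // 2)])
    acf /= acf[0]
    # distance between the curve's minimum and the origin = blur length
    return int(np.argmin(acf))

# Demo: smooth random image blurred horizontally over 9 pixels.
rng = np.random.default_rng(1)
base = gaussian_filter(rng.standard_normal((96, 96)), 2.0)
kernel = np.ones(9) / 9.0
blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"),
                              1, base)
scale = blur_scale_pda(blurred, 0.0)   # close to the true blur length of 9
```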
Calculation of the shake-blur direction and scale is normally used by digital cameras for restoring distorted images. After the shake direction and scale are obtained, the technical scheme of the invention uses them to improve the calibration speed and the learning efficiency. Fig. 5 compares the errors of the background art and of the embodiment over multiple target positions; it can be seen that the embodiment of the invention converges quickly to the required level.
A computer device comprising one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions, which when executed by the apparatus, cause the apparatus to perform the methods described above.
A computer storage medium storing one or more computer programs that, when executed, perform the method described above.
While the invention has been described in connection with the preferred embodiments, it is not intended to be limited thereto; those skilled in the art can make various changes, substitutions and alterations to the subject matter set forth herein without departing from the spirit and scope of the invention, and the scope of the invention shall be defined by the appended claims.
Claims (4)
1. An automatic calibration algorithm for a depth camera based on three-dimensional feature points, characterized by comprising the following steps:
1) Establishing a model: obtaining a kinematic error model based on the straight-line constraint from the kinematic model of the mechanical arm, wherein the kinematic error model describes the relation between the alignment error of the arm-end pose and the kinematic parameter errors of the mechanical arm;
2) Generating an initial calibration pose: in order to make the calibration pose feasible, constructing four constraint conditions that the initial calibration pose of the mechanical arm must satisfy, and determining the initial calibration pose of the mechanical arm under these constraint conditions;
3) Alignment of pose: the tail end of the mechanical arm is controlled to automatically move to a plurality of positions on the optical axis of the camera from the initial target pose by using an image-based visual control method, so that the linear virtual constraint is satisfied;
4) Identifying kinematic parameters: calculating the alignment error in the error model from the joint angles at which the mechanical arm end satisfies the straight-line constraint, and then identifying the kinematic parameter errors with the LM algorithm; in the image-based visual control method, the acquired image is analyzed through image displacement during calibration, and the shake-blur information of the image is identified; the shake-blur information, the inverse-kinematics processing information and the joint position controller feedback are used together to control and adjust the joint position controller;
the dithering fuzzy information of the image comprises the dithering fuzzy direction and the dithering fuzzy scale of the image;
performing two-dimensional Fourier transform on the real-shot jittering blurred image, performing normalization processing and binarization processing on the transformed value, and acquiring a jittering blurred direction by adopting an MRT algorithm based on Radon transform;
and rotating and cutting the jittering blurred image, performing first-order differentiation in the horizontal direction, defining the autocorrelation operation of the image in the blurred direction, and acquiring the jittering blurred scale by adopting a PDA algorithm for determining the blurred length by the distance between the minimum value and the origin in the autocorrelation curve.
2. The depth camera automatic calibration algorithm based on three-dimensional feature points according to claim 1, characterized in that: the image feature deviation is converted into the pose deviation of the mechanical arm end through the pose-adjustment strategy of the mechanical arm, realizing the conversion from image space to the Cartesian space of the arm end; and the shake-blur direction and the shake-blur scale are projected onto an image-coordinate vector used to correct the matrix.
3. A computer device, characterized by: including one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions that, when executed by the apparatus, cause the apparatus to perform the three-dimensional feature point-based depth camera auto-calibration algorithm of claim 1 or 2.
4. A computer storage medium characterized by: the computer storage medium stores one or more computer programs, the one or more computer programs comprising instructions that, when executed, are capable of performing the three-dimensional feature point-based depth camera auto-calibration algorithm of claim 1 or 2.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211225481.7A CN115890654B (en) | 2022-10-09 | 2022-10-09 | Depth camera automatic calibration algorithm based on three-dimensional feature points |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211225481.7A CN115890654B (en) | 2022-10-09 | 2022-10-09 | Depth camera automatic calibration algorithm based on three-dimensional feature points |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN115890654A CN115890654A (en) | 2023-04-04 |
| CN115890654B true CN115890654B (en) | 2023-08-11 |
Family
ID=86471625
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202211225481.7A Active CN115890654B (en) | 2022-10-09 | 2022-10-09 | Depth camera automatic calibration algorithm based on three-dimensional feature points |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN115890654B (en) |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2005295495A (en) * | 2003-10-02 | 2005-10-20 | Kazuo Iwane | Camera vector computing device, shake component detecting device, image stabilizing device, position and orientation stabilizing device, target object lock-on device, and live-action object attribute calling device provided in this camera vector computing device |
| EP2608938A1 (en) * | 2010-08-27 | 2013-07-03 | ABB Research LTD | Vision-guided alignment system and method |
| CN108789404A (en) * | 2018-05-25 | 2018-11-13 | 哈尔滨工程大学 | A vision-based kinematic calibration method for serial manipulators |
| CN110288657A (en) * | 2019-05-23 | 2019-09-27 | 华中师范大学 | A 3D Registration Method for Augmented Reality Based on Kinect |
| WO2022052404A1 (en) * | 2020-09-09 | 2022-03-17 | 苏州浪潮智能科技有限公司 | Memory alignment and insertion method and system based on machine vision, device, and storage medium |
| CN114998624A (en) * | 2022-05-07 | 2022-09-02 | 北京微链道爱科技有限公司 | Image searching method and device |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP7048535B2 (en) * | 2019-04-01 | 2022-04-05 | ファナック株式会社 | Robot control device that calibrates error parameters of the mechanism controlling the robot |
- 2022-10-09: CN application CN202211225481.7A (patent CN115890654B), status Active
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP5393318B2 (en) | Position and orientation measurement method and apparatus | |
| CN112949478B (en) | Target detection method based on tripod head camera | |
| US5706419A (en) | Image capturing and processing apparatus and image capturing and processing method | |
| CN105308627B (en) | Method and system for calibrating camera | |
| US12073582B2 (en) | Method and apparatus for determining a three-dimensional position and pose of a fiducial marker | |
| CN111612794A (en) | High-precision 3D pose estimation method and system for parts based on multi-2D vision | |
| CN111524194A (en) | Positioning method and terminal for mutual fusion of laser radar and binocular vision | |
| CN114445506A (en) | Camera calibration processing method, device, equipment and storage medium | |
| CN110225321B (en) | Training sample data acquisition system and method for trapezoidal correction | |
| Martins et al. | Monocular camera calibration for autonomous driving—a comparative study | |
| CN111105467B (en) | Image calibration method and device and electronic equipment | |
| CN112971984B (en) | Coordinate registration method based on integrated surgical robot | |
| KR102016988B1 (en) | Camera pose vector calibration method for generating 3d information | |
| CN115082543B (en) | Laser correction method | |
| CN112419427A (en) | Methods for improving the accuracy of time-of-flight cameras | |
| CN115890654B (en) | Depth camera automatic calibration algorithm based on three-dimensional feature points | |
| CN119952724B (en) | Industrial robot intelligent control system and method based on machine vision | |
| CN116117800A (en) | Machine vision processing method for compensating height difference, electronic device and storage medium | |
| CN117830174A (en) | AR-HUD distortion calibration method | |
| Zhong et al. | CalQNet-detection of calibration quality for life-long stereo camera setups | |
| CN120279007B (en) | Circuit board component accurate positioning and mounting method and system based on visual guidance | |
| CN115546396B (en) | A three-dimensional reconstruction method, device and medium | |
| CN119625085B (en) | Calibration method and device for camera parameters and control equipment | |
| CN120480931B (en) | Automatic fluid loading and unloading arm dynamic flange butt joint method based on image vision servo | |
| CN120985657A (en) | AGV manipulator positioning method, device, terminal and storage medium based on 2D vision |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |