Disclosure of Invention
The invention aims to provide a visual experiment platform calibration method, a positioning method and related equipment, capable of effectively calibrating a split-module type visual experiment platform.
In a first aspect, the application provides a visual experiment platform calibration method, which is used for calibrating a split-module type visual experiment platform, wherein the visual experiment platform comprises a camera, a stage, an X-axis module, a Y-axis module and a Z-axis module; the camera is mounted on the Z-axis module, and the Z-axis module can drive the camera to reciprocate along the vertical direction; the Z-axis module is slidably arranged on the X-axis module, and the X-axis module can drive the Z-axis module to reciprocate along the transverse direction; the stage is independently arranged below the camera, the Y-axis module is fixedly connected with the stage, and the Y-axis module can drive the stage to reciprocate along the longitudinal direction; the visual experiment platform calibration method comprises the following steps:
s1, placing a workpiece on an objective table, wherein at least 4 mark points are arranged on one side of the workpiece facing a camera;
s2, selecting any one mark point as a first identification point;
s3, establishing a platform coordinate system according to the first identification point;
s4, taking other mark points except the first identification point as second identification points, and acquiring first platform coordinates of the second identification points in the platform coordinate system based on the platform coordinate system;
s5, acquiring a workpiece coordinate of the second identification point in a workpiece coordinate system;
s6, acquiring a conversion matrix between the workpiece coordinate system and the platform coordinate system according to the first platform coordinate and the workpiece coordinate.
By setting the mark points, establishing a platform coordinate system from them, and obtaining a conversion matrix between the platform coordinate system and the workpiece coordinate system, the split-module type visual experiment platform is effectively calibrated.
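As an illustrative sketch only (not part of the claimed method), once the first platform coordinates and the workpiece coordinates of the mark points are known, the conversion matrix of step S6 can be estimated by least squares; an affine model and the function names below are assumptions:

```python
import numpy as np

def estimate_transform(platform_pts, workpiece_pts):
    """Least-squares affine transform mapping workpiece coordinates to
    platform coordinates from >= 3 non-collinear point pairs (the method
    above provides at least 4 mark points: one origin plus 3 or more)."""
    P = np.asarray(platform_pts, dtype=float)   # N x 2 first platform coordinates
    W = np.asarray(workpiece_pts, dtype=float)  # N x 2 workpiece coordinates
    # Augment workpiece coords with a constant 1 so translation is estimated too.
    A = np.hstack([W, np.ones((len(W), 1))])    # N x 3
    # Solve A @ M = P for the 3 x 2 affine matrix M (rotation/scale + shift).
    M, *_ = np.linalg.lstsq(A, P, rcond=None)
    return M                                    # workpiece -> platform

def workpiece_to_platform(M, pt):
    """Apply the estimated conversion matrix to one workpiece coordinate."""
    x, y = pt
    return np.array([x, y, 1.0]) @ M
```

For a pure translation of (5, 3), for example, a workpiece point (10, 10) maps to the platform point (15, 13).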
Further, the specific steps in step S3 include:
s31, controlling the X-axis module and the Y-axis module to move, and acquiring a first image when the first identification point appears in the visual field of the camera;
s32, acquiring a first pixel distance between the image center of the first image and the first identification point; the first pixel distance comprises a distance in an X-axis direction and a distance in a Y-axis direction;
s33, calculating a first actual distance between the image center of the first image and the first identification point according to the first pixel distance; the first actual distance comprises a distance in an X-axis direction and a distance in a Y-axis direction;
s34, controlling the X-axis module and the Y-axis module according to the first actual distance, and acquiring a second image when the image center of the first image coincides with the first identification point;
and S35, establishing the platform coordinate system by taking the first identification point as an origin based on the second image.
The first pixel distance between the image center of the first image and the first identification point is used for automatic alignment, which is faster and more accurate than manual operation.
Further, the specific steps in step S4 include:
sequentially executing the following steps for each second identification point:
s41, controlling the X-axis module and the Y-axis module to move, and acquiring a third image when the second identification point appears in the visual field of the camera;
s42, acquiring a second pixel distance between the image center of the third image and the second identification point; the second pixel distance comprises a distance in an X-axis direction and a distance in a Y-axis direction;
s43, calculating a second actual distance between the image center of the third image and the second identification point according to the second pixel distance; the second actual distance comprises a distance in the X-axis direction and a distance in the Y-axis direction;
s44, controlling the X-axis module and the Y-axis module according to the second actual distance, and acquiring a fourth image when the image center of the third image is coincident with the second identification point;
s45, acquiring the first platform coordinate of the second identification point in the platform coordinate system based on the fourth image.
Likewise, the second pixel distance between the image center of the third image and the second identification point is used for automatic alignment, which is faster and more accurate.
Further, the specific steps in step S45 include:
acquiring a first numerical value of the X-axis module corresponding to the second image, wherein the first numerical value is an X-axis coordinate value of the objective table corresponding to the second image;
acquiring a second numerical value of the Y-axis module corresponding to the second image, wherein the second numerical value is a Y-axis coordinate value of the objective table corresponding to the second image;
acquiring a third numerical value of the X-axis module corresponding to each fourth image, wherein the third numerical value is an X-axis coordinate value of the objective table corresponding to the fourth image;
acquiring a fourth numerical value of the Y-axis module corresponding to each fourth image, wherein the fourth numerical value is a Y-axis coordinate value of the objective table corresponding to the fourth image;
obtaining an X-axis coordinate value of each second identification point in the platform coordinate system by calculating a difference value between the first numerical value and each third numerical value;
and obtaining the Y-axis coordinate value of each second identification point in the platform coordinate system by calculating the difference value between the second numerical value and each fourth numerical value.
This simple operation process occupies few system resources and places relatively low demands on the system's hardware.
Further, before step S33 and step S43, the method further includes the steps of:
A1. placing the calibration plate in the visual field of the camera to obtain a calibration image;
A2. acquiring a third actual distance between any two calibration points in the calibration plate; the third actual distance comprises a distance in the X-axis direction and a distance in the Y-axis direction;
A3. acquiring a third pixel distance between the two calibration points according to the calibration image; the third pixel distance comprises a distance in an X-axis direction and a distance in a Y-axis direction;
A4. calculating to obtain a unit pixel distance according to the third actual distance and the third pixel distance, wherein the unit pixel distance is an actual distance corresponding to a unit pixel interval; the unit pixel distance comprises a distance in an X-axis direction and a distance in a Y-axis direction;
the specific steps in step S33 include:
converting the first pixel distance to the first actual distance based on the unit pixel distance;
the specific steps in step S43 include:
converting the second pixel distance to the second actual distance based on the unit pixel distance.
In a second aspect, the present application further provides a positioning method applied to a split-module type visual experiment platform calibrated by the visual experiment platform calibration method, where the positioning method includes the following steps:
B1. controlling the X-axis module and the Y-axis module to move, and when a target point appears in the visual field of the camera, acquiring a fifth image and calculating a first moving distance of the X-axis module and a second moving distance of the Y-axis module;
B2. acquiring a fourth pixel distance between the image center of the fifth image and the target point; the fourth pixel distance comprises a distance in an X-axis direction and a distance in a Y-axis direction;
B3. converting the fourth pixel distance to a fourth actual distance based on the unit pixel distance; the fourth actual distance comprises a distance in the X-axis direction and a distance in the Y-axis direction;
B4. judging the quadrant of the target point in the fifth image;
B5. calculating a second platform coordinate of the target point in the platform coordinate system according to the first moving distance of the X-axis module, the second moving distance of the Y-axis module, the fourth actual distance and the quadrant of the target point in the fifth image;
the specific steps in step B5 include:
calculating the second platform coordinates according to the following formula:

x = a - L1 ± Δx
y = b - L2 ± Δy

wherein x is the X-axis coordinate value in the second platform coordinate; y is the Y-axis coordinate value in the second platform coordinate; a is the first numerical value; L1 is the first moving distance; Δx is the first distance value in the X-axis direction in the fourth actual distance, taken as positive or negative according to the quadrant of the target point in the fifth image; b is the second numerical value; L2 is the second moving distance; and Δy is the second distance value in the Y-axis direction in the fourth actual distance, taken as positive or negative according to the quadrant of the target point in the fifth image.
The operation process is simple, which helps the system compute results faster and reduces processing time, lowering the delay between input and output.
In a third aspect, the invention further provides a visual experiment platform calibration device, which is used for calibrating a split-module type visual experiment platform, wherein the visual experiment platform comprises a camera, a stage, an X-axis module, a Y-axis module and a Z-axis module; the camera is mounted on the Z-axis module, and the Z-axis module can drive the camera to reciprocate along the vertical direction; the Z-axis module is slidably arranged on the X-axis module, and the X-axis module can drive the Z-axis module to reciprocate along the transverse direction; the stage is independently arranged below the camera, the Y-axis module is fixedly connected with the stage, and the Y-axis module can drive the stage to reciprocate along the longitudinal direction; the visual experiment platform calibration device comprises:
the preparation module is used for placing a workpiece on the objective table, and at least 4 mark points are arranged on one side of the workpiece facing the camera;
the selection module is used for selecting any one of the mark points as a first identification point;
the construction module is used for establishing a platform coordinate system according to the first identification point;
the first acquisition module is used for taking the mark points except the first identification point as second identification points and acquiring first platform coordinates of the second identification points in the platform coordinate system based on the platform coordinate system;
the second acquisition module is used for acquiring the workpiece coordinates of the second identification point in a workpiece coordinate system;
and the third acquisition module is used for acquiring a conversion matrix between the workpiece coordinate system and the platform coordinate system according to the first platform coordinate and the workpiece coordinate.
The split-module type visual experiment platform can be effectively calibrated, overcoming the problem that such a platform cannot be calibrated with traditional calibration methods.
In a fourth aspect, the present invention further provides a positioning device, applied to a split-module type visual experiment platform calibrated by the visual experiment platform calibration method, where the positioning device includes:
the fourth acquisition module is used for controlling the X-axis module and the Y-axis module to move, and acquiring a fifth image and calculating a first moving distance of the X-axis module and a second moving distance of the Y-axis module when a target point appears in the visual field of the camera;
a fifth obtaining module, configured to obtain a fourth pixel distance between an image center of the fifth image and the target point; the fourth pixel distance comprises a distance in an X-axis direction and a distance in a Y-axis direction;
a conversion module for converting the fourth pixel distance to a fourth actual distance based on the unit pixel distance; the fourth actual distance comprises a distance in the X-axis direction and a distance in the Y-axis direction;
the judging module is used for judging the quadrant of the target point in the fifth image;
the calculation module is used for calculating a second platform coordinate of the target point under the platform coordinate system according to the first moving distance of the X-axis module, the second moving distance of the Y-axis module, the fourth actual distance and the quadrant of the target point in the fifth image;
the calculation module executes when calculating a second platform coordinate of the target point in the platform coordinate system according to the first movement distance of the X-axis module, the second movement distance of the Y-axis module, the fourth actual distance, and the quadrant of the target point in the fifth image:
calculating the second platform coordinates according to the following formula:

x = a - L1 ± Δx
y = b - L2 ± Δy

wherein x is the X-axis coordinate value in the second platform coordinate; y is the Y-axis coordinate value in the second platform coordinate; a is the first numerical value; L1 is the first moving distance; Δx is the first distance value in the X-axis direction in the fourth actual distance, taken as positive or negative according to the quadrant of the target point in the fifth image; b is the second numerical value; L2 is the second moving distance; and Δy is the second distance value in the Y-axis direction in the fourth actual distance, taken as positive or negative according to the quadrant of the target point in the fifth image.
The positioning process is simple to operate, facilitates rapid and accurate positioning, and places low demands on the equipment's hardware.
In a fifth aspect, the present invention provides an electronic device, including a processor and a memory, where the memory stores computer-readable instructions, and the computer-readable instructions, when executed by the processor, perform the steps in the visual experiment platform calibration method and/or the positioning method described above.
In a sixth aspect, the present invention provides a storage medium, on which a computer program is stored, wherein the computer program, when being executed by a processor, executes the steps of the above-mentioned visual experiment platform calibration method and/or positioning method.
Therefore, the calibration method applicable to the split-module type visual experiment platform effectively overcomes the technical blind spot that such a platform cannot be calibrated with the traditional nine-point calibration method; accurate and effective calibration helps ensure that the split-module type visual experiment platform can complete accurate visual positioning.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as presented in the figures, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not construed as indicating or implying relative importance.
It should be noted that, because camera imaging exhibits a certain distortion, in order to improve positioning accuracy the first image, the second image, the third image, the fourth image, the fifth image and the calibration image described below are subjected to perspective transformation to reduce errors caused by image distortion; perspective transformation is well known in the prior art, and the transformation process is not described further.
In the prior art, the X-axis module, the Y-axis module and the Z-axis module of a visual experiment platform are mostly arranged on a gantry to control the camera in the X-axis, Y-axis and Z-axis directions, and the stage is fixed below the gantry. After a workpiece is placed on the stage, the camera can be driven to any position over the workpiece by controlling the X-axis module, the Y-axis module and the Z-axis module, and the platform coordinates of any position on the workpiece can then be obtained by reading the values of the three modules. (Every visual experiment platform leaves the factory with a preset platform coordinate system; the values corresponding to its coordinate axes can be read from the encoders of the X-axis, Y-axis and Z-axis modules, so it can be understood as a physical coordinate system established by the three modules.) In practical application the workpiece is fixed on the stage, so the modules and the camera move relative to the workpiece, and the platform coordinates of any point on the workpiece can be obtained. On this basis, such a visual experiment platform can be calibrated with the existing nine-point calibration method, which relates the image coordinate system of the camera to the platform coordinate system.
However, in a split-module type visual experiment platform, only the X-axis module and the Z-axis module are arranged on the gantry; the Y-axis module is separately and fixedly connected to the stage and drives the stage along the Y-axis direction. On such a platform, when the camera is moved, only the X-axis and Z-axis coordinate values, in the platform coordinate system, of a position on the side of the workpiece facing the camera can be read from the encoders of the X-axis and Z-axis modules. The value read from the encoder of the Y-axis module is only the Y-axis coordinate of the stage in the platform coordinate system, not the Y-axis coordinate of the point on the workpiece corresponding to the current camera position. Thus only the X-axis and Z-axis coordinates of that point can be obtained, and because of this missing data the nine-point calibration method cannot be used to calibrate a split-module type visual experiment platform.
In some embodiments, referring to fig. 1, a method for calibrating a visual experiment platform is used for calibrating a split-module type visual experiment platform, the visual experiment platform includes a camera, an object stage, an X-axis module, a Y-axis module and a Z-axis module, the camera is mounted on the Z-axis module, and the Z-axis module can drive the camera to reciprocate along a vertical direction; the Z-axis module is arranged on the X-axis module in a sliding manner, and the X-axis module can drive the Z-axis module to reciprocate along the transverse direction; the object stage is independently arranged below the camera, the Y-axis module is fixedly connected with the object stage, and the Y-axis module can drive the object stage to reciprocate along the longitudinal direction; the visual experiment platform calibration method comprises the following steps:
s1, placing a workpiece on an objective table, wherein at least 4 mark points are arranged on one side of the workpiece facing a camera;
s2, selecting any one mark point as a first identification point;
s3, establishing a platform coordinate system according to the first identification point;
s4, taking other mark points except the first identification point as second identification points, and acquiring first platform coordinates of the second identification points in a platform coordinate system based on the platform coordinate system;
s5, acquiring a workpiece coordinate of the second identification point in a workpiece coordinate system;
and S6, acquiring a conversion matrix between the workpiece coordinate system and the platform coordinate system according to the first platform coordinate and the workpiece coordinate.
In this embodiment, at least 4 mark points need to be set in advance on the surface of the workpiece facing the camera for calibration (if one mark point is used as the reference point, calibration can be completed with only 3 additional sets of data, similar to the data requirements of the nine-point calibration method, which are not described again here).
It should be noted that the platform coordinate system established from the first identification point is actually a "relative" platform coordinate system reconstructed within the platform coordinate system preset by the visual experiment platform itself; the first identification point is itself a point in that preset coordinate system. Re-establishing a "relative" platform coordinate system with the first identification point as reference point does not affect subsequent coordinate acquisition and positioning control; it only means that "relative" coordinates are acquired and "relative" positioning control is performed with the first identification point as the new reference, instead of the origin of the preset platform coordinate system.
In certain embodiments, the specific steps in step S3 include:
s31, controlling the X-axis module and the Y-axis module to move, and acquiring a first image when a first identification point appears in the visual field of the camera;
s32, acquiring a first pixel distance between the image center of the first image and the first identification point; the first pixel distance comprises a distance in an X-axis direction and a distance in a Y-axis direction;
s33, calculating a first actual distance between the image center of the first image and the first identification point according to the first pixel distance; the first actual distance comprises a distance in the X-axis direction and a distance in the Y-axis direction;
s34, controlling an X-axis module and a Y-axis module according to the first actual distance, and acquiring a second image when the image center of the first image is coincident with the first identification point;
and S35, establishing a platform coordinate system by taking the first identification point as an origin on the basis of the second image.
In practical application, when the image center of the first image is brought into alignment with the first identification point during calibration, the X-axis, Y-axis and Z-axis modules could be moved manually; this embodiment instead uses the first pixel distance between the image center of the first image and the first identification point for automatic alignment, which is faster and more accurate than manual operation.
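The alignment loop of steps S31-S34 (and likewise S41-S44) can be sketched as follows. The axis and vision interfaces here are simulated stand-ins rather than the platform's actual drivers, and a unit pixel distance of 0.05 mm per pixel is assumed:

```python
UNIT_PX = (0.05, 0.05)  # assumed mm per pixel in X and Y (from the calibration plate)

class Axis:
    """Stand-in for an X- or Y-module; `pos` mimics the encoder reading (mm)."""
    def __init__(self):
        self.pos = 0.0
    def move(self, dist_mm):
        self.pos += dist_mm

def pixel_offset(mark_mm, axis, unit_mm_per_px):
    """Simulated vision step: pixel distance from the image centre to the mark."""
    return (mark_mm - axis.pos) / unit_mm_per_px

def auto_align(mark_x_mm, mark_y_mm, x_axis, y_axis, tol_px=0.5, max_iter=10):
    """Drive the modules until the identification point sits at the image centre."""
    for _ in range(max_iter):
        dx_px = pixel_offset(mark_x_mm, x_axis, UNIT_PX[0])
        dy_px = pixel_offset(mark_y_mm, y_axis, UNIT_PX[1])
        if abs(dx_px) <= tol_px and abs(dy_px) <= tol_px:
            return True  # image centre now coincides with the identification point
        # Steps S33/S43: convert the pixel distance to an actual distance, then move.
        x_axis.move(dx_px * UNIT_PX[0])
        y_axis.move(dy_px * UNIT_PX[1])
    return False
```

Closing the loop on the measured pixel offset, rather than issuing one open-loop move, also absorbs small residual scale errors over the iterations.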
In certain embodiments, the specific steps in step S4 include:
and sequentially executing the following steps for each second identification point:
s41, controlling the X-axis module and the Y-axis module to move, and acquiring a third image when a second identification point appears in the visual field of the camera;
s42, acquiring a second pixel distance between the image center of the third image and the second identification point; the second pixel distance comprises a distance in the X-axis direction and a distance in the Y-axis direction;
s43, calculating a second actual distance between the image center of the third image and the second identification point according to the second pixel distance; the second actual distance comprises a distance in the X-axis direction and a distance in the Y-axis direction;
s44, controlling the X-axis module and the Y-axis module according to the second actual distance, and acquiring a fourth image when the image center of the third image is coincident with the second identification point;
s45, acquiring first platform coordinates of the second identification point in a platform coordinate system based on the fourth image.
As in the above embodiment, the second pixel distance between the image center of the third image and the second identification point is used for automatic alignment, which is faster and more accurate.
In certain embodiments, the specific steps in step S45 include:
acquiring a first numerical value of the X-axis module corresponding to the second image, wherein the first numerical value is an X-axis coordinate value of the objective table corresponding to the second image;
acquiring a second numerical value of the Y-axis module corresponding to the second image, wherein the second numerical value is a Y-axis coordinate value of the objective table corresponding to the second image;
acquiring a third numerical value of the X-axis module corresponding to each fourth image, wherein the third numerical value is an X-axis coordinate value of the objective table corresponding to the fourth image;
acquiring a fourth numerical value of the Y-axis module corresponding to each fourth image, wherein the fourth numerical value is a Y-axis coordinate value of the objective table corresponding to the fourth image;
obtaining the X-axis coordinate value of each second identification point in the platform coordinate system by calculating the difference value between the first numerical value and each third numerical value;
and obtaining the Y-axis coordinate value of each second identification point in the platform coordinate system by calculating the difference value between the second numerical value and each fourth numerical value.
It should be noted that the first numerical value of the X-axis module corresponding to the second image refers to the value read from the X-axis module at the moment the visual experiment platform captures the second image; the second, third and fourth numerical values are defined in the same way and are not described again here.
The calculated difference between the first numerical value and each third numerical value reflects the distance between each second identification point and the first identification point along the X-axis direction, that is, the X-axis coordinate value of each second identification point in the platform coordinate system established with the first identification point as reference point. Similarly, the calculated difference between the second numerical value and each fourth numerical value reflects the distance along the Y-axis direction, that is, the Y-axis coordinate value of each second identification point in that coordinate system. The coordinates of each second identification point in the platform coordinate system established with the first identification point as reference point are thereby obtained.
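A minimal sketch of this difference calculation (the function name is hypothetical; values are encoder readings in millimetres):

```python
def platform_coordinates(first_value, second_value, readings):
    """Step S45: platform coordinates of each second identification point,
    taken as the difference between the origin readings captured for the
    second image (first_value, second_value) and the X/Y readings captured
    for each fourth image."""
    return [(first_value - x3, second_value - y4) for (x3, y4) in readings]
```

For origin readings (100.0, 50.0) and fourth-image readings (90.0, 40.0), the second identification point lies at (10.0, 10.0) in the platform coordinate system.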
In some embodiments, before step S33 and step S43, the method further comprises the steps of:
A1. placing the calibration plate in the visual field of a camera to obtain a calibration image;
A2. acquiring a third actual distance between any two calibration points in the calibration plate; the third actual distance comprises a distance in the X-axis direction and a distance in the Y-axis direction;
A3. acquiring a third pixel distance between the two calibration points according to the calibration image; the third pixel distance comprises a distance in the X-axis direction and a distance in the Y-axis direction;
A4. calculating to obtain a unit pixel distance according to the third actual distance and the third pixel distance, wherein the unit pixel distance is an actual distance corresponding to the unit pixel interval; the unit pixel distance comprises a distance in an X-axis direction and a distance in a Y-axis direction;
the specific steps in step S33 include:
converting the first pixel distance into a first actual distance based on the unit pixel distance;
the specific steps in step S43 include:
the second pixel distance is converted into a second actual distance based on the unit pixel distance.
In this embodiment, to ensure the accuracy of the automatic alignment process in the above embodiments, the actual distance corresponding to a unit pixel in the image acquired by the camera, that is, the unit pixel distance, is obtained in advance. With it, a pixel distance can be accurately converted into an actual distance; inputting the accurate actual distance into the X-axis, Y-axis and Z-axis modules allows the movement to be controlled precisely so that the image center is aligned with the identification point (the first identification point or the second identification point).
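Steps A1-A4 and the conversions in steps S33, S43 and B3 reduce to simple ratios. A minimal sketch with hypothetical names, assuming the true spacing of the two calibration points on the plate is known in millimetres:

```python
def unit_pixel_distance(actual_mm, pixels):
    """Steps A1-A4: mm per pixel in X and Y, from one pair of calibration
    points whose actual spacing (actual_mm) and pixel spacing (pixels)
    are both known."""
    (ax, ay), (px, py) = actual_mm, pixels
    return (ax / px, ay / py)

def pixels_to_mm(px_dist, unit):
    """Steps S33/S43/B3: convert a pixel distance into an actual distance."""
    return (px_dist[0] * unit[0], px_dist[1] * unit[1])
```

With a 10 mm spacing imaged as 200 pixels in X and 250 pixels in Y, the unit pixel distance is (0.05, 0.04) mm per pixel.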
Referring to fig. 2, fig. 2 is a schematic diagram of a positioning method applied to a visual experiment platform in a split module type calibrated by a visual experiment platform calibration method according to an embodiment of the present application, including the steps of:
B1. controlling the X-axis module and the Y-axis module to move, and, when a target point (the position point that the user needs to locate) appears in the visual field of the camera, acquiring a fifth image and calculating a first moving distance of the X-axis module and a second moving distance of the Y-axis module;
B2. acquiring a fourth pixel distance between the image center of the fifth image and the target point; the fourth pixel distance comprises a distance in the X-axis direction and a distance in the Y-axis direction;
B3. converting the fourth pixel distance to a fourth actual distance based on the unit pixel distance; the fourth actual distance comprises a distance in the X-axis direction and a distance in the Y-axis direction;
B4. judging the quadrant of the target point in the fifth image;
B5. and calculating a second platform coordinate of the target point in the platform coordinate system according to the first moving distance of the X-axis module, the second moving distance of the Y-axis module, the fourth actual distance and the quadrant of the target point in the fifth image.
In this embodiment, after the visual experiment platform has been calibrated as in the above embodiments and the unit pixel distance has been obtained, the platform can be applied to positioning control. During positioning control, it is only necessary for the target point to appear in the camera's field of view; the camera does not need to be moved until the image center coincides with the target point. In one practical application, a user obtains the workpiece coordinates of a target point from a drawing and inputs them into the visual experiment platform; the platform converts the input workpiece coordinates into the corresponding platform coordinates according to the conversion matrix obtained by calibration, and then controls the X-axis module, the Y-axis module and the Z-axis module according to the converted platform coordinates, so that the center of the image acquired by the camera is aligned with the target point, thereby completing positioning on the visual experiment platform. In another application, the user determines the platform coordinates of a target point on the visual experiment platform; the platform converts these platform coordinates into the corresponding workpiece coordinates according to the conversion matrix obtained by calibration and feeds the workpiece coordinates back to the user, who can then accurately locate the target point on the corresponding drawing, thereby completing positioning on the workpiece drawing.
It should be noted that the quadrants in the fifth image are distinguished with respect to the image center of the fifth image: the image center of the fifth image is taken as the origin, a cross mark is set at the origin, and the two line segments forming the cross mark are respectively parallel to the moving direction of the Y-axis module and the moving direction of the X-axis module.
In certain embodiments, the specific steps in step B5 include:
the second platform coordinates are calculated according to the following formula:

X = x1 - L1 + Δx

Y = y1 - L2 + Δy

wherein X is the X-axis coordinate value in the second platform coordinates; Y is the Y-axis coordinate value in the second platform coordinates; x1 is the first numerical value; L1 is the first movement distance; Δx is the first distance value in the X-axis direction in the fourth actual distance, and is taken as positive or negative according to the quadrant of the target point in the fifth image; y1 is the second numerical value; L2 is the second movement distance; and Δy is the second distance value in the Y-axis direction in the fourth actual distance, and is taken as positive or negative according to the quadrant of the target point in the fifth image.
It should be noted that Δx and Δy carry their own signs. The quadrant of the target point in the fifth image is determined in the above embodiment in order to decide whether Δx and Δy take positive or negative values, according to the following rule: if the target point falls in the first quadrant or the fourth quadrant, Δx takes a positive value, otherwise Δx takes a negative value; if the target point falls in the first quadrant or the second quadrant, Δy takes a positive value, otherwise Δy takes a negative value.
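Assuming the second platform coordinates combine the first/second numerical values, the movement distances and the signed image offsets in the same difference form used for the second identification points (X = x1 - L1 + Δx, Y = y1 - L2 + Δy; the exact form of the disclosure's formula is an assumption here), the quadrant sign rule and step B5 can be sketched in Python as:

```python
def signed_offsets(quadrant, abs_dx, abs_dy):
    """Quadrant rule: Δx is positive in quadrants I and IV (target right of
    center), Δy is positive in quadrants I and II (target above center)."""
    dx = abs_dx if quadrant in (1, 4) else -abs_dx
    dy = abs_dy if quadrant in (1, 2) else -abs_dy
    return dx, dy

def second_platform_coords(first_val, second_val, move_x, move_y,
                           quadrant, abs_dx, abs_dy):
    """Step B5 (assumed form): platform coordinates of the target point from
    the calibration-origin readings, the module travel, and the signed
    remaining image offset converted to actual distance."""
    dx, dy = signed_offsets(quadrant, abs_dx, abs_dy)
    return first_val - move_x + dx, second_val - move_y + dy
```

For example, a target point 2.0 mm and 1.0 mm from the image center in the third quadrant contributes offsets of -2.0 and -1.0 to the respective axes.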
Referring to fig. 3, fig. 3 is a schematic diagram of a visual experiment platform calibration apparatus for calibrating a split-module visual experiment platform according to some embodiments of the present application, where the visual experiment platform includes a camera, an object stage, an X-axis module, a Y-axis module, and a Z-axis module; the camera is mounted on the Z-axis module, and the Z-axis module can drive the camera to reciprocate along the vertical direction; the Z-axis module is arranged on the X-axis module in a sliding manner, and the X-axis module can drive the Z-axis module to reciprocate along the transverse direction; the object stage is arranged independently below the camera, the Y-axis module is fixedly connected with the object stage, and the Y-axis module can drive the object stage to reciprocate along the longitudinal direction. The visual experiment platform calibration apparatus is integrated, in the form of a computer program, in the back-end control equipment of the visual experiment platform, and includes:
a preparation module 100, configured to place a workpiece on the stage, where a side of the workpiece facing the camera is provided with at least 4 mark points;
a selecting module 200, configured to select any one of the mark points as a first identification point;
a building module 300, configured to build a platform coordinate system according to the first identification point;
a first obtaining module 400, configured to take other mark points except the first identification point as second identification points, and obtain, based on the platform coordinate system, first platform coordinates of the second identification points in the platform coordinate system;
a second obtaining module 500, configured to obtain workpiece coordinates of the second identification point in the workpiece coordinate system;
and a third obtaining module 600, configured to obtain a transformation matrix between the workpiece coordinate system and the platform coordinate system according to the first platform coordinate and the workpiece coordinate.
In certain embodiments, when establishing the platform coordinate system according to the first identification point, the building module 300 performs:
s31, controlling the X-axis module and the Y-axis module to move, and acquiring a first image when a first identification point appears in the visual field of the camera;
s32, acquiring a first pixel distance between the image center of the first image and the first identification point; the first pixel distance comprises a distance in an X-axis direction and a distance in a Y-axis direction;
s33, calculating a first actual distance between the image center of the first image and the first identification point according to the first pixel distance; the first actual distance comprises a distance in the X-axis direction and a distance in the Y-axis direction;
s34, controlling the X-axis module and the Y-axis module according to the first actual distance, and acquiring a second image when the image center of the first image is coincident with the first identification point;
and S35, establishing a platform coordinate system by taking the first identification point as an origin on the basis of the second image.
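Steps S31-S35 amount to an iterative centering loop followed by fixing the origin. The following sketch simulates that loop with an idealized camera model; the unit pixel distance value and all names are assumptions for illustration only:

```python
UNIT = (0.02, 0.02)  # assumed mm-per-pixel on each axis, from the A1-A4 calibration

def center_on_point(stage_xy, point_xy, unit=UNIT, tol_px=1.0):
    """Drive the X/Y modules until the identification point coincides with the
    image center (S31-S34); the final machine pose defines the origin (S35)."""
    def pixel_offset(stage):
        # idealized camera: pixel offset of the point from the image center
        return ((point_xy[0] - stage[0]) / unit[0],
                (point_xy[1] - stage[1]) / unit[1])

    x, y = stage_xy
    dx_px, dy_px = pixel_offset((x, y))
    while abs(dx_px) > tol_px or abs(dy_px) > tol_px:
        # S33: pixel offset -> actual distance; S34: move the modules by it
        x += dx_px * unit[0]
        y += dy_px * unit[1]
        dx_px, dy_px = pixel_offset((x, y))
    return x, y  # machine coordinates at which the platform origin is set

origin = center_on_point((0.0, 0.0), (5.0, 3.0))
```

In the idealized model the loop converges in one move; on real hardware the loop form absorbs small calibration and motion errors by re-measuring the residual pixel offset after each move.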
In some embodiments, when taking the mark points other than the first identification point as second identification points and obtaining, based on the platform coordinate system, the first platform coordinates of the second identification points in the platform coordinate system, the first obtaining module 400 performs:
and sequentially executing the following steps for each second identification point:
s41, controlling the X-axis module and the Y-axis module to move, and acquiring a third image when a second identification point appears in the visual field of the camera;
s42, acquiring a second pixel distance between the image center of the third image and the second identification point; the second pixel distance comprises a distance in the X-axis direction and a distance in the Y-axis direction;
s43, calculating a second actual distance between the image center of the third image and the second identification point according to the second pixel distance; the second actual distance comprises a distance in the X-axis direction and a distance in the Y-axis direction;
s44, controlling the X-axis module and the Y-axis module according to the second actual distance, and acquiring a fourth image when the image center of the third image is coincident with the second identification point;
s45, acquiring first platform coordinates of the second identification point in a platform coordinate system based on the fourth image.
In some embodiments, when obtaining the first platform coordinates of the second identification point in the platform coordinate system based on the fourth image, the first obtaining module 400 performs:
acquiring a first numerical value of the X-axis module corresponding to the second image, wherein the first numerical value is an X-axis coordinate value of the objective table corresponding to the second image;
acquiring a second numerical value of the Y-axis module corresponding to the second image, wherein the second numerical value is a Y-axis coordinate value of the objective table corresponding to the second image;
acquiring a third numerical value of the X-axis module corresponding to each fourth image, wherein the third numerical value is an X-axis coordinate value of the objective table corresponding to the fourth image;
acquiring a fourth numerical value of the Y-axis module corresponding to each fourth image, wherein the fourth numerical value is a Y-axis coordinate value of the objective table corresponding to the fourth image;
obtaining the X-axis coordinate value of each second identification point under the platform coordinate system by calculating the difference value between the first numerical value and each third numerical value;
and obtaining the Y-axis coordinate value of each second identification point in the platform coordinate system by calculating the difference value between the second numerical value and each fourth numerical value.
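The difference rule above can be written as a one-liner; each reading is the machine coordinate value of the objective table on the corresponding axis (names and example values are illustrative):

```python
def platform_coords(first_value, second_value, third_value, fourth_value):
    """Platform coordinates of a second identification point: origin-pose
    reading minus point-pose reading, per axis."""
    return first_value - third_value, second_value - fourth_value

# Example: origin captured at machine position (100.0, 50.0) mm, and a second
# identification point centered at machine position (80.0, 45.0) mm
px, py = platform_coords(100.0, 50.0, 80.0, 45.0)  # -> (20.0, 5.0)
```

Because both readings come from the same axis encoders, the subtraction cancels the arbitrary machine zero, leaving coordinates relative to the first identification point as the text describes.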
In some embodiments, before the construction module 300 calculates the first actual distance between the image center of the first image and the first identification point according to the first pixel distance (the first actual distance comprising the distance in the X-axis direction and the distance in the Y-axis direction), and before the first obtaining module 400 calculates the second actual distance between the image center of the third image and the second identification point according to the second pixel distance (the second actual distance comprising the distance in the X-axis direction and the distance in the Y-axis direction), the following steps are performed:
A1. placing the calibration plate in the visual field of a camera to obtain a calibration image;
A2. acquiring a third actual distance between any two calibration points in the calibration plate; the third actual distance comprises a distance in the X-axis direction and a distance in the Y-axis direction;
A3. acquiring a third pixel distance between the two calibration points according to the calibration image; the third pixel distance comprises a distance in the X-axis direction and a distance in the Y-axis direction;
A4. calculating to obtain a unit pixel distance according to the third actual distance and the third pixel distance, wherein the unit pixel distance is an actual distance corresponding to the unit pixel interval; the unit pixel distance comprises a distance in an X-axis direction and a distance in a Y-axis direction;
When calculating the first actual distance between the image center of the first image and the first identification point according to the first pixel distance (the first actual distance comprising the distance in the X-axis direction and the distance in the Y-axis direction), the construction module 300 performs:
converting the first pixel distance into a first actual distance based on the unit pixel distance;
When calculating the second actual distance between the image center of the third image and the second identification point according to the second pixel distance (the second actual distance comprising the distance in the X-axis direction and the distance in the Y-axis direction), the first obtaining module 400 performs:
the second pixel distance is converted into a second actual distance based on the unit pixel distance.
Referring to fig. 4, fig. 4 is a positioning apparatus in some embodiments of the present application, applied to a split-module visual experiment platform calibrated by the visual experiment platform calibration method. The positioning apparatus is integrated, in the form of a computer program, in the back-end control equipment of the visual experiment platform, the positioning apparatus including:
a fourth obtaining module 700, configured to control the X-axis module and the Y-axis module to move, and when a target point appears in the field of view of the camera, obtain a fifth image and calculate a first moving distance of the X-axis module and a second moving distance of the Y-axis module;
a fifth obtaining module 800, configured to obtain a fourth pixel distance between the image center of the fifth image and the target point; the fourth pixel distance comprises a distance in the X-axis direction and a distance in the Y-axis direction;
a conversion module 900 for converting the fourth pixel distance into a fourth actual distance based on the unit pixel distance; the fourth actual distance comprises a distance in the X-axis direction and a distance in the Y-axis direction;
the judging module 1000 is configured to judge a quadrant of the target point in the fifth image;
the calculating module 1100 is configured to calculate a second platform coordinate of the target point in the platform coordinate system according to the first moving distance of the X-axis module, the second moving distance of the Y-axis module, the fourth actual distance, and a quadrant of the target point in the fifth image;
when calculating the second platform coordinate of the target point in the platform coordinate system according to the first movement distance of the X-axis module, the second movement distance of the Y-axis module, the fourth actual distance, and the quadrant of the target point in the fifth image, the calculation module 1100 performs:
the second platform coordinates are calculated according to the following formula:

X = x1 - L1 + Δx

Y = y1 - L2 + Δy

wherein X is the X-axis coordinate value in the second platform coordinates; Y is the Y-axis coordinate value in the second platform coordinates; x1 is the first numerical value; L1 is the first movement distance; Δx is the first distance value in the X-axis direction in the fourth actual distance, and is taken as positive or negative according to the quadrant of the target point in the fifth image; y1 is the second numerical value; L2 is the second movement distance; and Δy is the second distance value in the Y-axis direction in the fourth actual distance, and is taken as positive or negative according to the quadrant of the target point in the fifth image.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device comprises a processor 1301 and a memory 1302, the processor 1301 and the memory 1302 being interconnected and communicating with each other through a communication bus 1303 and/or another connection mechanism (not shown). The memory 1302 stores a computer program executable by the processor 1301; when the electronic device runs, the processor 1301 executes the computer program to perform the visual experiment platform calibration method in any optional implementation of the embodiment of the first aspect, so as to implement the following functions: placing a workpiece on the objective table, wherein at least 4 mark points are arranged on the side of the workpiece facing the camera; selecting any one mark point as a first identification point; establishing a platform coordinate system according to the first identification point; taking the mark points other than the first identification point as second identification points, and acquiring first platform coordinates of the second identification points in the platform coordinate system based on the platform coordinate system; acquiring the workpiece coordinates of the second identification points in the workpiece coordinate system; and acquiring a conversion matrix between the workpiece coordinate system and the platform coordinate system according to the first platform coordinates and the workpiece coordinates.
And/or the processor 1301 executes the computer program to perform the positioning method in any optional implementation of the embodiment of the second aspect, so as to implement the following functions: controlling the X-axis module and the Y-axis module to move, and, when a target point appears in the visual field of the camera, acquiring a fifth image and calculating a first moving distance of the X-axis module and a second moving distance of the Y-axis module; acquiring a fourth pixel distance between the image center of the fifth image and the target point, the fourth pixel distance comprising a distance in the X-axis direction and a distance in the Y-axis direction; converting the fourth pixel distance into a fourth actual distance based on the unit pixel distance, the fourth actual distance comprising a distance in the X-axis direction and a distance in the Y-axis direction; judging the quadrant of the target point in the fifth image; and calculating a second platform coordinate of the target point in the platform coordinate system according to the first moving distance of the X-axis module, the second moving distance of the Y-axis module, the fourth actual distance and the quadrant of the target point in the fifth image. The second platform coordinates are calculated according to the following formula:
X = x1 - L1 + Δx

Y = y1 - L2 + Δy

wherein X is the X-axis coordinate value in the second platform coordinates; Y is the Y-axis coordinate value in the second platform coordinates; x1 is the first numerical value; L1 is the first movement distance; Δx is the first distance value in the X-axis direction in the fourth actual distance, and is taken as positive or negative according to the quadrant of the target point in the fifth image; y1 is the second numerical value; L2 is the second movement distance; and Δy is the second distance value in the Y-axis direction in the fourth actual distance, and is taken as positive or negative according to the quadrant of the target point in the fifth image.
An embodiment of the present application provides a storage medium, where a computer program is stored, and when the computer program is executed by a processor, the method for calibrating a visual experiment platform in any optional implementation manner of the embodiment of the first aspect is executed, so as to implement the following functions: placing a workpiece on an objective table, wherein at least 4 mark points are arranged on one side of the workpiece facing a camera; selecting any one mark point as a first identification point; establishing a platform coordinate system according to the first identification point; taking other mark points except the first identification point as second identification points, and acquiring first platform coordinates of the second identification points in a platform coordinate system based on the platform coordinate system; acquiring the workpiece coordinates of the second identification point in a workpiece coordinate system; and acquiring a conversion matrix between the workpiece coordinate system and the platform coordinate system according to the first platform coordinate and the workpiece coordinate.
And/or performing the positioning method in any optional implementation manner of the embodiment of the second aspect to implement the following functions: controlling the X-axis module and the Y-axis module to move, and acquiring a fifth image and calculating a first moving distance of the X-axis module and a second moving distance of the Y-axis module when a target point appears in the visual field of the camera; acquiring a fourth pixel distance between the image center of the fifth image and the target point; the fourth pixel distance comprises a distance in the X-axis direction and a distance in the Y-axis direction; converting the fourth pixel distance to a fourth actual distance based on the unit pixel distance; the fourth actual distance comprises a distance in the X-axis direction and a distance in the Y-axis direction; judging the quadrant of the target point in the fifth image; calculating a second platform coordinate of the target point in the platform coordinate system according to the first moving distance of the X-axis module, the second moving distance of the Y-axis module, the fourth actual distance and the quadrant of the target point in the fifth image; the second platform coordinates are calculated according to the following formula:
X = x1 - L1 + Δx

Y = y1 - L2 + Δy

wherein X is the X-axis coordinate value in the second platform coordinates; Y is the Y-axis coordinate value in the second platform coordinates; x1 is the first numerical value; L1 is the first movement distance; Δx is the first distance value in the X-axis direction in the fourth actual distance, and is taken as positive or negative according to the quadrant of the target point in the fifth image; y1 is the second numerical value; L2 is the second movement distance; and Δy is the second distance value in the Y-axis direction in the fourth actual distance, and is taken as positive or negative according to the quadrant of the target point in the fifth image.
The storage medium may be implemented by any type of volatile or nonvolatile storage device or combination thereof, such as a Static Random Access Memory (SRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), a magnetic Memory, a flash Memory, a magnetic disk, or an optical disk.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of the units is merely a division by logical function, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be electrical, mechanical or in another form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.