CN118411704A - Mobile robot control method, mobile robot and storage medium - Google Patents
Mobile robot control method, mobile robot and storage medium
- Publication number
- CN118411704A CN202410463505.5A CN202410463505A
- Authority
- CN
- China
- Prior art keywords
- image
- area
- function
- parameter
- mobile robot
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/58—Extraction of image or video features relating to hyperspectral data
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/88—Image or video recognition using optical means, e.g. reference filters, holographic masks, frequency domain filters or spatial domain filters
Description
Technical Field
This application relates to the field of mobile robot technology, and in particular to a mobile robot control method, a mobile robot, and a computer-readable storage medium.
Background
In current related technologies, the obstacle avoidance methods used by mobile robots while performing cleaning tasks generally include binocular-vision obstacle avoidance and structured-light obstacle avoidance. Each method has pronounced strengths and weaknesses. Binocular vision struggles to obtain accurate depth information for target objects with little or no texture (such as mirrors or flat white walls), while structured light loses image detail when computing obstacle depth because infrared images are single-channel.
Summary of the Invention
Embodiments of this application provide a mobile robot control method, a mobile robot, and a computer-readable storage medium.
The mobile robot control method in embodiments of this application includes the following steps:
acquiring a first image of a target object;
performing a first spectral transformation on the first image to determine a second image corresponding to the first image;
acquiring a third image of the target object;
performing a second spectral transformation on the third image to determine a fourth image corresponding to the third image, where the first image and the third image differ in spectral type, the first image and the fourth image share the same spectral type, and the second image and the third image share the same spectral type;
determining obstacle avoidance reference information according to the first image and the fourth image, and the second image and the third image;
controlling the mobile robot to avoid the target object according to the obstacle avoidance reference information.
In this way, starting from images of the same target object in two different spectral types, the mobile robot in embodiments of this application can obtain two pairs of images of matching spectral types through spectral transformation, and from these four images determine obstacle avoidance reference information with high accuracy and high reliability. This avoids both the loss of accuracy that occurs when obstacle avoidance relies on images of a single spectral type and the incomplete data that results from simply relying on two images of different spectral types.
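Purely as an illustration (the patent publishes no source code), the following Python sketch shows how the six steps above could be orchestrated; the helpers spectral_transform, compute_depth, fuse_depths, and robot.avoid are hypothetical placeholders for the operations detailed later in the description.

```python
import numpy as np

def avoidance_pipeline(robot, ir_image: np.ndarray, rgb_image: np.ndarray,
                       spectral_transform, compute_depth, fuse_depths):
    """Hypothetical sketch of the claimed method, steps 01-06."""
    # Steps 01/03: ir_image is the first image, rgb_image the third image.
    # Step 02: first spectral transformation, IR -> pseudo-RGB (second image).
    rgb_from_ir = spectral_transform(ir_image, target="rgb")
    # Step 04: second spectral transformation, RGB -> pseudo-IR (fourth image).
    ir_from_rgb = spectral_transform(rgb_image, target="ir")
    # Step 05: pair images of the same spectral type, estimate depth twice,
    # then fuse the two estimates into one obstacle avoidance reference.
    depth_ir = compute_depth(ir_image, ir_from_rgb)    # first + fourth images
    depth_rgb = compute_depth(rgb_from_ir, rgb_image)  # second + third images
    reference = fuse_depths(depth_ir, depth_rgb)
    # Step 06: steer the robot around the target object.
    robot.avoid(reference)
```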
In some embodiments, the mobile robot control method further includes:
emitting a reference laser toward the target object from a laser device mounted on the mobile robot, where the first image and the fourth image are both formed by the reference laser reflected from the target object.
In this way, the mobile robot in embodiments of this application can also use its own laser device to provide a laser light source for the target object, so that the images it acquires can compensate for missing texture information under a specific spectral type.
In some embodiments, the first image is an infrared image and the third image is a visible light image.
Performing the first spectral transformation on the first image to determine the second image corresponding to the first image includes:
determining a first function according to a first region on the first image and a second region on the third image corresponding to the first region, where the first function includes a first parameter, a second parameter, and a third parameter corresponding to the three color dimensions of a visible light image;
determining a second function according to a third region on the first image, the first function, and a fourth region on the third image corresponding to the third region, where the second function includes a fourth parameter, a fifth parameter, and a sixth parameter corresponding to the three color dimensions of a visible light image;
determining the second image according to the first function, the second function, and the first image.
In this way, from the two acquired images of different spectral types, embodiments of this application can determine two corresponding functions and use them to produce an image whose spectral type matches one of the acquired images, preparing the data needed to determine the obstacle avoidance reference information.
In some embodiments, determining the first function according to the first region on the first image and the second region on the third image corresponding to the first region includes:
determining the first parameter, the second parameter, and the third parameter according to the pixel values of the first region and the pixel values of the second region, where the first region and the second region have the same image features.
In this way, embodiments of this application also provide a specific way of determining the parameters of the first function.
In some embodiments, determining the second function according to the third region on the first image, the first function, and the fourth region on the third image corresponding to the third region includes:
determining a first comparison image according to the third region and the first function, where the first comparison image is a visible light image;
determining the fourth parameter, the fifth parameter, and the sixth parameter according to the difference between the first comparison image and the fourth region.
In this way, embodiments of this application also provide a specific way of determining the parameters of the second function.
In some embodiments, the first image is an infrared image and the third image is a visible light image.
Performing the second spectral transformation on the third image to determine the fourth image corresponding to the third image includes:
determining a third function according to a fifth region on the third image and a sixth region on the first image corresponding to the fifth region, where the third function includes a seventh parameter, an eighth parameter, and a ninth parameter corresponding to the three color dimensions of a visible light image;
determining a fourth function according to a seventh region on the third image, the third function, and an eighth region on the first image corresponding to the seventh region, where the fourth function includes a tenth parameter, an eleventh parameter, and a twelfth parameter corresponding to the three color dimensions of a visible light image;
determining the fourth image according to the third function, the fourth function, and the third image.
In this way, from the two acquired images of different spectral types, embodiments of this application can determine two corresponding functions and use them to produce an image whose spectral type matches one of the acquired images, preparing the data needed to determine the obstacle avoidance reference information.
In some embodiments, determining the third function according to the fifth region on the third image and the sixth region on the first image corresponding to the fifth region includes:
determining the seventh parameter, the eighth parameter, and the ninth parameter according to the pixel values of the fifth region and the pixel values of the sixth region, where the fifth region and the sixth region have the same image features.
In this way, embodiments of this application also provide a specific way of determining the parameters of the third function.
In some embodiments, determining the fourth function according to the seventh region on the third image, the third function, and the eighth region on the first image corresponding to the seventh region includes:
determining a second comparison image according to the seventh region and the third function, where the second comparison image is an infrared image;
determining the tenth parameter, the eleventh parameter, and the twelfth parameter according to the difference between the second comparison image and the eighth region.
In this way, embodiments of this application also provide a specific way of determining the parameters of the fourth function.
In some embodiments, determining the obstacle avoidance reference information according to the first image and the fourth image, and the second image and the third image, includes:
determining first depth information according to the first image and the fourth image;
determining second depth information according to the second image and the third image;
determining the obstacle avoidance reference information according to the first depth information and the second depth information.
In this way, embodiments of this application can group the acquired and derived images into pairs by spectral type, determine depth information for the target object separately from each pair, and finally determine the obstacle avoidance reference information from the two sets of depth information, improving its accuracy.
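The patent describes the per-pair depth step only as stereo matching in general terms. As one concrete possibility (an assumption, not the patent's named algorithm), OpenCV's semi-global block matcher can turn one same-spectral-type pair into a depth map:

```python
import cv2
import numpy as np

def compute_depth(left: np.ndarray, right: np.ndarray,
                  focal_px: float, baseline_m: float) -> np.ndarray:
    """Estimate depth from one same-spectral-type image pair.

    SGBM stands in for the unspecified stereo matching step; the focal
    length (pixels) and baseline (meters) come from camera calibration.
    """
    gray_l = left if left.ndim == 2 else cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
    gray_r = right if right.ndim == 2 else cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64,
                                    blockSize=5)
    disparity = matcher.compute(gray_l, gray_r).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan  # mask invalid matches
    return focal_px * baseline_m / disparity  # Z = f * B / d
```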
In some embodiments, determining the obstacle avoidance reference information according to the first depth information and the second depth information includes:
determining first loss information according to the first depth information and a second function, where the second function is determined according to a third region on the first image, a first function, and a fourth region on the third image corresponding to the third region, and the first function is determined according to a first region on the first image and a second region on the third image corresponding to the first region;
determining second loss information according to the second depth information and a fourth function, where the fourth function is determined according to a seventh region on the third image, a third function, and an eighth region on the first image corresponding to the seventh region, and the third function is determined according to a fifth region on the third image and a sixth region on the first image corresponding to the fifth region;
when the difference between the first loss information and the second loss information lies within a preset threshold range, determining the average of the first loss information and the second loss information as the obstacle avoidance reference information; or
when the difference between the first loss information and the second loss information lies outside the preset threshold range, determining whichever of the first loss information and the second loss information corresponds to the larger depth information as the obstacle avoidance reference information.
In this way, embodiments of this application also provide a concrete way of determining the obstacle avoidance reference information from the two sets of depth information.
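Under one possible reading of this decision rule (the claim language is compact, so treating each loss value as paired with its depth estimate is an assumption), the fusion could look like the sketch below; the threshold value is an arbitrary placeholder.

```python
import numpy as np

def fuse_depths(depth_ir: np.ndarray, depth_rgb: np.ndarray,
                loss_ir: float, loss_rgb: float,
                threshold: float = 0.1) -> np.ndarray:
    """Sketch of the claimed fusion rule; the threshold is assumed."""
    if abs(loss_ir - loss_rgb) <= threshold:
        # The two estimates agree: average them.
        return (depth_ir + depth_rgb) / 2.0
    # They disagree: keep the estimate whose depth information is larger.
    if np.nanmean(depth_ir) >= np.nanmean(depth_rgb):
        return depth_ir
    return depth_rgb
```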
The mobile robot in embodiments of this application includes a visible light camera, an infrared camera, and a laser device. The mobile robot further includes a memory and a processor; the memory stores a computer program which, when executed by the processor, implements the method described above.
The computer-readable storage medium in embodiments of this application stores a computer program which, when executed by one or more processors, implements the method described above.
Additional aspects and advantages of embodiments of this application will be given in part in the following description, will in part become apparent from that description, or will be learned through practice of the embodiments of this application.
Brief Description of the Drawings
The above and/or additional aspects and advantages of this application will become apparent and easy to understand from the following description of embodiments in conjunction with the accompanying drawings, in which:
FIG. 1 is the first schematic flowchart of a mobile robot control method in an embodiment of this application;
FIG. 2 is the second schematic flowchart of a mobile robot control method in an embodiment of this application;
FIG. 3 is the first schematic diagram of an application scenario of a mobile robot control method in an embodiment of this application;
FIG. 4 is the second schematic diagram of an application scenario of a mobile robot control method in an embodiment of this application;
FIG. 5 is the third schematic flowchart of a mobile robot control method in an embodiment of this application;
FIG. 6 is the fourth schematic flowchart of a mobile robot control method in an embodiment of this application;
FIG. 7 is the fifth schematic flowchart of a mobile robot control method in an embodiment of this application;
FIG. 8 is the sixth schematic flowchart of a mobile robot control method in an embodiment of this application;
FIG. 9 is the third schematic diagram of an application scenario of a mobile robot control method in an embodiment of this application;
FIG. 10 is the seventh schematic flowchart of a mobile robot control method in an embodiment of this application;
FIG. 11 is a schematic structural diagram of a mobile robot in an embodiment of this application.
Detailed Description
Embodiments of this application are described in detail below, with examples shown in the accompanying drawings, where identical or similar reference numerals throughout denote identical or similar elements or elements with identical or similar functions. The embodiments described below with reference to the drawings are exemplary; they serve only to explain the embodiments of this application and must not be understood as limiting them.
Referring to FIG. 1, the mobile robot control method in embodiments of this application specifically includes the following steps:
01: acquire a first image of the target object;
02: perform a first spectral transformation on the first image to determine a second image corresponding to the first image;
03: acquire a third image of the target object;
04: perform a second spectral transformation on the third image to determine a fourth image corresponding to the third image,
where the first image and the third image differ in spectral type, the first image and the fourth image share the same spectral type, and the second image and the third image share the same spectral type;
05: determine obstacle avoidance reference information according to the first image and the fourth image, and the second image and the third image;
06: control the mobile robot to avoid the target object according to the obstacle avoidance reference information.
The mobile robot control device in embodiments of this application can implement the above mobile robot control method. Specifically, the control device includes an image acquisition module, a spectral transformation module, an information determination module, and a motion control module. The image acquisition module is configured to acquire the first image and the third image of the target object; the spectral transformation module is configured to perform the first spectral transformation on the first image to determine the corresponding second image, and to perform the second spectral transformation on the third image to determine the corresponding fourth image; the information determination module is configured to determine the obstacle avoidance reference information according to the first image and the fourth image, and the second image and the third image; and the motion control module is configured to control the mobile robot to avoid the target object according to the obstacle avoidance reference information.
The mobile robot in embodiments of this application includes a visible light camera and an infrared camera, which can acquire the first image and the third image of the target object respectively. The mobile robot further includes a memory and a processor and can implement the above mobile robot control method: the memory stores a computer program, and the processor is configured to acquire the first image of the target object; perform the first spectral transformation on the first image to determine the corresponding second image; acquire the third image of the target object; perform the second spectral transformation on the third image to determine the corresponding fourth image; determine the obstacle avoidance reference information according to the first image and the fourth image, and the second image and the third image; and control the mobile robot to avoid the target object according to the obstacle avoidance reference information.
Specifically, current obstacle avoidance solutions for mobile robots include binocular-vision obstacle avoidance, structured-light obstacle avoidance, and infrared obstacle avoidance.
Binocular-vision obstacle avoidance mounts two cameras of identical specification on the same horizontal line (for example two RGB cameras or two IR cameras) and captures images of the same target object at the same moment. From the two images, a stereo matching algorithm recovers depth information, and the robot's motion is controlled according to that depth information to avoid obstacles.
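For reference, under the standard pinhole stereo model (general stereo geometry, not something specific to this patent), depth follows from disparity as

$$Z = \frac{f \cdot B}{d},$$

where $Z$ is the depth of the scene point, $f$ the focal length in pixels, $B$ the baseline between the two cameras, and $d$ the disparity between matched pixels; larger disparities therefore correspond to closer objects.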
Infrared structured-light obstacle avoidance, in turn, actively emits an infrared light source and captures the pattern with an infrared camera. The reflected pattern deforms differently depending on the distance to the object, so depth values can be obtained from the per-pixel deformation of the captured pattern.
Infrared structured-light and binocular solutions each have their advantages. Binocular vision can obtain higher accuracy and richer depth information, but it is a passive receiver: in environments without texture information (such as mirrors or flat white walls), binocular vision fails.
Infrared structured-light obstacle avoidance, by contrast, generally relies on active laser emission and a single infrared camera receiving the image, so it can cover texture-free scenes. In the structured-light schemes currently used by mobile robots, however, the laser is mostly emitted as active single-line or dual-line beams, and depth recovery is usually limited to the single-line case. Constrained by the single-channel nature of infrared images, the depth information obtained from infrared structured light is not as rich as that obtained from RGB images via binocular vision.
To improve both the reliability of the depth determination process and the accuracy of the resulting depth information, this application proposes a mobile robot control method.
First, using a pair of on-board cameras of different spectral types, the mobile robot acquires two images of the same target object (corresponding to the first image and the third image), each with the same spectral type as the camera that captured it. For example, with an on-board visible light camera and infrared camera, the visible light camera acquires an RGB image and the infrared camera acquires an IR image.
Then, from each of the first and third images, an image of the other spectral type is derived. To improve the generated depth information for the target object, each derived image must match the spectral type of the other captured image: the image generated from the first image has the same spectral type as the third image, and the image generated from the third image has the same spectral type as the first image. Continuing the example, an IR image whose content corresponds to the acquired RGB image is derived from it, and an RGB image whose content corresponds to the acquired IR image is derived from it.
Next, the four images are grouped into pairs of identical spectral type, and the target object's depth information (corresponding to the obstacle avoidance reference information) is determined by computing per pair and then integrating the data. The mobile robot is then controlled to assess the current situation between itself and the target object according to that depth information, assisting obstacle avoidance. This scheme retains both the higher depth accuracy of RGB images and the higher texture reliability of IR images, while computing depth redundantly from two different spectral types and integrating the results, effectively improving the accuracy and reliability of the obstacle avoidance information.
Note that the order of acquiring the first and third images can be swapped or the acquisitions can run simultaneously, depending on the actual situation. Likewise, deriving the second and fourth images only needs to happen after the corresponding source image has been acquired; the specific execution order can be adjusted or parallelized. The order described in this embodiment is only an example, not a limitation.
In this way, starting from images of the same target object in two different spectral types, the mobile robot in embodiments of this application can obtain two pairs of images of matching spectral types through spectral transformation, and from these four images determine obstacle avoidance reference information with high accuracy and high reliability, avoiding both the loss of accuracy of single-spectrum obstacle avoidance and the incomplete data of naive two-spectrum obstacle avoidance.
In some embodiments, the mobile robot control method further includes:
emitting a reference laser toward the target object from a laser device mounted on the mobile robot,
where the first image and the fourth image are both formed by the reference laser reflected from the target object.
In some embodiments, the mobile robot control device further includes a laser emission module, specifically configured to emit the reference laser toward the target object from the laser device mounted on the mobile robot.
In some embodiments, the mobile robot further includes a laser device capable of emitting the reference laser toward the target object. The processor is further configured to emit the reference laser toward the target object via that laser device.
Specifically, as a matter of common knowledge, when no active light source is used and images are acquired under ambient light alone, neither a traditional infrared binocular pair nor an RGB binocular pair can obtain valid disparity information in texture-free scenes, and therefore cannot obtain object depth information. For example, when a visible light camera captures an RGB image of a target object with few texture features, valid disparity information may not be obtainable from the RGB image. In scenes with missing texture, such as a mirror or a white wall, determining depth from the RGB image alone produces large errors because of the missing texture information.
In addition, when ambient light is weak, the target object generally needs fill light to guarantee image quality; but in some scenarios, filling with visible white light floods the environment with light and disturbs the user's normal life.
Considering the imaging quality of images of different spectral types, and to minimize disturbance to the user's normal life, in some examples a laser device can be mounted on the mobile robot and used to emit a reference laser toward the target object as fill light. In general, the spectral type of the reference laser matches that of one of the robot's cameras. For example, with an infrared reference laser, the texture information in the robot's images can be captured in the infrared image formed by the target object reflecting the reference laser. This both avoids the large errors that arise when depth is determined from RGB images of poorly textured objects and reduces disturbance to the user by filling with non-visible light.
In this way, the mobile robot in embodiments of this application can also provide a laser light source for the target object through its own laser device, so that the images it acquires can compensate for missing texture information under a specific spectral type.
In some embodiments, the first image is an infrared image and the third image is a visible light image.
Referring to FIG. 2, on this basis, step 02 includes:
021: determine a first function according to a first region on the first image and a second region on the third image corresponding to the first region,
where the first function includes a first parameter, a second parameter, and a third parameter corresponding to the three color dimensions of a visible light image;
022: determine a second function according to a third region on the first image, the first function, and a fourth region on the third image corresponding to the third region,
where the second function includes a fourth parameter, a fifth parameter, and a sixth parameter corresponding to the three color dimensions of a visible light image;
023: determine the second image according to the first function, the second function, and the first image.
In some embodiments, the spectral transformation module is further configured to determine the first function according to the first region on the first image and the second region on the third image corresponding to the first region; to determine the second function according to the third region on the first image, the first function, and the fourth region on the third image corresponding to the third region; and to determine the second image according to the first function, the second function, and the first image.
In some embodiments, the processor is further configured to determine the first function according to the first region on the first image and the second region on the third image corresponding to the first region; to determine the second function according to the third region on the first image, the first function, and the fourth region on the third image corresponding to the third region; and to determine the second image according to the first function, the second function, and the first image.
Specifically, the process of deriving, from an acquired image, an image of a different spectral type is described below. For convenience, in the following description of specific embodiments, assume the first image is an infrared (IR) image and the third image is a visible light (RGB) image. The actual spectral types of the first and third images can be adjusted to the situation; this application does not limit them, and the IR and RGB images serve only as an example.
Referring to FIG. 3, the mobile robot carries an infrared camera, an RGB camera, and a laser device. The robot emits a reference laser toward the target object through the laser device as fill light, and the infrared camera and RGB camera acquire the IR image and RGB image of the target object respectively; the IR image is formed by the target object reflecting the reference laser and is captured by the infrared camera, and the RGB image can also be acquired with the laser fill-light scheme. Based on the acquired IR and RGB images, the functions used for spectral type transformation are determined, and from those functions and the IR image a transformed RGB image (corresponding to the second image) is obtained.
Specifically, there are generally two functions used for the spectral type transformation: a mapping function G1 for image conversion (corresponding to the first function), and a loss function Loss1 for avoiding missing or redundant image information (corresponding to the second function). Both functions take the IR image as input, and the superposition of their outputs is the transformed RGB image. The mapping function G1 includes three parameters (corresponding to the first, second, and third parameters), each corresponding to one of the three RGB color channels. Likewise, the loss function Loss1 includes three parameters (corresponding to the fourth, fifth, and sixth parameters), each also corresponding to one of the three RGB color channels.
To determine the mapping function G1 and the loss function Loss1, the six parameters (first through sixth) must be determined. Once all six are determined, G1 and Loss1 follow from the functions' preset computation form; computing with the IR image (the first image) as input then yields the transformed RGB image. This spectral transformation of the IR image based on G1 and Loss1 corresponds to the first spectral transformation.
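Only as a sketch: the patent does not disclose the functional form of G1 or Loss1, so the snippet below assumes a per-channel affine mapping from the single IR channel to each RGB channel, with the loss term as an additive per-channel correction; the superposition of the two outputs yields the transformed RGB image.

```python
import numpy as np

def apply_first_transform(ir: np.ndarray,
                          g1_params: tuple,
                          loss1_params: tuple) -> np.ndarray:
    """Assumed form of: second image = G1(IR) + Loss1(IR).

    g1_params    -- first/second/third parameters, one per RGB channel
    loss1_params -- fourth/fifth/sixth parameters, one per RGB channel
    The affine per-channel form is an illustrative assumption only.
    """
    ir_f = ir.astype(np.float32)
    channels = [np.clip(gain * ir_f + bias, 0, 255)
                for gain, bias in zip(g1_params, loss1_params)]
    return np.stack(channels, axis=-1).astype(np.uint8)
```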
In the process of determining the six parameters (first through sixth), the cameras mounted on the mobile robot sit at different positions, so there is parallax between the acquired IR image and RGB image; the parameters therefore cannot be determined by directly comparing image content at identical positions in the two images. To find image regions that can be compared directly, for example, image feature analysis is performed on the acquired IR and RGB images to identify mutually corresponding regions of interest (ROIs) that share the same image features.
Referring to FIG. 4, which shows two images of the same target object acquired by the robot's two camera sets in one environment, the first image is an IR image and the third image is an RGB image. Because the two camera sets sit at different positions on the robot, feature analysis is performed on both images, and regions with similar features are taken as corresponding ROIs. The specific feature-analysis method for determining ROIs can use techniques from current related technologies; this application does not limit it.
For example, in FIG. 4, feature R1 and feature R1-1 have similar image features, so the region containing R1 and the region containing R1-1 are determined to be mutually corresponding ROIs; likewise, feature R2 and feature R2-1 have similar image features, so the region containing R2 and the region containing R2-1 are determined to be mutually corresponding ROIs.
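Since the patent leaves the feature-analysis method open, here is one concrete possibility (an assumption, not the patent's method): ORB features with brute-force Hamming matching to seed correspondences such as R1/R1-1 and R2/R2-1.

```python
import cv2
import numpy as np

def match_roi_candidates(ir: np.ndarray, rgb: np.ndarray, k: int = 20):
    """Return up to k matched keypoint pairs (IR point, RGB point).

    ORB + brute-force matching stands in for the unspecified
    "image feature analysis" that locates corresponding ROIs.
    """
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=500)
    kp_ir, des_ir = orb.detectAndCompute(ir, None)
    kp_rgb, des_rgb = orb.detectAndCompute(gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ir, des_rgb),
                     key=lambda m: m.distance)
    return [(kp_ir[m.queryIdx].pt, kp_rgb[m.trainIdx].pt)
            for m in matches[:k]]
```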
In practice, more than one group of corresponding ROIs can be determined on the first and third images. Generally, any one group is selected as the data basis for computing the first, second, and third parameters. Once those parameters, and hence the mapping function G1, are determined, other ROI groups are run through G1, and the resulting data are used to determine the fourth, fifth, and sixth parameters and thus the loss function Loss1. To improve the accuracy of Loss1, further ROIs on the first and third images can be used to repeatedly recompute and verify the fourth, fifth, and sixth parameters until Loss1 meets expectations.
For example, in FIG. 4, the first, second, and third parameters are computed with feature R1 (corresponding to the first region) and feature R1-1 (corresponding to the second region) as the corresponding ROI group, yielding the mapping function G1. Then, with R2 and R2-1 as the corresponding ROI group, feature R2 (corresponding to the third region) is fed into G1, and finally the output of G1 is compared with feature R2-1 (corresponding to the fourth region) to obtain the fourth, fifth, and sixth parameters and determine the loss function Loss1.
Finally, with the mapping function G1 and the loss function Loss1 determined, the IR image is fed into both functions, and the superposition of their outputs is the transformed RGB image (corresponding to the second image).
In this way, from the two acquired images of different spectral types, this application can determine two corresponding functions and use them to produce an image whose spectral type matches one of the acquired images, preparing the data needed to determine the obstacle avoidance reference information.
Referring to FIG. 5, in some embodiments, step 021 includes:
0211: determine the first parameter, the second parameter, and the third parameter according to the pixel values of the first region and the pixel values of the second region,
where the first region and the second region have the same image features.
In addition, in some embodiments, step 022 includes:
0221: determine a first comparison image according to the third region and the first function,
where the first comparison image is a visible light image;
0222: determine the fourth parameter, the fifth parameter, and the sixth parameter according to the difference between the first comparison image and the fourth region.
In some embodiments, the spectral transformation module is further configured to determine the first, second, and third parameters according to the pixel values of the first region and the second region; to determine the first comparison image according to the third region and the first function; and to determine the fourth, fifth, and sixth parameters according to the difference between the first comparison image and the fourth region.
In some embodiments, the processor is further configured to determine the first, second, and third parameters according to the pixel values of the first region and the second region; to determine the first comparison image according to the third region and the first function; and to determine the fourth, fifth, and sixth parameters according to the difference between the first comparison image and the fourth region.
Specifically, the computation of the mapping function G1 and the loss function Loss1 is described next. Building on the above embodiments and examples, and referring to FIG. 4, R1 and R1-1 are first selected as corresponding ROIs to compute G1. As an example, the computation proceeds as follows: for features R1 and R1-1, each pixel within an m×n range of the region containing R1 and each pixel within the corresponding m×n range of the region containing R1-1 are selected, where m and n are the pixel extents of those regions in the horizontal and vertical directions.
Then, using interpolation calculation methods from current related technologies, the first, second, and third parameters are computed, one for each of the three color channels of the RGB image, thereby determining the mapping function G1. Which of the R, G, and B color channels each of the first, second, and third parameters corresponds to can be set according to the actual situation; this application does not limit it.
Next, feature R2 on the IR image is fed into the mapping function G1 to obtain a visible light comparison image (corresponding to the first comparison image). This comparison image is then compared against feature R2-1 to determine their difference, and from that difference the fourth, fifth, and sixth parameters are computed, one per RGB color channel, thereby determining the loss function Loss1. Which channel each of the fourth, fifth, and sixth parameters corresponds to can likewise be set according to the actual situation; this application does not limit it.
In this way, embodiments of this application also provide a specific way of determining the parameters of the corresponding functions.
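The description names only "interpolation calculation methods from current related technologies" for these fits. As an illustrative stand-in consistent with the affine form assumed earlier, a per-channel least-squares fit over the m×n ROI pixels could look like this; the helper names are hypothetical.

```python
import numpy as np

def fit_g1_params(roi_ir: np.ndarray, roi_rgb: np.ndarray) -> tuple:
    """Fit one gain per RGB channel mapping IR intensities to that channel.

    roi_ir  -- m x n IR region (e.g., R1 in FIG. 4)
    roi_rgb -- m x n x 3 RGB region (e.g., R1-1 in FIG. 4)
    Least squares is an assumed stand-in for the unspecified method.
    """
    x = roi_ir.astype(np.float32).ravel()[:, None]
    params = []
    for c in range(3):  # one parameter per color channel
        y = roi_rgb[..., c].astype(np.float32).ravel()
        gain, *_ = np.linalg.lstsq(x, y, rcond=None)
        params.append(float(gain[0]))
    return tuple(params)  # first, second, third parameters

def fit_loss1_params(comparison_rgb: np.ndarray, roi_rgb: np.ndarray) -> tuple:
    """Fourth/fifth/sixth parameters from the comparison-image residual
    (here simply the mean per-channel difference against R2-1)."""
    diff = roi_rgb.astype(np.float32) - comparison_rgb.astype(np.float32)
    return tuple(float(diff[..., c].mean()) for c in range(3))
```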
请参阅图6,在某些实施方式中,步骤04包括:Referring to FIG. 6 , in some embodiments, step 04 includes:
041:根据第三图像上的第五区域、以及第一图像上与第五区域对应的第六区域,确定第三函数,041: Determine a third function according to a fifth area on the third image and a sixth area on the first image corresponding to the fifth area,
其中第三函数包括对应于可见光图像的三个颜色维度的第七参数、第八参数以及第九参数;The third function includes a seventh parameter, an eighth parameter and a ninth parameter corresponding to three color dimensions of the visible light image;
042:根据第三图像上的第七区域、第三函数、以及第一图像上与第七区域对应的第八区域,确定第四函数,042: Determine a fourth function according to the seventh area on the third image, the third function, and the eighth area on the first image corresponding to the seventh area,
其中第四函数包括对应于可见光图像的三个颜色维度的第十参数、第十一参数以及第十二参数;wherein the fourth function includes a tenth parameter, an eleventh parameter, and a twelfth parameter corresponding to three color dimensions of the visible light image;
043:根据第三函数、第四函数以及第三图像,确定第四图像。043: Determine a fourth image according to the third function, the fourth function and the third image.
在某些实施方式中,光谱变换模块还用于根据第三图像上的第五区域、以及第一图像上与第五区域对应的第六区域,确定第三函数,以及用于根据第三图像上的第七区域、第三函数、以及第一图像上与第七区域对应的第八区域,确定第四函数,以及用于根据第三函数、第四函数以及第三图像,确定第四图像。In some embodiments, the spectral transformation module is also used to determine a third function based on a fifth area on the third image and a sixth area on the first image corresponding to the fifth area, and to determine a fourth function based on a seventh area on the third image, a third function, and an eighth area on the first image corresponding to the seventh area, and to determine a fourth image based on the third function, the fourth function, and the third image.
在某些实施方式中,处理器还用于根据第三图像上的第五区域、以及第一图像上与第五区域对应的第六区域,确定第三函数,以及用于根据第三图像上的第七区域、第三函数、以及第一图像上与第七区域对应的第八区域,确定第四函数,以及用于根据第三函数、第四函数以及第三图像,确定第四图像。In some embodiments, the processor is also used to determine a third function based on a fifth area on the third image and a sixth area on the first image corresponding to the fifth area, and to determine a fourth function based on a seventh area on the third image, a third function, and an eighth area on the first image corresponding to the seventh area, and to determine a fourth image based on the third function, the fourth function, and the third image.
具体地,在上述实施方式的基础上,请参阅图3,移动机器人上设置有红外光摄像装置(红外摄像头)、可见光摄像装置(RGB摄像头)以及激光装置,移动机器人通过激光装置向目标物体发射参考激光来实施补光,红外摄像头以及RGB摄像头分别获取目标物体的IR图像以及RGB图像,其中IR图像是目标物体反射上述的参考激光形成的。然后以上述获取到的RGB图像以及IR图像为数据基础,确定出用于光谱类型变换的函数,并根据得到的函数以及RGB图像得到经过光谱类型变换后的变换IR图像(对应于第四图像)。上述的基于映射函数G2以及损失函数Loss2针对RGB图像进行的光谱变换处理即对应于第二光谱变换处理。Specifically, on the basis of the above-mentioned implementation, please refer to FIG3 , the mobile robot is provided with an infrared light camera device (infrared camera), a visible light camera device (RGB camera) and a laser device, the mobile robot transmits a reference laser to the target object through the laser device to implement fill light, the infrared camera and the RGB camera respectively obtain the IR image and the RGB image of the target object, wherein the IR image is formed by the target object reflecting the above-mentioned reference laser. Then, based on the above-mentioned RGB image and IR image obtained as data, a function for spectral type conversion is determined, and a converted IR image (corresponding to the fourth image) after spectral type conversion is obtained according to the obtained function and RGB image. The above-mentioned spectral conversion processing based on the mapping function G2 and the loss function Loss2 for the RGB image corresponds to the second spectral conversion processing.
具体而言,上述用于光谱类型变换的函数一般包括两个,其一为用于图像转换的映射函数G2(对应于第三函数),其二为用于避免图像信息缺失或信息冗余的损失函数Loss2(对应于第四函数),上述两函数的输入侧数据均为上述的RGB图像,两函数输出结果的叠加即为上述的变换IR图像。其中映射函数G2包括三个参数(对应于第七参数、第八参数以及第九参数),每一个参数分别对应RGB的三个颜色通道中的一个。同样地,损失函数Loss2也包括三个参数(对应于第十参数、第十一参数以及第十二参数),每一个参数也分别对应RGB的三个颜色通道中的一个。Specifically, the above-mentioned functions for spectral type conversion generally include two, one is the mapping function G2 (corresponding to the third function) for image conversion, and the other is the loss function Loss2 (corresponding to the fourth function) for avoiding image information loss or information redundancy. The input side data of the above two functions are the above-mentioned RGB images, and the superposition of the output results of the two functions is the above-mentioned transformed IR image. The mapping function G2 includes three parameters (corresponding to the seventh parameter, the eighth parameter and the ninth parameter), and each parameter corresponds to one of the three color channels of RGB. Similarly, the loss function Loss2 also includes three parameters (corresponding to the tenth parameter, the eleventh parameter and the twelfth parameter), and each parameter also corresponds to one of the three color channels of RGB.
为了确定映射函数G2以及损失函数Loss2,需要对上述的第七至第十二共六个参数进行确定,在上述六个参数均确定的情况下,结合函数预设的计算方式即可以确定出映射函数G2以及损失函数Loss2,并将RGB图像(第三图像)作为输入进行计算,即可以得到变换IR图像。In order to determine the mapping function G2 and the loss function Loss2, it is necessary to determine the above-mentioned six parameters from the seventh to the twelfth. When the above six parameters are determined, the mapping function G2 and the loss function Loss2 can be determined in combination with the preset calculation method of the function, and the RGB image (the third image) is used as input for calculation, and the transformed IR image can be obtained.
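作为理解上述"两函数输出叠加"结构的参考，下面给出一段示意性的Python代码，其中将G2与Loss2均假设为逐颜色通道的线性函数；本申请并未限定两函数的具体形式，该线性形式及代码中的各命名均为说明性假设。As a reference for the "superposed outputs of two functions" structure described above, the following is an illustrative Python sketch in which G2 and Loss2 are both assumed to be per-channel linear functions; the patent fixes only the number of parameters, not the functional form, so the linear form and all names in the code are illustrative assumptions.

```python
import numpy as np

def apply_spectral_transform(rgb, g2_params, loss2_params):
    # rgb: H x W x 3 array; g2_params holds the 7th-9th parameters and
    # loss2_params the 10th-12th parameters, one weight per R/G/B channel.
    rgb = np.asarray(rgb, dtype=float)
    g2_out = sum(g2_params[c] * rgb[..., c] for c in range(3))       # G2(RGB)
    loss_out = sum(loss2_params[c] * rgb[..., c] for c in range(3))  # Loss2(RGB)
    return g2_out + loss_out  # superposition gives the transformed IR image
```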
在上述确定第七至第十二共六个参数的过程中,由于设置于移动机器人上的摄像头位置不同,从而导致获取到的IR图像与RGB图像之间存在视差,因而无法通过直接比较两图像上相同位置处的图像内容的方式来直接确定上述参数。为了确定能够直接用于图像比对的图像区域,示例性地,需要针对获取到的IR图像以及RGB图像进行图像特征分析,以确定出两图像上互相对应的、存在相同图像特征的兴趣区域(Region of Interest,ROI)。In the process of determining the seventh to twelfth parameters, a total of six parameters, due to the different positions of the cameras set on the mobile robot, there is a parallax between the acquired IR image and the RGB image, so the above parameters cannot be directly determined by directly comparing the image content at the same position on the two images. In order to determine the image area that can be directly used for image comparison, it is necessary to perform image feature analysis on the acquired IR image and RGB image to determine the corresponding regions of interest (ROI) on the two images that have the same image features.
在实际应用中,第一图像与第三图像上能够被确定的对应的ROI数量不止一组,一般会从中选择一组作为计算第七、第八以及第九参数的数据基础,在确定出第七、第八以及第九参数,也即确定出映射函数G2后,利用其他ROI组与映射函数G2进行计算,并利用计算得到的数据确定出第十、第十一、第十二参数,以及损失函数Loss2。此外,为了提高损失函数Loss2的精确度,还可以选择第一图像以及第三图像上的其他ROI对损失函数Loss2中的第十、第十一、第十二参数进行多次重复计算与验证,直到损失函数Loss2符合预期为止。In practical applications, there are more than one group of corresponding ROIs that can be determined on the first image and the third image. Generally, one group is selected as the data basis for calculating the seventh, eighth and ninth parameters. After the seventh, eighth and ninth parameters are determined, that is, the mapping function G2 is determined, other ROI groups and mapping function G2 are used for calculation, and the tenth, eleventh and twelfth parameters and loss function Loss2 are determined using the calculated data. In addition, in order to improve the accuracy of the loss function Loss2, other ROIs on the first image and the third image can also be selected to repeatedly calculate and verify the tenth, eleventh and twelfth parameters in the loss function Loss2 until the loss function Loss2 meets expectations.
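下面给出一种可行的ROI配对方式的示意代码；本申请未规定具体的图像特征分析算法，此处以ORB特征匹配为假设示例，代码中的函数与变量命名亦均为假设。The following sketch shows one plausible way to pair ROIs with matching image features across the two images; the patent does not prescribe a particular feature-analysis algorithm, so the ORB-based matching here, along with all function and variable names, is an assumed example.

```python
import cv2

def find_corresponding_rois(ir_gray, rgb_img, patch=32, top_k=2):
    # ir_gray: 8-bit single-channel IR image; rgb_img: BGR image.
    rgb_gray = cv2.cvtColor(rgb_img, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create()
    kp_ir, des_ir = orb.detectAndCompute(ir_gray, None)
    kp_rgb, des_rgb = orb.detectAndCompute(rgb_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ir, des_rgb), key=lambda m: m.distance)
    rois = []
    for m in matches[:top_k]:  # best matches define the ROI pairs
        x_ir, y_ir = map(int, kp_ir[m.queryIdx].pt)
        x_rgb, y_rgb = map(int, kp_rgb[m.trainIdx].pt)
        rois.append(((x_ir, y_ir, patch, patch), (x_rgb, y_rgb, patch, patch)))
    return rois  # list of ((IR ROI), (RGB ROI)) as (x, y, w, h) tuples
```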
示例性地,比如图4中,以R1-1特征(对应于第五区域)以及R1特征(对应于第六区域)为对应ROI组计算出第七、第八、第九参数,并得到映射函数G2。然后以R2-1特征(对应于第七区域)以及R2特征(对应于第八区域)为对应ROI组,将R2-1特征作为映射函数G2的输入侧进行计算,最后将映射函数G2的输出值与R2特征进行比对,即可以得到第十、第十一与第十二参数,最终确定出损失函数Loss2。For example, in FIG4 , the seventh, eighth, and ninth parameters are calculated with the R1-1 feature (corresponding to the fifth region) and the R1 feature (corresponding to the sixth region) as the corresponding ROI group, and the mapping function G2 is obtained. Then, the R2-1 feature (corresponding to the seventh region) and the R2 feature (corresponding to the eighth region) are used as the corresponding ROI group, and the R2-1 feature is used as the input side of the mapping function G2 for calculation. Finally, the output value of the mapping function G2 is compared with the R2 feature, and the tenth, eleventh, and twelfth parameters can be obtained, and the loss function Loss2 is finally determined.
最后,根据上述的映射函数G2以及损失函数Loss2,将RGB图像作为上述两组函数的输入侧数据进行计算,两组函数计算结果的叠加即为变换IR图像(对应于第四图像)。Finally, according to the above-mentioned mapping function G2 and loss function Loss2, the RGB image is used as the input side data of the above-mentioned two groups of functions for calculation, and the superposition of the calculation results of the two groups of functions is the transformed IR image (corresponding to the fourth image).
如此,本申请能够根据移动机器人获取到的两组光谱类型不同的图像,通过确定两组对应函数的方式确定出与获取到的图像中的一个光谱类型相同的图像,为避障参考信息的确定做数据准备。In this way, the present application can determine an image with the same spectral type as one of the acquired images by determining two sets of corresponding functions based on the two sets of images with different spectral types acquired by the mobile robot, thereby preparing data for determining obstacle avoidance reference information.
请参阅图7,在某些实施方式中,步骤041包括:Referring to FIG. 7 , in some embodiments, step 041 includes:
0411:根据第五区域的像素值以及第六区域的像素值,确定第七参数、第八参数、第九参数,0411: Determine the seventh parameter, the eighth parameter, and the ninth parameter according to the pixel value of the fifth area and the pixel value of the sixth area,
其中第五区域与第六区域具有相同的图像特征;The fifth region and the sixth region have the same image features;
另外,在某些实施方式中,步骤042包括:Additionally, in some embodiments, step 042 includes:
0421:根据第七区域以及第三函数,确定第二比较图像,0421: Determine a second comparison image according to the seventh region and the third function,
其中第二比较图像为红外光图像;The second comparison image is an infrared light image;
0422:根据第二比较图像以及第八区域的差异,确定第十参数、第十一参数以及第十二参数。0422: Determine a tenth parameter, an eleventh parameter, and a twelfth parameter according to the difference between the second comparison image and the eighth region.
在某些实施方式中,光谱变换模块还用于根据第五区域的像素值以及第六区域的像素值,确定第七参数、第八参数、第九参数,以及用于根据第七区域以及第三函数,确定第二比较图像,以及用于根据第二比较图像以及第八区域的差异,确定第十参数、第十一参数以及第十二参数。In some embodiments, the spectral transformation module is also used to determine the seventh parameter, the eighth parameter, and the ninth parameter based on the pixel value of the fifth area and the pixel value of the sixth area, and to determine the second comparison image based on the seventh area and the third function, and to determine the tenth parameter, the eleventh parameter, and the twelfth parameter based on the difference between the second comparison image and the eighth area.
在某些实施方式中,处理器还用于根据第五区域的像素值以及第六区域的像素值,确定第七参数、第八参数、第九参数,以及用于根据第七区域以及第三函数,确定第二比较图像,以及用于根据第二比较图像以及第八区域的差异,确定第十参数、第十一参数以及第十二参数。In some embodiments, the processor is further used to determine a seventh parameter, an eighth parameter, and a ninth parameter based on the pixel value of the fifth region and the pixel value of the sixth region, and to determine a second comparison image based on the seventh region and the third function, and to determine a tenth parameter, an eleventh parameter, and a twelfth parameter based on the difference between the second comparison image and the eighth region.
具体地，接下来具体说明计算映射函数G2以及损失函数Loss2的方式。在上述实施方式以及示例的基础上，请参阅图4，首先选择R1-1以及R1作为对应ROI来计算映射函数G2。示例性地，具体计算的方式为：对于R1-1特征以及R1特征，选取R1-1特征所在的区域内m×n范围内的各个像素以及R1特征所在的区域内对应的m×n范围内的各个像素，其中m与n分别为R1-1特征以及R1特征所在的区域的横纵两个方向上的像素数。Specifically, the way of calculating the mapping function G2 and the loss function Loss2 is described next. On the basis of the above embodiments and examples, referring to FIG. 4, R1-1 and R1 are first selected as the corresponding ROIs for calculating the mapping function G2. Exemplarily, the calculation proceeds as follows: for the R1-1 feature and the R1 feature, each pixel within the m×n range of the region where the R1-1 feature is located and each pixel within the corresponding m×n range of the region where the R1 feature is located are selected, where m and n are the numbers of pixels in the horizontal and vertical directions, respectively, of the regions where the R1-1 and R1 features are located.
然后，利用目前相关技术中的插值计算方式，分别对应于RGB图像的三个颜色通道计算出上述第七、第八以及第九参数，从而确定映射函数G2，其中第七、第八、第九参数分别对应R、G、B中的哪一个颜色通道可以根据实际情况进行设置，本申请不做具体限定。Then, using interpolation calculation methods in the current related art, the seventh, eighth, and ninth parameters are calculated for the three color channels of the RGB image respectively, thereby determining the mapping function G2. Which of the R, G, and B color channels the seventh, eighth, and ninth parameters respectively correspond to can be set according to actual conditions, and this application does not specifically limit this.
接下来，将RGB图像上的R2-1特征作为映射函数G2的输入侧数据并进行计算，得到一个红外光比对图像（对应于第二比较图像），然后基于上述的红外光比对图像与R2特征进行比对分析，确定出红外光比对图像与R2特征之间的差异，并基于上述差异进行数据转化，分别对应于RGB图像的三个颜色通道计算出上述第十、第十一以及第十二参数，从而确定损失函数Loss2，其中第十、第十一、第十二参数分别对应R、G、B中的哪一个颜色通道可以根据实际情况进行设置，本申请不做具体限定。Next, the R2-1 feature on the RGB image is used as the input-side data of the mapping function G2 and calculated to obtain an infrared comparison image (corresponding to the second comparison image). A comparison analysis is then performed between the infrared comparison image and the R2 feature to determine the difference between them, and data conversion is performed based on this difference to calculate the tenth, eleventh, and twelfth parameters for the three color channels of the RGB image respectively, thereby determining the loss function Loss2. Which of the R, G, and B color channels the tenth, eleventh, and twelfth parameters respectively correspond to can be set according to actual conditions, and this application does not specifically limit this.
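作为上述参数确定流程的一个最简示意，下面用逐通道线性模型与最小二乘拟合代替文中的插值计算；线性模型、最小二乘以及代码中的命名均为示例假设。As a minimal sketch of the parameter-determination flow above, the following substitutes a per-channel linear model fitted by least squares for the interpolation calculation in the text; the linear model, the least-squares fitting, and all names in the code are illustrative assumptions.

```python
import numpy as np

def fit_g2_and_loss2(rgb_roi1, ir_roi1, rgb_roi2, ir_roi2):
    # rgb_roi1 / ir_roi1: aligned m x n patches of the first ROI pair
    # (R1-1 on the RGB image, R1 on the IR image); rgb_roi2 / ir_roi2:
    # the second ROI pair (R2-1, R2).
    A1 = np.asarray(rgb_roi1, dtype=float).reshape(-1, 3)   # m*n x 3 (R, G, B)
    b1 = np.asarray(ir_roi1, dtype=float).reshape(-1)       # m*n
    g2, *_ = np.linalg.lstsq(A1, b1, rcond=None)            # 7th-9th parameters

    A2 = np.asarray(rgb_roi2, dtype=float).reshape(-1, 3)
    comparison = A2 @ g2                   # second comparison image (flattened)
    residual = np.asarray(ir_roi2, dtype=float).reshape(-1) - comparison
    loss2, *_ = np.linalg.lstsq(A2, residual, rcond=None)   # 10th-12th params
    return g2, loss2
```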
如此,本申请实施方式还提供了确定对应函数中的参数的具体方式。Thus, the embodiments of the present application also provide a specific method for determining the parameters in the corresponding function.
请参阅图8,在某些实施方式中,步骤05还包括:Please refer to FIG. 8 , in some embodiments, step 05 further includes:
051:根据第一图像与第四图像,确定第一深度信息;051: Determine first depth information according to the first image and the fourth image;
052:根据第二图像与第三图像,确定第二深度信息;052: Determine second depth information according to the second image and the third image;
053:根据第一深度信息以及第二深度信息,确定避障参考信息。053: Determine obstacle avoidance reference information according to the first depth information and the second depth information.
在某些实施方式中,信息确定模块还用于根据第一图像与第四图像,确定第一深度信息,以及用于根据第二图像与第三图像,确定第二深度信息,以及用于根据第一深度信息以及第二深度信息,确定避障参考信息。In some embodiments, the information determination module is further used to determine first depth information based on the first image and the fourth image, to determine second depth information based on the second image and the third image, and to determine obstacle avoidance reference information based on the first depth information and the second depth information.
在某些实施方式中,处理器还用于根据第一图像与第四图像,确定第一深度信息,以及用于根据第二图像与第三图像,确定第二深度信息,以及用于根据第一深度信息以及第二深度信息,确定避障参考信息。In some embodiments, the processor is further used to determine first depth information based on the first image and the fourth image, to determine second depth information based on the second image and the third image, and to determine obstacle avoidance reference information based on the first depth information and the second depth information.
具体地，在上述实施方式的基础上，在基于获取到的IR图像（对应于第一图像）以及RGB图像（对应于第三图像），确定出变换RGB图像（对应于第二图像）以及变换IR图像（对应于第四图像）的基础上，根据光谱类型，将图像分为IR组以及RGB组，分别根据两组图像计算目标物体相对于移动机器人的深度信息。Specifically, on the basis of the above embodiments, after the transformed RGB image (corresponding to the second image) and the transformed IR image (corresponding to the fourth image) are determined from the acquired IR image (corresponding to the first image) and RGB image (corresponding to the third image), the images are divided into an IR group and an RGB group according to spectral type, and the depth information of the target object relative to the mobile robot is calculated from each group.
有关于深度信息的计算原理,请参阅图9。图9原理性地示出了任意两个摄像头针对同一目标物体进行深度信息确定的过程。示例性地,图9示出的坐标系以摄像装置1上的一点为原点,示例性地以图9中摄像装置1的左下角端点作为上述坐标系的原点。摄像装置1以及摄像装置2(可以设置为上述实施方式中的RGB摄像头以及IR摄像头)设置于上述坐标系的横轴上,目标物体到横轴的间距即为待测的深度信息Depth,摄像装置1与摄像装置2的光心所在直线间的距离为Base,摄像装置1以及摄像装置2的焦距均为f。在上述情况下,摄像装置1所成的目标物体的图像与摄像装置2所成的目标物体的图像之间存在视差x。为了得到视差x,可以将两组图像重合处理,并计算重合后的图像上目标物体的两个像之间的视差,即可以得到上述的视差x。其中,深度信息Depth与视差x的数据形式可以是标量,矢量或图像矩阵信息,上述的数据形式可以根据实际情况进行调整,本申请不做具体限定。For the calculation principle of depth information, please refer to FIG9 . FIG9 schematically shows the process of determining the depth information of the same target object by any two cameras. Exemplarily, the coordinate system shown in FIG9 takes a point on the camera device 1 as the origin, and exemplarily takes the lower left corner endpoint of the camera device 1 in FIG9 as the origin of the above coordinate system. The camera device 1 and the camera device 2 (which can be set as the RGB camera and the IR camera in the above embodiment) are set on the horizontal axis of the above coordinate system, and the distance between the target object and the horizontal axis is the depth information to be measured Depth, the distance between the straight lines where the optical centers of the camera device 1 and the camera device 2 are located is Base, and the focal lengths of the camera device 1 and the camera device 2 are both f. In the above case, there is a parallax x between the image of the target object formed by the camera device 1 and the image of the target object formed by the camera device 2. In order to obtain the parallax x, the two sets of images can be overlapped, and the parallax between the two images of the target object on the overlapped image is calculated, that is, the above parallax x can be obtained. The data forms of the depth information Depth and the disparity x may be scalar, vector or image matrix information. The above data forms may be adjusted according to actual conditions and are not specifically limited in this application.
在得到了上述各个数据的情况下，深度信息Depth的计算方式如下：Depth = (f × Base) / x。When the above data are obtained, the depth information Depth is calculated as follows: Depth = (f × Base) / x.
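文中仅将视差x的测量描述为"将两组图像重合并取目标物体两个像的偏移"，下面给出一种基于模板匹配的示意实现；模板匹配仅为假设的实现方式，并非本申请规定的方法。The text only describes measuring the parallax x by overlapping the two images and taking the offset between the target's two images; the following is an illustrative implementation based on template matching, an assumed approach rather than one prescribed by the patent.

```python
import cv2

def measure_disparity(img1, img2, template):
    # Locate the same target patch in each image by normalized template
    # matching and take the horizontal offset between the two best matches.
    def locate_x(img):
        scores = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
        return cv2.minMaxLoc(scores)[3][0]   # x of the best-match location
    return abs(locate_x(img1) - locate_x(img2))  # parallax x, in pixels
```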
在上述原理以及上述实施方式的基础上,由于变换RGB图像(对应于第二图像)是基于IR图像(对应于第一图像)确定的,变换IR图像(对应于第四图像)是基于RGB图像(对应于第三图像)确定的,而IR图像与RGB图像上目标物体的像存在上述的视差,因此上述的四个图像中光谱类型相同的两图像上目标物体的像也均具有视差。On the basis of the above principles and implementation methods, since the transformed RGB image (corresponding to the second image) is determined based on the IR image (corresponding to the first image), the transformed IR image (corresponding to the fourth image) is determined based on the RGB image (corresponding to the third image), and the images of the target object on the IR image and the RGB image have the above-mentioned parallax, the images of the target object on the two images with the same spectral type in the above-mentioned four images also have parallax.
因此,当将IR图像与变换IR图像作为一组、将RGB图像与变换RGB图像作为一组,分别通过上述两组图像对深度信息Depth进行计算时,焦距f以及摄像装置的光心所在直线的间距Base均为已知常数,只需得知该组图像中目标物体的像的视差x,即可依照上述的计算方式确定出对应情况下的深度信息Depth。故可以根据IR图像以及变换IR图像确定出红外光下的深度信息Depth1(对应于第一深度信息),根据RGB图像以及变换RGB图像确定出可见光下的深度信息Depth2(对应于第二深度信息),最后根据Depth1以及Depth2经过数据处理,确定出最终可以直接用于避障的深度信息Depth*(对应于避障参考信息)。Therefore, when the IR image and the transformed IR image are taken as a group, and the RGB image and the transformed RGB image are taken as a group, and the depth information Depth is calculated through the above two groups of images, the focal length f and the distance Base of the straight line where the optical center of the camera device is located are both known constants. It is only necessary to know the parallax x of the image of the target object in the group of images to determine the depth information Depth in the corresponding situation according to the above calculation method. Therefore, the depth information Depth1 (corresponding to the first depth information) under infrared light can be determined based on the IR image and the transformed IR image, and the depth information Depth2 (corresponding to the second depth information) under visible light can be determined based on the RGB image and the transformed RGB image. Finally, after data processing based on Depth1 and Depth2, the depth information Depth* (corresponding to the obstacle avoidance reference information) that can be directly used for obstacle avoidance is determined.
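结合上式，Depth1与Depth2的计算可示意如下，其中视差d_ir、d_rgb假设已分别由对应图像组求得，各命名均为示例假设。Combining the formula above, the computation of Depth1 and Depth2 can be sketched as follows, where the disparities d_ir and d_rgb are assumed to have been obtained from the corresponding image groups, and all names are illustrative assumptions.

```python
import numpy as np

def depth_from_disparity(disparity, f, base):
    # Depth = (f * Base) / x, per the triangulation relation given above;
    # disparity may be a scalar or a per-pixel disparity map.
    d = np.maximum(np.asarray(disparity, dtype=float), 1e-6)  # avoid div by 0
    return f * base / d

# Illustrative use (d_ir, d_rgb assumed measured from each image group):
# depth1 = depth_from_disparity(d_ir, f, base)    # Depth1, infrared group
# depth2 = depth_from_disparity(d_rgb, f, base)   # Depth2, visible-light group
```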
需要注意的是,上述确定Depth1以及Depth2的步骤的具体执行顺序可以根据实际情况进行调换,也可以同时进行,本实施方式叙述的仅是一种示例性的步骤执行顺序,而并非是对执行顺序的限定。It should be noted that the specific execution order of the above steps of determining Depth1 and Depth2 can be changed according to actual conditions, or can be performed simultaneously. This embodiment describes only an exemplary execution order of the steps, and is not a limitation on the execution order.
如此,本申请实施方式能够将获取到的图像和进一步确定的图像根据光谱类型两两分组,并根据两组图像分别确定出一个目标物体的深度信息,并基于两组深度信息最终确定避障参考信息,以提高避障参考信息的精确度。In this way, the implementation method of the present application can group the acquired images and the further determined images in pairs according to the spectral type, and determine the depth information of a target object based on the two groups of images respectively, and finally determine the obstacle avoidance reference information based on the two groups of depth information to improve the accuracy of the obstacle avoidance reference information.
请参阅图10,在某些实施方式中,步骤053还包括:Please refer to FIG. 10 , in some embodiments, step 053 further includes:
0531:根据第一深度信息以及第二函数,确定第一损失信息,0531: Determine first loss information according to the first depth information and the second function,
其中第二函数根据第一图像上的第三区域、第一函数以及第三图像上与第三区域对应的第四区域确定,第一函数根据第一图像上的第一区域、以及第三图像上与第一区域对应的第二区域确定;The second function is determined according to the third area on the first image, the first function, and a fourth area on the third image corresponding to the third area, and the first function is determined according to the first area on the first image and a second area on the third image corresponding to the first area;
0532:根据第二深度信息以及第四函数,确定第二损失信息,0532: Determine second loss information according to the second depth information and the fourth function,
其中第四函数根据第三图像上的第七区域、第三函数以及第一图像上与第七区域对应的第八区域确定,第三函数根据第三图像上的第五区域、以及第一图像上与第五区域对应的第六区域确定;The fourth function is determined according to the seventh area on the third image, the third function, and the eighth area on the first image corresponding to the seventh area, and the third function is determined according to the fifth area on the third image and the sixth area on the first image corresponding to the fifth area;
0533:在第一损失信息与第二损失信息之差位于预设阈值范围内的情况下,确定第一损失信息以及第二损失信息的平均值为避障参考信息;或者0533: when the difference between the first loss information and the second loss information is within a preset threshold range, determining an average value of the first loss information and the second loss information as the obstacle avoidance reference information; or
0534:在第一损失信息与第二损失信息之差位于预设阈值范围外的情况下,确定第一损失信息以及第二损失信息中所对应的深度信息较大的一个为避障参考信息。0534: When the difference between the first loss information and the second loss information is outside a preset threshold range, determine that the one with larger depth information corresponding to the first loss information and the second loss information is the obstacle avoidance reference information.
在某些实施方式中,信息确定模块还用于根据第一深度信息以及第二函数,确定第一损失信息,以及用于根据第二深度信息以及第四函数,确定第二损失信息,以及用于在第一损失信息与第二损失信息之差位于预设阈值范围内的情况下,确定第一损失信息以及第二损失信息的平均值为避障参考信息,以及用于在第一损失信息与第二损失信息之差位于预设阈值范围外的情况下,确定第一损失信息以及第二损失信息中所对应的深度信息较大的一个为避障参考信息。In certain embodiments, the information determination module is also used to determine the first loss information based on the first depth information and the second function, and to determine the second loss information based on the second depth information and the fourth function, and to determine the average value of the first loss information and the second loss information as the obstacle avoidance reference information when the difference between the first loss information and the second loss information is within a preset threshold range, and to determine the larger one of the first loss information and the second loss information corresponding to the depth information as the obstacle avoidance reference information when the difference between the first loss information and the second loss information is outside the preset threshold range.
在某些实施方式中,处理器还用于根据第一深度信息以及第二函数,确定第一损失信息,以及用于根据第二深度信息以及第四函数,确定第二损失信息,以及用于在第一损失信息与第二损失信息之差位于预设阈值范围内的情况下,确定第一损失信息以及第二损失信息的平均值为避障参考信息,以及用于在第一损失信息与第二损失信息之差位于预设阈值范围外的情况下,确定第一损失信息以及第二损失信息中所对应的深度信息较大的一个为避障参考信息。In certain embodiments, the processor is further used to determine first loss information based on the first depth information and the second function, and to determine second loss information based on the second depth information and a fourth function, and to determine, when the difference between the first loss information and the second loss information is within a preset threshold range, an average value of the first loss information and the second loss information as obstacle avoidance reference information, and to determine, when the difference between the first loss information and the second loss information is outside a preset threshold range, the one with the larger depth information corresponding to the first loss information and the second loss information as the obstacle avoidance reference information.
具体地,在上述实施方式确定出红外光下的深度信息Depth1以及可见光下的深度信息Depth2的情况下,接下来说明根据上述数据确定深度信息Depth*的方式。Specifically, in the case where the depth information Depth1 under infrared light and the depth information Depth2 under visible light are determined in the above embodiment, the method of determining the depth information Depth* according to the above data is described below.
首先，基于上述实施方式中已经确定的红外光下的深度信息Depth1，作为根据IR图像确定变换RGB图像过程中确定出的损失函数Loss1的输入侧数据进行计算，得到一个损失深度信息Loss1(Depth1)（对应于第一损失信息）。First, the depth information Depth1 under infrared light determined in the above embodiment is used as the input-side data of the loss function Loss1 (the loss function determined in the process of determining the transformed RGB image from the IR image) and calculated, obtaining a loss depth information Loss1(Depth1) (corresponding to the first loss information).
接下来，基于上述实施方式中已经确定的可见光下的深度信息Depth2，作为根据RGB图像确定变换IR图像过程中确定出的损失函数Loss2的输入侧数据进行计算，得到一个损失深度信息Loss2(Depth2)（对应于第二损失信息）。Next, the depth information Depth2 under visible light determined in the above embodiment is used as the input-side data of the loss function Loss2 (the loss function determined in the process of determining the transformed IR image from the RGB image) and calculated, obtaining a loss depth information Loss2(Depth2) (corresponding to the second loss information).
然后将Loss1(Depth1)与Loss2(Depth2)作差,依据所得的差与预设的损失信息阈值范围进行比较,并根据上述的比较结果来确定Depth*的最终取值。Then, the difference between Loss1 (Depth1) and Loss2 (Depth2) is calculated, and the obtained difference is compared with the preset loss information threshold range, and the final value of Depth* is determined according to the above comparison result.
示例性地,若Loss1(Depth1)与Loss2(Depth2)的差在上述阈值范围内,则说明Depth1与Depth2的精确度满足目前的避障需求,为了保证Depth*的可靠性,Depth*取Loss1(Depth1)与Loss2(Depth2)的平均值即可。For example, if the difference between Loss1(Depth1) and Loss2(Depth2) is within the above threshold range, it means that the accuracy of Depth1 and Depth2 meets the current obstacle avoidance requirements. In order to ensure the reliability of Depth*, Depth* can be taken as the average value of Loss1(Depth1) and Loss2(Depth2).
另外,若Loss1(Depth1)与Loss2(Depth2)的差在上述阈值范围外,则说明Depth1与Depth2中的至少一个存在精确度问题,为了保证Depth*的可靠性,Depth*取Loss1(Depth1)与Loss2(Depth2)中较大的一个,以尽可能使移动机器人在避障过程中加大与目标物体间的距离,保证避障过程的安全。In addition, if the difference between Loss1(Depth1) and Loss2(Depth2) is outside the above threshold range, it means that at least one of Depth1 and Depth2 has an accuracy problem. In order to ensure the reliability of Depth*, Depth* takes the larger one of Loss1(Depth1) and Loss2(Depth2) to increase the distance between the mobile robot and the target object as much as possible during the obstacle avoidance process to ensure the safety of the obstacle avoidance process.
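上述阈值判定逻辑可用如下示意代码表达；此处将损失信息简化为标量、将预设阈值范围简化为单一阈值，矢量或图像矩阵形式可作相应推广，函数命名为示例假设。The threshold-decision logic above can be expressed with the following sketch; the loss information is simplified to scalars and the preset threshold range to a single threshold, vector or image-matrix forms generalize accordingly, and the function name is an illustrative assumption.

```python
def fuse_depths(loss1_d1, loss2_d2, threshold):
    # loss1_d1 = Loss1(Depth1), loss2_d2 = Loss2(Depth2); 'threshold' stands
    # in for the preset loss-information threshold range (scalar form).
    if abs(loss1_d1 - loss2_d2) <= threshold:   # both estimates agree
        return (loss1_d1 + loss2_d2) / 2.0      # Depth* = average
    return max(loss1_d1, loss2_d2)              # Depth* = larger, safer value
```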
需要注意的是,上述确定Loss1(Depth1)与Loss2(Depth2)的步骤的具体执行顺序可以根据实际情况进行调换,也可以同时进行,本实施方式叙述的仅是一种示例性的步骤执行顺序,而并非是对执行顺序的限定。It should be noted that the specific execution order of the above steps of determining Loss1 (Depth1) and Loss2 (Depth2) can be swapped according to actual conditions or performed simultaneously. This embodiment describes only an exemplary step execution order and is not a limitation on the execution order.
如此,本申请实施方式还提供了根据两组深度信息具体确定避障参考信息的方式。Thus, the embodiments of the present application also provide a method for specifically determining obstacle avoidance reference information based on two sets of depth information.
本申请实施例中涉及的移动机器人是指为了自动执行工作而设计的机械设备,包括地面、空中、水面和水下移动机器人等,其移动机构有轮式、履带式、足式、混合式、特殊式等类型。移动机器人按工作环境分可包括室内移动机器人和室外移动机器人等,按移动方式分可包括轮式移动机器人、步行移动机器人、蛇形移动机器人、履带式移动机器人等,按功能与用途分可包括清洁机器人、助残机器人、服务机器人、军用机器人、医疗机器人等。The mobile robots involved in the embodiments of the present application refer to mechanical equipment designed for automatically performing work, including ground, air, water and underwater mobile robots, etc., and their mobile mechanisms include wheeled, tracked, footed, hybrid, special, etc. Mobile robots can include indoor mobile robots and outdoor mobile robots according to the working environment, wheeled mobile robots, walking mobile robots, snake-like mobile robots, tracked mobile robots, etc. according to the movement mode, and cleaning robots, disabled assistance robots, service robots, military robots, medical robots, etc. according to functions and uses.
以清洁机器人为例,包括但不限于:吸尘器、洗地机、吸尘吸水机、扫地机、拖地机、扫拖一体机等等。为便于说明,本申请实施例的移动机器人为扫拖一体机为例进行说明。Taking a cleaning robot as an example, it includes but is not limited to: a vacuum cleaner, a floor scrubber, a vacuum and water suction machine, a sweeper, a mop, a sweeper and mop all-in-one machine, etc. For ease of description, the mobile robot in the embodiment of the present application is described as a sweeper and mop all-in-one machine.
图11为一实施方式中移动机器人的示意性框图。移动机器人包括机器人主体、驱动电机102、传感器单元103、控制器104、清洁件105、行走单元106、存储器107、通信单元108、交互单元109、储能单元110等。Fig. 11 is a schematic block diagram of a mobile robot in one embodiment. The mobile robot includes a robot body, a drive motor 102, a sensor unit 103, a controller 104, a cleaning member 105, a walking unit 106, a memory 107, a communication unit 108, an interaction unit 109, an energy storage unit 110, and the like.
在移动机器人主体上设置的传感器单元103可以包括以下至少一种传感器:雷达传感器(如激光雷达)、视觉传感器(如RGB摄像头)、碰撞传感器、距离传感器、跌落传感器、计数器、陀螺仪等。举例而言,激光雷达设置在移动机器人主体的顶部或周侧,在工作时,可得到周围的环境信息,例如障碍物相对激光雷达的距离和角度等。此外,也可用摄像头等视觉传感器替代激光雷达,通过对摄像头拍摄的图像中的障碍物进行分析,也可得到障碍物相对摄像头的距离、角度等。碰撞传感器例如包括碰撞壳体和触发感应件;当移动机器人通过碰撞壳体与障碍物碰撞时,碰撞壳体向移动机器人内部移动,且压缩弹性缓冲件,以起到缓冲作用。在碰撞壳体向移动机器人内部移动一定距离后,碰撞壳体与触发感应件接触,触发感应件被触发产生信号,该信号可发送到移动机器人主体内的控制器104,以进行处理。在与障碍物发生碰撞后,移动机器人远离障碍物,在弹性缓冲件的作用下,碰撞壳体移回原位。距离传感器具体可以为红外探测传感器,可用于探测障碍物至距离传感器的距离。距离传感器可以设置在移动机器人主体的侧面,从而通过距离传感器可测出位于移动机器人侧面附近的障碍物至距离传感器的距离值。距离传感器也可以是超声波测距传感器、激光测距传感器或者深度传感器等。跌落传感器设置在移动机器人主体的底部边缘,当移动机器人移动到地面的边缘位置时,通过跌落传感器可探测出移动机器人有从高处跌落的风险,从而执行相应的防跌落反应,例如移动机器人停止移动、或往远离跌落位置的方向移动等。在移动机器人主体的内部还设有计数器和陀螺仪。计数器用于检测移动机器人移动的距离长度。陀螺仪用于检测移动机器人转动的角度,从而可确定出移动机器人的朝向。The sensor unit 103 provided on the main body of the mobile robot may include at least one of the following sensors: a radar sensor (such as a laser radar), a visual sensor (such as an RGB camera), a collision sensor, a distance sensor, a drop sensor, a counter, a gyroscope, etc. For example, the laser radar is provided on the top or the side of the main body of the mobile robot. When working, it can obtain the surrounding environment information, such as the distance and angle of the obstacle relative to the laser radar. In addition, the laser radar can also be replaced by a visual sensor such as a camera. By analyzing the obstacle in the image taken by the camera, the distance and angle of the obstacle relative to the camera can also be obtained. The collision sensor, for example, includes a collision shell and a trigger sensor. When the mobile robot collides with the obstacle through the collision shell, the collision shell moves toward the inside of the mobile robot and compresses the elastic buffer to play a buffering role. After the collision shell moves a certain distance into the inside of the mobile robot, the collision shell contacts the trigger sensor, and the trigger sensor is triggered to generate a signal, which can be sent to the controller 104 in the main body of the mobile robot for processing. After colliding with the obstacle, the mobile robot moves away from the obstacle, and under the action of the elastic buffer, the collision shell moves back to its original position. The distance sensor may specifically be an infrared detection sensor, which can be used to detect the distance from the obstacle to the distance sensor. The distance sensor may be arranged on the side of the mobile robot body, so that the distance value from the obstacle located near the side of the mobile robot to the distance sensor can be measured by the distance sensor. The distance sensor may also be an ultrasonic distance sensor, a laser distance sensor or a depth sensor, etc. The drop sensor is arranged at the bottom edge of the mobile robot body. When the mobile robot moves to the edge of the ground, the drop sensor can detect that the mobile robot is at risk of falling from a height, thereby executing a corresponding anti-fall response, such as the mobile robot stopping moving, or moving in a direction away from the falling position, etc. A counter and a gyroscope are also arranged inside the mobile robot body. The counter is used to detect the length of the distance moved by the mobile robot. The gyroscope is used to detect the angle of rotation of the mobile robot, so as to determine the direction of the mobile robot.
控制器104设置在移动机器人主体内部,控制器104用于控制移动机器人执行具体的操作。该控制器104例如可以为中央处理器(Central Processing Unit,CPU)、或微处理器(Microprocessor)等。如图11所示,控制器104与储能单元110、存储器107、驱动电机102、行走单元106、传感器单元103、交互单元109以及清洁件105等部件电连接,以对这些部件进行控制。The controller 104 is disposed inside the main body of the mobile robot, and is used to control the mobile robot to perform specific operations. The controller 104 may be, for example, a central processing unit (CPU) or a microprocessor. As shown in FIG11 , the controller 104 is electrically connected to components such as an energy storage unit 110, a memory 107, a drive motor 102, a walking unit 106, a sensor unit 103, an interaction unit 109, and a cleaning member 105 to control these components.
清洁件105可用于对地面进行清洁,清洁件105的数量可以为一个或多个。清洁件105例如包括拖布。拖布例如包括以下至少一种:旋转拖布、平板拖布、滚筒式拖布、履带式拖布等,当然也不限于此。拖布设置在移动机器人主体的底部,具体可以为移动机器人主体的底部靠后的位置。以清洁件为旋转拖布为例,在移动机器人主体内部设有驱动电机102,在移动机器人主体的底部伸出两个转轴,拖布套接在转轴上。驱动电机102可带动转轴旋转,从而转轴带动拖布旋转。清洁件105还可以是边刷、滚刷等,此处不作限制。The cleaning member 105 can be used to clean the floor, and the number of the cleaning members 105 can be one or more. The cleaning member 105 includes, for example, a mop. The mop includes, for example, at least one of the following: a rotating mop, a flat mop, a roller mop, a crawler mop, etc., but is certainly not limited thereto. The mop is disposed at the bottom of the mobile robot body, and specifically can be at the rear of the bottom of the mobile robot body. Taking the cleaning member as a rotating mop as an example, a drive motor 102 is provided inside the mobile robot body, two rotating shafts extend from the bottom of the mobile robot body, and the mop is sleeved on the rotating shaft. The drive motor 102 can drive the rotating shaft to rotate, so that the rotating shaft drives the mop to rotate. The cleaning member 105 can also be a side brush, a roller brush, etc., which are not limited here.
行走单元106为与移动机器人的移动相关的部件,行走单元106例如包括驱动轮和万向轮。万向轮和驱动轮配合实现移动机器人的转向和移动。The walking unit 106 is a component related to the movement of the mobile robot, and the walking unit 106 includes, for example, a driving wheel and a universal wheel. The universal wheel and the driving wheel cooperate to realize the steering and movement of the mobile robot.
存储器107设置在移动机器人主体上,存储器107上存储有程序,该程序被控制器104执行时实现相应的操作。存储器107还用于存储供移动机器人使用的参数。其中,存储器107包括但不限于磁盘存储器、只读光盘(Compact Disc Read-Only Memory,CD-ROM)、光学存储器等。The memory 107 is arranged on the main body of the mobile robot, and a program is stored in the memory 107, and the corresponding operation is realized when the program is executed by the controller 104. The memory 107 is also used to store parameters used by the mobile robot. The memory 107 includes but is not limited to a disk memory, a compact disc read-only memory (CD-ROM), an optical memory, etc.
通信单元108设置在移动机器人主体上,通信单元108用于让移动机器人和外部设备进行通信;例如与终端或与基站进行通信。其中,基站为配合移动机器人使用的清洁设备。The communication unit 108 is arranged on the main body of the mobile robot, and is used to allow the mobile robot to communicate with external devices, such as a terminal or a base station, wherein the base station is a cleaning device used in conjunction with the mobile robot.
交互单元109设置在移动机器人主体上,用户可通过交互单元109和移动机器人进行交互。交互单元109例如包括触控屏、开关按钮、扬声器等中的至少一种。例如用户可通过按压开关按钮控制移动机器人启动工作或停止工作。The interaction unit 109 is disposed on the main body of the mobile robot, and the user can interact with the mobile robot through the interaction unit 109. The interaction unit 109 includes, for example, at least one of a touch screen, a switch button, a speaker, etc. For example, the user can control the mobile robot to start or stop working by pressing the switch button.
储能单元110设置在移动机器人主体内部,储能单元110用于为移动机器人提供电力。The energy storage unit 110 is disposed inside the main body of the mobile robot, and the energy storage unit 110 is used to provide power to the mobile robot.
移动机器人主体上还设有充电部件,该充电部件用于从外部设备(例如基站)获取电力,从而向移动机器人的储能单元110进行充电。The main body of the mobile robot is also provided with a charging component, which is used to obtain power from an external device (such as a base station) to charge the energy storage unit 110 of the mobile robot.
应该理解,图11中描述的移动机器人只是本申请实施例中的一个具体示例,并不对本申请实施例的移动机器人构成具体限定。本申请实施例的移动机器人还可以为其它的具体实现方式。在其它的实现方式中,移动机器人可以比图11所示的移动机器人有更多或更少的部件;例如,移动机器人可包括用于储存清水的清水腔室和/或用于储存脏污的脏污容纳部,移动机器人可以将清水腔室储存的清水输送到拖擦件和/或地面,以润湿拖擦件,以及基于润湿后的拖擦件对地面进行清洁,移动机器人还可以将地面的脏污或者含有脏污的污水收集至脏污容纳部中;移动机器人还可以将清水腔室储存的清水输送到拖擦件,以对拖擦件进行清洗,清洗拖擦件后的含有脏污的污水也可以输送至脏污容纳部中。It should be understood that the mobile robot described in FIG. 11 is only a specific example in the embodiment of the present application, and does not constitute a specific limitation on the mobile robot in the embodiment of the present application. The mobile robot in the embodiment of the present application may also be other specific implementations. In other implementations, the mobile robot may have more or fewer components than the mobile robot shown in FIG. 11; for example, the mobile robot may include a clean water chamber for storing clean water and/or a dirt holding portion for storing dirt, and the mobile robot may transport the clean water stored in the clean water chamber to the mop and/or the ground to wet the mop, and clean the ground based on the wetted mop, and the mobile robot may also collect dirt on the ground or sewage containing dirt into the dirt holding portion; the mobile robot may also transport the clean water stored in the clean water chamber to the mop to clean the mop, and the sewage containing dirt after cleaning the mop may also be transported to the dirt holding portion.
本申请实施方式中的计算机可读存储介质存储有计算机程序,在计算机程序被一个或多个处理器执行的情况下,实现上述的方法。该计算机可读存储介质可以包括平板电脑的存储部件、个人计算机的硬盘、只读存储器(ROM)、可擦除可编程只读存储器(EPROM)、便携式只读存储器(CD-ROM)、USB存储器、或者上述存储介质的任意组合。计算机可读存储介质可以是一个或多个计算机可读存储介质的任意组合。The computer-readable storage medium in the embodiment of the present application stores a computer program, and when the computer program is executed by one or more processors, the above method is implemented. The computer-readable storage medium may include a storage component of a tablet computer, a hard disk of a personal computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable read-only memory (CD-ROM), a USB memory, or any combination of the above storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media.
在本说明书的描述中,参考术语“某些实施方式”、“一个例子中”、“示例地”等的描述意指结合实施方式或示例描述的具体特征、结构、材料或者特点包含于本申请的至少一个实施方式或示例中。在本说明书中,对上述术语的示意性表述不一定指的是相同的实施方式或示例。而且,描述的具体特征、结构、材料或者特点可以在任何的一个或多个实施方式或示例中以合适的方式结合。此外,在不相互矛盾的情况下,本领域的技术人员可以将本说明书中描述的不同实施例或示例以及不同实施例或示例的特征进行结合和组合。In the description of this specification, the descriptions with reference to the terms "certain embodiments", "in an example", "exemplarily", etc., mean that the specific features, structures, materials or characteristics described in conjunction with the embodiments or examples are included in at least one embodiment or example of the present application. In this specification, the schematic representations of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in any one or more embodiments or examples in a suitable manner. In addition, those skilled in the art may combine and combine the different embodiments or examples described in this specification and the features of the different embodiments or examples, unless they are contradictory.
流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为,表示包括一个或更多个用于实现特定逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分,并且本申请的优选实施方式的范围包括另外的实现,其中可以不按所示出或讨论的顺序,包括根据所涉及的功能按基本同时的方式或按相反的顺序,来执行功能,这应被本申请的实施例所属技术领域的技术人员所理解。Any process or method description in a flowchart or otherwise described herein may be understood to represent a module, segment or portion of code that includes one or more executable instructions for implementing the steps of a specific logical function or process, and the scope of the preferred embodiments of the present application includes alternative implementations in which functions may not be performed in the order shown or discussed, including performing functions in a substantially simultaneous manner or in the reverse order depending on the functions involved, which should be understood by technicians in the technical field to which the embodiments of the present application belong.
尽管上面已经示出和描述了本申请的实施方式,可以理解的是,上述实施方式是示例性的,不能理解为对本申请的限制,本领域的普通技术人员在本申请的范围内可以对上述实施方式进行变化、修改、替换和变型。Although the embodiments of the present application have been shown and described above, it can be understood that the above embodiments are exemplary and cannot be understood as limitations to the present application. Ordinary technicians in this field can change, modify, replace and modify the above embodiments within the scope of the present application.
Claims (12)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410463505.5A CN118411704A (en) | 2024-04-16 | 2024-04-16 | Mobile robot control method, mobile robot and storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410463505.5A CN118411704A (en) | 2024-04-16 | 2024-04-16 | Mobile robot control method, mobile robot and storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN118411704A true CN118411704A (en) | 2024-07-30 |
Family
ID=91988978
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202410463505.5A Pending CN118411704A (en) | 2024-04-16 | 2024-04-16 | Mobile robot control method, mobile robot and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN118411704A (en) |
- 2024-04-16: CN CN202410463505.5A patent/CN118411704A/en active Pending
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP7442063B2 (en) | Vacuum cleaner control method and control system | |
| US11537135B2 (en) | Moving robot and controlling method for the moving robot | |
| CN106998983B (en) | Electric vacuum cleaner | |
| KR100735565B1 (en) | Object detection method using structured light and robot using same | |
| CN113916230B (en) | Systems and methods for performing simultaneous localization and mapping using a machine vision system | |
| US10966585B2 (en) | Moving robot and controlling method thereof | |
| WO2019007038A1 (en) | Floor sweeping robot, floor sweeping robot system and working method thereof | |
| US20190254490A1 (en) | Vacuum cleaner and travel control method thereof | |
| CN108247647A (en) | A kind of clean robot | |
| TWI726031B (en) | Electric sweeper | |
| CN111405862B (en) | Electric vacuum cleaner | |
| EP2838410A1 (en) | Autonomous coverage robot | |
| JP2014085940A (en) | Plane detection device and autonomous moving device including the same | |
| CN113613536B (en) | robot cleaner | |
| AU2023249470A1 (en) | Automatic cleaning devices, control method and storage medium | |
| JP6912937B2 (en) | Vacuum cleaner | |
| JP7719943B2 (en) | Robot vacuum cleaner and control method for robot vacuum cleaner | |
| CN118411704A (en) | Mobile robot control method, mobile robot and storage medium | |
| JP2020047188A (en) | Autonomous traveling vacuum cleaner | |
| KR20230012855A (en) | Method and device for real time measurement of distance from and width of objects using cameras and artificial intelligence object recognition, robot vacuum cleaners comprising the device, and movement control method for avoiding obstacles | |
| JP2020052601A (en) | Autonomous travel cleaner and control method | |
| KR20220012001A (en) | Robot Cleaner and Controlling method thereof | |
| RU2800503C1 (en) | Cleaning robot and method of automatic control of cleaning robot | |
| CN114488103A (en) | Ranging system, ranging method, robot, equipment and storage medium | |
| JP2024006218A (en) | Movement control system, movement control method, and program |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |