CN112097769B - Homing pigeon brain-hippocampus-imitated unmanned aerial vehicle simultaneous positioning and mapping navigation system and method - Google Patents
- Publication number
- CN112097769B CN202010776277.9A CN202010776277A
- Authority
- CN
- China
- Prior art keywords
- experience
- image
- navigation
- module
- cell
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
- G01C21/32—Structuring or formatting of map data
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a homing pigeon brain-hippocampus-imitated simultaneous localization and mapping (SLAM) navigation system and method for unmanned aerial vehicles. The system comprises seven modules: an image acquisition and preprocessing module, a local view cell module, an odometry module, a pose cell module, a head direction cell module, a grid cell module, and an experience map module. The method models the pigeon hippocampus's two independent navigation mechanisms: based on continuous attractor neural network models, it establishes two brain-inspired path integration algorithms and combines them with the local view cell, pose cell, and experience map modules to achieve redundant, robust localization and navigation. This remedies the original RatSLAM algorithm's lack of redundant navigation, assists UAVs in navigation and positioning, and raises the level of UAV intelligence.
Description
Technical Field
The invention relates to a homing pigeon brain-hippocampus-imitated simultaneous localization and mapping navigation system and method for unmanned aerial vehicles (UAVs), and belongs to the field of UAV autonomous navigation.
Background Art
With the advent of the information age, navigation technology plays an increasingly important role in modern battlefield environments. A key problem that intelligent robots, with UAVs as a representative example, urgently need to solve is autonomous navigation: the process by which a UAV perceives and learns through its own sensors in a dynamic, unstructured environment to achieve goal-oriented, obstacle-avoiding autonomous movement. In December 2007, the Unmanned Aircraft Systems Roadmap 2007-2032 issued by the U.S. Department of Defense pointed out that, as the definition of autonomy levels for UAVs is progressively clarified, UAV autonomous navigation technology is being combined with artificial intelligence and will, to a certain degree, gain the ability to perceive the environment and make decisions.
At present, UAVs have systematically developed navigation methods such as inertial navigation and satellite navigation, but their level of intelligence remains low compared with the navigation abilities of living organisms. Satellite navigation systems are vulnerable to deliberate jamming and disruption, and their signals may be denied or degraded among urban high-rises, at the foot of mountains, or in canyons; inertial navigation accumulates error rapidly over time, and high-precision inertial sensors are costly and bulky. Simultaneous localization and mapping (SLAM) is one of the main approaches for autonomous UAV navigation and positioning in unstructured or unknown complex environments, but existing UAV SLAM systems have the following prominent problems: they rely on expensive, high-precision environmental perception sensors such as lidar and require accurate physical models of the world and the UAV; they are strongly affected by the environment and exhibit a low level of autonomous intelligence, and therefore cannot adequately meet UAV navigation requirements. Autonomous, intelligent navigation methods need to be developed.
Over hundreds of millions of years of evolution, many animals have developed a remarkable ability to perceive information from the external environment and use it for navigation. Unlike humans, who rely on high technology and precision equipment, creatures in nature achieve accurate navigation through their particular physiological structures using environmental information alone, enabling behaviors such as foraging, homing, and long-distance migration. The nematode, with only 302 neurons, can still find food from the concentration of odors conveyed by olfactory signals. Animals with slightly more complex nervous systems, such as bees, employ more numerous and more sophisticated path-planning strategies, while birds and mammals survive in complex environments by virtue of far more powerful and intricate neural networks. Studying how higher animals abstract complex environmental information into cognitive information in the brain and form an understanding of the environment, so as to reproduce the intelligence they display in complex environments, is the key to solving the problem of autonomous intelligent navigation.
Animals can localize and navigate accurately and quickly even without high-precision sensory organs or high-resolution maps. This highly intelligent navigation ability depends on a cognitive map, generated by the hippocampal formation, that represents the spatial relationships among objects in the environment; the formation of the cognitive map is closely related to the spatial cells found in the hippocampus. The firing activity of hippocampal spatial cells is believed to form an internal representation of the environment, the so-called cognitive map. The successive discoveries of several key navigation-related cell types in the hippocampal region of rats and other mammals — place cells, head direction cells, grid cells, and border cells — have made a deep understanding of the brain's navigation mechanisms possible. Inspired by these mechanisms, brain-inspired navigation technology based on artificial intelligence has developed considerably in recent years, with RatSLAM as a representative navigation method simulating the neural processes of rodent brain navigation. Current research on robot navigation based on the environmental cognition mechanisms of the rodent hippocampus falls into two main categories. One builds neural network models from hippocampal anatomy and the cognitive mechanisms of spatial cells for autonomous mobile-robot navigation, deployed on mobile robots equipped with a rat-brain-like nervous system; the other is robot simultaneous localization and mapping (SLAM) based on hippocampal neuroethology. The former emphasizes simulating animal navigation behavior at the behavioral level to realize cognitive navigation for robots; the latter emphasizes simulating the hippocampal structure itself, strictly imitating the information-transfer mechanisms among hippocampal regions to realize the functions of the various spatial cells. Comparatively, the former is closer to practice and better suited to existing navigation applications, and RatSLAM has already been used in real projects.
Compared with rodents, the navigation system of homing pigeons has more advantages and is better suited to the requirements of intelligent UAV navigation scenarios. Studies have shown that the hippocampal formation of homing pigeons plays a key role in learning landmark navigation in natural environments, and that their navigation exhibits two clearly independent mechanisms: the first provides homing guidance directly from memorized visual landmarks, called "piloting," with which autonomous localization and navigation can be achieved by vision alone; the second homes by recalling a compass direction from familiar landmarks, called the "mosaic map." The pigeon hippocampal formation therefore combines compass orientation with landmark guidance to form a dual control system that operates simultaneously or switches in alternation, and pigeon navigation in familiar areas is a joint strategy combining compass and landmark information. Directly applying the rodent-inspired RatSLAM or other mammal-based bio-inspired SLAM methods to model the pigeon's hippocampal navigation mechanism lacks sufficient physiological grounding and cannot accurately reflect the physiological phenomena and cognitive functions of the pigeon hippocampal formation during navigation. At the same time, the pigeon's intelligent homing mechanism matches the technical requirements of autonomous visual UAV navigation very well; if the intelligent mechanisms that emerge during pigeon homing can be applied to autonomous UAV navigation, the application prospects are undoubtedly broad.
Addressing the shortcomings of existing brain-inspired neural network navigation systems, and inspired by the navigation mechanism of the homing pigeon hippocampus, the present invention proposes a homing pigeon brain-hippocampus-imitated UAV SLAM navigation system and method. The proposed SLAM navigation method simulates the pigeon brain-hippocampus neural navigation mechanism from a bionic perspective, integrates two independent attractor-network-based path integration techniques, and combines them with local view correction to realize redundant navigation, mapping, and localization, effectively improving UAV autonomy.
Summary of the Invention
The purpose of the present invention is to provide a homing pigeon brain-hippocampus-imitated simultaneous localization and mapping navigation system and method for UAVs. It aims to simulate the cognitive navigation mechanism of the homing pigeon hippocampus and to establish a UAV navigation system modeled on it. Specifically, it models the pigeon hippocampus's two localization and navigation mechanisms by establishing two continuous attractor neural network models that implement two path integration algorithms, and combines them with local-view-corrected topological mapping to achieve robust, redundant navigation and raise the level of UAV intelligence.
The present invention is a homing pigeon brain-hippocampus-imitated simultaneous localization and mapping navigation system for UAVs, hereinafter referred to as the SLAM navigation system. The system framework is shown in Fig. 1 and is described in detail below.
The SLAM navigation system consists of seven modules: 1) an image acquisition and preprocessing module; 2) a local view cells module; 3) an odometry module; 4) a pose cell module; 5) a head direction cell module; 6) a grid cell module; and 7) an experience map module. By function, the system can further be divided into two independent brain-inspired navigation subsystems, which differ in the attractor network model used to fuse odometric information with landmark perception into a consistent representation of the environment. The image acquisition and preprocessing module, local view cell module, odometry module, pose cell module, and experience map module together constitute the first pigeon-hippocampus cognitive navigation subsystem; the image acquisition and preprocessing module, local view cell module, pose cell module, head direction cell module, grid cell module, and experience map module together constitute the second pigeon-hippocampus cognitive navigation subsystem.
1) Image acquisition and preprocessing module, comprising image acquisition and image preprocessing. For image acquisition, a vision sensor captures real-time images; image preprocessing then converts the color images to grayscale and performs image enhancement and image partitioning. The partitioned images are used in the computations of the visual odometry module and the local view module.
2) Local view cells (LV) module: an expandable array of cells in which each LV represents a distinct visual scene in the environment. An LV simulates a snapshot of the current environment as seen by the animal. When a new visual scene is encountered, a new LV is created and associated with the raw pixel data of that scene and with the pose cells (PC); when a known area is revisited (loop closure), the LV provides an activation effect. See the LV module portion of Fig. 1 and the scanline intensity profiles extracted from different images in Figs. 2a, b, and c.
3) Odometry module, comprising two independent odometers. The first is a visual odometer based on the scanline intensity profile, which coarsely determines camera motion by comparing differences between consecutive images, as shown in Fig. 3. The second is a feature-point-based visual odometry module, which supplies motion information to the head direction cell module and the grid cell module and is more accurate; its processing flow is shown in the lower-left corner of Fig. 1.
4) Pose cells (PC) module. The PC network is implemented as a three-dimensional (3D) competitive attractor neural network that converges to a stable activation pattern over its units through local excitation and global inhibition. Competitive attractor networks admit many configurations, but typically each unit excites cells close to itself and inhibits cells farther away, so that one cluster of activity eventually dominates; this cluster is called the activity packet. Activity near the packet tends to be reinforced, while activity injected far from it is suppressed. If enough activity is injected, a new packet can "win" while the old packet dies out. See the PC module portion of Fig. 1 and the PC firing and spatial distribution plots in Figs. 4a and b.
5) Head direction cells (HD) module. An HD cell fires maximally when the animal faces a particular direction, modeling the preferred-direction mechanism of head direction cells; each cell has a different preferred direction, and the joint response of the HD population encodes the horizontal heading. When the information from all HD cells is integrated, a continuous head direction signal is produced. HD cells require an external motion signal. Because the core motion-sensing information of traditional RatSLAM comes from a coarse visual odometer, it is strongly affected by the external environment and suffers from low reliability and poor environmental adaptability. The present invention estimates direction with a feature-point-based visual odometry motion estimation algorithm. The firing characteristics of HD cells and a schematic of HD cells with four preferred directions are shown in Fig. 5.
6) Grid cells (GC) module. GC firing shows a markedly regular response to an animal's position in two-dimensional space and is considered the neural basis of dead reckoning. To simulate the neural response of GCs, a 2D continuous attractor neural network is used: the network pattern is driven to flow by a velocity input, and the animal's displacement can be read out from that flow, so dead reckoning can be achieved from motion information. If the UAV's motion information is estimated from the vision system, its velocity vector and heading can be computed and used to drive the periodic network to generate a flowing lattice pattern, simulating the GC path integration mechanism from a biological perspective. Fig. 6 shows two consecutive lattice patterns of the GC network, with the corresponding active neurons marked by red circles, from which the lattice pattern can be seen to flow to the left. The GC model characterizes the periodic grid fields by path-integrating a velocity-modulated input signal; that is, the module simulates GC firing to realize odometric computation and therefore receives the UAV's velocity information. The GC module receives the velocity estimated by the feature-point-based visual odometry motion estimation algorithm, together with the direction input from the HD module, and the two jointly drive the GC network to form the motion flow.
7) Experience map (EP) module. The EP simulates the cognitive map in the pigeon brain: it organizes the information streams from the PCs, the LVs, and self-motion estimation into a set of related spatial experiences, building a topological map. A single experience in the EP is defined by the conjunction of an activity pattern in the PCs and an activity pattern in the LVs, and each experience has an associated position and orientation. When a new node is created, a link to the previously active node is also created; the link encapsulates the pose transformation between the nodes based on odometric information. When the UAV returns to a known area, the LV is the same as in previous experience, and map correction is performed by aligning the experience positions and orientations stored in the EP with the transformation information of the current position, as shown in the EP module of Fig. 1.
A homing pigeon brain-hippocampus-imitated simultaneous localization and mapping navigation method for UAVs is implemented in the following steps:
Step 1: Establish and initialize the dynamics model of the 3D pose cell (PC) continuous attractor neural network.
The PC network is a 3D continuous attractor network (CAN): a collection of cells connected by excitatory and inhibitory links whose characteristics resemble those of navigation neurons found in mammals, such as grid cells. A PC is an abstract cell that updates its activity by shifting the activity packet and is used to encode position in the large-scale environments of UAV navigation tasks. The PC network dynamics are such that the steady state is a single cluster of activated units, called the activity packet. The centroid of this activity packet encodes the UAV's best internal estimate of its current pose. This dynamical behavior is realized through locally excitatory, globally inhibitory connectivity; the excitatory weight matrix ε_{a,b,c} is described by

$$\varepsilon_{a,b,c} = e^{-(a^2+b^2)/k_p}\, e^{-c^2/k_d} \qquad (1)$$

where k_p and k_d are the variance constants for position and direction, and a, b, c denote the distances between the excited unit and the origin of the joint (x', y', θ') coordinate system. The inhibitory weight matrix has the same form, differing only in its variance constants and inhibition range.
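A minimal numpy sketch of this weight kernel and one local-excitation pass may clarify the update; the kernel radius, grid dimensions, and constants below are illustrative choices, not values fixed by the patent:

```python
import numpy as np

def excitatory_weights(k_p=1.0, k_d=1.0, r=3):
    """Excitatory kernel eps[a, b, c] of equation (1); a, b index (x', y')
    offsets and c indexes theta' offsets from the excited unit."""
    offs = np.arange(-r, r + 1)
    a, b, c = np.meshgrid(offs, offs, offs, indexing="ij")
    return np.exp(-(a**2 + b**2) / k_p) * np.exp(-(c**2) / k_d)

def excite(P, eps):
    """One local-excitation pass over the pose cell tensor P, with
    wrap-around so the attractor lives on a torus."""
    r = (eps.shape[0] - 1) // 2
    dP = np.zeros_like(P)
    for da in range(-r, r + 1):
        for db in range(-r, r + 1):
            for dc in range(-r, r + 1):
                dP += eps[da + r, db + r, dc + r] * np.roll(P, (da, db, dc), (0, 1, 2))
    return P + dP

P = np.zeros((20, 20, 36))          # 20 x 20 positions, 36 heading bins
P[10, 10, 0] = 1.0                  # seed an activity packet
P = excite(P, excitatory_weights())
```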
Step 2: Establish and initialize the dynamics model of the 2D grid cell (GC) continuous attractor neural network.
GCs show a markedly regular firing response to the animal's position in 2D space. When coupled with a velocity input, the GC continuous attractor network model can be driven to propagate its network state along the manifold and perform accurate path integration, thereby encoding the animal's position in space. The present invention models the grid cell dynamics with the following continuous attractor model:

$$\tau \frac{ds_i}{dt} + s_i = f\Big(\sum_j W_{ij}\, s_j + B_i\Big) \qquad (2)$$

where f is a nonlinear function, s_i is the synaptic activation of neuron i, W_{ij} is the synaptic weight from neuron j to neuron i, τ is the time constant of the neural response, and B_i is the feedforward input to neuron i.
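As a sketch, equation (2) can be integrated with a forward-Euler step; the rectifying nonlinearity assumed for f here is a common choice for this class of model and is not specified at this point in the text:

```python
import numpy as np

def gc_step(s, W, B, tau=0.5e-3, dt=0.5e-4):
    """One forward-Euler step of equation (2):
    tau * ds/dt + s = f(W s + B)."""
    f = lambda x: np.maximum(x, 0.0)   # assumed rectifying nonlinearity
    return s + (dt / tau) * (f(W @ s + B) - s)

N = 64
s = np.random.rand(N)
s = gc_step(s, np.zeros((N, N)), np.ones(N))   # trivial weights, constant drive
```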
Step 3: Image acquisition and image preprocessing.
Real-time images are acquired by the camera and converted to grayscale. An image enhancement algorithm based on the Laplacian operator is then used to enhance image contrast. The Laplacian operator is defined as

$$\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} \qquad (3)$$

The Laplacian can be used to enhance the image, and the resulting sharpening formula is

$$g(x,y) = f(x,y) + c\,\nabla^2 f(x,y) \qquad (4)$$

where g is the output, f is the original image, and c is a coefficient.
Finally, the image is cropped into three different regions A, B, and C, as shown in Fig. 3, used respectively for computing the odometric rotation, the translation, and image matching.
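A small sketch of the preprocessing stage, assuming numpy/scipy and treating the region bounds and the sharpening coefficient as placeholders (the patent specifies neither):

```python
import numpy as np
from scipy.ndimage import convolve

LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)   # discrete form of eq. (3)

def sharpen(gray, c=-1.0):
    """Laplacian sharpening, eq. (4): g = f + c * lap(f).  With this kernel
    (negative centre) c is taken negative so that edges are boosted."""
    g = gray.astype(float) + c * convolve(gray.astype(float), LAPLACIAN, mode="nearest")
    return np.clip(g, 0, 255)

def crop_regions(img, rows_a=(0, 80), rows_b=(80, 160), rows_c=(160, 240)):
    """Split the frame into regions used downstream for odometric rotation,
    translation, and template matching; the row bounds are placeholders."""
    return img[slice(*rows_a)], img[slice(*rows_b)], img[slice(*rows_c)]
```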
Step 4: Visual odometry motion estimation based on the scanline intensity profile.
The odometer is implemented from the scanline intensity profile. The profile is formed by summing the intensity values of the image's pixel columns and then normalizing the resulting row vector into a one-dimensional vector, the profile; see the LV module in Fig. 2 for the profiles obtained from different images. The profile is used to estimate the rotational and forward speeds between images for odometry, and the current image is compared with previously seen images to perform local view calibration.
The rotational speed is then estimated by finding the relative horizontal offset between two consecutive scanline profiles that minimizes the sum of their absolute differences; the translational speed is estimated by multiplying the minimum difference by a scale factor and capping it at a maximum value to prevent spurious results caused by large changes in illumination. The comparison of the current image with a previously seen image is performed by computing the mean absolute difference between the two scanline intensity profiles:

$$f(s, I_j, I_k) = \frac{1}{w - |s|} \sum_{n=1}^{w-|s|} \big| I_j(n+s) - I_k(n) \big| \qquad (5)$$

where I_j and I_k are the scanline intensity profiles being compared, s is the profile offset, and w is the image width. The value of s that minimizes f(s) gives the relative shift between the consecutive images I_j and I_k:

$$s_m = \mathop{\arg\min}_{s \in [\rho - w,\, w - \rho]} f(s, I_j, I_k) \qquad (6)$$

where the offset bound ρ ensures sufficient overlap between the profiles. The angle and speed are then related to s and f(s) by:
$$\Delta\theta = \sigma\, s_m \qquad (7)$$

$$v = \min\big[\, v_{cal}\, f(s_m, I_j, I_{j-1}),\ v_{max} \,\big] \qquad (8)$$
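Putting equations (5)-(8) together, a hedged numpy sketch of the profile-based motion estimate; sigma, v_cal, v_max, and rho are illustrative calibration constants:

```python
import numpy as np

def scanline_profile(gray):
    """Column-wise intensity sums, normalised into a 1-D profile."""
    p = gray.sum(axis=0).astype(float)
    return p / p.sum()

def profile_diff(Ij, Ik, s):
    """Mean absolute difference between two profiles at offset s, eq. (5)."""
    if s < 0:
        return profile_diff(Ik, Ij, -s)
    w = len(Ij)
    return np.abs(Ij[s:] - Ik[:w - s]).mean()

def estimate_motion(Ij, Ik, rho=10, sigma=0.1, v_cal=50.0, v_max=2.0):
    """Rotation (eq. 7) and capped translation speed (eq. 8) from the
    minimising shift s_m of eq. (6)."""
    w = len(Ij)
    s_m = min(range(rho - w, w - rho), key=lambda s: profile_diff(Ij, Ik, s))
    dtheta = sigma * s_m
    v = min(v_cal * profile_diff(Ij, Ik, s_m), v_max)
    return dtheta, v
```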
Step 5: Obtain direction from the head direction cells (HD).
This step runs in parallel with Step 4; external self-motion information can be obtained through speed and direction sensing. HD cells are neurons related to an animal's sense of directional heading relative to its environment. Each cell has a single maximal firing direction and fires at any location as long as the animal's head points in the appropriate direction. Unlike a compass, HD cells do not depend on the Earth's magnetic field, but on landmarks and self-motion signals. HD cells fire whether the animal is moving or stationary, independently of its ongoing behavior. HD cells are considered the neural substrate of the directional heading an organism perceives in its environment, enabling accurate navigation.
The HD cells are used to generate a velocity-modulated signal, i.e., the firing rate is proportional to the current head direction and to the speed of the animal's movement. The preferred direction of the i-th HD cell can be expressed as an angular offset θ_i relative to a principal orientation θ_0, which defines the tuning kernel of that cell (equation (9)). Given the animal's instantaneous angular velocity w(t), the head direction signal (equation (10)) is obtained by combining the responses of the population, where m is the number of HD cells and d_i(t) is the head direction signal at time t of the i-th cell with preferred direction θ_i.
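Since the tuning kernel of equation (9) is not reproduced above, the following sketch assumes a rectified-cosine kernel, a common choice for HD models; only the structure taken from the text is fixed (velocity-modulated, direction-tuned responses combined into one continuous heading signal):

```python
import numpy as np

THETA_PREF = np.deg2rad([0.0, 90.0, 180.0, 270.0])   # E, N, W, S cells

def hd_signals(heading, omega, theta_pref=THETA_PREF):
    """Velocity-modulated HD responses d_i(t): each cell fires in proportion
    to how close the heading is to its preferred direction, scaled by the
    angular speed.  This is the drive signal fed to the grid network."""
    return omega * np.maximum(np.cos(heading - theta_pref), 0.0)

def decode_heading(rates, theta_pref=THETA_PREF):
    """Population-vector readout of the continuous head direction signal."""
    return np.arctan2((rates * np.sin(theta_pref)).sum(),
                      (rates * np.cos(theta_pref)).sum())
```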
Step 6: Path integration based on grid cell (GC) flow.
Receive the speed and direction information from Step 5, update the cells' activation responses according to the GC network dynamics model of Step 2, and compute the relative motion information.
The GC activation pattern is driven by the velocity input, and the motion displacement can be computed from the flow of activated cells, which means dead reckoning can be achieved from the motion information. For a moving UAV, the velocity vector v and heading direction θ are computed from the motion information estimated in Step 5, such as the translational and angular velocities, and are then used to drive the periodic network to generate a flowing lattice pattern. Estimating the UAV position from this flow yields GC-flow-based visual odometry. Fig. 6 shows two typical consecutive lattice patterns, with two sets of typical active neurons marked by red circles, from which the flow and direction of the lattice pattern can be seen. The weight matrix W_{ij} in equation (2) is

$$W_{ij} = W_0\big(\mathbf{x}_i - \mathbf{x}_j - l\,\hat{e}_{\theta_j}\big) \qquad (11)$$

where

$$W_0(\mathbf{x}) = a\, e^{-\gamma |\mathbf{x}|^2} - e^{-\beta |\mathbf{x}|^2} \qquad (12)$$

and l, a, γ, β are the relevant simulation parameters. The feedforward input is

$$B_i = A(\mathbf{x}_i)\big(1 + \alpha\, \hat{e}_{\theta_i} \cdot \mathbf{v}\big) \qquad (13)$$

where $\hat{e}_{\theta_j}$ is the unit vector pointing in direction θ_j, v is the velocity vector, and A is the envelope function regulating the strength of the neuron input. If l and α are nonzero, the velocity couples with the network dynamics and drives the pattern flow.
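A compact numpy sketch of equations (11)-(13); the default β = 3/λ_net² (with λ_net = 13) and α = 0.1 follow the usual parameterization of this grid cell model and are assumptions here:

```python
import numpy as np

LAMBDA_NET = 13.0                 # grid period (patent: lambda_net = 13)
BETA = 3.0 / LAMBDA_NET**2        # assumed, from the standard model
GAMMA = 1.05 * BETA               # patent: gamma = 1.05 * beta

def gc_weights(pos, theta_pref, l=2.0, a=1.0, gamma=GAMMA, beta=BETA):
    """Recurrent weights of eqs. (11)-(12):
    W_ij = W0(x_i - x_j - l e_{theta_j}), W0(x) = a e^{-g|x|^2} - e^{-b|x|^2}."""
    e = np.stack([np.cos(theta_pref), np.sin(theta_pref)], axis=1)
    d = pos[:, None, :] - pos[None, :, :] - l * e[None, :, :]
    r2 = (d ** 2).sum(axis=-1)
    return a * np.exp(-gamma * r2) - np.exp(-beta * r2)

def feedforward(theta_pref, v, A=1.0, alpha=0.1):
    """Velocity-coupled input of eq. (13): B_i = A (1 + alpha e_{theta_i} . v);
    A = 1 everywhere matches the periodic-boundary envelope of eq. (24)."""
    e = np.stack([np.cos(theta_pref), np.sin(theta_pref)], axis=1)
    return A * (1.0 + alpha * (e @ v))
```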
Step 7: The local view cells (LV) generate the local view template.
The LV module extracts image information and generates a view template. This step takes region A in Fig. 3 as a sub-image and likewise computes the scanline intensity profile of that sub-image; the resulting one-dimensional vector is the image template. At the same time, the activity information is passed to the pose cells and a connection is created; then Step 8 is executed.
The LV computation compares the current profile (the scanline intensity profile of region A) against the stored templates using the mean absolute intensity difference function of equation (5), evaluated over a small range of pixel offsets ψ:

$$d_k = \min_{\psi} f\big(\psi,\, I^j,\, I^k\big) \qquad (14)$$

where I^j denotes the current profile and the I^k are all stored profiles. Comparing the resulting difference against a threshold determines whether the current view is sufficiently similar to an existing scene or constitutes a new scene.
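A sketch of the template comparison of equation (14); the offset range and the similarity threshold are illustrative:

```python
import numpy as np

def match_local_view(cur, templates, psi_max=5, thresh=0.02):
    """Compare the current profile with every stored template over offsets
    psi (eq. 14); append a new template when nothing is similar enough."""
    def diff(a, b, psi):
        if psi < 0:
            return diff(b, a, -psi)
        w = len(a)
        return np.abs(a[psi:] - b[:w - psi]).mean()

    best_id, best = None, np.inf
    for k, tpl in enumerate(templates):
        d = min(diff(cur, tpl, psi) for psi in range(-psi_max, psi_max + 1))
        if d < best:
            best_id, best = k, d
    if best > thresh:                     # novel scene: create a new LV
        templates.append(cur.copy())
        return len(templates) - 1, True
    return best_id, False                 # familiar scene (loop-closure cue)
```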
Step 8: Pose cell (PC) activation and update.
The PCs use the dynamics model established in Step 1 and represent the UAV's three-degree-of-freedom pose; they are updated and calibrated by the attractor dynamics, the path integration information, and the LV information. The PCs are arranged in three dimensions: two dimensions (x', y') represent the manifold over absolute position (x, y), and one dimension θ' represents the manifold over absolute heading θ. The motion estimate of Step 4 drives path integration in the pose units, and the local view template of Step 7 triggers the LVs associated with the PCs through associative learning; the PC activation is updated according to the dynamics model of Step 1. The change in PC activity due to local excitation is given by

$$\Delta P_{x',y',\theta'} = \sum_{i=0}^{n_{x'}-1} \sum_{j=0}^{n_{y'}-1} \sum_{k=0}^{n_{\theta'}-1} P_{i,j,k}\, \varepsilon_{a,b,c} \qquad (15)$$

where n_{x'}, n_{y'}, n_{θ'} are the dimensions of the PC matrix in (x', y', θ') space.
Step 9: Compute the relative motion and create the experience map (EP) in preparation for topological linking.
The EP is a fine-grained topological graph composed of many individual experiences e linked by transitions; its function is to organize the information streams from the PCs, the LVs, and self-motion into a set of related spatial experiences. A single experience in the EP is defined by the conjunction of an activity pattern P_i in the PCs and an activity pattern V_i in the LVs, and is located at position p_i. It can therefore be defined as a triple:
$$e_i = \{P_i,\ V_i,\ p_i\} \qquad (16)$$
The first experience is created at an arbitrary starting point, and subsequent experiences build upon it. Whenever one or both of P_i and V_i change, a new experience is formed and linked to the previous one through a transition that encodes the distance derived from the self-motion cues. As the UAV enters new areas, new experiences and transitions continue to form.
This step takes the self-motion information from Steps 4 and 6 and the local view template from Step 7, and computes and accumulates the relative motion. To decide whether the current experience is a new scene or an existing one, the current P_i and V_i are compared with all experiences through the score S of equation (17):
$$S = \mu_p\, |P_i - P| + \mu_v\, |V_i - V| \qquad (17)$$
where μ_p and μ_v are the weights of the pose code and the local view code. If the experience is new, Step 10 is executed; otherwise, Step 11 is executed.
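A minimal sketch of the experience triple of equation (16) and the matching score of equation (17), treating the activity patterns as flat arrays and the weights and threshold as placeholders:

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Experience:
    """One experience e_i = {P_i, V_i, p_i} of eq. (16)."""
    P: np.ndarray                      # pose cell activity pattern
    V: np.ndarray                      # local view activity pattern
    p: np.ndarray                      # position/orientation in the map
    out_links: list = field(default_factory=list)   # (j, dp): e_i -> e_j
    in_links: list = field(default_factory=list)    # (k, dp): e_k -> e_i

def score(exp, P, V, mu_p=1.0, mu_v=1.0):
    """Matching score S of eq. (17), with |.| read as a sum of absolute
    differences between activity patterns."""
    return mu_p * np.abs(exp.P - P).sum() + mu_v * np.abs(exp.V - V).sum()

def closest_experience(experiences, P, V, s_max=1.0):
    """Best-matching node, or None if every score exceeds the (empirically
    set) threshold, in which case Step 10 creates a new experience."""
    best = min(experiences, key=lambda e: score(e, P, V), default=None)
    return best if best is not None and score(best, P, V) <= s_max else None
```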
Step 10: Create a new experience and update the experience map (EP).
When the score of equation (17) exceeds the maximum threshold, a new scene is declared, and if the accumulated relative motion increment is also greater than its threshold, a new experience is created; the maximum threshold is set empirically. Through the pose-association transformation of the self-motion information, the pose change obtained from the odometry measurements is t_{ij}:

$$t_{ij} = \{\Delta p_{ij}\} \qquad (18)$$

Δp_{ij} is the pose change obtained from the odometer, and t_{ij} establishes the link between the previous experience e_i and the new experience e_j. The new experience is then

$$e_j = \{P_j,\ V_j,\ p_i + \Delta p_{ij}\} \qquad (19)$$
In this way, the link to previous experience through the encoded transition forms a new map.
Step 11: Perform local view calibration, detect loop closures, filter out spurious information, and perform map correction. If the local view is judged to be a scene encountered before, the active LV injects activity through its learned connections into the PCs associated with that scene, and the attractor dynamics of the network filter the activity injected from the LV. If the LV activity is injected into an inactive region of the PC network, a conflict arises and the globally inhibitory connections tend to suppress the new packet; return to Step 10. Otherwise, the PC and LV active locations coincide, and map correction is performed by aligning the active position and orientation with the transformation information stored in the EP.
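A hedged sketch of the local view injection described in this step; the injection gain, the inhibition level, and the lv_links structure (learned LV-to-PC connections) are illustrative:

```python
import numpy as np

def inject_local_view(P, lv_links, active_lv, delta=0.02):
    """Inject activity from the active LV into the pose cells it learned to
    connect to, then apply global inhibition and renormalise.  If the
    injection lands far from the current packet, the attractor dynamics
    suppress it unless it keeps receiving support."""
    for (x, y, th), w in lv_links.get(active_lv, []):
        P[x, y, th] += delta * w        # learned LV -> PC connection
    P = np.maximum(P - 0.001, 0.0)      # global inhibition (illustrative)
    total = P.sum()
    return P / total if total > 0 else P
```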
Because odometric motion estimation based on image matching inevitably accumulates error, when a loop closure occurs the accumulated change in the transformed position makes the experience at the loop closure unlikely to match the same previous location. To achieve a match, the positions of all experiences therefore need to be updated, i.e., map correction:

$$\Delta p_i = \alpha \left[ \sum_{j=1}^{N_f} \big(p_j - p_i - \Delta p_{ij}\big) + \sum_{k=1}^{N_t} \big(p_k - p_i + \Delta p_{ki}\big) \right] \qquad (20)$$

where α is the correction rate constant, N_f is the number of links from experience e_i to other experiences, and N_t is the number of links from other experiences to experience e_i.
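One relaxation pass of equation (20) might look as follows, extending the Experience sketch above with its out_links and in_links lists of (node index, Δp) pairs; the Jacobi-style batching of corrections is a design choice, not prescribed by the patent:

```python
def relax_map(experiences, alpha=0.5):
    """One pass of eq. (20): nudge every experience position toward
    agreement with the odometric transitions on its links.  Corrections are
    computed for all nodes first, then applied."""
    corrections = []
    for e in experiences:
        c = 0.0
        for j, dp in e.out_links:        # N_f outgoing links e_i -> e_j
            c = c + (experiences[j].p - e.p - dp)
        for k, dp in e.in_links:         # N_t incoming links e_k -> e_i
            c = c + (experiences[k].p - e.p + dp)
        corrections.append(alpha * c)
    for e, c in zip(experiences, corrections):
        e.p = e.p + c
```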
Step 12: Repeat Steps 3 to 11, outputting in real time two independent visual odometry pose estimates, the pose cell activation map, and the experience map, realizing simultaneous localization and navigation.
The advantages and effects of the homing pigeon brain-hippocampus-imitated UAV simultaneous localization and mapping navigation system and method of the present invention are as follows: it models the pigeon hippocampus's two independent navigation mechanisms; based on continuous attractor neural network models, it establishes two brain-inspired path integration algorithms and combines them with the local view cell, pose cell, and experience map modules to achieve redundant, robust localization and navigation, remedying the original RatSLAM algorithm's lack of redundant navigation; it can assist UAVs in navigation and positioning and raise the level of UAV intelligence.
Brief Description of the Drawings
Fig. 1: Structure of the homing pigeon brain-hippocampus-imitated UAV SLAM navigation system
Fig. 2a, b, c: Schematic of local view template generation
Fig. 3: Image preprocessing partitioning
Fig. 4a, b: 3D pose cell activation and spatial distribution
Fig. 5a, b: Head direction cell firing and preferred directions
Fig. 6a, b: 2D grid cell flow in the periodic network driven by motion information
Fig. 7: Flowchart of the homing pigeon brain-hippocampus-imitated UAV SLAM navigation system
Fig. 8a, b, c, d: Visual odometry motion estimation results, pose cell activation map, and experience map
The labels and symbols in the figures are explained as follows:
LV1, LV2, LV3: local view templates
Visual Odometry1: the first visual odometry module
Visual Odometry2: the second visual odometry module
Bundle Adjustment: bundle adjustment
x', y', θ': three-dimensional coordinate system; x', y' denote displacement and θ' denotes direction
E, N, W, S: head direction cell preferred directions, corresponding to (0°, 90°, 180°, 270°)
Detailed Description of the Embodiments
Referring to Figs. 1 to 8, the effectiveness of the proposed system and method is verified below through a concrete example of the homing pigeon brain-hippocampus-imitated UAV SLAM navigation method, using the KITTI dataset jointly created by the Karlsruhe Institute of Technology in Germany and the Toyota Technological Institute at Chicago. The implementation flow of the system and method is shown in Fig. 7, and the specific steps are as follows:
Step 1: Establish and initialize the dynamics model of the 3D pose cell (PC) continuous attractor neural network.
The dynamical behavior of the PC network is realized through locally excitatory, globally inhibitory connectivity, as described by equation (1). In the present invention, the position and direction variance constants for excitation are set to 1 and the excitation range to 7. The inhibitory connections take the same form with their own parameter values.
Step 2: Establish and initialize the dynamics model of the 2D grid cell (GC) continuous attractor neural network.
GCs show a markedly regular firing response to the animal's position in 2D space; when coupled with a velocity input, the GC continuous attractor network model can be driven to propagate its network state along the manifold and perform accurate path integration, thereby encoding the animal's position in space. The present invention models the grid cell dynamics with the continuous attractor model of equation (2), where f is a nonlinear activation function and τ, the time constant of the neural response, is taken as 0.5 ms in the simulation.
Step 3: Image acquisition and image preprocessing.
Real-time images are acquired by the camera and converted to grayscale. An image enhancement algorithm based on the Laplacian operator is then used to enhance image contrast, with the resulting sharpening formula given by equation (4). Finally, the image is cropped into three different regions A, B, and C, as shown in Fig. 3, used respectively for computing the odometric rotation, the translation, and image matching.
Step 4: Visual odometry motion estimation based on the scanline intensity profile.
According to equations (5)-(6), the minimum of the mean absolute difference between the scanline intensity profiles of consecutive images and the corresponding shift are obtained; the angle and speed changes are then solved from equations (7)-(8).
Step 5: Obtain direction from the head direction cells (HD).
Consecutive images are acquired, pose estimates are obtained with the feature-point-based visual odometry estimation algorithm, and the direction and speed information are then solved and passed to the HD and GC modules respectively. The present invention assumes the whole neural network has N = 128² neurons, and each neuron i has a preferred direction from (0°, 90°, 180°, 270°), as shown in Figs. 5a and b, which determines the direction of its outgoing weight shift and the velocity input it receives. The preferred direction is specified by θ_i, and each 2×2 block contains one neuron for each preferred direction, as in equation (9). The direction information obtained from the odometer is then substituted into equation (10) to solve for the HD head direction signal.
Step 6: Path integration based on grid cell (GC) flow.
For the moving UAV, the velocity vector v and heading direction θ are computed from the motion information estimated in Step 5, such as the translational and angular velocities. The speed and direction information of Step 5 is received, the weight matrix is computed as described in Step 2, and the cells' activation responses are updated using equations (11)-(13). These then drive the periodic network to generate the flowing lattice pattern, and estimating the UAV position from the flow yields GC-flow-based visual odometry. The relevant simulation parameters l, a, γ, β take the values l = 2, a = 1, γ = 1.05β, where λ_net = 13 is the grid period and l is the shift of the outgoing weights. The simulation uses a network with periodic boundary conditions, so the envelope function is 1 everywhere:

$$A(\mathbf{x}) = 1 \qquad (24)$$
Step 7: The local view cells (LV) generate the local view template.
The image information is extracted, the scanline intensity profile of the sub-image is computed, and the view template is generated. At the same time, the activity information is passed to the pose cells and a connection is created.
Step 8: Pose cell (PC) activation and update.
The pose cells use the dynamics model established in Step 1 to represent the UAV's three-degree-of-freedom pose, and are updated and calibrated by the attractor dynamics, the path integration process, and the LV process. The motion estimate of Step 4 also drives path integration in the pose units, forming a redundant navigation mechanism, which is one of the main innovations in modeling the pigeon brain-hippocampus; the two sources are then fused to drive path integration in the pose units. Combined with the local view template of Step 7, the LVs associated with the PCs are triggered through associative learning, and the PC activation is updated according to the dynamics model of Step 1. The change in PC activity due to local excitation is given by equation (15).
Step 9: Compute the 3D relative motion and create the experience map (EP) in preparation for topological linking.
This step takes the continuous self-motion information and the local view information, and computes and accumulates the relative motion. To decide whether the current experience is a new scene or an existing one, the current P_i and V_i are compared with all experiences by solving for the score S of equation (17). If the experience is new, Step 10 is executed; otherwise, Step 11 is executed.
Step 10: Create a new experience and update the experience map (EP).
When the score of equation (17) exceeds the maximum threshold, a new scene is declared, and if the accumulated relative motion increment is greater than its threshold, an experience is created. Through the pose-association transformation of the self-motion information, the pose change from the odometry measurements is t_{ij}, and the link to previous experience through the encoded transition of equation (19) forms a new map.
Step 11: Perform local view calibration, detect loop closures, filter out spurious information, and perform map correction.
If the local view is judged to be a scene encountered before, the active LV injects activity through its learned connections into the PCs associated with that scene, and the attractor dynamics of the network filter the activity injected from the LV. If the LV activity is injected into an inactive region of the PC network, a conflict arises and the globally inhibitory connections tend to suppress the new packet; return to Step 10. Otherwise, the PC and LV active locations coincide, and map correction is performed by aligning the active position and orientation with the transformation information stored in the EP.
Because odometric motion estimation based on image matching inevitably accumulates error, when a loop closure occurs the accumulated change in the transformed position makes the experience at the loop closure unlikely to match the same previous location. To achieve a match, the positions of all experiences therefore need to be updated, with relaxed map correction implemented according to equation (20), where α is the correction rate constant, N_f is the number of links from experience e_i to other experiences, and N_t is the number of links from other experiences to experience e_i. In the simulation, α is set to 0.5 (larger values may destabilize the map).
Step 12: Repeat Steps 3 to 11, outputting in real time the two independent visual odometry pose estimation trajectories shown in Figs. 8a and b. This enables redundant navigation: when one navigation channel fails (for example, under magnetic interference), the system can switch to the other. The output pose cell activation map is shown in Fig. 8c and the experience map in Fig. 8d, realizing simultaneous localization and navigation. Comparing the odometric motion estimates with the experience map shows that both odometric position estimates contain error; after local view calibration, the system correctly recognizes that the UAV has returned to a known area and performs map correction by aligning the experience positions and orientations in the experience map with the stored transitions between experiences, yielding a consistent cognitive map.
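The redundancy claim of this step amounts to a simple arbitration between the two odometry channels. A sketch under the assumption that upstream quality checks (not specified in the patent) supply per-channel health flags:

```python
def fused_pose(pose_profile, pose_feature, profile_ok, feature_ok):
    """Arbitrate between the two independent odometry channels: average when
    both are healthy, fall back to whichever survives otherwise."""
    if profile_ok and feature_ok:
        return 0.5 * (pose_profile + pose_feature)
    if feature_ok:
        return pose_feature
    if profile_ok:
        return pose_profile
    raise RuntimeError("both odometry channels unavailable")
```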
Claims (2)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010776277.9A CN112097769B (en) | 2020-08-05 | 2020-08-05 | Homing pigeon brain-hippocampus-imitated unmanned aerial vehicle simultaneous positioning and mapping navigation system and method |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010776277.9A CN112097769B (en) | 2020-08-05 | 2020-08-05 | Homing pigeon brain-hippocampus-imitated unmanned aerial vehicle simultaneous positioning and mapping navigation system and method |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN112097769A CN112097769A (en) | 2020-12-18 |
| CN112097769B true CN112097769B (en) | 2022-06-10 |
Family
ID=73749604
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010776277.9A Active CN112097769B (en) | 2020-08-05 | 2020-08-05 | Homing pigeon brain-hippocampus-imitated unmanned aerial vehicle simultaneous positioning and mapping navigation system and method |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN112097769B (en) |
Families Citing this family (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113110492B (en) * | 2021-05-07 | 2022-06-21 | 苏州大学 | Path planning method |
| CN113297506B (en) * | 2021-06-08 | 2024-10-29 | 南京航空航天大学 | Brain-like relative navigation method based on social position cells/grid cells |
| CN113223099B (en) * | 2021-06-11 | 2023-04-18 | 苏州大学 | RatSLAM environmental adaptability improving method and system based on biological vision model |
| CN113703322B (en) * | 2021-08-28 | 2024-02-06 | 北京工业大学 | Method for constructing scene memory model imitating mouse brain vision pathway and entorhinal-hippocampal structure |
| CN114152259B (en) * | 2021-12-01 | 2023-09-05 | 中北大学 | A brain-inspired visual UAV navigation method based on hippocampus and entorhinal cortex |
| CN114612560A (en) * | 2022-03-17 | 2022-06-10 | 中山大学 | Method and device for constructing three-dimensional topological empirical map and related equipment |
| CN114894191B (en) * | 2022-04-14 | 2024-04-26 | 河南大学 | A UAV navigation method suitable for dynamic and complex environments |
| CN114812565B (en) * | 2022-06-23 | 2022-10-18 | 北京航空航天大学 | A dynamic navigation method based on artificial intelligence network |
| CN115391516B (en) * | 2022-10-31 | 2023-04-07 | 成都飞机工业(集团)有限责任公司 | Unstructured document extraction method, device, equipment and medium |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2015063119A1 (en) * | 2013-09-12 | 2015-05-07 | Partnering 3.0 | Autonomous multi-modal neuro-inspired mobile robot for monitoring and restoring an environment |
| CN106125730A (en) * | 2016-07-10 | 2016-11-16 | 北京工业大学 | A kind of robot navigation's map constructing method based on Mus cerebral hippocampal spatial cell |
| CN106814737A (en) * | 2017-01-20 | 2017-06-09 | 安徽工程大学 | A kind of SLAM methods based on rodent models and RTAB Map closed loop detection algorithms |
| CN110672088A (en) * | 2019-09-09 | 2020-01-10 | 北京航空航天大学 | A UAV autonomous navigation method imitating the homing mechanism of carrier pigeon landform perception |
| CN111376273A (en) * | 2020-04-23 | 2020-07-07 | 大连理工大学 | A brain-like-inspired method for constructing cognitive maps for robots |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| IL250382B (en) * | 2017-01-31 | 2021-01-31 | Arbe Robotics Ltd | A radar-based system and method for real-time simultaneous localization and mapping |
- 2020-08-05: application CN202010776277.9A filed; granted as patent CN112097769B (active)
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2015063119A1 (en) * | 2013-09-12 | 2015-05-07 | Partnering 3.0 | Autonomous multi-modal neuro-inspired mobile robot for monitoring and restoring an environment |
| CN106125730A (en) * | 2016-07-10 | 2016-11-16 | 北京工业大学 | A kind of robot navigation's map constructing method based on Mus cerebral hippocampal spatial cell |
| CN106814737A (en) * | 2017-01-20 | 2017-06-09 | 安徽工程大学 | A kind of SLAM methods based on rodent models and RTAB Map closed loop detection algorithms |
| CN110672088A (en) * | 2019-09-09 | 2020-01-10 | 北京航空航天大学 | A UAV autonomous navigation method imitating the homing mechanism of carrier pigeon landform perception |
| CN111376273A (en) * | 2020-04-23 | 2020-07-07 | 大连理工大学 | A brain-like-inspired method for constructing cognitive maps for robots |
Non-Patent Citations (4)
| Title |
|---|
| Bilateral participation of the hippocampus in familiar landmark navigation by homing pigeons; Anna Gagliardo et al.; Behavioural Brain Research; 2002-10-17; Vol. 136, No. 01; pp. 201-209 * |
| Local transformations of the hippocampal cognitive map; Krupic J. et al.; Science; 2018-03-09; Vol. 359, No. 6380; pp. 1143-1146 * |
| A bionic robot cognitive map building method based on hippocampal cognitive mechanisms; Yu Naigong et al.; Acta Automatica Sinica; 2018-01; Vol. 44, No. 01; pp. 52-73 * |
| Research on episodic cognitive map building and navigation based on the rat brain hippocampus; Jiang Xiaojun; China Masters' Theses Full-text Database, Information Science and Technology; 2019-05, No. 05; pp. I138-1360 * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN112097769A (en) | 2020-12-18 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN112097769B (en) | Homing pigeon brain-hippocampus-imitated unmanned aerial vehicle simultaneous positioning and mapping navigation system and method | |
| Sun et al. | Motion planning for mobile robots—Focusing on deep reinforcement learning: A systematic review | |
| Yu et al. | NeuroSLAM: A brain-inspired SLAM system for 3D environments | |
| US11561544B2 (en) | Indoor monocular navigation method based on cross-sensor transfer learning and system thereof | |
| Ross et al. | Learning monocular reactive uav control in cluttered natural environments | |
| Bipin et al. | Autonomous navigation of generic monocular quadcopter in natural environment | |
| CN108362284A (en) | A kind of air navigation aid based on bionical hippocampus cognitive map | |
| CN115454096B (en) | A robot strategy training system and training method based on curriculum reinforcement learning | |
| Xu et al. | Monocular vision based autonomous landing of quadrotor through deep reinforcement learning | |
| Sun et al. | Autonomous state estimation and mapping in unknown environments with onboard stereo camera for micro aerial vehicles | |
| Xiao et al. | Vision-based learning for drones: A survey | |
| CN112506210A (en) | Unmanned aerial vehicle control method for autonomous target tracking | |
| Aburaya et al. | Review of vision-based reinforcement learning for drone navigation: A. Aburaya et al. | |
| Cao et al. | Unsupervised visual odometry and action integration for PointGoal navigation in indoor environment | |
| CN111611869B (en) | End-to-end monocular vision obstacle avoidance method based on serial deep neural network | |
| Xu et al. | An unstructured terrain traversability mapping method fusing semantic and geometric features | |
| Tao et al. | Fast and robust training and deployment of deep reinforcement learning based navigation policy | |
| Zhu et al. | Design of multimodal obstacle avoidance algorithm based on deep reinforcement learning | |
| Atsuzawa et al. | Robot navigation in outdoor environments using odometry and convolutional neural network | |
| Patel et al. | Overriding learning-based perception systems for control of autonomous unmanned aerial vehicles | |
| Zhao et al. | 3D indoor map building with monte carlo localization in 2D map | |
| Klaser et al. | Simulation of an autonomous vehicle with a vision-based navigation system in unstructured terrains using OctoMap | |
| Klaser et al. | Vision-based autonomous navigation with a probabilistic occupancy map on unstructured scenarios | |
| Saucedo et al. | Memory enabled segmentation of terrain for traversability based reactive navigation | |
| Choi et al. | Robust modeling and prediction in dynamic environments using recurrent flow networks |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant |


































