
Progress and Applications of Visual SLAM


Acta Geodaetica et Cartographica Sinica, 2018, No. 6
Keywords: loop closure; localization; feature

DI Kaichang, WAN Wenhui, ZHAO Hongying, LIU Zhaoqin, WANG Runzhi, ZHANG Feizhou

1. State Key Laboratory of Remote Sensing Science, Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing 100101, China; 2. Institute of Remote Sensing and Geographic Information System, Peking University, Beijing 100871, China

Simultaneous localization and mapping (SLAM) is a core enabling technology for robots operating autonomously in unknown environments and a central research topic in robotic automation [1]. In an unknown environment, SLAM uses the perception data acquired by the robot's exteroceptive sensors to build a map of the surroundings while providing the robot's position within that map; as the robot moves, the map is extended incrementally and the robot is localized continuously, forming the basis for environment perception and autonomous operation. Range sensors are commonly used as the perception data source in SLAM [2]. Compared with radar, sonar, and other ranging devices, visual sensors are small, low-power, and information-rich, providing abundant texture information about the external environment for robots of all kinds, so vision-based SLAM has become a research hotspot [3]. However, the visual information captured by cameras is easily disturbed by the environment and carries considerable noise, making visual SLAM difficult and computationally complex. With the continued development of computer vision, the capability of visual SLAM has improved accordingly, and it has seen initial applications in indoor autonomous navigation, VR/AR, and other fields [4-7].

1 Key Technologies of Visual SLAM

Visual SLAM takes the sequence of images acquired by a camera as input, recovers the relationship between the environment and the camera from the image content and the imaging model, incrementally builds a map of the surroundings as the camera moves, and outputs the camera's position within that map. Following the typical processing flow, visual SLAM can be divided into front-end processing, back-end processing, and loop closure detection [8-9], as shown in Fig. 1. The front end associates the image sequence with environmental landmarks and initializes the system parameters; the mainstream approach is to extract and match features so that corresponding feature points are tracked across the image sequence, the resulting observations are associated with landmark points, and the system state is initialized. This is a prerequisite for incremental map construction and continuous autonomous localization, and the adaptability of the front-end algorithm directly determines the robustness of a visual SLAM method [10]. The back end performs optimal estimation of the map and the localization parameters from the observations to obtain high-accuracy localization and mapping results [11-12]. Loop closure detection decides whether a currently observed landmark has been observed before, providing the loop constraints needed to eliminate the error accumulated over long traverses [13-14]. Together, these three components accomplish the data association, map and pose estimation, and loop closure optimization of visual SLAM; they are discussed in turn below.

Fig. 1 Flowchart of a general visual SLAM system
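To make the decomposition in Fig. 1 concrete, the following is a minimal, purely illustrative Python skeleton of the three components (front end, back end, loop closure detection); all class and method names are hypothetical, and real systems are considerably more involved.

```python
# Illustrative skeleton only: the three stages of a visual SLAM pipeline.
class FrontEnd:
    """Tracks features across frames and associates them with map landmarks."""
    def track(self, frame, slam_map):
        return []                        # feature extraction + matching would go here

class BackEnd:
    """Refines camera poses and landmark positions from the observations."""
    def optimize(self, slam_map, observations, loop_constraint=None):
        pass                             # filtering or graph optimization would go here

class LoopDetector:
    """Checks whether the current frame revisits a previously mapped place."""
    def detect(self, frame, slam_map):
        return None                      # index of a matched past keyframe, or None

class VisualSLAM:
    def __init__(self):
        self.map = {"landmarks": [], "keyframes": []}
        self.front_end, self.back_end, self.loop = FrontEnd(), BackEnd(), LoopDetector()

    def process(self, frame):
        obs = self.front_end.track(frame, self.map)    # data association
        loop = self.loop.detect(frame, self.map)       # loop closure detection
        self.back_end.optimize(self.map, obs, loop)    # state estimation
        self.map["keyframes"].append(frame)
```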

1.1 Feature Point Extraction and Tracking

Visual SLAM must associate image observations with the environment, that is, establish the correspondence between the content of the image sequence and the real scene. Corner features are commonly used for this purpose: by extracting and tracking feature points, correspondences are formed between 3D object points and their image projections across multiple frames. Because the camera position and viewing angle change between frames, and because the illumination varies, the appearance of a point inevitably changes across the sequence, so the feature representation must be insensitive to geometric changes such as rotation, scaling, and tilt, as well as to changes in brightness. Early work relied mainly on local corner detectors such as Harris [15] and Förstner [16], with tracking performed by template matching [17-20] or optical flow [21-22]. These methods are effective when the viewpoint changes little between adjacent images, but irregular camera motion with large viewpoint changes makes robust tracking difficult or causes it to fail altogether. With the rise of local invariant feature descriptors, represented by SIFT, feature extraction and matching became tolerant to a certain degree of image deformation and illumination change, improving the applicability of visual SLAM in complex environments [23-25]. However, the heavy computational cost of SIFT limits the efficiency of localization and mapping and cannot meet real-time requirements. To improve efficiency, researchers have developed detectors and descriptors such as SURF [26-27], CenSurE [28], BRISK [29], and ORB [30-31]. Although their performance is somewhat reduced, they are several to tens of times faster [32-35], making real-time visual SLAM practical [31].
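As a concrete illustration of such a feature-based front end, the sketch below extracts and matches ORB features between two consecutive frames with OpenCV. The image file names are placeholders, and a real front end would additionally reject outliers geometrically (for example with RANSAC on the essential or fundamental matrix).

```python
# Minimal sketch: ORB feature extraction and matching between two frames.
import cv2

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)                     # fast binary feature
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance is appropriate for ORB's binary descriptors
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} putative correspondences")
```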

1.2 Estimation of the Environment Map and Localization Parameters

Perception data contain noise and errors. To obtain high-accuracy results, the full system state and its uncertainty must be estimated from noisy data. Researchers introduced probability theory to model the uncertainty in the robot's environment and its own position: based on Bayes' rule, a probabilistic model of the system state is built, and the environment information and pose parameters are optimally estimated by combining the robot's motion information with its observations of the environment [36-39]. The Kalman filter (KF), which provides the optimal estimate of a linear system under the minimum mean square error criterion [40], was the first estimator introduced into SLAM for state estimation [41-42]. Because SLAM systems are nonlinear, the extended Kalman filter (EKF) is used to linearize the system approximately through a Taylor series expansion [43]. EKF-based parameter estimation dominated early SLAM research [43-44]. However, when the nonlinearity is strong, the linearization error can become large and the EKF's performance degrades sharply. Improved filters were subsequently proposed, such as the unscented KF [46], cubature KF [47], and central difference KF [48]; they approximate the posterior better than the EKF but still require linearized computation. The particle filter, based on Monte Carlo methods, approximates the probability density function with random samples propagated through the state space [49], removes the Gaussian assumption, and has been applied successfully in visual SLAM [50-51]. These filter-based estimators, however, rely on the Markov assumption and do not make full use of all observations. In recent years, graph optimization has received wide attention in SLAM research. Graph optimization estimates over the entire set of observations and re-linearizes whenever the estimate changes, reducing linearization error and accommodating more strongly nonlinear systems, and it has become the mainstream approach to estimating the map and localization parameters [52-53]. Because each optimization re-estimates all state variables, the large number of parameters in large-scale environments makes the computation expensive; existing work reduces the cost mainly by limiting the number of optimization iterations [54-56] and by exploiting the sparsity of the system matrices [57-59].
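For reference, the filtering and graph optimization formulations discussed above can be written compactly in generic notation (not that of any particular paper). The EKF alternates a prediction and an update step,

$$\bar{x}_k = f(x_{k-1}, u_k), \qquad \bar{P}_k = F_k P_{k-1} F_k^{\mathrm T} + Q_k,$$

$$K_k = \bar{P}_k H_k^{\mathrm T}\big(H_k \bar{P}_k H_k^{\mathrm T} + R_k\big)^{-1}, \qquad x_k = \bar{x}_k + K_k\big(z_k - h(\bar{x}_k)\big), \qquad P_k = (I - K_k H_k)\,\bar{P}_k,$$

where $f$ and $h$ are the motion and observation models, $F_k$ and $H_k$ their Jacobians, and $Q_k$, $R_k$ the process and measurement noise covariances; the Jacobians embody the one-shot linearization whose error grows with the system's nonlinearity. Graph optimization instead estimates the whole trajectory and map at once by minimizing

$$X^{*} = \arg\min_{X} \sum_{(i,j)} e_{ij}(X)^{\mathrm T}\,\Omega_{ij}\,e_{ij}(X),$$

where $e_{ij}$ is the error of the constraint between nodes $i$ and $j$ and $\Omega_{ij}$ its information matrix. Because the error terms are re-linearized at every Gauss-Newton or Levenberg-Marquardt iteration, the linearization error is smaller than in a filter that linearizes each measurement only once.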

1.3 Loop Closure Detection

Without constraints from external control information, the uncertainty of a visual SLAM system grows steadily, and the localization error accumulated after long traverses becomes large. In practice the camera may revisit a previously traversed place, forming a loop constraint; introducing this constraint into the graph optimization yields a globally consistent localization result and effectively reduces the error [60]. Loop closure detection is therefore important for visual SLAM over long durations and distances. Its essence is to decide whether the current image corresponds to earlier images in the history. Early loop closure detection randomly sampled historical images and matched their features against the current image, declaring an association when enough correct matches were found. Because every historical image is treated as a potential match, the computation grows significantly as the history accumulates and detection becomes inefficient [61]. Later work improves efficiency by first judging which historical data are likely to be associated. One class of methods uses the localization result to decide whether the camera has returned to a previous position and thus whether a loop may exist [62]; since the accumulated localization error becomes large over time, these methods perform poorly in practice. Another class builds a bag-of-words model of image content: the visual "words" appearing in an image form a vector describing the whole image, and possible associations are identified by measuring image similarity [62-63]. Because the criterion is only whether particular features appear in the image, regardless of their position or order, such methods adapt better to varied environments.
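A minimal sketch of bag-of-words loop-closure scoring follows, assuming a visual vocabulary has already been built (for example by clustering ORB descriptors). The random word indices merely stand in for real quantized descriptors; practical systems additionally apply tf-idf weighting and temporal consistency checks before accepting a loop.

```python
# Minimal sketch: score loop-closure candidates by bag-of-words similarity.
import numpy as np

def bow_histogram(descriptor_words, vocabulary_size):
    """descriptor_words: indices of the visual words found in one image."""
    h = np.bincount(descriptor_words, minlength=vocabulary_size).astype(float)
    n = np.linalg.norm(h)
    return h / n if n > 0 else h

def similarity(h1, h2):
    return float(np.dot(h1, h2))        # cosine similarity of normalized histograms

vocab_size = 1000
current = bow_histogram(np.random.randint(0, vocab_size, 300), vocab_size)
history = [bow_histogram(np.random.randint(0, vocab_size, 300), vocab_size)
           for _ in range(50)]
scores = [similarity(current, h) for h in history]
best = int(np.argmax(scores))
print(f"most similar past image: {best}, score {scores[best]:.2f}")
```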

2 New Trends in Visual SLAM Research

Visual SLAM research has made considerable progress, and its localization and mapping accuracy and efficiency can already meet the needs of some simple scenarios. However, the acquisition of visual information is always affected by the richness of environmental texture, and illumination changes and unstable motion further complicate processing, so the robustness of visual SLAM in complex environments still needs to improve. Researchers are currently exploiting image information more fully and fusing data from other sensor types in an effort to build more robust SLAM methods suited to localization and mapping in complex environments.

2.1 Extraction and Tracking of Multiple Visual Feature Types

In indoor and other man-made environments, artificial objects are common and images contain many straight-line features, so point features alone cannot exploit all of the visual information. Researchers have therefore tried to distill more from the images by extracting multiple feature types and tracking the combined feature set to achieve more robust localization and mapping. Beyond points, line features have been widely introduced into visual SLAM [66-69]. Because lines impose strong directional constraints, they can effectively mitigate the rapid error accumulation caused by the reduced overlap between adjacent images during sharp maneuvers such as turns [70]. If planar features formed by multiple lines are also considered and the planar constraints are added to the optimal estimation, localization and mapping accuracy can be improved further [71]. The introduction of new feature types requires new multi-feature tracking and optimization models so that the uncertainty of these features can be estimated accurately.
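To illustrate how line features can enter the optimization, the sketch below evaluates one common form of line reprojection error: the 3D endpoints of a map line are projected with the current pose, and their distances to the line segment detected in the image are taken as the residual. The intrinsics, pose, and endpoint values are made-up numbers, and parameterizations used in the literature (for example Plücker coordinates) are more elaborate.

```python
# Minimal sketch: point-to-line reprojection error for a 3D line feature.
import numpy as np

def project(K, R, t, X):
    """Project a 3D point X (world frame) into the image."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

def line_reprojection_error(K, R, t, P1, P2, obs_p1, obs_p2):
    """P1, P2: 3D endpoints of the map line; obs_p1, obs_p2: detected 2D segment."""
    # normalized 2D line through the observed endpoints (homogeneous cross product)
    l = np.cross(np.append(obs_p1, 1.0), np.append(obs_p2, 1.0))
    l /= np.linalg.norm(l[:2])
    # signed point-to-line distances of the projected 3D endpoints
    d1 = l @ np.append(project(K, R, t, P1), 1.0)
    d2 = l @ np.append(project(K, R, t, P2), 1.0)
    return np.array([d1, d2])

K = np.array([[500., 0, 320], [0, 500., 240], [0, 0, 1]])
R, t = np.eye(3), np.zeros(3)
err = line_reprojection_error(K, R, t,
                              np.array([0., 0, 5]), np.array([1., 0, 5]),
                              np.array([320., 240]), np.array([420., 242]))
print(err)   # residuals in pixels
```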

2.2 Direct SLAM

Feature extraction and tracking in visual SLAM are algorithmically complex and time-consuming, and they use only part of the image while discarding most of the texture information. Direct SLAM instead uses the image intensities themselves: it skips feature extraction and matching and estimates the state parameters by minimizing the photometric error between images. It can exploit the full image content for localization and mapping and obtains reasonable results even in regions lacking distinctive texture [72]. Unlike feature-based visual SLAM, direct SLAM can produce not only sparse maps [73] but also semi-dense [74] and dense maps [75]. Because the feature extraction and tracking steps are removed, direct methods run efficiently and suit scenarios with strong real-time requirements and limited computing resources. However, direct SLAM rests on the assumption that corresponding image regions keep constant intensity; under changing illumination or camera exposure, scenes with strong photometric variation can easily cause failure, which limits its range of application.
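A common form of the photometric objective minimized by direct methods is, in generic notation rather than that of any specific system,

$$E(T_{21}) = \sum_{p\in\Omega} \rho\Big(I_2\big(\pi\big(T_{21}\,\pi^{-1}(p, d_p)\big)\big) - I_1(p)\Big),$$

where $\pi$ is the camera projection, $\pi^{-1}(p, d_p)$ back-projects pixel $p$ with depth $d_p$, $T_{21}$ is the relative pose to be estimated, and $\rho$ is a robust cost such as the Huber norm. The brightness-constancy assumption mentioned above is exactly what makes each term of this sum meaningful, which is why exposure changes and strong illumination variation are problematic for direct methods.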

2.3 Multi-Sensor Fusion SLAM

Visual SLAM relies on the environmental texture captured by cameras as its data source, so its performance depends directly on texture conditions and it is hard to operate reliably in texture-poor regions. To achieve robust localization and mapping in complex environments and improve practical performance, researchers integrate multiple sensor types and fuse their data so that each compensates for the deficiencies of the others, improving the robustness and accuracy of visual SLAM. An IMU measures completely autonomously and provides stable position and attitude data, effectively compensating for the lack of environmental information in texture-poor regions; conversely, high-accuracy visual pose estimates can correct the rapid drift of the IMU and improve overall localization and mapping accuracy [76], which makes the IMU the most common sensor fused with cameras [77-80]. Laser range finders are likewise small and low-power [81]; integrating one with a single camera lets SLAM recover the metric scale of the map and correct scale drift during continuous localization, enabling portable, wearable navigation systems that meet the autonomous positioning needs of rescuers, astronauts, and similar users [82-84]. In recent years, depth cameras represented by the Kinect have attracted wide attention for their ability to acquire rich 3D information directly. Based on structured light, coded light, or time-of-flight measurement principles, they actively measure 3D structure, produce local 3D maps directly, and are little affected by environmental conditions [85]. Combining a depth camera with a visible-light camera forms an RGB-D camera for SLAM, which captures environmental texture and dense spatial geometry simultaneously and yields dense environment maps; the added 3D information improves the quality of the observations and thus the robustness and accuracy of SLAM [86-89]. Introducing multiple sensor types enriches the perception data but also adds new error sources; obtaining optimal estimates requires analyzing the error characteristics of each data source and building a multi-sensor estimation model for fused SLAM.
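As a small illustration of why RGB-D data simplifies dense mapping, the sketch below back-projects a depth image into a local 3D point cloud with the pinhole model. The intrinsics are typical values for a Kinect-class sensor and the random depth map is a placeholder.

```python
# Minimal sketch: back-project an RGB-D depth image into a 3D point cloud.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth                              # depth assumed to be in meters
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]              # discard invalid (zero-depth) pixels

depth = np.random.uniform(0.5, 4.0, size=(480, 640))   # placeholder depth map
cloud = depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)
```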

2.4 Deep Learning Based Visual SLAM

In artificial intelligence, deep learning has surpassed traditional machine learning in recognition performance and has been rapidly adopted for image classification, object recognition, speech recognition, and related tasks [90-92]. Studies have begun to introduce deep learning into visual SLAM, using the high-level features learned by deep neural networks for inter-frame pose estimation [93-95] and loop closure detection [96-98]. For pose estimation, end-to-end deep learning allows the relative pose between frames to be computed without feature matching or complex geometric calculation: adjacent frames are fed directly to the network, which quickly outputs the relative pose parameters. For loop closure detection, the strong recognition capability of deep learning extracts higher-level, more robust features, making the system more tolerant of viewpoint and illumination changes and improving the recognition of loop closure images. Deep learning has shown great potential for improving the robustness of visual SLAM and can to some extent ease the limitations imposed by hand-crafted features in traditional methods. However, it requires very large training sets, and the operating scene must resemble the training data, otherwise performance drops markedly; as a result, the overall performance of deep learning based visual SLAM has not yet surpassed traditional methods. As the theory and methods of deep learning continue to develop, it is expected to play an increasingly important role in visual SLAM.
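The following is a minimal, purely illustrative PyTorch sketch of end-to-end relative pose regression as described above: two consecutive RGB frames are stacked along the channel axis and a small CNN regresses a 6-DoF relative pose. The architecture and dimensions are invented for illustration and do not correspond to any published network.

```python
# Illustrative sketch: frame-to-frame 6-DoF pose regression with a small CNN.
import torch
import torch.nn as nn

class RelativePoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.pose = nn.Linear(128, 6)    # [tx, ty, tz, rx, ry, rz]

    def forward(self, frame_pair):       # frame_pair: (B, 6, H, W), two stacked frames
        f = self.features(frame_pair).flatten(1)
        return self.pose(f)

net = RelativePoseNet()
pair = torch.randn(1, 6, 128, 416)       # placeholder image pair
print(net(pair).shape)                    # torch.Size([1, 6])
```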

3 Typical Applications of Visual SLAM

With the spread of satellite positioning and navigation, the corresponding services are readily available through navigation satellite receivers. In environments without satellite signals, however, such as space, underground, and indoors, robot localization, navigation, and environment perception face greater challenges. Visual SLAM, as an important means of autonomous localization and environment perception, can provide the basic data needed for autonomous robot operation. This section summarizes and discusses the state of visual SLAM in deep space exploration, indoor positioning and navigation, and autonomous navigation in large-scale scenes.

3.1 Deep Space Exploration

Since the late 1950s, when spacefaring nations represented by the United States and the former Soviet Union began the era of deep space exploration, a large number of probes have been launched to explore various celestial bodies. To date, rovers have been sent to the Moon and Mars for landed roving exploration. Because planetary surface environments are complex and very different from Earth's, these rovers need a degree of automation and intelligence to cope with harsh operating conditions. Early rovers were technologically immature and could carry out only simple observation tasks within small areas. As the exploration range expands and mission complexity grows, rovers must possess strong autonomous localization and environment perception capabilities to support obstacle avoidance and mission planning. The Spirit and Opportunity Mars rovers launched in 2003, as well as the Curiosity rover launched in 2011, all carried stereo navigation cameras; visual odometry on sequential images corrected the localization errors of dead reckoning and provided the base maps needed for obstacle avoidance and path planning, delivering high-accuracy localization and mapping as the rovers crossed rugged and difficult terrain and playing an irreplaceable role in mission execution [99-102]. China's Chang'e-3 lander, launched in 2013, carried the Yutu rover for landed roving exploration of the lunar surface. During descent, the image sequence from the lander's descent camera was used to evaluate the landing site position [103] and recover the landing trajectory [104], and the resulting high-accuracy map of the landing area [105] became important base data for Chang'e-3 mission planning. Yutu was likewise equipped with a pair of stereo navigation cameras for environment perception; by matching stereo images between stations, high-accuracy visual localization of the rover at each exploration station was achieved (Fig. 2), correcting the dead-reckoning errors of the GNC system and improving the localization accuracy from 7% to 4% [105-106], as shown in Fig. 3. Because deep space missions have limited lifetimes and rovers rarely revisit their previous routes, loop constraints for correcting accumulated error are hard to form; as mission requirements continue to grow, other observations will have to be fused to improve the robustness and accuracy of localization and mapping for a new generation of automated and intelligent exploration.

Fig. 2 Matching results of cross-site Navcam images of the Yutu rover

3.2 Indoor Positioning and Navigation

Current indoor positioning mainly relies on WiFi or Bluetooth base stations deployed indoors, estimating the current position from the signal strength distribution received from multiple stations [107-108]. Indoor environments are complex and contain many sources of radio interference, so positioning based on wireless signals is not accurate enough and is prone to problems such as wrong-floor localization, falling short of high-accuracy indoor positioning needs. Compared with wireless signals, indoor visual imagery is rich and intuitive; visual SLAM can perform autonomous localization and navigation in areas without wireless coverage, is not constrained by base-station deployment, and adapts better to the environment. Although there has been considerable research on indoor visual localization, few visual SLAM systems have actually been applied to indoor positioning. Tango, released by Google in 2014, was the first consumer-oriented product for perceiving unknown indoor environments. By integrating an RGB-D camera with an IMU, Tango performs localization while building a dense 3D reconstruction of the unknown environment [109], and can be used for indoor navigation as well as VR and AR applications [110-112]. The HoloLens mixed reality headset later released by Microsoft follows a similar technical approach and adds gesture and speech recognition, enabling full interaction between the user and the surroundings [113-114]. Although visual localization has the potential to meet high-accuracy indoor positioning needs, current techniques still cannot fully cope with the high dynamics and complexity of indoor environments, and long-range high-accuracy indoor localization also requires correction against high-accuracy indoor maps, so no mature application has yet emerged. As visual image processing improves, visual SLAM will play an increasingly important role in indoor positioning.

Fig. 3 Traverse of the Yutu rover based on visual localization

3.3 Autonomous Navigation in Large-Scale Scenes

Unmanned aerial vehicles (UAVs), currently very popular for their maneuverability, survivability, and remote operability, have been rapidly adopted for power-line inspection, geological survey, forest fire prevention, and other tasks. Supported by satellite navigation signals such as GPS and BeiDou, these UAVs fly preset mission routes and work efficiently over long distances and large scenes. Facing complex unknown scenes, however, most of them still lack mature autonomous obstacle avoidance and path planning and must complete their flights under manual control, so their level of automation and intelligence remains limited. DJI introduced stereo vision sensors into its Phantom series and, using the environment maps produced by visual SLAM, realized in-flight obstacle avoidance and autonomous path planning, improving the drones' survivability and autonomy in complex environments [115]. Meanwhile, with the emergence of satellite navigation jamming and spoofing, military aircraft such as military UAVs and missiles are required to navigate fully autonomously, free from dependence on satellite signals [116-117]. The rapid development of visual SLAM provides new technical support for autonomous navigation of aircraft over large-scale scenes; however, fast vehicle motion degrades image quality to varying degrees and places higher demands on the real-time performance and robustness of localization algorithms.

4 Conclusion

With advances in computer vision, digital image processing, and artificial intelligence, research on and application of visual SLAM are developing rapidly. Visual SLAM nevertheless remains affected by environmental texture and illumination, and robust localization and mapping in complex environments is still a challenge. Current research, on the one hand, makes fuller use of image information by extracting as many image features as possible and, on the other hand, fuses depth cameras, IMUs, and other sensor types to meet the need for robust localization and mapping under difficult conditions. Although the robustness of current visual SLAM algorithms in complex scenes still needs improvement, they have already shown great potential, and as the technology matures they will play an important role in robotic automation and intelligence.

References:

[1] LU Shaofang,LIU Dawei.A Survey of Research Situation on Navigation by Autonomous Mobile Robot and Its Related Techniques[J].Transactions of the Chinese Society for Agricultural Machinery,2002,33(2):112-116.(in Chinese)

[2] ALBRECHT S.An Analysis of Visual Mono-SLAM[D].Osnabrück,Germany:Universität Osnabrück,2009:1-4.

[3] FUENTES-PACHECO J,RUIZ-ASCENCIO J,RENDÓN-MANCHA J M.Visual Simultaneous Localization and Mapping:A Survey[J].Artificial Intelligence Review,2015,43(1):55-81.

[4] IDO J,SHIMIZU Y,MATSUMOTO Y,et al.Indoor Navigation for a Humanoid Robot Using a View Sequence[J].The International Journal of Robotics Research,2009,28(2):315-325.

[5] ÇELIK K,SOMANI A K.Monocular Vision SLAM for Indoor Aerial Vehicles[J].Journal of Electrical and Computer Engineering,2013,2013:374165.

[6] COMPORT A I,MARCHAND E,PRESSIGOUT M.Real-time Markerless Tracking for Augmented Reality:The Virtual Visual Servoing Framework[J].IEEE Transactions on Visualization and Computer Graphics,2006,12(4):615-628.

[7] CHEKHLOV D,GEE A P,CALWAY A,et al.Ninja on a Plane:Automatic Discovery of Physical Planes for Augmented Reality Using Visual SLAM[C]∥Proceedings of the 6th IEEE and ACM International Symposium on Mixed and Augmented Reality.Nara,Japan:IEEE,2007:13-16.

[8] CADENA C,CARLONE L,CARRILLO H,et al.Past,Present,and Future of Simultaneous Localization and Mapping:Toward the Robust-perception Age[J].IEEE Transactions on Robotics,2016,32(6):1309-1332.

[9] GAO Xiang,ZHANG Tao,LIU Yi,et al.Fourteen Lectures on Visual SLAM:From Theory to Practice[M].Beijing:Publishing House of Electronics Industry,2017:17-22.(in Chinese)

[10] KARLSSON N,DI BERNARDO E,OSTROWSKI J,et al.The vSLAM Algorithm for Robust Localization and Mapping[C]∥Proceedings of 2005 IEEE International Conference on Robotics and Automation.Barcelona,Spain:IEEE,2005.

[11] SÜNDERHAUF N,PROTZEL P.Towards A Robust Back-end for Pose Graph SLAM[C]∥Proceedings of 2012 IEEE International Conference on Robotics and Automation.Saint Paul,MN:IEEE,2012.

[12] HU G,KHOSOUSSI K,HUANG Shoudong.Towards a Reliable SLAM Back-end[C]∥Proceedings of 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems.Tokyo,Japan:IEEE,2013.

[13] NEWMAN P,HO K.SLAM-loop Closing with Visually Salient Features[C]∥Proceedings of 2005 IEEE International Conference on Robotics and Automation.Barcelona,Spain:IEEE,2005.

[14] HO K L,NEWMAN P.Loop Closure Detection in SLAM by Combining Visual and Spatial Appearance[J].Robotics and Autonomous Systems,2006,54(9):740-749.

[15] HARRIS C,STEPHENS M.A Combined Corner and Edge Detector[C]∥Proceedings of the 4th Alvey Vision Conference.Manchester,UK:Alvey Vision Club,1988.

[16] FÖRSTNER W,GÜLCH E.A Fast Operator for Detection and Precise Location of Distinct Points,Corners and Centers of Circular Features[C]∥Proceedings of the ISPRS Intercommission Conference on Fast Processing of Photogrammetric Data.Interlaken,Switzerland:ISPRS,1987.

[17] OLSON C F,MATTHIES L H,SCHOPPERS M,et al.Rover Navigation using Stereo Ego-motion[J].Robotics and Autonomous Systems,2003,43(4):215-229.

[18] NISTÉR D,NARODITSKY O,BERGEN J.Visual Odometry for Ground Vehicle Applications[J].Journal of Field Robotics,2006,23(1):3-20.

[19] KIM J,KWEON I S.Robust Feature Matching for Loop Closing and Localization[C]∥Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems.San Diego,CA:IEEE,2007.

[20] AGRAWAL M,KONOLIGE K.Real-time Localization in Outdoor Environments using Stereo Vision and Inexpensive GPS[C]∥Proceedings of the 18th International Conference on Pattern Recognition.Hong Kong,China:IEEE,2006.

[21] SHI Jianbo,TOMASI C.Good Features to Track[C]∥Proceedings of IEEE Conference on Computer Vision and Pattern Recognition.Seattle,WA:IEEE,1994.

[22] HERATH D C,KODAGODA S,DISSANAYAKE G.Simultaneous Localisation and Mapping:A Stereo Vision Based Approach[C]∥Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems.Beijing,China:IEEE,2006.

[23] LOWE D G.Object Recognition from Local Scale-Invariant Features[C]∥Proceedings of the 7th IEEE International Conference on Computer Vision.Kerkyra,Greece:IEEE,1999:1150-1157.

[24] LOWE D G.Distinctive Image Features from Scale-invariant Keypoints[J].International Journal of Computer Vision,2004,60(2):91-110.

[25] MOREL J M,YU Guoshen.Is SIFT Scale Invariant?[J].Inverse Problems and Imaging,2011,5(1):1-22.

[26] BAY H,TUYTELAARS T,VAN GOOL L.SURF:Speeded Up Robust Features[C]∥Proceedings of the 9th European Conference on Computer Vision.Graz,Austria:Springer,2006:404-417.

[27] BAY H,ESS A,TUYTELAARS T,et al.Speeded-Up Robust Features (SURF)[J].Computer Vision and Image Understanding,2008,110(3):346-359.

[28] AGRAWAL M,KONOLIGE K,BLAS M R.CenSurE:Center Surround Extremas for Realtime Feature Detection and Matching[C]∥Proceedings of European Conference on Computer Vision.Marseille,France:INRIA Grenoble,2008:102-115.

[29] LEUTENEGGER S,CHLI M,SIEGWART R Y.BRISK:Binary Robust Invariant Scalable Keypoints[C]∥Proceedings of IEEE International Conference on Computer Vision.Barcelona,Spain:IEEE,2011.

[30] RUBLEE E,RABAUD V,KONOLIGE K,et al.ORB:An Efficient Alternative to SIFT or SURF[C]∥Proceedings of IEEE International Conference on Computer Vision.Barcelona,Spain:IEEE,2011.

[31] MUR-ARTAL R,MONTIEL J M M,TARDÓS J D.ORB-SLAM:A Versatile and Accurate Monocular SLAM System[J].IEEE Transactions on Robotics,2015,31(5):1147-1163.

[32] GIL A,MOZOS O M,BALLESTA M,et al.A Comparative Evaluation of Interest Point Detectors and Local Descriptors for Visual SLAM[J].Machine Vision and Applications,2010,21(6):905-920.

[33] HARTMANN J,KLUSSENDORFF J H,MAEHLE E.A Comparison of Feature Descriptors for Visual SLAM[C]∥Proceedings of 2013 European Conference on Mobile Robots.Barcelona,Spain:IEEE,2013.

[34] GAUGLITZ S,HÖLLERER T,TURK M.Evaluation of Interest Point Detectors and Feature Descriptors for Visual Tracking[J].International Journal of Computer Vision,2011,94(3):335-360.

[35] MIKSIK O,MIKOLAJCZYK K.Evaluation of Local Detectors and Descriptors for Fast Feature Matching[C]∥Proceedings of the 21st International Conference on Pattern Recognition.Tsukuba,Japan:IEEE,2012.

[36] SMITH R C,CHEESEMAN P.On the Representation of Spatial Uncertainty[J].International Journal of Robotics Research,1986,5(4):56-68.

[37] SMITH R,SELF M,CHEESEMAN P.Estimating Uncertain Spatial Relationships in Robotics[M]∥COX I J,WILFONG G T.Autonomous Robot Vehicles.New York:Springer,1990: 167-198.

[38] THRUN S,BURGARD W,FOX D.Probabilistic Robotics[M].Massachusetts:MIT Press,2005:1-5.

[39] AULINAS J,PETILLOT Y R,SALVI J,et al.The SLAM Problem:A Survey[C]∥Proceedings of the 11th International Conference of the Catalan Association for Artificial Intelligence.Amsterdam,The Netherlands:IOS Press,2008.

[40] KALMAN R E.A New Approach to Linear Filtering and Prediction Problems[J].Journal of Basic Engineering,1960,82(1):35-45.DOI:10.1115/1.3662552.

[41] CHATILA R,LAUMOND J.Position Referencing and Consistent World Modeling for Mobile Robots[C]∥Proceedings of IEEE International Conference on Robotics and Automation.St.Louis,MO:IEEE,1985.

[42] CROWLEY J L.World Modeling and Position Estimation for A Mobile Robot using Ultrasonic Ranging[C]∥Proceedings of IEEE International Conference on Robotics and Automation.Scottsdale,AZ:IEEE,1989.

[43] KALMAN R E,BUCY R S.New Results in Linear Filtering and Prediction Theory[J].Journal of Basic Engineering,1961,83(1):95-108.

[44] DURRANT-WHYTE H,BAILEY T.Simultaneous Localization and Mapping:Part I[J].IEEE Robotics & Automation Magazine,2006,13(2):99-110.

[45] THRUN S,BURGARD W,FOX D.A Probabilistic Approach to Concurrent Mapping and Localization for Mobile Robots[J].Machine Learning,1998,31(1-3):29-53.

[46] WAN E A,VAN DER MERWE R.The Unscented Kalman Filter for Nonlinear Estimation[C]∥Proceedings of IEEE Adaptive Systems for Signal Processing,Communications,and Control Symposium 2000,Lake Louise,Alberta,Canada:IEEE,2000.

[47] ARASARATNAM I,HAYKIN S.Cubature Kalman Filters[J].IEEE Transactions on Automatic Control,2009,54(6):1254-1269.

[48] ITO K,XIONG K.Gaussian Filters for Nonlinear Filtering Problems[J].IEEE Transactions on Automatic Control,2000,45(5):910-927.

[49] DOUCET A,GODSILL S,ANDRIEU C.On Sequential Monte Carlo Sampling Methods for Bayesian Filtering[J].Statistics and Computing,2000,10(3):197-208.

[50] MONTEMERLO M,THRUN S,KOLLER D,et al.FastSLAM:A Factored Solution to the Simultaneous Localization and Mapping Problem[C]∥Proceedings of AAAI National Conference on Artificial Intelligence.Edmonton,Canada:AAAI,2002:593-598.

[51] MONTEMERLO M,THRUN S,ROLLER D,et al.FastSLAM 2.0:An improved Particle Filtering Algorithm for Simultaneous Localization and Mapping that Provably Converges[C]∥Proceedings of the 18th International Joint Conference on Artificial Intelligence.Acapulco,Mexico:Morgan Kaufmann Publishers Inc.,2003:1151-1156.

[52] LU F,MILIOS E.Globally Consistent Range Scan Alignment for Environment Mapping[J].Autonomous Robots,1997,4(4):333-349.

[53] GUTMANN J S,KONOLIGE K.Incremental Mapping of Large Cyclic Environments[C] ∥Proceedings of IEEE International Symposium on Computational Intelligence in Robotics and Automation.Monterey,CA:IEEE,1999.

[54] FRESE U,LARSSON P,DUCKETT T.A Multilevel Relaxation Algorithm for Simultaneous Localization and Mapping[J].IEEE Transactions on Robotics,2005,21(2):196-207.

[55] OLSON E,LEONARD J,TELLER S.Fast Iterative Alignment of Pose Graphs with Poor Initial Estimates[C]∥Proceedings of IEEE International Conference on Robotics and Automation.Orlando,FL:IEEE,2006.

[56] ZHAO Liang.MonoSLAM:Theories of Parameterization,Bundle Adjustment and Subgraph Fusion[D].Beijing:Peking University,2012.(in Chinese)

[57] DELLAERT F,KAESS M.Square Root SAM:Simultaneous Localization and Mapping via Square Root Information Smoothing[J].International Journal of Robotics Research,2006,25(12):1181-1203.

[58] KAESS M,RANGANATHAN A,DELLAERT F.iSAM:Fast Incremental Smoothing and Mapping with Efficient Data Association[C]∥Proceedings of IEEE International Conference on Robotics and Automation.Roma,Italy:IEEE,2007.

[59] KAESS M,JOHANNSSON H,ROBERTS R,et al.iSAM2:Incremental Smoothing and Mapping Using the Bayes Tree[J].International Journal of Robotics Research,2012,31(2):216-235.

[60] BAILEY T,DURRANT-WHYTE H.Simultaneous Localization and Mapping:Part II[J].IEEE Robotics & Automation Magazine,2006,13(3):108-117.

[61] RUBNER Y,TOMASI C,GUIBAS L J.A Metric for Distributions with Applications to Image Databases[C]∥Proceedings of IEEE International Conference on Computer Vision.Bombay,India:IEEE,1998.

[62] BOSSE M,NEWMAN P,LEONARD S J J,et al.SLAM in Large-scale Cyclic Environments Using the Atlas Framework[J].International Journal of Robotics Research,2004,23:1113-1139.

[63] EADE E,DRUMMOND T.Unified Loop Closing and Recovery for Real Time Monocular SLAM[C]∥Proceedings of the British Conference on Machine Vision.Leeds:BMVA Press,2008.

[64] ANGELI A,DONCIEUX S,MEYER J A,et al.Real-time Visual Loop-closure Detection[C]∥Proceedings of IEEE International Conference on Robotics and Automation.Pasadena,CA:IEEE,2008:4300-4305.

[65] SMITH P,REID I D,DAVISON A J.Real-time Monocular SLAM with Straight Lines[C]∥Proceedings of British Conference on Machine Vision.Edinburgh,UK:BMVA Press,2006:17-26.

[66] LEMAIRE T,LACROIX S.Monocular-vision based SLAM Using Line Segments[C]∥Proceedings of IEEE Robotics and Automation.Roma,Italy:IEEE,2007.

[68] PERDICES E,LÓPEZ L M,CAÑAS J M.LineSLAM:Visual Real Time Localization Using Lines and UKF[M]∥ARMADA A,SANFELIU A,FERRE M.ROBOT2013:First Iberian Robotics Conference.Cham:Springer,2014.

[69] ZHOU Huazhong,ZOU Danping,PEI Ling,et al.Struct SLAM:Visual SLAM With Building Structure Lines[J].IEEE Transactions on Vehicular Technology,2015,64(4):1364-1375.

[70] PUMAROLA A,VAKHITOV A,AGUDO A,et al.PL-SLAM:Real-time Monocular Visual SLAM with Points and Lines[C]∥Proceedings of IEEE International Conference on Robotics and Automation.Singapore:IEEE,2017.

[71] LI Haifeng,HU Zunhe,CHEN Xinwei.PLP-SLAM:A Visual SLAM Method Based on Point-line-plane Feature Fusion[J].Robot,2017,39(2):214-220,229.(in Chinese)

[72] SILVEIRA G,MALIS E,RIVES P.An Efficient Direct Approach to Visual SLAM[J].IEEE Transactions on Robotics,2008,24(5):969-979.

[73] ENGEL J,KOLTUN V,CREMERS D.Direct Sparse Odometry[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2018,40(3):611-625.

[74] FORSTER C,PIZZOLI M,SCARAMUZZA D.SVO:Fast Semi-direct Monocular Visual Odometry[C]∥Proceedings of IEEE International Conference on Robotics and Automation.Hong Kong,China:IEEE,2014.

[75] ENGEL J,SCHÖPS T,CREMERS D.LSD-SLAM:Large-scale Direct Monocular SLAM[C]∥Proceedings of the 13th European Conference on Computer Vision.Zurich,Switzerland:Springer,2014:834-849.

[76] CORKE P,LOBO J,DIAS J.An Introduction to Inertial and Visual Sensing[J].International Journal of Robotics Research,2007,26(6):519-535.

[77] LI Mingyang,MOURIKIS A I.High-precision,Consistent EKF-based Visual-inertial Odometry[J].International Journal of Robotics Research,2013,33(6):690-711.

[78] LEUTENEGGER S,LYNEN S,BOSSE M,et al.Keyframe-based Visual-inertial Odometry Using Nonlinear Optimization[J].International Journal of Robotics Research,2014,34(3):314-334.

[79] MU Xufu,CHEN Jing,ZHOU Zixiang,et al.Accurate Initial State Estimation in a Monocular Visual-inertial SLAM System[J].Sensor,2018,18(2):506.

[80] WAN Wenhui.Theory and Methods of Stereo Vision Based Autonomous Rover Localization in Deep Space Exploration[D].Beijing:Graduate School of Chinese Academy of Sciences,2012.(in Chinese)

[81] WU Kai.Monocular Vision Integrated with Laser Distance Meter for Astronaut Navigation on Lunar Surface[D].Beijing:University of Chinese Academy of Sciences,2013.(in Chinese)

[82] WU Kai,DI Kaichang,SUN Xun,et al.Enhanced Monocular Visual Odometry Integrated with Laser Distance Meter for Astronaut Navigation[J].Sensors,2014,14(3):4981-5003.

[83] ZHANG Xinzheng,RAD A B,WONG Y K.Sensor Fusion of Monocular Cameras and Laser Rangefinders for Line-based Simultaneous Localization and Mapping (SLAM) Tasks in Autonomous Mobile Robots[J].Sensors,2012,12(1):429-452.

[84] DI Kaichang.Demand Analysis and Technical Proposal Discussion about Navigation for Lunar Astronauts[C]∥Proceedings of Engineering Science and Technology Forum (107th) of Chinese Academy of Engineering:Manned Lunar Landing and Deep Space Exploration.Beijing:Chinese Academy of Engineering,2010:213-218.(in Chinese)

[85] KHOSHELHAM K,ELBERINK S O.Accuracy and Resolution of Kinect Depth Data for Indoor Mapping Applications[J].Sensors,2012,12(2):1437-1454.

[86] STURM J,ENGELHARD N,ENDRES F,et al.A Benchmark for the Evaluation of RGB-D SLAM Systems[C]∥Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems.Vilamoura,Portugal:IEEE,2012.

[87] KERL C,STURM J,CREMERS D.Dense Visual SLAM for RGB-D Cameras[C]∥Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems.Tokyo,Japan:IEEE,2013.

[88] WHELAN T,KAESS M,JOHANNSSON H,et al.Real-time Large-scale Dense RGB-D SLAM with Volumetric Fusion[J].International Journal of Robotics Research,2015,34(4-5):598-626.

[89] DI Kaichang,ZHAO Qiang,WAN Wenhui,et al.RGB-D SLAM based on Extended Bundle Adjustment with 2D and 3D Information[J].Sensors,2016,16(8):1285.

[90] ZHU X X,TUIA D,MOU L C,et al.Deep Learning in Remote Sensing:A Comprehensive Review and List of Resources[J].IEEE Transactions on Geoscience and Remote Sensing,2017,5(4):8-36.

[91] ZHANG Zixing,GEIGER J,POHJALAINEN J,et al.Deep Learning for Environmentally Robust Speech Recognition:An Overview of Recent Developments[J].ACM Transactions on Intelligent Systems and Technology,2018,9(5):Article No.49.

[92] GARCIA-GARCIA A,ORTS-ESCOLANO S,OPREA S,et al.A Review on Deep Learning Techniques Applied to Semantic Segmentation[J].arXiv preprint arXiv:1704.06857,2017.

[93] KONDA K,MEMISEVIC R.Learning Visual Odometry with a Convolutional Network[C]∥Proceedings of the 10th International Conference on Computer Vision Theory and Applications.Berlin,Germany:SciTePress,2015:486-490.

[94] DOSOVITSKIY A,FISCHER P,ILG E,et al.FlowNet:Learning Optical Flow with Convolutional Networks[C]∥Proceedings of 2015 IEEE International Conference on Computer Vision.Santiago,Chile:IEEE,2015:2758-2766.

[95] COSTANTE G,MANCINI M,VALIGI P,et al.Exploring Representation Learning with CNNs for Frame-to-frame Ego-motion Estimation[J].IEEE Robotics and Automation Letters,2016,1(1):18-25.

[96] BAI Dongdong,WANG Chaoqun,ZHANG Bo,et al.Matching-range-constrained Real-time Loop Closure Detection with CNNs Features[J].Robotics and Biomimetics,2016,3:15.

[97] ZHANG X W,SU Y,ZHU X H.Loop Closure Detection for Visual SLAM Systems Using Convolutional Neural Network[C]∥Proceedings of IEEE International Conference on Automation and Computing.Huddersfield,UK:IEEE,2017.

[98] GAO Xiang,ZHANG Tao.Unsupervised Learning to Detect Loops Using Deep Neural Networks for Visual SLAM System[J].Autonomous Robots,2017,41(1):1-18.

[99] LI Rongxing,SQUYRES S W,ARVIDSON R E,et al.Initial Results of Rover Localization and Topographic Mapping for the 2003 Mars Exploration Rover Mission[J].Photogrammetric Engineering & Remote Sensing,2005,71(10):1129-1142.

[100] DI Kaichang,XU Fengliang,WANG Jue,et al.Photogrammetric Processing of Rover Imagery of the 2003 Mars Exploration Rover Mission[J].ISPRS Journal of Photogrammetry and Remote Sensing,2008,63(2):181-201

[101] MARTIN-MUR T J,KRUIZINGA G L,BURKHART P D,et al.Mars Science Laboratory Navigation Results[C]∥Proceedings of International Symposium on Space Flight Dynamics.Washington D.C.:NASA,2012.

[102] CHENG Yang,MAIMONE M W,MATTHIES L.Visual Odometry on the Mars Exploration Rovers—A Tool to Ensure Accurate Driving and Science Imaging[J].IEEE Robotics & Automation Magazine,2006,13(2):54-62.

[103] WAN Wenhui,LIU Zhaoqin,LIU Yiliang,et al.Descent Image Matching Based Position Evaluation for Chang'e-3 Landing Point[J].Spacecraft Engineering,2014,23(4):5-12.(in Chinese)

[104] LIU Bin,XU Bin,LIU Zhaoqin,et al.Descending and Landing Trajectory Recovery of Chang'e-3 Lander Using Descent Images[J].Journal of Remote Sensing,2014,18(5):988-994.(in Chinese)

[105] LIU Zhaoqin,DI Kaichang,PENG Man,et al.High Precision Landing Site Mapping and Rover Localization for Chang’e-3 Mission[J].Science China-Physics Mechanics & Astronomy,2015,58(1):1-11.

[106] WAN W H,LIU Z Q,DI K C,et al.A Cross-Site Visual Localization Method for Yutu Rover[C]∥Proceedings of ISPRS Technical Commission IV Symposium.Suzhou,China:ISPRS,2014.

[107] HUANG Haosheng,GARTNER G.A Survey of Mobile Indoor Navigation Systems[M]∥GARTNER G,ORTAG F.Cartography for Central and Eastern European.Berlin,Heidelberg:Springer,2009.

[108] FALLAH N,APOSTOLOPOULOS I,BEKRIS K,et al.Indoor Human Navigation Systems:A Survey[J].Interacting with Computers,2013,25(1):21-33.

[109] FROEHLICH M,AZHAR S,VANTURE M.An Investigation of Google Tango® Tablet for Low Cost 3D scanning[C]∥Proceedings of 34th International Symposium on Automation and Robotics in Construction.Hawaii:ICACR,2017.

[110] NGUYEN K A,LUO Zhiyuan.On Assessing the Positioning Accuracy of Google Tango in Challenging Indoor Environments[C]∥Proceedings of International Conference on Indoor Positioning and Indoor Navigation.Sapporo,Japan:IEEE,2017.

[111] WINTERHALTER W,FLECKENSTEIN F,STEDER B,et al.Accurate Indoor Localization for RGB-D Smartphones and Tablets Given 2D Floor Plans[C]∥Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems.Hamburg,Germany:IEEE,2015.

[112] LEE J.Mobile AR in Your Pocket with Google Tango[J].Society for Information Display International Symposium Digest of Technical Papers,2017,48(1):17-18.

[113] GARON M,BOULET P O,DOIRONZ J P,et al.Real-Time High Resolution 3D Data on the HoloLens[C]∥Proceedings of IEEE International Symposium on Mixed and Augmented Reality.Merida,Mexico:IEEE,2016.

[114] EVANS G,MILLER J,PENA M I,et al.Evaluating the Microsoft HoloLens through an Augmented Reality Assembly Application[C]∥Proceedings of SPIE,Volume 10197,Degraded Environments:Sensing,Processing,and Display 2017.Anaheim,CA:SPIE,2017.

[115] ZHOU Guyue,FANG Lu,TANG Ketan,et al.Guidance:A Visual Sensing Platform for Robotic Applications[C]∥Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.Boston,MA:IEEE,2015:9-14.

[116] MEJIAS L,CORREA J F,MONDRAGON I,et al.COLIBRI:A Vision-Guided UAV for Surveillance and Visual Inspection[C]∥Proceedings of IEEE International Conference on Robotics and Automation.Roma,Italy:IEEE,2015.
