A Roundup of Open-Source SLAM Solutions (as of 2021.02)

2023-05-16

Thanks to the relentless effort and dedication of the SLAM researchers who came before us, we now have such a wealth of learning material!

Below is a compilation of the open-source SLAM code available to date. Annotated versions of the mainstream open-source SLAM codebases will be added later; I hope this helps those who are studying SLAM.

PTAM

Paper: Klein G, Murray D. Parallel tracking and mapping for small AR workspaces[C]//Mixed and Augmented Reality, 2007. ISMAR 2007. 6th IEEE and ACM International Symposium on. IEEE, 2007: 225-234.

Code: https://github.com/Oxford-PTAM/PTAM-GPL

Author's other research: http://www.robots.ox.ac.uk/~gk/publications.html

MonoSLAM

Paper: Davison A J, Reid I D, Molton N D, et al. MonoSLAM: Real-time single camera SLAM[J]. IEEE transactions on pattern analysis and machine intelligence, 2007, 29(6): 1052-1067.

Code: https://github.com/hanmekim/SceneLib2

KinectFusion

Paper: Newcombe R A, Izadi S, Hilliges O, et al. KinectFusion: Real-time dense surface mapping and tracking[C]//2011 10th IEEE International Symposium on Mixed and Augmented Reality. IEEE, 2011: 127-136.

Code: https://github.com/chrdiller/KinectFusionApp
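
KinectFusion (and the volumetric systems further down this list, such as Kintinuous, InfiniTAM, and BundleFusion) fuses every depth frame into a voxel grid of truncated signed distances with a weighted running average. The snippet below is a minimal, illustrative sketch of that per-voxel update only, not code from any of the repositories above; the grid size, truncation distance, and weight cap are arbitrary assumptions.

```cpp
#include <algorithm>
#include <vector>

// One voxel of a truncated signed distance field (TSDF).
struct Voxel {
    float tsdf = 1.0f;    // truncated signed distance, normalized to [-1, 1]
    float weight = 0.0f;  // accumulated integration weight
};

// Fuse one signed-distance observation into a voxel with the weighted
// running average used by KinectFusion-style volumetric fusion.
inline void fuseObservation(Voxel& v, float sdf, float truncation, float obsWeight = 1.0f) {
    if (sdf < -truncation) return;               // far behind the surface: no update
    float d = std::min(1.0f, sdf / truncation);  // clamp to the truncation band
    v.tsdf = (v.tsdf * v.weight + d * obsWeight) / (v.weight + obsWeight);
    v.weight = std::min(v.weight + obsWeight, 100.0f);  // cap so the map stays adaptive
}

int main() {
    std::vector<Voxel> grid(64 * 64 * 64);  // toy 64^3 voxel grid
    const float truncation = 0.04f;         // assumed 4 cm truncation band

    // Pretend the current depth map places voxel 0 one centimeter in front of the surface.
    fuseObservation(grid[0], 0.01f, truncation);
    return 0;
}
```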

DVO-SLAM

Paper: Kerl C, Sturm J, Cremers D. Dense visual SLAM for RGB-D cameras[C]//2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2013: 2100-2106.

Code 1: https://github.com/tum-vision/dvo_slam

Code 2: https://github.com/tum-vision/dvo

Other papers:

Kerl C, Sturm J, Cremers D. Robust odometry estimation for RGB-D cameras[C]//2013 IEEE international conference on robotics and automation. IEEE, 2013: 3748-3754.

Steinbrücker F, Sturm J, Cremers D. Real-time visual odometry from dense RGB-D images[C]//2011 IEEE international conference on computer vision workshops (ICCV Workshops). IEEE, 2011: 719-722.

LSD-SLAM

Paper: Engel J, Schöps T, Cremers D. LSD-SLAM: Large-scale direct monocular SLAM[C]//European conference on computer vision. Springer, Cham, 2014: 834-849.

Code: https://github.com/tum-vision/lsd_slam

SVO

Paper: Forster C, Pizzoli M, Scaramuzza D. SVO: Fast semi-direct monocular visual odometry[C]//2014 IEEE international conference on robotics and automation (ICRA). IEEE, 2014: 15-22.

Code: https://github.com/uzh-rpg/rpg_svo

Extended version: Forster C, Zhang Z, Gassner M, et al. SVO: Semidirect visual odometry for monocular and multicamera systems[J]. IEEE Transactions on Robotics, 2016, 33(2): 249-265.

REMODE (probabilistic monocular dense reconstruction)

Paper: Pizzoli M, Forster C, Scaramuzza D. REMODE: Probabilistic, monocular dense reconstruction in real time[C]//2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014: 2609-2616.

Original open-source code: https://github.com/uzh-rpg/rpg_open_remode

Version combined with ORB-SLAM2: https://github.com/ayushgaud/ORB_SLAM2

ROVIO

Paper: Bloesch M, Omari S, Hutter M, et al. Robust visual inertial odometry using a direct EKF-based approach[C]//2015 IEEE/RSJ international conference on intelligent robots and systems (IROS). IEEE, 2015: 298-304.

Code: https://github.com/ethz-asl/rovio

OKVIS

Paper: Leutenegger S, Lynen S, Bosse M, et al. Keyframe-based visual–inertial odometry using nonlinear optimization[J]. The International Journal of Robotics Research, 2015, 34(3): 314-334.

Code: https://github.com/ethz-asl/okvis

DynamicFusion

Paper: Newcombe R A, Fox D, Seitz S M. Dynamicfusion: Reconstruction and tracking of non-rigid scenes in real-time[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2015: 343-352.

Code: https://github.com/mihaibujanca/dynamicfusion

Kintinuous

Paper: Whelan T, Kaess M, Johannsson H, et al. Real-time large-scale dense RGB-D SLAM with volumetric fusion[J]. The International Journal of Robotics Research, 2015, 34(4-5): 598-626.

Code: https://github.com/mp3guy/Kintinuous

ElasticReconstruction

Paper: Choi S, Zhou Q Y, Koltun V. Robust reconstruction of indoor scenes[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 5556-5565.

Code: https://github.com/qianyizh/ElasticReconstruction

DPPTAM (monocular dense reconstruction)

Paper: Concha Belenguer A, Civera Sancho J. DPPTAM: Dense piecewise planar tracking and mapping from a monocular sequence[C]//Proc. IEEE/RSJ Int. Conf. Intell. Rob. Syst. 2015 (ART-2015-92153).

Code: https://github.com/alejocb/dpptam

ORB-SLAM2 monocular semi-dense mapping

Paper: Mur-Artal R, Tardós J D. Probabilistic Semi-Dense Mapping from Highly Accurate Feature-Based Monocular SLAM[C]//Robotics: Science and Systems. 2015.

Code: https://github.com/HeYijia/ORB_SLAM2

Map2DFusion (UAV image mosaicing with monocular SLAM)

Paper: Bu S, Zhao Y, Wan G, et al. Map2DFusion: Real-time incremental UAV image mosaicing based on monocular slam[C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016: 4564-4571.

Code: https://github.com/zdzhaoyong/Map2DFusion

PL-SVO (point-line SVO)

Paper: Gomez-Ojeda R, Briales J, Gonzalez-Jimenez J. PL-SVO: Semi-direct Monocular Visual Odometry by combining points and line segments[C]//Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on. IEEE, 2016: 4211-4216.

Code: https://github.com/rubengooj/pl-svo

STVO-PL (stereo point-line VO)

Paper: Gomez-Ojeda R, Gonzalez-Jimenez J. Robust stereo visual odometry through a probabilistic combination of points and line segments[C]//2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016: 2521-2526.

Code: https://github.com/rubengooj/stvo-pl

PlaneSLAM

Paper: Wietrzykowski J. On the representation of planes for efficient graph-based slam with high-level features[J]. Journal of Automation Mobile Robotics and Intelligent Systems, 2016, 10.

Code: https://github.com/LRMPUT/PlaneSLAM

Pop-up SLAM

Paper: Yang S, Song Y, Kaess M, et al. Pop-up slam: Semantic monocular plane slam for low-texture environments[C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016: 1222-1229.

Code: https://github.com/shichaoy/pop_up_slam

Object SLAM

Paper: Mu B, Liu S Y, Paull L, et al. Slam with objects using a nonparametric pose graph[C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016: 4602-4609.

Code: https://github.com/BeipengMu/objectSLAM

DynamicSemanticMapping (dynamic semantic mapping)

Paper: Kochanov D, Ošep A, Stückler J, et al. Scene flow propagation for semantic mapping and object discovery in dynamic street scenes[C]//Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on. IEEE, 2016: 1785-1792.

Code: https://github.com/ganlumomo/DynamicSemanticMapping

ElasticFusion

Paper: Whelan T, Salas-Moreno R F, Glocker B, et al. ElasticFusion: Real-time dense SLAM and light source estimation[J]. The International Journal of Robotics Research, 2016, 35(14): 1697-1716.

Code: https://github.com/mp3guy/ElasticFusion

S-PTAM (stereo PTAM)

Paper: Taihú Pire, Thomas Fischer, Gastón Castro, Pablo De Cristóforis, Javier Civera and Julio Jacobo Berlles. S-PTAM: Stereo Parallel Tracking and Mapping. Robotics and Autonomous Systems, 2017.

Code: https://github.com/lrse/sptam

Other papers by the authors: Castro G, Nitsche M A, Pire T, et al. Efficient on-board Stereo SLAM through constrained-covisibility strategies[J]. Robotics and Autonomous Systems, 2019.

ORB-SLAM2

Paper: Mur-Artal R, Tardós J D. Orb-slam2: An open-source slam system for monocular, stereo, and rgb-d cameras[J]. IEEE Transactions on Robotics, 2017, 33(5): 1255-1262.

Code: https://github.com/raulmur/ORB_SLAM2

Other papers by the authors:

Monocular semi-dense mapping: Mur-Artal R, Tardós J D. Probabilistic Semi-Dense Mapping from Highly Accurate Feature-Based Monocular SLAM[C]//Robotics: Science and Systems. 2015.

VIORB: Mur-Artal R, Tardós J D. Visual-inertial monocular SLAM with map reuse[J]. IEEE Robotics and Automation Letters, 2017, 2(2): 796-803.

Multi-map: Elvira R, Tardós J D, Montiel J M M. ORBSLAM-Atlas: a robust and accurate multi-map system[J]. arXiv preprint arXiv:1908.11585, 2019.
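
For reference, the whole library is driven through a single `ORB_SLAM2::System` object. The sketch below follows the pattern of the examples shipped in the repository (e.g. the monocular TUM example); the vocabulary/settings paths and the image loop are placeholders.

```cpp
// Minimal monocular ORB-SLAM2 driver, modeled on the examples in the repository.
// Link against the ORB_SLAM2 library and OpenCV; all paths are placeholders.
#include <string>
#include <opencv2/core/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <System.h>  // ORB_SLAM2::System

int main() {
    // ORB vocabulary and settings file (camera intrinsics, ORB parameters).
    ORB_SLAM2::System SLAM("Vocabulary/ORBvoc.txt", "Examples/Monocular/TUM1.yaml",
                           ORB_SLAM2::System::MONOCULAR, /*bUseViewer=*/true);

    for (int i = 0; i < 1000; ++i) {  // placeholder image loop
        cv::Mat im = cv::imread("frames/" + std::to_string(i) + ".png", cv::IMREAD_UNCHANGED);
        if (im.empty()) break;
        double timestamp = i / 30.0;          // assume a 30 Hz camera
        SLAM.TrackMonocular(im, timestamp);   // returns the current camera pose Tcw
    }

    SLAM.Shutdown();
    SLAM.SaveKeyFrameTrajectoryTUM("KeyFrameTrajectory.txt");
    return 0;
}
```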

DSO

Paper: Engel J, Koltun V, Cremers D. Direct sparse odometry[J]. IEEE transactions on pattern analysis and machine intelligence, 2017, 40(3): 611-625.

Code: https://github.com/JakobEngel/dso

Stereo DSO: Wang R, Schworer M, Cremers D. Stereo DSO: Large-scale direct sparse visual odometry with stereo cameras[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 3903-3911.

VI-DSO: Von Stumberg L, Usenko V, Cremers D. Direct sparse visual-inertial odometry using dynamic marginalization[C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 2510-2517.
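
DSO and its variants minimize a photometric error rather than a feature reprojection error: a pixel with an inverse depth in a host frame is warped into a target frame, and the intensity difference is the residual. The snippet below is a simplified, self-contained illustration of that residual for a pinhole camera using Eigen; it omits DSO's affine brightness transfer, 8-pixel residual pattern, and Huber weighting, and it is not code from the DSO repository.

```cpp
#include <vector>
#include <Eigen/Dense>
#include <Eigen/Geometry>

// Bilinear interpolation of a grayscale image stored row-major as floats.
float interpolate(const float* img, int w, float u, float v) {
    int x = static_cast<int>(u), y = static_cast<int>(v);
    float ax = u - x, ay = v - y;
    const float* p = img + y * w + x;
    return (1 - ax) * (1 - ay) * p[0] + ax * (1 - ay) * p[1] +
           (1 - ax) * ay * p[w] + ax * ay * p[w + 1];
}

// Photometric residual of one pixel: host intensity minus the intensity at the
// pixel's warped location in the target frame, given its inverse depth.
float photometricResidual(const Eigen::Matrix3d& K, const Eigen::Isometry3d& T_target_host,
                          const float* hostImg, const float* targetImg, int w, int h,
                          double u, double v, double inverseDepth) {
    Eigen::Vector3d p_host = K.inverse() * Eigen::Vector3d(u, v, 1.0) / inverseDepth;
    Eigen::Vector3d p_target = T_target_host * p_host;
    if (p_target.z() <= 0.0) return 0.0f;  // point ends up behind the target camera
    Eigen::Vector3d uv = K * (p_target / p_target.z());
    if (uv.x() < 1 || uv.y() < 1 || uv.x() >= w - 2 || uv.y() >= h - 2)
        return 0.0f;                       // warped outside the image: no constraint
    return interpolate(hostImg, w, static_cast<float>(u), static_cast<float>(v)) -
           interpolate(targetImg, w, static_cast<float>(uv.x()), static_cast<float>(uv.y()));
}

int main() {
    const int w = 8, h = 8;
    std::vector<float> host(w * h, 10.0f), target(w * h, 12.0f);  // flat synthetic images
    Eigen::Matrix3d K = Eigen::Matrix3d::Identity();
    K(0, 0) = K(1, 1) = 4.0;  K(0, 2) = K(1, 2) = 4.0;            // toy intrinsics
    float r = photometricResidual(K, Eigen::Isometry3d::Identity(),
                                  host.data(), target.data(), w, h, 4.0, 4.0, 0.5);
    return r < 0.0f ? 0 : 1;  // expected: 10 - 12 = -2
}
```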

SemanticFusion

Paper: McCormac J, Handa A, Davison A, et al. Semanticfusion: Dense 3d semantic mapping with convolutional neural networks[C]//2017 IEEE International Conference on Robotics and automation (ICRA). IEEE, 2017: 4628-4635.

Code: https://github.com/seaun163/semanticfusion

Semantic_3d_mapping

Paper: Yang S, Huang Y, Scherer S. Semantic 3D occupancy mapping through efficient high order CRFs[C]//2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017: 590-597.

Code: https://github.com/shichaoy/semantic_3d_mapping

PL-SLAM (point-line SLAM)

Paper: Gomez-Ojeda R, Zuñiga-Noël D, Moreno F A, et al. PL-SLAM: a Stereo SLAM System through the Combination of Points and Line Segments[J]. arXiv preprint arXiv:1705.09479, 2017.

Code: https://github.com/rubengooj/pl-slam
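
Point-line systems such as PL-SVO, STVO-PL, and PL-SLAM typically measure a 3D line landmark's error as the distance of its two reprojected endpoints to the detected 2D line. The example below is a generic illustration of that residual with Eigen (toy intrinsics and landmarks, not code from these repositories).

```cpp
#include <iostream>
#include <Eigen/Dense>
#include <Eigen/Geometry>

// Homogeneous line through two detected image points (e.g. LSD segment endpoints),
// scaled so that |l . (u, v, 1)| is the Euclidean point-to-line distance in pixels.
Eigen::Vector3d imageLine(const Eigen::Vector2d& a, const Eigen::Vector2d& b) {
    Eigen::Vector3d ah(a.x(), a.y(), 1.0), bh(b.x(), b.y(), 1.0);
    Eigen::Vector3d l = ah.cross(bh);
    return l / l.head<2>().norm();
}

// Line reprojection residual: distances of the two reprojected 3D endpoints of
// the landmark segment to the observed image line.
Eigen::Vector2d lineReprojectionError(const Eigen::Matrix3d& K, const Eigen::Isometry3d& T_cw,
                                      const Eigen::Vector3d& Pw1, const Eigen::Vector3d& Pw2,
                                      const Eigen::Vector3d& observedLine) {
    auto project = [&](const Eigen::Vector3d& Pw) {
        Eigen::Vector3d pc = K * (T_cw * Pw);
        return (pc / pc.z()).eval();  // homogeneous pixel (u, v, 1)
    };
    return Eigen::Vector2d(observedLine.dot(project(Pw1)), observedLine.dot(project(Pw2)));
}

int main() {
    Eigen::Matrix3d K = Eigen::Matrix3d::Identity();
    K(0, 0) = K(1, 1) = 500.0;  K(0, 2) = 320.0;  K(1, 2) = 240.0;  // toy intrinsics
    Eigen::Isometry3d T_cw = Eigen::Isometry3d::Identity();

    Eigen::Vector3d l = imageLine(Eigen::Vector2d(100.0, 240.0), Eigen::Vector2d(500.0, 240.0));
    Eigen::Vector2d r = lineReprojectionError(K, T_cw,
                                              Eigen::Vector3d(0.0, 0.0, 2.0),
                                              Eigen::Vector3d(0.2, 0.0, 2.0), l);
    std::cout << "line residual: " << r.transpose() << std::endl;  // ~0 for points on the line
    return 0;
}
```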

VIORB

Paper: Mur-Artal R, Tardós J D. Visual-inertial monocular SLAM with map reuse[J]. IEEE Robotics and Automation Letters, 2017, 2(2): 796-803.

Code: https://github.com/jingpang/LearnVIORB

Co-Fusion (real-time segmentation and tracking of multiple objects)

Paper: Rünz M, Agapito L. Co-fusion: Real-time segmentation, tracking and fusion of multiple objects[C]//2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017: 4471-4478.

Code: https://github.com/martinruenz/co-fusion

InfiniTAM (cross-platform real-time CPU reconstruction)

Paper: Prisacariu V A, Kähler O, Golodetz S, et al. Infinitam v3: A framework for large-scale 3d reconstruction with loop closure[J]. arXiv preprint arXiv:1708.00783, 2017.

Code: https://github.com/victorprad/InfiniTAM

BundleFusion

Paper: Dai A, Nießner M, Zollhöfer M, et al. Bundlefusion: Real-time globally consistent 3d reconstruction using on-the-fly surface reintegration[J]. ACM Transactions on Graphics (TOG), 2017, 36(4): 76a.

Code: https://github.com/niessner/BundleFusion

VI-MEAN (monocular visual-inertial dense reconstruction)

Paper: Yang Z, Gao F, Shen S. Real-time monocular dense mapping on aerial robots using visual-inertial fusion[C]//2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017: 4552-4559.

Code: https://github.com/dvorak0/VI-MEAN

LDSO (Xiang Gao's work adding loop closure to DSO)

Paper: Gao X, Wang R, Demmel N, et al. LDSO: Direct sparse odometry with loop closure[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 2198-2204.

Code: https://github.com/tum-vision/LDSO

LCSD_SLAM (loosely-coupled semi-direct monocular SLAM)

Paper: Lee S H, Civera J. Loosely-Coupled semi-direct monocular SLAM[J]. IEEE Robotics and Automation Letters, 2018, 4(2): 399-406.

Code: https://github.com/sunghoon031/LCSD_SLAM

MaskFusion

Paper: Runz M, Buffier M, Agapito L. Maskfusion: Real-time recognition, tracking and reconstruction of multiple moving objects[C]//2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE, 2018: 10-20.

Code: https://github.com/martinruenz/maskfusion

PL-VIO

Paper: He Y, Zhao J, Guo Y, et al. PL-VIO: Tightly-coupled monocular visual–inertial odometry using point and line features[J]. Sensors, 2018, 18(4): 1159.

Code: https://github.com/HeYijia/PL-VIO

VINS + line segments: https://github.com/Jichao-Peng/VINS-Mono-Optimization

msckf_vio

Paper: Sun K, Mohta K, Pfrommer B, et al. Robust stereo visual inertial odometry for fast autonomous flight[J]. IEEE Robotics and Automation Letters, 2018, 3(2): 965-972.

Code: https://github.com/KumarRobotics/msckf_vio

R-VIO

Paper: Huai Z, Huang G. Robocentric visual-inertial odometry[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 6319-6326.

Code: https://github.com/rpng/R-VIO

VINS-Mono

Paper: Qin T, Li P, Shen S. Vins-mono: A robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics, 2018, 34(4): 1004-1020.

Code: https://github.com/HKUST-Aerial-Robotics/VINS-Mono

Stereo version, VINS-Fusion: https://github.com/HKUST-Aerial-Robotics/VINS-Fusion

Mobile version, VINS-Mobile: https://github.com/HKUST-Aerial-Robotics/VINS-Mobile
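
VINS-Mono (like OKVIS, VI-DSO, and the closed-form preintegration work listed right below) compresses all IMU samples between two camera frames into a single pre-integrated relative-motion term. The sketch below shows only the forward integration of that term (rotation, velocity, position) with Eigen; bias Jacobians, noise covariance propagation, and mid-point integration details are omitted, so treat it as an illustration of the idea rather than code from VINS-Mono.

```cpp
#include <vector>
#include <Eigen/Dense>
#include <Eigen/Geometry>

struct ImuSample {
    double dt;               // time since the previous sample [s]
    Eigen::Vector3d gyro;    // measured angular velocity [rad/s]
    Eigen::Vector3d accel;   // measured specific force [m/s^2]
};

// Pre-integrated IMU term between two image frames, expressed in the body frame
// of the first frame (bias Jacobians and covariance propagation omitted).
struct Preintegration {
    Eigen::Quaterniond dq = Eigen::Quaterniond::Identity();
    Eigen::Vector3d dv = Eigen::Vector3d::Zero();
    Eigen::Vector3d dp = Eigen::Vector3d::Zero();
};

Preintegration preintegrate(const std::vector<ImuSample>& samples,
                            const Eigen::Vector3d& gyroBias,
                            const Eigen::Vector3d& accelBias) {
    Preintegration pre;
    for (const ImuSample& s : samples) {
        const Eigen::Vector3d w = s.gyro - gyroBias;
        const Eigen::Vector3d a = pre.dq * (s.accel - accelBias);  // rotate into frame i
        // Position and velocity first, using the orientation valid at the start of the step.
        pre.dp += pre.dv * s.dt + 0.5 * a * s.dt * s.dt;
        pre.dv += a * s.dt;
        // Then advance the rotation by the small-angle quaternion of w * dt.
        Eigen::Vector3d half = 0.5 * w * s.dt;
        pre.dq = (pre.dq * Eigen::Quaterniond(1.0, half.x(), half.y(), half.z())).normalized();
    }
    return pre;
}

int main() {
    // 200 Hz IMU slowly rotating about z while sensing gravity only.
    ImuSample s{0.005, Eigen::Vector3d(0.0, 0.0, 0.1), Eigen::Vector3d(0.0, 0.0, 9.81)};
    std::vector<ImuSample> samples(20, s);
    Preintegration pre = preintegrate(samples, Eigen::Vector3d::Zero(), Eigen::Vector3d::Zero());
    return pre.dp.z() > 0.0 ? 0 : 1;  // use the result so the call is not optimized away
}
```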

CPI (closed-form preintegration for visual-inertial fusion)

Paper: Eckenhoff K, Geneva P, Huang G. Closed-form preintegration methods for graph-based visual–inertial navigation[J]. The International Journal of Robotics Research, 2018.

Code: https://github.com/rpng/cpi

Limo (LiDAR-monocular visual odometry)

Paper: Graeter J, Wilczynski A, Lauer M. Limo: Lidar-monocular visual odometry[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 7872-7879.

Code: https://github.com/johannes-graeter/limo

maplab (visual-inertial mapping framework: multi-session mapping, map merging, visual-inertial batch optimization, and loop closure)

Paper: Schneider T, Dymczyk M, Fehr M, et al. maplab: An open framework for research in visual-inertial mapping and localization[J]. IEEE Robotics and Automation Letters, 2018, 3(3): 1418-1425.

Code: https://github.com/ethz-asl/maplab

DS-SLAM (dynamic semantic SLAM)

Paper: Yu C, Liu Z, Liu X J, et al. DS-SLAM: A semantic visual SLAM towards dynamic environments[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 1168-1174.

Code: https://github.com/ivipsourcecode/DS-SLAM

DynSLAM (large-scale outdoor dense reconstruction)

Paper: Bârsan I A, Liu P, Pollefeys M, et al. Robust dense mapping for large-scale dynamic environments[C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 7510-7517.

Code: https://github.com/AndreiBarsan/DynSLAM

FlashFusion

Paper: Han L, Fang L. FlashFusion: Real-time Globally Consistent Dense 3D Reconstruction using CPU Computing[C]. RSS, 2018.

Code: https://github.com/lhanaf/FlashFusion

probabilistic_mapping (monocular probabilistic dense reconstruction)

Paper: Ling Y, Wang K, Shen S. Probabilistic dense reconstruction from a moving camera[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 6364-6371.

The code for another dense-mapping paper by the same group was never released on GitHub: Ling Y, Shen S. Real-time dense mapping for online processing and navigation[J]. Journal of Field Robotics, 2019, 36(5): 1004-1036.

Code: https://github.com/ygling2008/probabilistic_mapping

SegMap (3D segment-based mapping)

Paper: Dubé R, Cramariuc A, Dugas D, et al. SegMap: 3d segment mapping using data-driven descriptors[J]. arXiv preprint arXiv:1804.09557, 2018.

Code: https://github.com/ethz-asl/segmap

ICE-BA (the code is not very readable)

Paper: Liu H, Chen M, Zhang G, et al. Ice-ba: Incremental, consistent and efficient bundle adjustment for visual-inertial slam[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 1974-1982.

Code: https://github.com/baidu/ICE-BA

DSM

Paper: Zubizarreta J, Aguinaga I, Montiel J M M. Direct sparse mapping[J]. arXiv preprint arXiv:1904.06577, 2019.

Code: https://github.com/jzubizarreta/dsm

openvslam

Paper: Sumikura S, Shibuya M, Sakurada K. OpenVSLAM: A Versatile Visual SLAM Framework[C]//Proceedings of the 27th ACM International Conference on Multimedia. 2019: 2292-2295.

Code: https://github.com/xdspacelab/openvslam

se2lam (visual odometry for ground-vehicle pose estimation)

Paper: Zheng F, Liu Y H. Visual-Odometric Localization and Mapping for Ground Vehicles Using SE (2)-XYZ Constraints[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 3556-3562.

Code: https://github.com/izhengfan/se2lam

Another work by the same author:

Paper: Zheng F, Tang H, Liu Y H. Odometry-vision-based ground vehicle motion estimation with se (2)-constrained se (3) poses[J]. IEEE transactions on cybernetics, 2018, 49(7): 2652-2663.

Code: https://github.com/izhengfan/se2clam

GraphSfM (graph-based parallel large-scale SfM)

Paper: Chen Y, Shen S, Chen Y, et al. Graph-Based Parallel Large Scale Structure from Motion[J]. arXiv preprint arXiv:1912.10659, 2019.

Code: https://github.com/AIBluefisher/GraphSfM

RESLAM (edge-based SLAM)

Paper: Schenk F, Fraundorfer F. RESLAM: A real-time robust edge-based SLAM system[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 154-160.

Code: https://github.com/fabianschenk/RESLAM

scale_optimization (extending monocular DSO to stereo)

Paper: Mo J, Sattar J. Extending Monocular Visual Odometry to Stereo Camera System by Scale Optimization[C]. International Conference on Intelligent Robots and Systems (IROS), 2019.

Code: https://github.com/jiawei-mo/scale_optimization

BAD-SLAM (direct RGB-D SLAM)

Paper: Schops T, Sattler T, Pollefeys M. BAD SLAM: Bundle Adjusted Direct RGB-D SLAM[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 134-144.

Code: https://github.com/ETH3D/badslam

GSLAM (a general framework integrating ORB-SLAM2, DSO, and SVO)

Paper: Zhao Y, Xu S, Bu S, et al. GSLAM: A general SLAM framework and benchmark[C]//Proceedings of the IEEE International Conference on Computer Vision. 2019: 1110-1120.

Code: https://github.com/zdzhaoyong/GSLAM

ARM-VO (monocular VO running on ARM processors)

Paper: Nejad Z Z, Ahmadabadian A H. ARM-VO: an efficient monocular visual odometry for ground vehicles on ARM CPUs[J]. Machine Vision and Applications, 2019: 1-10.

Code: https://github.com/zanazakaryaie/ARM-VO

CVO-rgbd (direct RGB-D VO)

Paper: Ghaffari M, Clark W, Bloch A, et al. Continuous Direct Sparse Visual Odometry from RGB-D Images[J]. arXiv preprint arXiv:1904.02266, 2019.

Code: https://github.com/MaaniGhaffari/cvo-rgbd

CCM-SLAM (collaborative multi-robot monocular SLAM)

Paper: Schmuck P, Chli M. CCM-SLAM: Robust and efficient centralized collaborative monocular simultaneous localization and mapping for robotic teams[J]. Journal of Field Robotics, 2019, 36(4): 763-781.

Code: https://github.com/VIS4ROB-lab/ccm_slam

Kimera (open-source library for real-time metric-semantic localization and mapping)

Paper: Rosinol A, Abate M, Chang Y, et al. Kimera: an Open-Source Library for Real-Time Metric-Semantic Localization and Mapping[J]. arXiv preprint arXiv:1910.02490, 2019.

Code: https://github.com/MIT-SPARK/Kimera

NeuroSLAM (brain-inspired SLAM)

Paper: Yu F, Shang J, Hu Y, et al. NeuroSLAM: a brain-inspired SLAM system for 3D environments[J]. Biological Cybernetics, 2019: 1-31.

Code: https://github.com/cognav/NeuroSLAM

Semantic mapping with ORB-SLAM2 + object detection/segmentation

https://github.com/floatlazer/semantic_slam

https://github.com/qixuxiang/orb-slam2_with_semantic_labelling

https://github.com/Ewenwan/ORB_SLAM2_SSD_Semantic
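
These three repositories follow the same basic recipe: ORB-SLAM2 supplies the camera poses, a detection/segmentation network labels the RGB frames, and the per-pixel labels are attached to the back-projected depth to accumulate a semantic point cloud. The sketch below is a generic illustration of that back-projection step only; the intrinsics, image size, and class ids are placeholders, and the code is not taken from any of the repositories above.

```cpp
#include <cstdint>
#include <vector>
#include <Eigen/Dense>

struct LabeledPoint {
    Eigen::Vector3d p_world;  // 3D point in the world frame
    std::uint8_t label;       // semantic class id from the segmentation mask
};

// Back-project one RGB-D frame and its segmentation mask into a labeled cloud,
// using the camera pose T_wc estimated by the SLAM system.
std::vector<LabeledPoint> labelFrame(const std::vector<float>& depth,         // meters, row-major
                                     const std::vector<std::uint8_t>& mask,   // class id per pixel
                                     int w, int h,
                                     double fx, double fy, double cx, double cy,
                                     const Eigen::Isometry3d& T_wc) {
    std::vector<LabeledPoint> cloud;
    for (int v = 0; v < h; ++v) {
        for (int u = 0; u < w; ++u) {
            float z = depth[v * w + u];
            if (z <= 0.0f) continue;  // invalid depth
            Eigen::Vector3d p_cam((u - cx) * z / fx, (v - cy) * z / fy, z);
            cloud.push_back({T_wc * p_cam, mask[v * w + u]});
        }
    }
    return cloud;
}

int main() {
    const int w = 4, h = 4;
    std::vector<float> depth(w * h, 1.5f);        // synthetic flat depth image
    std::vector<std::uint8_t> mask(w * h, 7);     // everything labeled as (placeholder) class 7
    std::vector<LabeledPoint> cloud =
        labelFrame(depth, mask, w, h, 2.0, 2.0, 2.0, 2.0, Eigen::Isometry3d::Identity());
    return static_cast<int>(cloud.size()) == w * h ? 0 : 1;
}
```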

SIVO (semantics-assisted feature selection)

Paper: Ganti P, Waslander S. Network Uncertainty Informed Semantic Feature Selection for Visual SLAM[C]//2019 16th Conference on Computer and Robot Vision (CRV). IEEE, 2019: 121-128.

Code: https://github.com/navganti/SIVO

FILD (incremental loop-closure detection with proximity graphs)

Paper: Shan An, Guangfu Che, Fangru Zhou, Xianglong Liu, Xin Ma, Yu Chen. Fast and Incremental Loop Closure Detection using Proximity Graphs. pp. 378-385, The 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019).

Code: https://github.com/AnshanTJU/FILD

object-detection-sptam (object detection with stereo SLAM)

Paper: Pire T, Corti J, Grinblat G. Online Object Detection and Localization on Stereo Visual SLAM System[J]. Journal of Intelligent & Robotic Systems, 2019: 1-10.

Code: https://github.com/CIFASIS/object-detection-sptam

Map Slammer (monocular depth estimation + SLAM)

Paper: Torres-Camara J M, Escalona F, Gomez-Donoso F, et al. Map Slammer: Densifying Scattered KSLAM 3D Maps with Estimated Depth[C]//Iberian Robotics conference. Springer, Cham, 2019: 563-574.

Code: https://github.com/jmtc7/mapSlammer

NOLBO (probabilistic SLAM with a variational observation model)

Paper: Yu H, Lee B. Not Only Look But Observe: Variational Observation Model of Scene-Level 3D Multi-Object Understanding for Probabilistic SLAM[J]. arXiv preprint arXiv:1907.09760, 2019.

Code: https://github.com/bogus2000/NOLBO

GCNv2_SLAM (SLAM based on the GCNv2 deep feature network)

Paper: Tang J, Ericson L, Folkesson J, et al. GCNv2: Efficient correspondence prediction for real-time SLAM[J]. IEEE Robotics and Automation Letters, 2019, 4(4): 3505-3512.

Code: https://github.com/jiexiong2016/GCNv2_SLAM

semantic_suma (LiDAR-based semantic mapping)

Paper: Chen X, Milioto A, Palazzolo E, et al. SuMa++: Efficient LiDAR-based semantic SLAM[C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 4530-4537.

Code: https://github.com/PRBonn/semantic_suma/

Eigen-Factors (plane estimation for point cloud alignment)

Paper: Ferrer G. Eigen-Factors: Plane Estimation for Multi-Frame and Time-Continuous Point Cloud Alignment[C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 1278-1284.

Code: https://gitlab.com/gferrer/eigen-factors-iros2019
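
Plane-based methods such as PlaneSLAM, Pop-up SLAM, Eigen-Factors, and PlaneLoc (below) all rely on the same primitive: the best-fit plane of a set of points passes through their centroid, with its normal along the eigenvector of the scatter matrix that has the smallest eigenvalue. A small self-contained example with Eigen, for illustration only (not code from these repositories):

```cpp
#include <iostream>
#include <vector>
#include <Eigen/Dense>

// Fit a plane n . x = d to a set of 3D points: the normal is the eigenvector of
// the scatter (covariance) matrix with the smallest eigenvalue.
void fitPlane(const std::vector<Eigen::Vector3d>& pts, Eigen::Vector3d& normal, double& d) {
    Eigen::Vector3d centroid = Eigen::Vector3d::Zero();
    for (const Eigen::Vector3d& p : pts) centroid += p;
    centroid /= static_cast<double>(pts.size());

    Eigen::Matrix3d scatter = Eigen::Matrix3d::Zero();
    for (const Eigen::Vector3d& p : pts) {
        Eigen::Vector3d q = p - centroid;
        scatter += q * q.transpose();
    }

    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(scatter);
    normal = es.eigenvectors().col(0);  // eigenvalues are sorted in increasing order
    d = normal.dot(centroid);
}

int main() {
    // Noise-free points on the plane z = 2.
    std::vector<Eigen::Vector3d> pts = {
        Eigen::Vector3d(0.0, 0.0, 2.0), Eigen::Vector3d(1.0, 0.0, 2.0),
        Eigen::Vector3d(0.0, 1.0, 2.0), Eigen::Vector3d(1.0, 1.0, 2.0),
        Eigen::Vector3d(2.0, 3.0, 2.0)};
    Eigen::Vector3d n;
    double d = 0.0;
    fitPlane(pts, n, d);
    std::cout << "normal: " << n.transpose() << "  d: " << d << std::endl;  // ~(0,0,+/-1), +/-2
    return 0;
}
```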

PlaneLoc

Paper: Wietrzykowski J, Skrzypczyński P. PlaneLoc: Probabilistic global localization in 3-D using local planar features[J]. Robotics and Autonomous Systems, 2019, 113: 160-173.

Code: https://github.com/LRMPUT/PlaneLoc

VINS-RGBD

Paper: Shan Z, Li R, Schwertfeger S. RGBD-Inertial Trajectory Estimation and Mapping for Ground Robots[J]. Sensors, 2019, 19(10): 2251.

Code: https://github.com/STAR-Center/VINS-RGBD

OpenVINS

Paper: Geneva P, Eckenhoff K, Lee W, et al. Openvins: A research platform for visual-inertial estimation[C]//IROS 2019 Workshop on Visual-Inertial Navigation: Challenges and Applications, Macau, China. IROS 2019.

Code: https://github.com/rpng/open_vins

TUM Basalt

Paper: Usenko V, Demmel N, Schubert D, et al. Visual-inertial mapping with non-linear factor recovery[J]. IEEE Robotics and Automation Letters, 2019.

Code: https://github.com/VladyslavUsenko/basalt-mirror

LARVIO (monocular VIO based on a multi-state constraint Kalman filter, with online IMU calibration)

Paper: Qiu X, Zhang H, Fu W, et al. Monocular Visual-Inertial Odometry with an Unbiased Linear System Model and Robust Feature Tracking Front-End[J]. Sensors, 2019, 19(8): 1941.

Code: https://github.com/PetWorm/LARVIO

vig-init (visual-inertial initialization accelerated by vertical edges)

Paper: Li J, Bao H, Zhang G. Rapid and Robust Monocular Visual-Inertial Initialization with Gravity Estimation via Vertical Edges[C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 6230-6236.

Code: https://github.com/zju3dv/vig-init

PVIO

Paper: Li J, Yang B, Huang K, Zhang G, Bao H. Robust and Efficient Visual-Inertial Odometry with Multi-plane Priors[C]//PRCV 2019, LNCS 11859: 283-295, 2019.

Code: https://github.com/zju3dv/PVIO

ReFusion (3D reconstruction in dynamic scenes using residuals)

Paper: Palazzolo E, Behley J, Lottes P, et al. ReFusion: 3D Reconstruction in Dynamic Environments for RGB-D Cameras Exploiting Residuals[J]. arXiv preprint arXiv:1905.02082, 2019.

Code: https://github.com/PRBonn/refusion

RTAB-Map (LiDAR and visual dense reconstruction)

Paper: Labbé M, Michaud F. RTAB-Map as an open-source lidar and visual simultaneous localization and mapping library for large-scale and long-term online operation[J]. Journal of Field Robotics, 2019, 36(2): 416-446.

Code: https://github.com/introlab/rtabmap

RobustPCLReconstruction (outdoor dense reconstruction)

Paper: Lan Z, Yew Z J, Lee G H. Robust Point Cloud Based Reconstruction of Large-Scale Outdoor Scenes[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 9690-9698.

Code: https://github.com/ziquan111/RobustPCLReconstruction

plane-opt-rgbd (indoor planar reconstruction)

Paper: Wang C, Guo X. Efficient Plane-Based Optimization of Geometry and Texture for Indoor RGB-D Reconstruction[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2019: 49-53.

Code: https://github.com/chaowang15/plane-opt-rgbd

DenseSurfelMapping (dense surfel mapping)

Paper: Wang K, Gao F, Shen S. Real-time scalable dense surfel mapping[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 6919-6925.

Code: https://github.com/HKUST-Aerial-Robotics/DenseSurfelMapping

surfelmeshing (mesh reconstruction)

Paper: Schöps T, Sattler T, Pollefeys M. Surfelmeshing: Online surfel-based mesh reconstruction[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.

Code: https://github.com/puzzlepaint/surfelmeshing

Related work on superpixel-based monocular SLAM: Using Superpixels in Monocular SLAM, ICRA 2014.

Voxgraph (SDF voxel mapping)

Paper: Reijgwart V, Millane A, Oleynikova H, et al. Voxgraph: Globally Consistent, Volumetric Mapping Using Signed Distance Function Submaps[J]. IEEE Robotics and Automation Letters, 2019, 5(1): 227-234.

Code: https://github.com/ethz-asl/voxgraph

ORB-SLAM3

Paper: Carlos Campos, Richard Elvira, et al. ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM[J]. arXiv preprint arXiv:2007.11898, 2020.

Code: https://github.com/UZ-SLAMLab/ORB_SLAM3

Neural-SLAM (active neural SLAM)

Paper: Chaplot D S, Gandhi D, Gupta S, et al. Learning to explore using active neural slam[C]. ICLR 2020.

Code: https://github.com/devendrachaplot/Neural-SLAM

TartanVO: a generalizable learning-based VO

Paper: Wang W, Hu Y, Scherer S. TartanVO: A Generalizable Learning-based VO[J]. arXiv preprint arXiv:2011.00359, 2020.

Code: https://github.com/castacks/tartanvo

Dataset: TartanAir: A Dataset to Push the Limits of Visual SLAM, IROS 2020.

VPS-SLAM (planar semantic SLAM)

Paper: Bavle H, De La Puente P, How J, et al. VPS-SLAM: Visual Planar Semantic SLAM for Aerial Robotic Systems[J]. IEEE Access, 2020.

Code: https://bitbucket.org/hridaybavle/semantic_slam/src/master/

Structure-SLAM (point-line SLAM for low-texture environments)

Paper: Li Y, Brasch N, Wang Y, et al. Structure-SLAM: Low-Drift Monocular SLAM in Indoor Environments[J]. IEEE Robotics and Automation Letters, 2020, 5(4): 6583-6590.

Code: https://github.com/yanyan-li/Structure-SLAM-PointLine

PL-VINS

Paper: Fu Q, Wang J, Yu H, et al. PL-VINS: Real-Time Monocular Visual-Inertial SLAM with Point and Line[J]. arXiv preprint arXiv:2009.07462, 2020.

Code: https://github.com/cnqiangfu/PL-VINS

VersaVIS (a versatile visual-inertial sensor suite)

Paper: Tschopp F, Riner M, Fehr M, et al. VersaVIS—An Open Versatile Multi-Camera Visual-Inertial Sensor Suite[J]. Sensors, 2020, 20(5): 1439.

Code: https://github.com/ethz-asl/versavis

vilib (VIO front-end library)

Paper: Nagy B, Foehn P, Scaramuzza D. Faster than FAST: GPU-Accelerated Frontend for High-Speed VIO[J]. arXiv preprint arXiv:2003.13493, 2020.

Code: https://github.com/uzh-rpg/vilib

Kimera-VIO

Paper: A. Rosinol, M. Abate, Y. Chang, L. Carlone, Kimera: an Open-Source Library for Real-Time Metric-Semantic Localization and Mapping. IEEE Intl. Conf. on Robotics and Automation (ICRA), 2020.

Code: https://github.com/MIT-SPARK/Kimera-VIO

CamVox: LiDAR-assisted visual SLAM

Paper: Zhu Y, et al. CamVox: A Low-cost and Accurate Lidar-assisted Visual SLAM System. arXiv preprint arXiv:2011.11357, 2020.

Code: https://github.com/ISEE-Technology/CamVox

VDO-SLAM (dynamic-object-aware SLAM)

Paper: Zhang J, Henein M, Mahony R, et al. VDO-SLAM: A Visual Dynamic Object-aware SLAM System[J]. arXiv preprint arXiv:2005.11052, 2020. (Under review at IJRR)

Related papers:

IROS 2020: Robust Ego and Object 6-DoF Motion Estimation and Tracking.

ICRA 2020: Dynamic SLAM: The Need For Speed.

Code: https://github.com/halajun/VDO_SLAM

DeepFactors (real-time probabilistic dense monocular SLAM)

Paper: Czarnowski J, Laidlow T, Clark R, et al. DeepFactors: Real-Time Probabilistic Dense Monocular SLAM[J]. arXiv preprint arXiv:2001.05049, 2020.

Code: https://github.com/jczarnowski/DeepFactors (not yet released)

Other papers: Bloesch M, Czarnowski J, Clark R, et al. CodeSLAM—learning a compact, optimisable representation for dense visual SLAM[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2018: 2560-2568.

OpenREALM: a real-time mapping framework for UAVs

Paper: Kern A, Bobbe M, Khedar Y, et al. OpenREALM: Real-time Mapping for Unmanned Aerial Vehicles[J]. arXiv preprint arXiv:2009.10492, 2020.

Code: https://github.com/laxnpander/OpenREALM
