A general SLAM framework that supports both feature-based and direct methods, and handles different sensors including monocular cameras, RGB-D sensors, and other input types. https://github.com/zdzhaoyong/GSLAM
http://ethz-asl.github.io/okvis/index.html
https://github.com/unr-arl/rhem_planner
MCPTAM is a set of ROS nodes for running Real-time 3D Visual Simultaneous Localization and Mapping (SLAM) using Multi-Camera Clusters. It includes tools for calibrating both the intrinsic and extrinsic parameters of the individual cameras within the rigid camera rig.
https://github.com/aharmat/mcptam
An open implementation of the FAB-MAP visual place recognition algorithm https://github.com/arrenglover/openfabmap
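Place recognition systems such as FAB-MAP decide whether the current image matches a previously visited location from its visual appearance. FAB-MAP itself uses a probabilistic generative model over visual words; the sketch below is a much simpler stand-in that only illustrates the general idea of appearance-based matching, comparing bag-of-visual-words histograms by cosine similarity (the histograms and threshold are invented toy values, not FAB-MAP's actual model):

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words histograms."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def best_match(query, database, threshold=0.8):
    """Return the index of the most similar stored place, or None
    if even the best score falls below the acceptance threshold."""
    scores = [cosine_similarity(query, place) for place in database]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best if scores[best] >= threshold else None

# Toy visual-word histograms for three previously visited places.
db = [[5, 0, 2, 1], [0, 4, 0, 3], [1, 1, 6, 0]]
query = [4, 0, 3, 1]          # resembles place 0
print(best_match(query, db))  # -> 0
```

A real loop-closure front end would build the histograms from local feature descriptors (e.g. SURF or ORB) quantized against a trained vocabulary, and would reason about matching probability rather than a fixed threshold.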
https://github.com/davidmball/ratslam
An open framework for research in visual-inertial mapping and localization, from Roland Siegwart's lab https://github.com/ethz-asl/maplab
https://github.com/xdspacelab/openvslam
https://github.com/berndpfrommer/tagslam ROS-ready; a bag file is available.
https://github.com/izhengfan/se2clam
https://github.com/MarianoJT88/Joint-VO-SF published at ICRA 2017
http://wiki.ros.org/rtabmap_ros ... Many demos are available on the website, with several ROS bags.
https://github.com/strasdat/ScaViSLAM/
https://github.com/felixendres/rgbdslam_v2 ROS-ready; accompanies a PhD thesis from TUM.
https://github.com/tu-darmstadt-ros-pkg/hector_slam
https://github.com/tum-vision/dvo_slam
https://github.com/danping/CoSLAM
https://github.com/mp3guy/ElasticFusion ... It has a nice GUI, plus a dataset, paper, and video.
https://github.com/mp3guy/Kintinuous
Based on PTAM; tracks both 3D triangulated and 2D non-triangulated features. https://github.com/plumonito/dtslam
https://github.com/dorian3d/RGBiD-SLAM
https://github.com/lifunudt/M2SLAM
C++ SLAM from Oxford University:
https://github.com/hanmekim/SceneLib2
https://github.com/ethz-asl/nbvplanner
https://github.com/ydsf16/dre_slam ROS Kinetic, OpenCV 4.0, YOLOv3, Ceres
https://github.com/BertaBescos/DynaSLAM
http://www.robots.ox.ac.uk/~gk/PTAM/
https://github.com/damienfir/android-ptam
https://github.com/raulmur/ORB_SLAM ....
Its successor, ORB-SLAM2, is a real-time SLAM library for monocular, stereo, and RGB-D cameras: https://github.com/raulmur/ORB_SLAM2
A modification to work on iOS: https://github.com/Thunderbolt-sx/ORB_SLAM_iOS
https://github.com/UZ-SLAMLab/ORB_SLAM3
https://github.com/uzh-rpg/rpg_open_remode ... Probabilistic, Monocular Dense Reconstruction in Real Time
https://github.com/pizzoli/rpg_svo
SVO 2.0 (no loop closure or bundle adjustment): http://rpg.ifi.uzh.ch/svo2.html
https://github.com/tum-vision/lsd_slam
A modification of the original package to work with rolling-shutter cameras (cheap webcams): https://github.com/FirefoxMetzger/lsd_slam The change is explained in this video: https://www.youtube.com/watch?v=TZRICW6R24o
https://github.com/srv/viso2 Supported up to ROS Indigo.
https://github.com/HKUST-Aerial-Robotics/VI-MEAN with paper and video (ICRA 2017), plus a rosbag.
https://github.com/shichaoy/cube_slam
https://github.com/xiefei2929/ORB_SLAM3-RGBD-Inertial
https://github.com/johannes-graeter/limo A virtual machine with all dependencies is available.
https://github.com/erik-nelson/blam
https://github.com/ethz-asl/segmatch A 3D segment-based loop-closure algorithm | ROS-ready
https://github.com/TixiaoShan/LIO-SAM real-time lidar-inertial odometry
UV-SLAM: Unconstrained Line-based SLAM Using Vanishing Points for Structural Mapping | ICRA'22 https://github.com/url-kaist/UV-SLAM
https://github.com/JakobEngel/dso
https://github.com/alejocb/dpptam Dense Piecewise Planar Tracking and Mapping from a Monocular Sequence IROS 2015
https://github.com/rubengooj/StVO-PL Stereo Visual Odometry by combining point and line segment features
https://github.com/johannes-graeter/momo
Paper + PyTorch code: https://github.com/Huangying-Zhan/DF-VO
https://github.com/Uehwan/SimVODIS
Camera-IMU calibration toolbox and more. https://github.com/ethz-asl/kalibr
Camera-to-IMU calibration toolbox https://github.com/hovren/crisp
Robust Visual Inertial Odometry https://github.com/ethz-asl/rovio
https://github.com/KumarRobotics/msckf_vio
https://github.com/HKUST-Aerial-Robotics/VINS-Mono
https://github.com/gaowenliang/vins_so
https://github.com/JuanTarrio/rebvo Specifically targeted at embedded hardware.
https://github.com/rpng/R-VIO Monocular camera + 6 DOF IMU
https://github.com/TheFrenchLeaf/Bundle
https://github.com/danylaksono/Android-SfM-client
OpenGV, for geometric vision problems: https://github.com/marknabil/opengv
A Structure-from-Motion library written in Python on top of OpenCV. It has a Dockerfile covering installation on Ubuntu 14.04. https://github.com/mapillary/OpenSfM
An unsupervised learning framework for depth and ego-motion estimation from monocular videos https://github.com/tinghuiz/SfMLearner
Source material for the CVPR 2015 Tutorial: Open Source Structure-from-Motion https://github.com/mleotta/cvpr2015-opensfm
https://github.com/drormoran/Equivariant-SFM
Five-point algorithm for relative pose / essential matrix estimation: http://vis.uky.edu/~stewe/FIVEPOINT/
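The five-point code above estimates the essential matrix E from minimal sets of point correspondences. As a self-contained illustration of what E encodes (not the five-point solver itself), the sketch below builds E = [t]_x R for an assumed pure x-translation with identity rotation and checks the epipolar constraint x2ᵀ E x1 = 0 in normalized image coordinates; all point values are toy examples:

```python
def cross_matrix(t):
    """Skew-symmetric matrix [t]_x such that [t]_x @ v == t x v."""
    tx, ty, tz = t
    return [[0, -tz, ty], [tz, 0, -tx], [-ty, tx, 0]]

def mat_vec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def epipolar_residual(E, x1, x2):
    """x2^T E x1 -- zero for a correct correspondence (normalized coords)."""
    return sum(a * b for a, b in zip(x2, mat_vec(E, x1)))

# Essential matrix for pure translation along x (R = I, t = (1, 0, 0)):
# E = [t]_x R = [t]_x
E = cross_matrix((1.0, 0.0, 0.0))

# Under this motion an image point keeps the same y-coordinate, so
# (0.3, 0.2, 1) -> (0.5, 0.2, 1) is a consistent correspondence ...
x1, x2 = (0.3, 0.2, 1.0), (0.5, 0.2, 1.0)
print(abs(epipolar_residual(E, x1, x2)) < 1e-9)   # True

# ... while a mismatched point violates the constraint:
bad = (0.5, 0.9, 1.0)
print(abs(epipolar_residual(E, x1, bad)) > 1e-3)  # True
```

The five-point method inverts this relation: given five residual equations of this form, it solves for the E (and hence relative R, t) consistent with all of them.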
SFMedu: A Matlab-based Structure-from-Motion System for Education https://github.com/jianxiongxiao/SFMedu
Lorenzo Torresani's Structure from Motion Matlab code https://github.com/scivision/em-sfm
https://github.com/vrabaud/sfm_toolbox
OpenMVG C++ library https://github.com/openMVG/openMVG
A collection of computer vision methods for solving geometric vision problems https://github.com/laurentkneip/opengv
https://sites.google.com/view/kavehfathian/code Its paper: https://arxiv.org/pdf/1704.02672.pdf
https://github.com/jzubizarreta/dsm
https://github.com/tum-vision/fastfusion
https://github.com/knagara/SLAMwithCameraIMUforAndroid
https://github.com/HKUST-Aerial-Robotics/VINS-Mobile
With some good documentation on how to read images and other data from the Kinect: https://github.com/AutoSLAM/SLAM
https://github.com/youngguncho/awesome-slam-datasets
http://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets
Visual-inertial datasets collected on board a Micro Aerial Vehicle (MAV). The datasets contain stereo images, synchronized IMU measurements, and accurate motion and structure ground truth.
https://vision.in.tum.de/data/datasets/visual-inertial-dataset Different scenes for evaluating visual-inertial odometry.
https://github.com/AaltoVision/ADVIO
https://daniilidis-group.github.io/penncosyvio/ From the University of Pennsylvania, published at ICRA 2017.
https://www.doc.ic.ac.uk/~ahanda/VaFRIC/iclnuim.html For benchmarking RGB-D, visual odometry, and SLAM algorithms.
https://sites.google.com/view/kavehfathian/code/benchmarking-pose-estimation-algorithms
https://github.com/uzh-rpg/rpg_trajectory_evaluation
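Trajectory evaluation tools like the one above typically report metrics such as the absolute trajectory error (ATE) after aligning the estimated trajectory to ground truth. The sketch below computes only the ATE RMSE for trajectories assumed to be already time-synchronized and aligned (real tools also handle SE(3)/Sim(3) alignment and relative/odometry error); the sample trajectories are invented:

```python
from math import sqrt

def ate_rmse(gt, est):
    """Root-mean-square absolute trajectory error between two
    time-synchronized, pre-aligned position sequences of (x, y, z)."""
    assert len(gt) == len(est) and gt, "trajectories must match in length"
    # Squared Euclidean distance between corresponding positions.
    sq = [sum((g - e) ** 2 for g, e in zip(p, q)) for p, q in zip(gt, est)]
    return sqrt(sum(sq) / len(sq))

gt  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
est = [(0.0, 0.1, 0.0), (1.0, -0.1, 0.0), (2.0, 0.1, 0.0)]
print(round(ate_rmse(gt, est), 3))  # 0.1
```

Because ATE is computed after a global alignment, it measures overall map consistency; relative pose error over fixed-length sub-trajectories is the usual complement for measuring local drift.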
https://github.com/mit-fast/FlightGoggles
Learning monocular visual odometry with dense 3D mapping from dense 3D flow
DeepVO: A Deep Learning approach for Monocular Visual Odometry
A survey with year, sensors used, and best practices
Imperial College ICCV 2015 workshop
Deep Auxiliary Learning for Visual Localization and Odometry
http://studierstube.icg.tugraz.at/handheld_ar/cityofsights.php
For SfM, 3D reconstruction, and V-SLAM: https://github.com/openMVG/awesome_3DReconstruction_list