tiny_slam is a visual SLAM (Simultaneous Localization and Mapping) library. It heavily relies on general-purpose GPU computing via the tiny_wgpu library (primarily using the Vulkan backend).
tiny_slam is a work in progress.
tiny_slam aims to:
- Make visual SLAM accessible to developers, independent researchers, and small companies
- Decrease the cost of visual SLAM
- Bring edge computing to cross-platform devices (via wgpu)
- Spur innovation in drone and autonomous-agent applications that precise localization unlocks
tiny_slam imposes these constraints on itself:
- Minimize number of dependencies
- Rely on compute shaders whenever possible
- Run in real time on a Raspberry Pi 5
- Ergonomic design (Rust-like)
tiny_slam's roadmap:
- Obtain required hardware
- Raspberry Pi 5
- High-framerate camera
- Drone materials
- Receive real-time data from a USB camera (see the capture sketch below)
- Use the Media Foundation API on Windows, Video4Linux on Linux, and AVFoundation on macOS
- Software-decode the Motion JPEG (MJPEG) stream from the webcam
- Use hardware decoding with higher-framerate cameras
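
A minimal sketch of the Linux capture path, assuming the `v4l` and `image` crates (the crate choice is an assumption here, not necessarily tiny_slam's); the Windows and macOS paths would go through Media Foundation and AVFoundation instead:

```rust
use v4l::buffer::Type;
use v4l::io::traits::CaptureStream;
use v4l::prelude::*;
use v4l::video::Capture;
use v4l::FourCC;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Open /dev/video0 and request an MJPEG stream.
    let dev = Device::new(0)?;
    let mut fmt = dev.format()?;
    fmt.width = 1280;
    fmt.height = 720;
    fmt.fourcc = FourCC::new(b"MJPG");
    dev.set_format(&fmt)?;

    // Memory-mapped stream with a small ring of kernel buffers.
    let mut stream = MmapStream::with_buffers(&dev, Type::VideoCapture, 4)?;
    loop {
        let (buf, _meta) = stream.next()?;
        // Each buffer holds one complete JPEG frame; decode it in software.
        let frame = image::load_from_memory_with_format(buf, image::ImageFormat::Jpeg)?;
        let _rgba = frame.to_rgba8(); // hand off to the GPU pipeline
    }
}
```
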
- Build a helper library (tiny_wgpu) to streamline the compute shader workflow (device setup sketched below)
- Increase default limits for push constants and number of bindings
- Enable read/write storage textures
- Support render pipelines
- Support reading data back to CPU via staging buffers
- Support multiple shader files
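
A sketch of the device setup this implies, assuming a wgpu 0.19-era API (descriptor field names shift between wgpu releases); the exact limits and feature set are illustrative:

```rust
async fn request_compute_device(
    adapter: &wgpu::Adapter,
) -> Result<(wgpu::Device, wgpu::Queue), wgpu::RequestDeviceError> {
    adapter
        .request_device(
            &wgpu::DeviceDescriptor {
                label: Some("tiny_wgpu device"),
                // Push constants are a native-only extension (fine on Vulkan).
                required_features: wgpu::Features::PUSH_CONSTANTS,
                required_limits: wgpu::Limits {
                    // Defaults are 0 / small; raise them for compute-heavy use.
                    max_push_constant_size: 128,
                    max_storage_textures_per_shader_stage: 8,
                    ..wgpu::Limits::default()
                },
                ..Default::default()
            },
            None, // no API trace
        )
        .await
}
```

Note that `read_write` storage-texture access in WGSL is format-restricted; the `r32float`/`r32uint`/`r32sint` formats are the portable choice.
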
- Feature detection
- Color to grayscale conversion
- Implement manual luminance calculation (sketched below)
- (Optional) Use Y channel of YUV stream directly
- Oriented FAST corner detection
- Implement workgroup optimizations
- Implement a bitwise corner detector (sketched below)
- Implement 4-corner shortcut
- Replace storage buffers with textures to improve memory reads
- Rotated BRIEF feature descriptors
- Two-pass Gaussian blur
- Use linear sampler filtering to reduce the number of samples (tap merging sketched below)
- Implement workgroup optimizations
- Read data back to the CPU (staging-buffer sketch below)
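
For the manual luminance calculation above, a sketch of the grayscale pass as WGSL embedded in Rust; the BT.709 luma weights are standard, while the binding layout and texture formats are assumptions:

```rust
// WGSL source for a grayscale pass; bindings and formats are illustrative.
const GRAYSCALE_SHADER: &str = r#"
@group(0) @binding(0) var input_tex: texture_2d<f32>;
@group(0) @binding(1) var output_tex: texture_storage_2d<r32float, write>;

@compute @workgroup_size(8, 8)
fn grayscale(@builtin(global_invocation_id) id: vec3<u32>) {
    let dims = textureDimensions(input_tex);
    if (id.x >= dims.x || id.y >= dims.y) { return; }
    let rgb = textureLoad(input_tex, vec2<i32>(id.xy), 0).rgb;
    // BT.709 luma weights; with a YUV source the Y channel can be used directly.
    let y = dot(rgb, vec3<f32>(0.2126, 0.7152, 0.0722));
    textureStore(output_tex, vec2<i32>(id.xy), vec4<f32>(y, 0.0, 0.0, 1.0));
}
"#;
```

At pipeline creation this string would go through `wgpu::Device::create_shader_module`.
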
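For the bitwise corner detector above, the usual trick is to pack the 16 circle comparisons into a bitmask and detect a contiguous arc with rotate-and-AND steps; a sketch in Rust for clarity (in tiny_slam this logic would live in a compute shader):

```rust
/// FAST segment test: given a 16-bit mask where bit i is set when circle
/// pixel i is brighter (or darker) than the center by a threshold, decide
/// whether >= 9 contiguous bits are set (circularly).
fn has_arc_of_9(mut m: u16) -> bool {
    // After each step, bit i is set iff a run of the given length starts at i.
    m &= m.rotate_right(1); // runs of length 2
    m &= m.rotate_right(2); // runs of length 4
    m &= m.rotate_right(4); // runs of length 8
    m &= m.rotate_right(1); // runs of length 9
    m != 0
}

fn main() {
    // Bits 3..=11 set: a 9-long arc, so this is a corner candidate.
    assert!(has_arc_of_9(0b0000_1111_1111_1000));
    // Only 8 contiguous bits: rejected.
    assert!(!has_arc_of_9(0b0000_0111_1111_1000));
}
```

The 4-corner shortcut pairs naturally with this: a cheap pre-test on the four compass pixels of the circle rejects most candidates before the full mask is built.
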
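For the linear-sampling optimization above: two adjacent taps with weights w1 and w2 can be merged into one bilinear fetch at offset (o1·w1 + o2·w2)/(w1 + w2) with weight w1 + w2, roughly halving texture reads per pass. A sketch of the tap merging (the kernel values in `main` are illustrative):

```rust
/// Merge pairs of one-sided discrete Gaussian taps into bilinear taps.
/// Returns (texel offset, weight) pairs: the center tap plus merged pairs.
fn linear_taps(weights: &[f32]) -> Vec<(f32, f32)> {
    let mut taps = vec![(0.0, weights[0])];
    let mut i = 1;
    while i < weights.len() {
        let w1 = weights[i];
        let w2 = weights.get(i + 1).copied().unwrap_or(0.0);
        let w = w1 + w2;
        // Sampling between texels i and i+1 at this offset lets the
        // hardware's linear filter blend the two reads into one.
        taps.push(((i as f32 * w1 + (i + 1) as f32 * w2) / w, w));
        i += 2;
    }
    taps
}

fn main() {
    // One side of a normalized 9-tap Gaussian kernel (center first).
    let side = [0.2270, 0.1946, 0.1216, 0.0541, 0.0162];
    for (offset, weight) in linear_taps(&side) {
        println!("offset {offset:.4}, weight {weight:.4}");
    }
}
```
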
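For the CPU read-back above: GPU-local textures cannot be mapped directly, so the usual wgpu pattern is to copy into a `MAP_READ` staging buffer with 256-byte-aligned rows and then map it. A sketch assuming wgpu 0.19-era names (`ImageCopyTexture` and friends are renamed in later releases):

```rust
fn read_texture(
    device: &wgpu::Device,
    queue: &wgpu::Queue,
    texture: &wgpu::Texture,
    width: u32,
    height: u32,
) -> Vec<u8> {
    // Rows in buffer copies must be aligned to COPY_BYTES_PER_ROW_ALIGNMENT (256).
    let bytes_per_pixel = 4; // assumes a 4-byte format such as rgba8 or r32float
    let padded_row = (width * bytes_per_pixel + 255) & !255;
    let staging = device.create_buffer(&wgpu::BufferDescriptor {
        label: Some("staging"),
        size: (padded_row * height) as u64,
        usage: wgpu::BufferUsages::COPY_DST | wgpu::BufferUsages::MAP_READ,
        mapped_at_creation: false,
    });

    let mut encoder = device.create_command_encoder(&Default::default());
    encoder.copy_texture_to_buffer(
        texture.as_image_copy(),
        wgpu::ImageCopyBuffer {
            buffer: &staging,
            layout: wgpu::ImageDataLayout {
                offset: 0,
                bytes_per_row: Some(padded_row),
                rows_per_image: Some(height),
            },
        },
        wgpu::Extent3d { width, height, depth_or_array_layers: 1 },
    );
    queue.submit([encoder.finish()]);

    // Map asynchronously, then block until the GPU finishes the copy.
    let slice = staging.slice(..);
    slice.map_async(wgpu::MapMode::Read, |r| r.expect("map failed"));
    device.poll(wgpu::Maintain::Wait);
    // Note: rows still carry the alignment padding; strip it when consuming.
    let data = slice.get_mapped_range().to_vec();
    staging.unmap();
    data
}
```
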
- Local mapping
- Keyframe selection (heuristic sketched below)
- Insertion into current Map
- Cull unnecessary map points
- Local bundle adjustment
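
As a sketch of keyframe selection, loosely modeled on ORB-SLAM-style heuristics (an assumption about approach, not tiny_slam's actual policy): insert a keyframe when the current frame tracks noticeably fewer points than its reference keyframe, or when too many frames have passed since the last insertion.

```rust
/// Hypothetical tracking statistics; names and thresholds are illustrative.
struct TrackingStats {
    frames_since_keyframe: u32,
    tracked_points: u32,      // matches tracked in the current frame
    ref_keyframe_points: u32, // points visible in the reference keyframe
}

fn should_insert_keyframe(s: &TrackingStats) -> bool {
    let ratio = s.tracked_points as f32 / s.ref_keyframe_points.max(1) as f32;
    // Refresh the map periodically even when tracking is stable...
    s.frames_since_keyframe >= 20
        // ...or as soon as the view has drifted from the reference keyframe,
        // while tracking is still healthy enough to triangulate new points.
        || (ratio < 0.9 && s.tracked_points >= 15)
}

fn main() {
    let s = TrackingStats {
        frames_since_keyframe: 5,
        tracked_points: 80,
        ref_keyframe_points: 100,
    };
    println!("insert keyframe: {}", should_insert_keyframe(&s));
}
```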