Commit 7cf04f8 (1 parent: 6da49a5)
Author: Yue Pan

    [UPDATE] new GUI, robust reboot function

19 files changed, 1910 insertions(+), 421 deletions(-)

.gitignore

Lines changed: 4 additions & 0 deletions
@@ -167,6 +167,10 @@ experiments/
 
 *.json
 
+
+.viewpoints/
+screenshots/
+
 bak
 TODO.md
 NOTE.md

README.md

Lines changed: 30 additions & 50 deletions
@@ -66,9 +66,6 @@
 <li>
 <a href="#docker">Docker</a>
 </li>
-<li>
-<a href="#visualizer-instructions">Visualizer instructions</a>
-</li>
 <li>
 <a href="#citation">Citation</a>
 </li>
@@ -125,14 +122,14 @@ PIN-SLAM can run at the sensor frame rate on a moderate GPU.
 ### 1. Set up conda environment
 
 ```
-conda create --name pin python=3.8
+conda create --name pin python=3.10
 conda activate pin
 ```
 
 ### 2. Install the key requirement PyTorch
 
 ```
-conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda=11.7 -c pytorch -c nvidia
+conda install pytorch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 pytorch-cuda=11.8 -c pytorch -c nvidia
 ```
 
 The commands depend on your CUDA version (check it by `nvcc --version`). You may check the instructions [here](https://pytorch.org/get-started/previous-versions/).
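
A quick way to confirm the install went through (a sanity-check sketch, not part of this commit; the printed versions depend on what you installed above):

```python
# Verify the PyTorch install and CUDA availability after step 2.
import torch

print(torch.__version__)          # e.g. 2.5.1
print(torch.version.cuda)         # CUDA version of the build, e.g. 11.8
print(torch.cuda.is_available())  # True if the GPU and driver are usable
```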
@@ -171,7 +168,7 @@ python3 pin_slam.py ./config/lidar_slam/run_demo.yaml -vsm
 <details>
 <summary>[Details (click to expand)]</summary>
 
-You can visualize the SLAM process in PIN-SLAM visualizer and check the results in the `./experiments` folder.
+You can visualize the SLAM process in the PIN-SLAM viewer GUI and check the results in the `./experiments` folder.
 
 Use `run_demo_sem.yaml` if you want to conduct metric-semantic SLAM using semantic segmentation labels:
 ```
@@ -203,7 +200,7 @@ Follow the instructions on how to run PIN-SLAM by typing:
 python3 pin_slam.py -h
 ```
 
-For an arbitrary data sequence, you can run with the default config file by:
+For an arbitrary data sequence with point clouds in `*.ply`, `*.pcd`, `*.las` or `*.bin` format, you can run with the default config file by:
 ```
 python3 pin_slam.py -i /path/to/your/point/cloud/folder -vsm
 ```
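
As a side note (not part of this commit), you can sanity-check one of your input frames with Open3D, assuming it is available in your environment; the file name below is a placeholder:

```python
# Minimal sketch: inspect one input frame before running PIN-SLAM.
# Open3D reads *.ply and *.pcd directly; *.bin (KITTI style) needs numpy instead.
import open3d as o3d

pcd = o3d.io.read_point_cloud("/path/to/your/point/cloud/folder/000000.ply")
print(pcd)                                  # e.g. "PointCloud with 131072 points."
print(pcd.get_axis_aligned_bounding_box())  # rough spatial extent of the scan
```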
@@ -216,7 +213,7 @@ To run PIN-SLAM with a specific config file, you can run:
 python3 pin_slam.py path_to_your_config_file.yaml -vsm
 ```
 
-The flags `-v`, `-s`, `-m` toggle the visualizer, map saving and mesh saving, respectively.
+The flags `-v`, `-s`, `-m` toggle the viewer GUI, map saving and mesh saving, respectively.
 
 To specify the path to the input point cloud folder, you can either set `pc_path` in the config file or set `-i INPUT_PATH` upon running.

@@ -237,6 +234,8 @@ python3 pin_slam.py ./config/lidar_slam/run_ncd.yaml ncd 01 -vsm
 python3 pin_slam.py ./config/rgbd_slam/run_replica.yaml replica room0 -vsm
 ```
 
+**Use specific data loaders with the -d flag**
+
 We also support loading data from rosbag, mcap or pcap (ros2) using specific data loaders (originally from [KISS-ICP](https://github.com/PRBonn/kiss-icp)). You need to set the flag `-d` to use such data loaders. For example:
 ```
 # Run on a rosbag or a folder of rosbags with certain point cloud topic
@@ -265,11 +264,26 @@ For example, you can run on [KITTI-MOT dataset](https://www.cvlibs.net/datasets/
 python pin_slam.py ./config/lidar_slam/run_kitti_mos.yaml kitti_mot 00 -i data/kitti_mot -vsmd --deskew
 ```
 
+Other examples:
+```
+# MulRan sequence DCC01
+python3 pin_slam.py ./config/lidar_slam/run_mulran.yaml mulran -i data/MulRan/dcc/DCC01 -vsmd
+
+# KITTI 360 sequence 00
+python3 pin_slam.py ./config/lidar_slam/run_kitti_color.yaml kitti360 00 -i data/kitti360 -vsmd --deskew
+
+# M2DGR sequence street_01
+python3 pin_slam.py ./config/lidar_slam/run.yaml rosbag -i data/m2dgr/street_01.bag -vsmd
+
+# Newer College 128 sequence stairs
+python3 pin_slam.py ./config/lidar_slam/run_ncd_128_s.yaml rosbag -i data/ncd128/stairs/ -vsmd
+```
+
 The SLAM results and logs will be output in the `output_root` folder set in the config file or specified by the `-o OUTPUT_PATH` flag.
 
 For evaluation, you may check [here](https://github.com/PRBonn/PIN_SLAM/blob/main/eval/README.md) for the results that can be obtained with this repository on a couple of popular datasets.
 
-The training logs can be monitored via Weights & Bias online if you set the flag `-w`. If it's your first time using [Weights & Bias](https://wandb.ai/site), you will be requested to register and log in to your wandb account. You can also set the flag `-l` to turn on the log printing in the terminal and set the flag `-r` to turn on the visualization logging by [rerun](https://github.com/rerun-io/rerun). If you want to get the dense merged point cloud map using the estimated poses of PIN-SLAM, you can set the flag `-p`.
+The training logs can be monitored online via Weights & Biases if you set the flag `-w`. If it's your first time using [Weights & Biases](https://wandb.ai/site), you will be asked to register and log in to your wandb account. You can also set the flag `-l` to turn on log printing in the terminal. If you want to get the dense merged point cloud map using the estimated poses of PIN-SLAM, you can set the flag `-p`.
 
 </details>

@@ -321,19 +335,20 @@ We will add support for ROS2 in the near future.
 After the SLAM process, you can reconstruct mesh from the PIN map within an arbitrary bounding box with an arbitrary resolution by running:
 
 ```
-python3 vis_pin_map.py path/to/your/result/folder [marching_cubes_resolution_m] [(cropped)_map_file.ply] [output_mesh_file.ply] [mesh_min_nn]
+python3 vis_pin_map.py path/to/your/result/folder -m [marching_cubes_resolution_m] -c [(cropped)_map_file.ply] -o [output_mesh_file.ply] -n [mesh_min_nn]
 ```
 
 <details>
 <summary>[Details (click to expand)]</summary>
 
-The bounding box of `(cropped)_map_file.ply` will be used as the bounding box for mesh reconstruction. This file should be stored in the `map` subfolder of the result folder. You may directly use the original `neural_points.ply` or crop the neural points in software such as CloudCompare. The argument `mesh_min_nn` controls the trade-off between completeness and accuracy. The smaller number (for example `6`) will lead to a more complete mesh with more guessed artifacts. The larger number (for example `15`) will lead to a less complete but more accurate mesh. The reconstructed mesh would be saved as `output_mesh_file.ply` in the `mesh` subfolder of the result folder.
+Use `python3 vis_pin_map.py -h` to check the help message. The bounding box of `(cropped)_map_file.ply` will be used as the bounding box for mesh reconstruction. This file should be stored in the `map` subfolder of the result folder. You may directly use the original `neural_points.ply` or crop the neural points in software such as CloudCompare. The argument `mesh_min_nn` controls the trade-off between completeness and accuracy: a smaller value (for example `6`) leads to a more complete mesh with more guessed artifacts, while a larger value (for example `15`) leads to a less complete but more accurate mesh. The reconstructed mesh will be saved as `output_mesh_file.ply` in the `mesh` subfolder of the result folder.
 
 For example, for the case of the sanity test described above, run:
 
 ```
-python3 vis_pin_map.py ./experiments/sanity_test_* 0.2 neural_points.ply mesh_20cm.ply 8
+python3 vis_pin_map.py ./experiments/sanity_test_* -m 0.2 -c neural_points.ply -o mesh_20cm.ply -n 8
 ```
+
 </details>
 
 ## Docker
@@ -354,44 +369,6 @@ sudo chmod +x ./start_docker.sh
 ./start_docker.sh
 ```
 
-## Visualizer Instructions
-
-We provide a PIN-SLAM visualizer based on [lidar-visualizer](https://github.com/PRBonn/lidar-visualizer) to monitor the SLAM process. You can use `-v` flag to turn on it.
-
-<details>
-<summary>[Keyboard callbacks (click to expand)]</summary>
-
-| Button | Function |
-|:------:|:--------:|
-| Space | pause/resume |
-| ESC/Q | exit |
-| G | switch between the global/local map visualization |
-| E | switch between the ego/map viewpoint |
-| F | toggle on/off the current point cloud visualization |
-| M | toggle on/off the mesh visualization |
-| A | toggle on/off the current frame axis & sensor model visualization |
-| P | toggle on/off the neural points map visualization |
-| D | toggle on/off the training data pool visualization |
-| I | toggle on/off the SDF horizontal slice visualization |
-| T | toggle on/off PIN SLAM trajectory visualization |
-| Y | toggle on/off the ground truth trajectory visualization |
-| U | toggle on/off PIN odometry trajectory visualization |
-| R | re-center the view point |
-| Z | 3D screenshot, save the currently visualized entities in the log folder |
-| B | toggle on/off back face rendering |
-| W | toggle on/off mesh wireframe |
-| Ctrl+9 | Set mesh color as normal direction |
-| 5 | switch between point cloud for mapping and for registration (with point-wise weight) |
-| 7 | switch between black and white background |
-| / | switch among different neural point color mode, 0: geometric feature, 1: color feature, 2: timestamp, 3: stability, 4: random |
-| < | decrease mesh nearest neighbor threshold (more complete and more artifacts) |
-| > | increase mesh nearest neighbor threshold (less complete but more accurate) |
-| \[/\] | decrease/increase mesh marching cubes voxel size |
-| ↑/↓ | move up/down the horizontal SDF slice |
-| +/- | increase/decrease point size |
-
-</details>
-
 ## Citation
 
 If you use PIN-SLAM for any academic work, please cite our original [paper](https://ieeexplore.ieee.org/document/10582536).
@@ -424,3 +401,6 @@ If you have any questions, please contact:
 [KISS-ICP (RAL 23)](https://github.com/PRBonn/kiss-icp): A LiDAR odometry pipeline that just works
 
 [4DNDF (CVPR 24)](https://github.com/PRBonn/4dNDF): 3D LiDAR Mapping in Dynamic Environments using a 4D Implicit Neural Representation
+
+[PINGS](https://github.com/PRBonn/PINGS): Gaussian Splatting Meets Distance Fields within a Point-Based Implicit Neural Map
+

config/lidar_slam/run_mulran.yaml

Lines changed: 2 additions & 0 deletions
@@ -34,6 +34,8 @@ pgo:
   map_context: True
   pgo_freq_frame: 30
   context_cosdist: 0.25
+  virtual_side_count: 10 # added
+  local_loop_dist_thre: 20.0 # added
 optimizer: # mapper
   iters: 15 # iterations per frame
   batch_size: 16384

dataset/dataset_indexing.py

Lines changed: 4 additions & 2 deletions
@@ -14,13 +14,15 @@ def set_dataset_path(config: Config, dataset_name: str = '', seq: str = ''):
 
     config.name = config.name + '_' + dataset_name + '_' + seq.replace("/", "")
 
+    # recommended
     if config.use_dataloader:
         config.data_loader_name = dataset_name
         config.data_loader_seq = seq
         print('Using data loader for specific dataset or specific input data format')
         from dataset.dataloaders import available_dataloaders
         print('Available dataloaders:', available_dataloaders())
-
+
+    # not recommended
     else:
         if dataset_name == "kitti":
             base_path = config.pc_path.rsplit('/', 3)[0]

@@ -35,7 +37,7 @@ def set_dataset_path(config: Config, dataset_name: str = '', seq: str = ''):
             config.name = config.name + "_mulran_" + seq
             base_path = config.pc_path.rsplit('/', 2)[0]
             config.pc_path = os.path.join(base_path, seq, "Ouster") # input point cloud folder
-            config.pose_path = os.path.join(base_path, seq, "poses.txt") # input pose file
+            config.pose_path = os.path.join(base_path, seq, "poses.txt") # input pose file (you need to convert global_pose.csv to poses.txt in KITTI format yourself; otherwise, use the MulRan data loader with the -d flag)
         elif dataset_name == "kitti_carla":
             config.name = config.name + "_kitti_carla_" + seq
             base_path = config.pc_path.rsplit('/', 3)[0]
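
The new comment above mentions a manual conversion. A minimal sketch of such a conversion (a hypothetical helper, not part of this commit), assuming each row of MulRan's `global_pose.csv` is a timestamp followed by the 12 row-major entries of a 3x4 pose matrix; verify the column layout and frame conventions against your copy of the dataset:

```python
# Hypothetical converter: MulRan global_pose.csv -> KITTI-format poses.txt
# (12 row-major values of the 3x4 pose matrix per line). It does not
# re-align sensor frames or normalize to the first pose, which the
# evaluation may additionally require.
import csv

def mulran_csv_to_kitti_poses(csv_path: str, out_path: str) -> None:
    with open(csv_path, newline="") as f_in, open(out_path, "w") as f_out:
        for row in csv.reader(f_in):
            pose = [float(v) for v in row[1:13]]  # drop the timestamp column
            f_out.write(" ".join("{:.6f}".format(v) for v in pose) + "\n")

mulran_csv_to_kitti_poses("global_pose.csv", "poses.txt")
```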

dataset/slam_dataset.py

Lines changed: 1 addition & 1 deletion
@@ -545,7 +545,7 @@ def update_odom_pose(self, cur_pose_torch: torch.tensor):
             accu_travel_dist = self.travel_dist[cur_frame_id-1] + cur_frame_travel_dist
             self.travel_dist[cur_frame_id] = accu_travel_dist
             if not self.silence:
-                print("Accumulated travel distance (m): %f" % accu_travel_dist)
+                print("Accumulated travel distance (m): {:.3f}".format(accu_travel_dist))
 
         self.last_pose_ref = self.cur_pose_ref # update for the next frame
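
For context on this one-line change (an illustration, not part of the commit): `%f` always pads to six decimal places, while `{:.3f}` rounds to three, which keeps the per-frame log line short:

```python
# Old vs. new formatting of the same value.
d = 1234.56789
print("Accumulated travel distance (m): %f" % d)            # 1234.567890
print("Accumulated travel distance (m): {:.3f}".format(d))  # 1234.568
```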
