This repository was archived by the owner on Dec 18, 2024. It is now read-only.
-The code was tested with Python 3.7, PyTorch 1.8.0, OpenCV 4.5.1, timm 0.4.5
+The code was tested with Python 3.7, PyTorch 1.8.0, OpenCV 4.5.1, and timm 0.4.5
### Usage
@@ -51,7 +51,7 @@ Segmentation:
 3) The results are written to the folders `output_monodepth` and `output_segmentation`, respectively.

-You can use the flag `-t` to switch between different models. Possible options are `dpt_hybrid` (default) and `dpt_large`.
+Use the flag `-t` to switch between different models. Possible options are `dpt_hybrid` (default) and `dpt_large`.
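For reference, selecting a model with the `-t` flag looks like the following sketch. The script names `run_monodepth.py` and `run_segmentation.py` are assumptions inferred from the output folder names in this diff, not confirmed by it:

```shell
# Assumed entry-point names; adjust to the repo's actual scripts.
python run_monodepth.py -t dpt_large       # depth estimation with the large model
python run_segmentation.py -t dpt_hybrid   # segmentation with the default hybrid model
```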
### Citation
@@ -61,14 +61,14 @@ Please cite our paper if you use this code or any of the models:
 @article{Ranftl2021,
 	author = {Ren\'{e} Ranftl and Alexey Bochkovskiy and Vladlen Koltun},
 	title = {Vision Transformers for Dense Prediction},
-	journal = {ArXiV Preprint},
+	journal = {ArXiv preprint},
 	year = {2021},
 }
```
### Acknowledgements
-Our work extensively builds on [timm](https://github.com/rwightman/pytorch-image-models) and [PyTorch-Encoding](https://github.com/zhanghang1989/PyTorch-Encoding).
+Our work builds on [timm](https://github.com/rwightman/pytorch-image-models) and [PyTorch-Encoding](https://github.com/zhanghang1989/PyTorch-Encoding). We'd like to thank the authors for making these libraries available.