This repository was archived by the owner on Dec 18, 2024. It is now read-only.

Commit d0a1704

Update README
1 parent 4c42dc1 commit d0a1704

File tree

1 file changed: +8 −8 lines changed


README.md

Lines changed: 8 additions & 8 deletions
```diff
@@ -15,13 +15,13 @@ This repository contains code and models for our [paper](TODO):
 
 
 Monodepth:
-- [dpt_hybrid-midas-501f0c75.pt](TODO)
-- [dpt_large-midas-2f21e586.pt](TODO)
+- [dpt_hybrid-midas-501f0c75.pt](TODO), [Mirror](TODO)
+- [dpt_large-midas-2f21e586.pt](TODO), [Mirror](TODO)
 
 
 Segmentation:
-- [dpt_hybrid-ade20k-53898607.pt](TODO)
-- [dpt_large-ade20k-XXXXXXXX.pt](TODO)
+- [dpt_hybrid-ade20k-53898607.pt](TODO), [Mirror](TODO)
+- [dpt_large-ade20k-b12dca68.pt](TODO), [Mirror](TODO)
 
 2) Set up dependencies:
 
@@ -30,7 +30,7 @@ Segmentation:
 pip install timm
 ```
 
-The code was tested with Python 3.7, PyTorch 1.8.0, OpenCV 4.5.1, timm 0.4.5
+The code was tested with Python 3.7, PyTorch 1.8.0, OpenCV 4.5.1, and timm 0.4.5
 
 
 ### Usage
@@ -51,7 +51,7 @@ Segmentation:
 
 3) The results are written to the folder `output_monodepth` and `output_segmentation`, respectively.
 
-You can use the flag `-t` to switch between different models. Possible options are `dpt_hybrid` (default) and `dpt_large`.
+Use the flag `-t` to switch between different models. Possible options are `dpt_hybrid` (default) and `dpt_large`.
 
 
 ### Citation
@@ -61,14 +61,14 @@ Please cite our paper if you use this code or any of the models:
 @article{Ranftl2021,
   author  = {Ren\'{e} Ranftl and Alexey Bochkovskiy and Vladlen Koltun},
   title   = {Vision Transformers for Dense Prediction},
-  journal = {ArXiV Preprint},
+  journal = {ArXiv preprint},
   year    = {2021},
 }
 ```
 
 ### Acknowledgements
 
-Our work extensively builds on [timm](https://github.com/rwightman/pytorch-image-models) and [PyTorch-Encoding](https://github.com/zhanghang1989/PyTorch-Encoding).
+Our work builds on [timm](https://github.com/rwightman/pytorch-image-models) and [PyTorch-Encoding](https://github.com/zhanghang1989/PyTorch-Encoding). We'd like to thank the authors for making these libraries available.
 
 ### License
 
```
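The `-t` model-selection behavior described in the changed usage line can be sketched as a small argparse parser. This is an illustrative sketch only: the parser setup and variable names are assumptions, not code from the repository; the checkpoint filenames are the ones listed in the README diff above (their download URLs are still TODO there).

```python
import argparse

# Hypothetical sketch of the `-t` flag described in the README:
# it selects the model type, defaulting to dpt_hybrid.
parser = argparse.ArgumentParser(description="DPT inference (illustrative sketch)")
parser.add_argument(
    "-t", "--model_type",
    default="dpt_hybrid",
    choices=["dpt_hybrid", "dpt_large"],
    help="model to run (default: dpt_hybrid)",
)

# Monodepth checkpoint filenames taken from the README diff above.
WEIGHTS = {
    "dpt_hybrid": "dpt_hybrid-midas-501f0c75.pt",
    "dpt_large": "dpt_large-midas-2f21e586.pt",
}

args = parser.parse_args(["-t", "dpt_large"])
print(WEIGHTS[args.model_type])  # checkpoint the selected model would load
```

Omitting `-t` falls back to the `dpt_hybrid` default, matching the README's stated behavior.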