Thanks for mmlab!
I noticed that deit-tiny in mmpretrain reaches 74.5%, while the official DeiT implementation reports 72.2%. I would like to know why deit-tiny in mmpretrain performs better. Thanks a lot!

Replies: 1 comment
Small models with a limited number of parameters generally do not need very strong data augmentation, so compared with the official implementation we weakened the data augmentation during training.
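
For context, here is a minimal sketch of what weakening the augmentation can look like in practice. The transforms and magnitudes below are illustrative assumptions using torchvision, not the actual mmpretrain or official DeiT configs:

```python
import torch
from torchvision import transforms

# Strong recipe in the spirit of the official DeiT training pipeline
# (aggressive RandAugment plus random erasing).
strong_aug = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.RandAugment(num_ops=2, magnitude=9),  # aggressive policy
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
    transforms.RandomErasing(p=0.25),  # operates on tensors, so placed last
])

# Weaker variant of the same pipeline: lower RandAugment magnitude and no
# random erasing, which can better suit a low-capacity model like deit-tiny.
weak_aug = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.RandAugment(num_ops=2, magnitude=5),  # milder policy
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```

The intuition is that a very strong policy destroys more of the input signal than a tiny model has the capacity to compensate for, so a milder pipeline can end up training better.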