Open
Description
I am running Protenix on a local Linux machine in a conda environment. My machine has 2 x Nvidia A6000 GPUs (CUDA visible devices 0 and 1). If I wanted to select GPU1 for Protenix predictions, would I just put "CUDA_VISIBLE_DEVICES=1" on the command line in front of "protenix predict ..."? In addition, is it possible to distribute the calculations over both GPUs? If so, how? Thank you.
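That is, something along the lines of the sketch below (the predict arguments here are just placeholders, not the exact Protenix flags):

```bash
# Restrict this process to the second physical GPU (index 1);
# inside the process that device then shows up as cuda:0.
# The predict arguments are illustrative placeholders only.
CUDA_VISIBLE_DEVICES=1 protenix predict --input ./example.json --out_dir ./output
```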
Activity
zhangyuxuann commented on Mar 27, 2025
For `protenix predict ...` we do not support distributing the calculations over GPUs directly. But for `inference_demo.sh` we do support such a feature via torchrun.
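A minimal sketch of such a torchrun launch on a single two-GPU node is shown below; the script path `runner/inference.py` and its arguments are assumptions about the repo layout, not the literal contents of `inference_demo.sh`:

```bash
# Sketch: launch one inference process per GPU on a single node.
# Script path and arguments are assumptions; adapt them to your setup.
export CUDA_VISIBLE_DEVICES=0,1
torchrun --nproc_per_node=2 --nnodes=1 --master_port=29500 \
    runner/inference.py <your usual inference arguments>
```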
rjrich commented on Mar 27, 2025
Thank you for your quick and informative reply. Could you please explain which torchrun options specify the use of more than one GPU, and give an example of the corresponding command? Thank you.
zhangyuxuann commented on Apr 1, 2025
@rjrich you can refer to https://pytorch.org/docs/stable/elastic/run.html#environment-variables and set those DDP environment variables; the script above is just a demo command.
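For a single-node, two-GPU run, the relevant variables from that page are sketched below; note that torchrun normally exports these to each worker for you, so in practice you usually only pass `--nproc_per_node` rather than setting them by hand:

```bash
# DDP environment variables (names from the linked PyTorch elastic docs),
# shown only to illustrate a 2-GPU, single-node run. torchrun sets these
# per worker automatically; this is not something you normally export yourself.
MASTER_ADDR=127.0.0.1   # host running the rank-0 process
MASTER_PORT=29500       # free port on the rank-0 host
WORLD_SIZE=2            # total number of processes (one per GPU)
RANK=0                  # global rank of this process: 0 or 1
LOCAL_RANK=0            # GPU index this process should use on the node
```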