Description
Hi,
We've been trying to install `whisperx==3.3.4` for a project that uses Python 3.12 (not sure whether the Python version is relevant), but we keep running into errors about the missing `libcudnn_ops_infer.so.8` file, even after pointing `LD_LIBRARY_PATH` at the correct directories under our `.venv`, as discussed in several other issues.
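For reference, this is roughly how we've been checking what actually ends up inside the virtualenv (a quick sketch, not our exact script; it just assumes the usual `nvidia-*` wheel layout under `site-packages`):

```python
# List whatever cuDNN shared objects pip/uv installed into the active
# virtualenv. Assumes the nvidia wheels' usual layout under
# site-packages/nvidia/cudnn/lib; adjust if your layout differs.
import pathlib
import sys

venv = pathlib.Path(sys.prefix)  # the active .venv
for so in sorted(venv.glob("**/libcudnn*")):
    print(so.relative_to(venv))
```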
We're aiming for a self-contained installation and prefer not to rely on prebuilt CUDA images with bundled libraries. Our base image is `ubuntu:25.04`.
`torch>=2.5.1` (a WhisperX requirement) requires cuDNN9. However, another WhisperX dependency, `ctranslate2==4.4.0`, is only compatible with cuDNN8 (as evidenced by its attempt to load `libcudnn_ops_infer.so.8`).
It turns out cuDNN9 doesn't include `libcudnn_ops_infer.so` at all; it's nowhere to be found under `.venv`.
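A quick way to see the mismatch from Python (a minimal sketch; we're assuming `libcudnn_ops.so.9` is the consolidated cuDNN9 name that replaced the old `_infer`/`_train` split, and that the relevant lib directories are on `LD_LIBRARY_PATH`):

```python
# Try to dlopen the cuDNN8 name that ctranslate2 4.4.0 expects, then the
# cuDNN9 name (assumed) that the nvidia-cudnn-cu12 wheel ships. With only
# cuDNN9 installed, the first load fails and the second succeeds.
import ctypes

for name in ("libcudnn_ops_infer.so.8", "libcudnn_ops.so.9"):
    try:
        ctypes.CDLL(name)
        print(f"{name}: loaded")
    except OSError as exc:
        print(f"{name}: {exc}")
```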
I saw that support for cuDNN9 was added in `ctranslate2==4.5.0`:
https://github.com/OpenNMT/CTranslate2/releases/tag/v4.5.0
Would it make sense to bump the minimum required version of ctranslate2 to 4.5.0 in WhisperX's `pyproject.toml` to avoid this conflict? We just gave it a try and everything worked fine, but we'd like a second opinion.
Apologies if I'm oversimplifying. I'm still getting up to speed on how all of this fits together 😅
We use `uv` for the build, and we've pinned `torch` to 2.5.1 and `ctranslate2` to 4.5.0.
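For completeness, a sanity check along these lines shows which versions actually get resolved and which cuDNN build torch loads at runtime (a small sketch):

```python
# Confirm the pinned versions resolved as expected and check which cuDNN
# build torch actually loads (an integer like 90100 for cuDNN 9.x).
import ctranslate2
import torch

print("torch:", torch.__version__)
print("ctranslate2:", ctranslate2.__version__)
print("cuDNN (torch):", torch.backends.cudnn.version())
```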
Before switching to the plain `ubuntu:25.04` base image, we were using the CUDA image `nvidia/cuda:12.1.1-cudnn8-runtime-ubuntu22.04`, where everything seemed to work without any pinning or dependency overrides. However, we later discovered that the base image shipped cuDNN8 while PyTorch was pulling in cuDNN9, so the different components were each finding the version they wanted and things happened to work. It's unclear what the implications are of an application loading two different versions of the same shared library.
BTW, forcing our project to pull only cuDNN8 (and not install cuDNN9) caused issues in torch.