I am running on an NVIDIA RTX 4070 Ti GPU, using the docker image provided in the repo; the error message is as follows:
ckpt_dir: /home/neal/debug/YCBV_weights/bleach_cleanser/model_best_val.pth.tar
dataset_info_path /home/neal/debug/YCBV_data/bleach_cleanser/train_data_blender_DR/../dataset_info.yml
test_data_path is : /home/neal/debug/YCBV_data/data_organized/0051
args.ycb_dir is : /home/neal/debug/YCBV_data
self.object_cloud loaded and downsampled
self.object_width= 285.37860394994397
Loading ckpt from /home/neal/debug/YCBV_weights/bleach_cleanser/model_best_val.pth.tar
/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py:104: UserWarning:
NVIDIA GeForce RTX 4070 Ti with CUDA capability sm_89 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the NVIDIA GeForce RTX 4070 Ti GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
pose track ckpt epoch=112
net init done
Using vispy renderer
model_path: /home/neal/debug/YCB_models_with_ply/CADmodels/021_bleach_cleanser/textured.ply
self.cam_K:
[[1.066778e+03 0.000000e+00 3.129869e+02]
[0.000000e+00 1.067487e+03 2.413109e+02]
[0.000000e+00 0.000000e+00 1.000000e+00]]
making dataset... for eval
#dataset: 0
self.trans_normalizer=0.03, self.rot_normalizer=0.08726646259971647
start_frame is: 0
gt_poses[0]=
[[ 0.86345637 -0.50391231 -0.02271197 -0.04536975]
[-0.23796631 -0.36722933 -0.89917466 -0.06449794]
[ 0.44476481 0.78180285 -0.43700019 1.03577502]
[ 0. 0. 0. 1. ]]
Traceback (most recent call last):
File "predict.py", line 679, in <module>
predictSequenceYcb()
File "predict.py", line 561, in predictSequenceYcb
cur_pose = tracker.on_track(A_in_cam, rgb, depth, gt_A_in_cam=gt_poses[i-1],gt_B_in_cam=gt_poses[i], debug=debug,samples=samples)
File "predict.py", line 271, in on_track
prediction = self.model(dataA,dataB)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/se3_tracknet/se3_tracknet.py", line 84, in forward
a = self.convA1(A)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py", line 119, in forward
input = module(input)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 399, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 396, in _conv_forward
self.padding, self.dilation, self.groups)
RuntimeError: CUDA error: no kernel image is available for execution on the device
It seems the only solution here is to update the CUDA version used in the docker image (CUDA 10.1) to a higher version that is compatible with the hardware?
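For reference, a quick way to confirm the mismatch from inside the container is to compare the GPU's compute capability with the CUDA architectures the installed PyTorch build supports. This is a minimal diagnostic sketch (not part of this repo), assuming the torch version in the image exposes these helpers:

```python
# Minimal diagnostic sketch (not part of this repo): compare the GPU's compute
# capability against the CUDA architectures the installed PyTorch was built for.
import torch

major, minor = torch.cuda.get_device_capability(0)   # e.g. (8, 9) on an RTX 4070 Ti
device_arch = f"sm_{major}{minor}"

# Architectures this PyTorch build ships kernels for,
# e.g. ['sm_37', 'sm_50', 'sm_60', 'sm_70'] in the CUDA 10.1 image.
supported = torch.cuda.get_arch_list()

print(f"GPU: {torch.cuda.get_device_name(0)} ({device_arch})")
print(f"PyTorch built for: {supported}")

# Rough check only: kernels built for a lower minor version of the same major
# capability (e.g. sm_86 kernels on an sm_89 GPU) would still run, but none of
# the sm_3x-sm_7x kernels above can, hence "no kernel image is available".
if not any(arch.startswith(f"sm_{major}") for arch in supported):
    print("No kernels for this GPU generation -> expect the CUDA error above.")
```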
Thanks in advance for your help :)
wenbowen123 commented on Jan 26, 2024
Hi, we provided the dockerfile; you can change the CUDA version to one that's compatible with your GPU and then use that to build.
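For reference, the kind of change involved would look roughly like the sketch below. It is untested: the base image tag, PyTorch version, and index URL are illustrative assumptions, and the remaining dependencies from the repo's original Dockerfile (and any code changes needed for a newer torch) would still have to be ported.

```dockerfile
# Untested sketch only: swap the CUDA 10.1 base image for a newer one whose
# PyTorch wheels ship kernels for sm_86/sm_89 GPUs (RTX 30xx/40xx).
# The image tag and package versions are illustrative, not verified against this repo.
FROM nvidia/cuda:11.8.0-cudnn8-devel-ubuntu20.04

RUN apt-get update && apt-get install -y python3 python3-pip && rm -rf /var/lib/apt/lists/*

# Install a PyTorch build compiled for CUDA 11.8 (its sm_86 kernels also run on
# sm_89 devices such as the RTX 4070 Ti / 4090).
RUN pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu118

# ...the remaining dependencies from the original Dockerfile (vispy, OpenCV, etc.)
# would still need to be installed here, possibly with updated versions.
```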
aThinkingNeal commented on Jan 31, 2024
Thanks! I will try a newer CUDA version and then post an update here.
Currently I am facing an issue while using the provided dockerfile: #74
ZisongXu commented on Feb 26, 2024
@aThinkingNeal I am very sorry to bother you. I am also trying to run se3-tracknet right now, but I am not able to build the docker image successfully. The GPU I am using is a 4090. If you manage to build the docker image successfully, could you please share the dockerfile? I would be grateful. As I am new to docker and not very familiar with it, I keep getting errors while building.
aThinkingNeal commented on Feb 28, 2024
@ZisongXu
Hi Zisong, I am still working on getting a workable dockerfile and will let you know if it works : )
Meanwhile, it seems folks in issue #74 already have a working dockerfile; maybe you could consider asking them as well :)
ZisongXu commented on Feb 28, 2024
@aThinkingNeal
Thank you so much! I am also trying to make it work; if I succeed, I will share it with you as well.
Best Regards
Zisong Xu
ZisongXu commented on Mar 4, 2024
@aThinkingNeal
Hi aThinkingNeal,
I think I managed to create a singularity container that can run on a 3080. I can share it if you need it.
Best Regards
Zisong Xu
aThinkingNeal commented on Mar 7, 2024
@ZisongXu
Sure :)
You can send the dockerfile to my email athinkingneal@gmail.com or create a pull request to this repo, I guess :)
Thanks in advance : )