Description
------------ Options -------------
batch_size: 64
beta1: 0.9
beta2: 0.999
channels: 1
checkpoints_path: ./checkpoints
cuda: True
dataset_path: ./dataset
debug: False
ensemble: 1
epochs: 30
gpu_ids: 1
log_interval: 50
lr: 0.001
network: 0
no_cuda: False
patch_stride: 256
seed: 1
test_batch_size: 1
testset_path: ./dataset/test
-------------- End ----------------
Loading "patch-wise" model...
Loading "patch-wise" model...
/home/zaikun/zaikun/ICIAR2018/src/models.py:222: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
res = self.network.features(Variable(input_tensor, volatile=True))
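For context, the deprecation warning above is relevant to the memory issue: since PyTorch 0.4 the `volatile=True` flag is silently ignored, so this forward pass builds and retains the full autograd graph during inference. A minimal sketch of the modern equivalent (the `features` module below is a stand-in, not the repository's actual network):

```python
import torch
import torch.nn as nn

# Stand-in for self.network.features; any nn.Module behaves the same way.
features = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())

input_tensor = torch.randn(1, 1, 64, 64)

# volatile=True is a no-op in PyTorch >= 0.4; wrap inference in
# torch.no_grad() so no autograd graph (and its intermediate
# activations) is kept alive on the GPU.
with torch.no_grad():
    res = features(input_tensor)

assert not res.requires_grad  # no graph attached to the output
```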
00) Normal (100.0%) test0.tif
Traceback (most recent call last):
File "test.py", line 23, in <module>
im_model.test(args.testset_path, ensemble=args.ensemble == 1)
File "/home/zaikun/zaikun/ICIAR2018/src/models.py", line 430, in test
patches = self.patch_wise_model.output(image)
File "/home/zaikun/zaikun/ICIAR2018/src/models.py", line 222, in output
res = self.network.features(Variable(input_tensor, volatile=True))
File "/home/zaikun/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/zaikun/.local/lib/python3.6/site-packages/torch/nn/modules/container.py", line 91, in forward
input = module(input)
File "/home/zaikun/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/zaikun/.local/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 301, in forward
self.padding, self.dilation, self.groups)
RuntimeError: CUDA error: out of memory
I have 24 GB of GPU memory, and even with a small test batch size I still hit this error when running the test command. I suspect something is wrong here.
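One thing worth trying: because `volatile=True` no longer has any effect, the patch-wise forward pass may be holding the activations for every patch of the whole-slide image at once. A sketch of running patches through the network in small chunks under `torch.no_grad()` (the helper name and chunking scheme are my own, not from this repo):

```python
import torch

def predict_in_chunks(network, patches, chunk_size=16, device="cuda"):
    """Hypothetical helper: run inference chunk by chunk so only
    `chunk_size` patches occupy GPU memory at a time."""
    outputs = []
    with torch.no_grad():  # replaces the removed volatile=True flag
        for i in range(0, patches.size(0), chunk_size):
            chunk = patches[i:i + chunk_size].to(device)
            outputs.append(network(chunk).cpu())  # move results off-GPU
    return torch.cat(outputs, dim=0)
```

Something like `predict_in_chunks(self.network.features, input_tensor, chunk_size=8)` inside `output()` might confirm whether the OOM comes from graph retention plus an oversized patch batch.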