claraliorarafael | Master-IASD | boldeagle | Error | 0 | 100 | 0
Traceback (most recent call last):
  File "/home/lamsade/testplatform/test-platform-a1/repos/Master-IASD/claraliorarafael/generate.py", line 40, in <module>
    np.save("output.npy", predicted) ## DO NOT CHANGE THIS LINE
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.11/site-packages/numpy/lib/npyio.py", line 545, in save
    arr = np.asanyarray(arr)
          ^^^^^^^^^^^^^^^^^^
  File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.11/site-packages/torch/_tensor.py", line 970, in __array__
    return self.numpy()
           ^^^^^^^^^^^^
RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.
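The failure above happens because `np.save` calls `np.asanyarray`, which in turn invokes `Tensor.__array__`, and PyTorch refuses to convert a tensor that is still attached to the autograd graph. A minimal sketch of the fix, assuming `predicted` is the model output in `generate.py` (the tensor here is a hypothetical stand-in):

```python
import numpy as np
import torch

# Hypothetical stand-in for the model output: a tensor that still
# requires grad because it came out of a forward pass.
predicted = torch.randn(5, 3, requires_grad=True) * 2.0

# np.save("output.npy", predicted) would raise RuntimeError here.
# Fix: detach from the graph (and move to CPU if it lives on a GPU)
# before handing the data to NumPy.
predicted_np = predicted.detach().cpu().numpy()
np.save("output.npy", predicted_np)
```

Note that `detach()` returns a view sharing the same storage, so this conversion is cheap; no gradient information is needed at save time.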
lecun-team | Master-IASD | boldeagle | Error | 0 | 100 | 0
Traceback (most recent call last):
  File "/home/lamsade/testplatform/test-platform-a1/repos/Master-IASD/lecun-team/generate.py", line 39, in <module>
    model.make_output(batch_size=65536, tqdm_on=False)
  File "/home/lamsade/testplatform/test-platform-a1/repos/Master-IASD/lecun-team/algorithms/folded_deep_matrix_factorization_imprv.py", line 308, in make_output
    test_users_explicit_ds = self.R[test_users_idx].float()
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.22 GiB (GPU 0; 10.91 GiB total capacity; 3.26 GiB already allocated; 1.18 GiB free; 3.29 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
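Here the OOM comes from materializing the full `self.R[test_users_idx].float()` slice (over 1 GiB) on a GPU that only has ~1.18 GiB free. The usual remedy is to iterate over the test users in smaller chunks and move each result off the GPU before the next chunk is allocated. A sketch under stated assumptions: `R`, `test_users_idx`, and `make_output_chunked` are hypothetical stand-ins for the attributes used in `folded_deep_matrix_factorization_imprv.py`, not the team's actual API.

```python
import torch

def make_output_chunked(R, test_users_idx, chunk_size=4096):
    """Gather rows of the ratings matrix R for the test users in
    chunks, so that only one chunk is resident in device memory
    at a time instead of the whole >1 GiB slice."""
    outputs = []
    for start in range(0, len(test_users_idx), chunk_size):
        idx = test_users_idx[start:start + chunk_size]
        batch = R[idx].float()       # only this chunk is allocated
        outputs.append(batch.cpu())  # move it off the GPU right away
        del batch                    # drop the device reference early
    return torch.cat(outputs)
```

The same pattern works on CPU tensors, and lowering `batch_size` in the `model.make_output(batch_size=65536, ...)` call (or setting `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:...` as the error message suggests) are lighter-weight alternatives when the slicing code cannot be changed.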