Result table

This table was generated on 2024-10-04 at 07:23. See more results here. See last results here.

| project_name | group_name | hostname | status | Time | RMSE | Accuracy | error_msg |
|---|---|---|---|---|---|---|---|
| code-brain-fuel | Master-IASD | coktailjet | Success | 77.3 | 0.854 | 21.37 | None |
| BestOf2023-2 | profs | coktailjet | Success | 186.24 | 0.861 | 26.07 | None |
| psg-iasd | Master-IASD | coktailjet | Success | 222.54 | 0.865 | 0.0 | None |
| gitlegs | Master-IASD | coktailjet | Success | 5.99 | 0.866 | 0.19 | None |
| tetech | Master-IASD | coktailjet | Success | 29.07 | 0.874 | 25.3 | None |
| freshtomatoes | Master-IASD | coktailjet | Success | 122.65 | 0.878 | 0.32 | None |
| matrixe | Master-IASD | coktailjet | Success | 128.74 | 0.919 | 0.0 | None |
| esi | Master-IASD | coktailjet | Success | 6.65 | 0.935 | 31.51 | None |
| closeai | Master-IASD | coktailjet | Success | 5.5 | 0.943 | 24.02 | None |
| elcoma | Master-IASD | coktailjet | Success | 47.89 | 1.014 | 29.76 | None |
| theshawshankredemption | Master-IASD | coktailjet | Success | 0.8 | 1.037 | 0.0 | None |
| average | profs | coktailjet | Success | 0.46 | 1.037 | 0.0 | None |
| just-do-it | Master-IASD | coktailjet | Success | 7.74 | 1.139 | 11.81 | None |
| random | profs | coktailjet | Success | 0.45 | 1.83 | 14.19 | None |
| alexandre-verinotableandefficientmodel | Master-IASD | coktailjet | Error | 0 | 100 | 0 | TypeError: can't convert cuda:0 device type tensor to numpy (full log below) |
| anxisa | Master-IASD | coktailjet | Error | 0 | 100 | 0 | FileNotFoundError: [Errno 2] No such file or directory: 'ratings_train.npy' (full log below) |
| elon | Master-IASD | coktailjet | Error | 0 | 100 | 0 | NameError: name 'R_train' is not defined (full log below) |
| rimaya | Master-IASD | coktailjet | Error | 0 | 100 | 0 | NameError: name 'torch' is not defined (full log below) |
| BestOf2023-1 | profs | coktailjet | Error | 0 | 100 | 0 | torch.cuda.OutOfMemoryError: CUDA out of memory (full log below) |
| palm | Master-IASD | coktailjet | Success | 96.31 | NaN | 0.0 | None |

Error details

alexandre-verinotableandefficientmodel (Master-IASD):

```
/home/lamsade/testplatform/test-platform-a1/repos/Master-IASD/alexandre-verinotableandefficientmodel/NCF.py:120: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  model.load_state_dict(torch.load('best_model.pth'))
Traceback (most recent call last):
  File "/home/lamsade/testplatform/test-platform-a1/repos/Master-IASD/alexandre-verinotableandefficientmodel/generate.py", line 129, in <module>
    table = NCF.complete_matrix(model, R, F)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lamsade/testplatform/test-platform-a1/repos/Master-IASD/alexandre-verinotableandefficientmodel/NCF.py", line 175, in complete_matrix
    title_date_ids = torch.LongTensor(encoded_titles_dates[list(item_ids)]).to(device)
                     ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
  File "/home/lamsade/testplatform/test-platform-a1/repos/Master-IASD/alexandre-verinotableandefficientmodel/venv/lib/python3.11/site-packages/torch/_tensor.py", line 1083, in __array__
    return self.numpy()
           ^^^^^^^^^^^^
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
```

anxisa (Master-IASD):

```
Traceback (most recent call last):
  File "/home/lamsade/testplatform/test-platform-a1/repos/Master-IASD/anxisa/generate.py", line 22, in <module>
    R_train = np.load("ratings_train.npy")
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lamsade/testplatform/test-platform-a1/repos/Master-IASD/anxisa/venv/lib/python3.11/site-packages/numpy/lib/npyio.py", line 427, in load
    fid = stack.enter_context(open(os_fspath(file), "rb"))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'ratings_train.npy'
```

elon (Master-IASD):

```
Traceback (most recent call last):
  File "/home/lamsade/testplatform/test-platform-a1/repos/Master-IASD/elon/generate.py", line 146, in <module>
    table, test_RMSE, test_accuracy = model.fit(R_train, R_test)
                                                ^^^^^^^
NameError: name 'R_train' is not defined
```

rimaya (Master-IASD):

```
2024-10-04 07:06:38.751439: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-10-04 07:06:39.162041: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-10-04 07:06:39.162131: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-10-04 07:06:39.231817: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-10-04 07:06:39.388495: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-10-04 07:06:42.091195: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Traceback (most recent call last):
  File "/home/lamsade/testplatform/test-platform-a1/repos/Master-IASD/rimaya/generate.py", line 55, in <module>
    users = torch.tensor(user_indices, dtype=torch.long)
            ^^^^^
NameError: name 'torch' is not defined
```

BestOf2023-1 (profs):

```
Traceback (most recent call last):
  File "/home/lamsade/testplatform/test-platform-a1/repos/profs/BestOf2023-1/generate.py", line 39, in <module>
    model.make_output(batch_size = 65536, tqdm_on = False)
  File "/home/lamsade/testplatform/test-platform-a1/repos/profs/BestOf2023-1/algorithms/folded_deep_matrix_factorization_imprv.py", line 343, in make_output
    test_users_explicit_ds = self.R[test_users_idx].float() / 5
                             ~~~~~~^^^^^^^^^^^^^^^^
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 2.43 GiB (GPU 0; 44.35 GiB total capacity; 605.75 MiB already allocated; 1.38 GiB free; 638.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
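The alexandre-verinotableandefficientmodel run failed while indexing a NumPy array with GPU tensors: NumPy ends up calling Tensor.numpy(), which only works for CPU tensors, hence the TypeError. The traceback itself names the fix (Tensor.cpu()). The sketch below is not the project's NCF.complete_matrix code; encoded_titles_dates and item_ids are hypothetical stand-ins playing the same roles as in the log. The torch.load FutureWarning in the same log goes away for plain state dicts by passing weights_only=True.

```python
import numpy as np
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical stand-ins for the objects named in the traceback:
# a NumPy lookup table and a batch of item ids living on the GPU.
encoded_titles_dates = np.arange(100).reshape(50, 2)
item_ids = torch.tensor([3, 7, 11], device=device)

# Original pattern (fails when item_ids is on CUDA, because NumPy calls
# Tensor.numpy() on each element):
# title_date_ids = torch.LongTensor(encoded_titles_dates[list(item_ids)]).to(device)

# Fix: copy the indices to host memory before NumPy sees them.
idx = item_ids.cpu().numpy()
title_date_ids = torch.as_tensor(encoded_titles_dates[idx], dtype=torch.long).to(device)

# For the FutureWarning: a plain state dict loads fine with the safer default.
# model.load_state_dict(torch.load("best_model.pth", weights_only=True))
```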
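anxisa failed before producing any output: np.load("ratings_train.npy") looks for the file in whatever directory the process was launched from. The log alone does not say whether the file is missing from the repository or simply not visible from the platform's working directory; if it is the latter, anchoring the path to the script is a common fix. A minimal sketch, assuming the .npy file sits next to generate.py:

```python
from pathlib import Path

import numpy as np

# Resolve the data file relative to this script rather than the current
# working directory, so the load succeeds wherever the process is started.
DATA_DIR = Path(__file__).resolve().parent
R_train = np.load(DATA_DIR / "ratings_train.npy")
```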

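The BestOf2023-1 failure is a plain CUDA out-of-memory: self.R[test_users_idx].float() materialises the ratings of every test user on the GPU in one allocation. The usual remedy is to process the test users in chunks, keeping the full matrix on the CPU and moving only one chunk to the device at a time; the allocator hints in the error message (max_split_size_mb, PYTORCH_CUDA_ALLOC_CONF) only help with fragmentation, not with a slice that is simply too large. The sketch below uses hypothetical shapes and a placeholder scoring function, not the code of folded_deep_matrix_factorization_imprv.py:

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical stand-ins: the full (users x items) rating matrix stays on the
# CPU; test_users_idx selects the users to score.
R = torch.randint(0, 6, (10_000, 2_000), dtype=torch.uint8)
test_users_idx = torch.arange(5_000)


def score_chunk(chunk: torch.Tensor) -> torch.Tensor:
    # Placeholder for the model's forward pass on one chunk of users.
    return chunk.mean(dim=1)


outputs = []
chunk_size = 512  # tune so a single chunk fits comfortably in GPU memory
for start in range(0, len(test_users_idx), chunk_size):
    idx = test_users_idx[start:start + chunk_size]
    # Only this chunk is converted to float and moved to the GPU, instead of
    # materialising R[test_users_idx].float() in one shot.
    chunk = (R[idx].float() / 5).to(device)
    outputs.append(score_chunk(chunk).cpu())

predictions = torch.cat(outputs)
```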
Plots