Result table

This table was generated on 2023-10-13 at 05:26.

results

| project_name | group_name | hostname | status | Time | RMSE | Accuracy | error_msg |
| --- | --- | --- | --- | --- | --- | --- | --- |
| deeprec | Master-IASD | boldeagle | Success | 34.32 | 0.866 | 0.0 | None |
| claraliorarafael | Master-IASD | boldeagle | Success | 179.31 | 0.868 | 0.0 | None |
| the-boring-group | Master-IASD | boldeagle | Success | 199.04 | 0.877 | 27.52 | None |
| bot | Master-IASD | boldeagle | Success | 21.53 | 0.884 | 25.62 | None |
| prestige-worldwide | Master-IASD | boldeagle | Success | 26.87 | 0.886 | 25.63 | None |
| equipe-404 | Master-IASD | boldeagle | Success | 128.69 | 0.891 | 0.0 | None |
| matrix-brigade | Master-IASD | boldeagle | Success | 104.36 | 0.891 | 24.95 | None |
| a-m-y-group | Master-IASD | boldeagle | Success | 44.21 | 0.895 | 9.84 | None |
| la-grosse-descente | Master-IASD | boldeagle | Success | 126.56 | 0.93 | 0.0 | None |
| descente-optimale | Master-IASD | boldeagle | Success | 16.27 | 0.933 | 21.62 | None |
| forecastors | Master-IASD | boldeagle | Success | 21.5 | 0.971 | 29.86 | None |
| deepdeepmf | Master-IASD | boldeagle | Success | 0.57 | 1.037 | 0.0 | None |
| average | profs | boldeagle | Success | 0.59 | 1.037 | 0.0 | None |
| random | profs | boldeagle | Success | 0.53 | 1.837 | 14.16 | None |
| goat | Master-IASD | boldeagle | Error | 0 | 100 | 0 | ModuleNotFoundError: No module named 'matplotlib' (full traceback below) |
| lecun-team | Master-IASD | boldeagle | Error | 0 | 100 | 0 | torch.cuda.OutOfMemoryError: CUDA out of memory (full traceback below) |

Error details

goat:

```
Traceback (most recent call last):
  File "/home/lamsade/testplatform/test-platform-a1/repos/Master-IASD/goat/generate.py", line 6, in <module>
    from DMF.models import DMF
  File "/home/lamsade/testplatform/test-platform-a1/repos/Master-IASD/goat/DMF/models.py", line 1, in <module>
    import matplotlib.pyplot as plt
ModuleNotFoundError: No module named 'matplotlib'
```
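The goat job never reaches evaluation: it fails at import time because matplotlib is not installed in the test platform's environment. Since the failing import (`import matplotlib.pyplot as plt` in `DMF/models.py`) is only needed for plotting, removing or guarding that import, or declaring matplotlib as a dependency if the platform installs per-project requirements, would likely let the run proceed.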
lecun-team:

```
Traceback (most recent call last):
  File "/home/lamsade/testplatform/test-platform-a1/repos/Master-IASD/lecun-team/generate.py", line 39, in <module>
    model.make_output(batch_size = 65536, tqdm_on = False)
  File "/home/lamsade/testplatform/test-platform-a1/repos/Master-IASD/lecun-team/algorithms/folded_deep_matrix_factorization_imprv.py", line 308, in make_output
    test_users_explicit_ds = self.R[test_users_idx].float() / 5
                             ~~~~~~^^^^^^^^^^^^^^^^
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 2.43 GiB (GPU 0; 10.91 GiB total capacity; 662.84 MiB already allocated; 1.32 GiB free; 3.04 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
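The lecun-team failure comes from materialising the float copy of `self.R[test_users_idx]` as a single 2.43 GiB allocation on a GPU with only 1.32 GiB free. Below is a minimal sketch of the two mitigations the error message itself points at: setting the allocator hint `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:...` and building the slice in chunks. The function name `make_explicit_ds`, the `chunk_size` value, and the assumption that `R` is a GPU-resident rating matrix are all hypothetical, not taken from the team's code:

```python
import os

# Allocator hint suggested by the error message; must be set before the
# first CUDA allocation, hence before importing torch does any GPU work.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

import torch


def make_explicit_ds(R, test_users_idx, chunk_size=4096):
    """Build the float copy of R[test_users_idx] / 5 chunk by chunk.

    Each chunk allocates only chunk_size rows on the GPU and is moved to
    CPU memory immediately, so GPU usage stays bounded instead of needing
    one multi-GiB allocation.
    """
    parts = []
    for start in range(0, len(test_users_idx), chunk_size):
        idx = test_users_idx[start:start + chunk_size]
        parts.append((R[idx].float() / 5).cpu())
    return torch.cat(parts)
```

Chunking trades a few extra kernel launches and host transfers for a bounded peak GPU footprint, which is usually the right trade on a shared 11 GiB card like this one.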

Plots