project_name | group_name | hostname | status | time | time_per_image_ms | acc_nat | acc_pgdlinf | acc_pgdl2 | agg | error_msg |
---|---|---|---|---|---|---|---|---|---|---|
BestOf2023-1 | profs | boldeagle | Success | 106.94 | 5.35 | 62.5 | 70.24 | 70.83 | 141.07 | None |
jose-mourinho | Master-IASD | upnquick | Success | 3321.71 | 166.09 | 71.25 | 51.11 | 63.47 | 114.57 | None |
alhambra | Master-IASD | upnquick | Success | 1906.91 | 95.35 | 62.5 | 52.59 | 60.04 | 112.63 | None |
BestOf2023-2 | profs | coktailjet | Success | 62.26 | 3.11 | 56.25 | 40.87 | 53.39 | 94.26 | None |
lzattack | Master-IASD | boldeagle | Success | 152.66 | 7.63 | 43.75 | 24.86 | 36.14 | 61.0 | None |
fourchette | Master-IASD | boldeagle | Success | 153.22 | 7.66 | 56.25 | 25.16 | 32.03 | 57.19 | None |
star_wars_2 | Master-IASD | coktailjet | Success | 161.15 | 8.06 | 31.25 | 18.8 | 32.78 | 51.58 | None |
art_attack | Master-IASD | boldeagle | Success | 444.51 | 22.23 | 78.75 | 15.92 | 17.85 | 33.77 | None |
base_repos | profs | ourasi | Success | 166.92 | 8.35 | 62.5 | 6.05 | 25.14 | 31.19 | None |
houdini | Master-IASD | coktailjet | Success | 74.84 | 3.74 | 56.25 | 6.07 | 25.11 | 31.18 | None |
attack_of_mnist | Master-IASD | coktailjet | Success | 74.16 | 3.71 | 43.75 | 6.01 | 25.14 | 31.15 | None |
defense-against-the-dark-attacks | Master-IASD | upnquick | Success | 4202.92 | 210.15 | 68.75 | 12.22 | 13.73 | 25.95 | None |
shaq-attack | Master-IASD | upnquick | Success | 222.65 | 11.13 | 37.5 | 0.42 | 9.03 | 9.45 | None |
it-s-over-9000 | Master-IASD | readycash | Error | 0 | 0 | 0 | 0 | 0 | 0 | RuntimeError: CUDA error: out of memory. CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. |
advengers | Master-IASD | readycash | Error | 0 | 0 | 0 | 0 | 0 | 0 | RuntimeError: CUDA error: out of memory. CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. |
microsoft-defender | Master-IASD | readycash | Error | 0 | 0 | 0 | 0 | 0 | 0 | RuntimeError: CUDA error: out of memory. CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. |
binaryattackers | Master-IASD | readycash | Error | 0 | 0 | 0 | 0 | 0 | 0 | OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB. GPU 0 has a total capacity of 10.75 GiB of which 11.62 MiB is free. Process 2094081 has 268.00 MiB memory in use. Process 1705849 has 10.32 GiB memory in use. Including non-PyTorch memory, this process has 156.00 MiB memory in use. Of the allocated memory 14.00 KiB is allocated by PyTorch, and 1.99 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) |
mars-attack | Master-IASD | coktailjet | Error | 0 | 0 | 0 | 0 | 0 | 0 | ModuleNotFoundError: No module named 'core' |
advernope | Master-IASD | ourasi | Error | 0 | 0 | 0 | 0 | 0 | 0 | EOFError: Ran out of input |
naive_implem | profs | readycash | Error | 0 | 0 | 0 | 0 | 0 | 0 | OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB. GPU 0 has a total capacity of 10.75 GiB of which 5.62 MiB is free. Process 2094081 has 268.00 MiB memory in use. Process 1705849 has 10.32 GiB memory in use. Including non-PyTorch memory, this process has 162.00 MiB memory in use. Of the allocated memory 6.30 MiB is allocated by PyTorch, and 1.70 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) |
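
Several of the failed runs above hit CUDA out-of-memory errors, and the captured messages themselves show that GPU 0 was already almost fully occupied by another process (10.32 GiB of 10.75 GiB in use), so freeing that process is the first fix. The messages also suggest setting `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` to reduce allocator fragmentation. A minimal sketch of applying that hint, assuming the submission is a Python/PyTorch entry point (an assumption; the actual evaluation harness is not shown here):

```python
# Sketch only: apply the allocator hint printed in the OOM errors above.
# Assumption: the evaluated model runs from a Python/PyTorch entry point.
import os

# Must be set before the first CUDA allocation for the setting to take effect.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")

import torch

if torch.cuda.is_available():
    device = torch.device("cuda:0")
    # Example allocation; with expandable segments the caching allocator can
    # grow existing segments instead of fragmenting fixed-size blocks.
    x = torch.zeros(1024, 1024, device=device)
    print(torch.cuda.memory_summary(device=device, abbreviated=True))
```

Note that this only helps with fragmentation inside the evaluated process; it cannot reclaim the memory held by the other processes listed in the error messages.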