CUDA utilization - PyTorch Forums
discuss.pytorch.org › t › cuda-utilization · Dec 10, 2021
RuntimeError: CUDA out of memory. Tried to allocate 286.00 MiB (GPU 0; 4.00 GiB total capacity; 1.39 GiB already allocated; 227.40 MiB free; 1.97 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
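Note: the allocated-vs-reserved gap in the error above can be inspected at runtime with PyTorch's own memory-stat calls. A minimal sketch, assuming a single visible CUDA GPU (device 0):

    import torch

    # Bytes actively held by live tensors on GPU 0
    allocated = torch.cuda.memory_allocated(0)
    # Bytes reserved by PyTorch's caching allocator (includes cached blocks)
    reserved = torch.cuda.memory_reserved(0)

    print(f"allocated: {allocated / 2**20:.1f} MiB")
    print(f"reserved:  {reserved / 2**20:.1f} MiB")
    # reserved >> allocated (as in the error message above) points to
    # fragmentation, which max_split_size_mb is meant to mitigate.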
Pytorch cannot allocate enough memory · Issue #913 ...
github.com › CorentinJ › Real-Time-Voice-Cloning
@craftpag This is not a parameter to be found in the code here but a PyTorch setting that (if I'm not wrong) needs to be set as an environment variable. Try setting PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:<cache in mb here>. Doc quote: "max_split_size_mb prevents the allocator from splitting blocks larger than this size (in MB). This can help prevent fragmentation and may allow some borderline workloads to complete without running out of memory."
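Note: one way to set the variable from Python, before the allocator initializes. The value 128 is an arbitrary example, not a recommendation; tune it to your workload:

    import os

    # Safest to set before importing torch, since the caching allocator
    # reads PYTORCH_CUDA_ALLOC_CONF when it initializes.
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

    import torch  # imported after the variable is set

The shell equivalent is export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 before launching the script.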
CUDA semantics — PyTorch 1.10.1 documentation
pytorch.org › docs › stable
The behavior of the caching allocator can be controlled via the environment variable PYTORCH_CUDA_ALLOC_CONF. The format is PYTORCH_CUDA_ALLOC_CONF=<option>:<value>,<option2>:<value2>... Available options: max_split_size_mb prevents the allocator from splitting blocks larger than this size (in MB). This can help prevent fragmentation and may allow some borderline workloads to complete without running out of memory.
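Note: a sketch of the comma-separated multi-option format described above. garbage_collection_threshold is an assumption here; it exists only in PyTorch releases newer than the 1.10.1 docs quoted, so this illustrates the format rather than a valid 1.10.1 configuration:

    import os

    # <option>:<value> pairs joined by commas; both values are examples.
    # garbage_collection_threshold is assumed (newer releases only).
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = (
        "max_split_size_mb:256,garbage_collection_threshold:0.8"
    )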