torch.cuda.memory_summary — PyTorch 1.10.1 documentation
Returns a human-readable printout of the current memory allocator statistics for a given device. This can be useful to display periodically during training, or when handling out-of-memory exceptions. device (torch.device or int, optional) – selected device. Returns the printout for the current device, given by current_device(), if device is None.
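A minimal sketch of the out-of-memory use case: catch the OOM and print the allocator summary. The tensor shape is an arbitrary, deliberately oversized example.

    import torch

    device = torch.device("cuda:0")

    try:
        # Deliberately huge allocation to provoke an OOM (shape is illustrative).
        x = torch.empty(1024, 1024, 1024, 256, device=device)
    except RuntimeError:
        # Dump the allocator state to help diagnose what is holding memory.
        print(torch.cuda.memory_summary(device=device, abbreviated=False))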
torch.cuda.memory_stats — PyTorch 1.10.1 documentation
Returns a dictionary of CUDA memory allocator statistics for a given device. Each statistic in the returned dictionary is a non-negative integer. For example, "allocated.{all,large_pool,small_pool}.{current,peak,allocated,freed}" gives the number of allocation requests received by the memory allocator.
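Individual counters can be read straight out of the returned dictionary. A short sketch, using the "allocated_bytes" keys (these are the counters that back torch.cuda.memory_allocated() and torch.cuda.max_memory_allocated()):

    import torch

    stats = torch.cuda.memory_stats(device=0)

    # Bytes currently allocated across all pools, and the peak observed so far.
    print(stats["allocated_bytes.all.current"])
    print(stats["allocated_bytes.all.peak"])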
CUDA semantics — PyTorch 1.10.1 documentation
Use of a caching allocator can interfere with memory checking tools such as cuda-memcheck. To debug memory errors using cuda-memcheck, set PYTORCH_NO_CUDA_MEMORY_CACHING=1 in your environment to disable caching. The behavior of the caching allocator can be controlled via the environment variable PYTORCH_CUDA_ALLOC_CONF.
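These variables are normally exported in the shell, but they can also be set from Python as long as that happens before CUDA is initialized. A sketch, assuming the max_split_size_mb option of PYTORCH_CUDA_ALLOC_CONF (documented for this allocator; the value 128 is an arbitrary example):

    import os

    # Must be set before the first CUDA call initializes the allocator.
    os.environ["PYTORCH_NO_CUDA_MEMORY_CACHING"] = "1"
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

    import torch
    x = torch.zeros(1024, device="cuda")  # allocation now bypasses the cache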
torch.cuda.max_memory_allocated — PyTorch 1.10.1 documentation
Returns the maximum GPU memory occupied by tensors in bytes for a given device. By default, this returns the peak allocated memory since the beginning of this program. reset_peak_memory_stats() can be used to reset the starting point in tracking this metric. For example, these two functions can measure the peak allocated memory usage of each iteration in a training loop.
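A sketch of that per-iteration measurement; the model, optimizer, and batch here are stand-ins:

    import torch

    device = torch.device("cuda:0")
    model = torch.nn.Linear(1024, 1024).to(device)  # stand-in model
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    for step in range(3):
        # Reset the peak tracker so the next reading covers only this iteration.
        torch.cuda.reset_peak_memory_stats(device)

        x = torch.randn(64, 1024, device=device)  # stand-in batch
        loss = model(x).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()

        peak = torch.cuda.max_memory_allocated(device)
        print(f"step {step}: peak allocated = {peak / 2**20:.1f} MiB")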