torch.cuda.memory_stats — PyTorch 1.10.1 documentation
torch.cuda.memory_stats returns a dictionary of CUDA memory allocator statistics for a given device; each statistic in the dictionary is a non-negative integer. For example, "allocation.{all,large_pool,small_pool}.{current,peak,allocated,freed}" counts the number of allocation requests received by the memory allocator.
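A minimal sketch of reading these counters. The helper name and the fallback behavior are assumptions for illustration; only `torch.cuda.is_available` and `torch.cuda.memory_stats` come from the documentation above, and the guard makes the sketch safe to run on machines without torch or a GPU:

```python
# Hedged sketch: collect the "allocation" counters from torch.cuda.memory_stats.
# Falls back to an empty dict when torch or a CUDA device is unavailable.
def allocation_counts(device=0):
    try:
        import torch
    except ImportError:
        return {}  # torch not installed in this environment
    if not torch.cuda.is_available():
        return {}  # no CUDA device to query
    stats = torch.cuda.memory_stats(device)
    # Keep only keys of the form
    # "allocation.{all,large_pool,small_pool}.{current,peak,allocated,freed}"
    return {k: v for k, v in stats.items() if k.startswith("allocation.")}
```

Since every statistic is documented as a non-negative integer, any values returned can be summed or compared directly without further conversion.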
torch.cuda — PyTorch 1.10.1 documentation
https://pytorch.org/docs/stable/cuda.html
This package adds support for CUDA tensor types, which implement the same functions as CPU tensors but utilize GPUs for computation. It is lazily initialized, so you can always import it and use is_available() to determine whether your system supports CUDA. The CUDA semantics note has more details about working with CUDA.
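The lazy-initialization point above means importing torch never requires a GPU. A small sketch of the resulting device-selection pattern (the function name is made up for illustration; the extra `find_spec` guard also handles machines where torch itself is not installed):

```python
import importlib.util

# torch.cuda is lazily initialized, so importing torch is always safe;
# is_available() reports whether CUDA can actually be used.
def pick_device():
    if importlib.util.find_spec("torch") is None:
        return "cpu"  # torch not installed in this environment
    import torch
    return "cuda" if torch.cuda.is_available() else "cpu"
```

Code written this way runs unchanged on CPU-only and GPU machines, which is the usual reason the docs recommend gating on `is_available()` rather than assuming CUDA exists.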
memory_summary. Returns a human-readable printout of the current memory allocator statistics for a given device.
memory_snapshot. Returns a snapshot of the CUDA memory allocator state across all devices.
memory_allocated. Returns the current GPU memory occupied by tensors in bytes for a given device.
max_memory_allocated. Returns the maximum GPU memory occupied by tensors in bytes for a given device.
reset_max_memory_allocated. Resets the starting point in tracking maximum GPU memory occupied by tensors for a given device.
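The functions above combine naturally into a peak-memory probe. This is a hedged sketch, not an official recipe: the probe name and the example allocation are assumptions, and the function returns None when torch or CUDA is unavailable so it can run anywhere:

```python
# Sketch combining memory_allocated / max_memory_allocated /
# reset_max_memory_allocated from the list above.
def peak_memory_probe(device=0):
    try:
        import torch
    except ImportError:
        return None  # torch not installed
    if not torch.cuda.is_available():
        return None  # no CUDA device present
    torch.cuda.reset_max_memory_allocated(device)  # restart peak tracking
    t = torch.empty(1024, 1024, device=device)     # example allocation (hypothetical workload)
    del t                                          # freed, but the peak remembers it
    return {
        "current_bytes": torch.cuda.memory_allocated(device),
        "peak_bytes": torch.cuda.max_memory_allocated(device),
    }
```

After the tensor is deleted, `memory_allocated` drops back down while `max_memory_allocated` still reports the high-water mark, which is what makes the reset/measure pair useful for bracketing a workload.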