CUDA semantics — PyTorch 1.10.1 documentation
pytorch.org › docs › stable
PyTorch supports the construction of CUDA graphs using stream capture, which puts a CUDA stream in capture mode. CUDA work issued to a capturing stream doesn’t actually run on the GPU. Instead, the work is recorded in a graph. After capture, the graph can be launched to run the GPU work as many times as needed.
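The capture-then-replay workflow above can be sketched with the `torch.cuda.CUDAGraph` API introduced in PyTorch 1.10. This is a minimal illustration, not the documentation's own example; it assumes a CUDA device is present and falls back to returning `None` otherwise.

```python
import torch

def capture_add_graph():
    """Capture a small GPU workload into a CUDA graph, then replay it."""
    if not torch.cuda.is_available():
        return None  # stream capture requires a GPU

    static_a = torch.ones(4, device="cuda")
    static_b = torch.ones(4, device="cuda")

    # Warm up on a side stream so the default stream is quiescent
    # before capture begins.
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        static_out = static_a + static_b
    torch.cuda.current_stream().wait_stream(s)

    g = torch.cuda.CUDAGraph()
    with torch.cuda.graph(g):
        # Work issued here is recorded into the graph, not executed.
        static_out = static_a + static_b

    # Replay runs the recorded work. To change inputs, copy new data
    # into the same (static) tensors the graph was captured with.
    static_a.copy_(torch.full((4,), 2.0, device="cuda"))
    g.replay()
    return static_out
```

Because replay reuses the captured memory addresses, inputs must be updated in-place (`copy_`) rather than by rebinding the variables to new tensors.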
Module — PyTorch 1.10.1 documentation
pytorch.org › generated › torch
cuda(device=None) [source] — Moves all model parameters and buffers to the GPU. This also makes the associated parameters and buffers different objects, so it should be called before constructing the optimizer if the module will live on the GPU while being optimized. This method modifies the module in-place. Returns: self. Return type: Module.
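The ordering constraint above matters because the optimizer captures references to the parameter tensors at construction time. A minimal sketch, using an illustrative `nn.Linear` model (not from the documentation) and guarding the GPU move so it also runs on CPU-only machines:

```python
import torch
import torch.nn as nn

# Illustrative toy model.
model = nn.Linear(8, 2)

# Move parameters and buffers to the GPU *before* building the optimizer:
# .cuda() replaces them with different tensor objects, and an optimizer
# built earlier would still point at the stale CPU tensors.
if torch.cuda.is_available():
    model.cuda()  # modifies the module in-place and returns self

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# The optimizer now references the (possibly GPU-resident) parameters.
device = next(model.parameters()).device
out = model(torch.randn(3, 8, device=device))
```

If the optimizer were created before `model.cuda()`, its parameter groups would hold the old CPU tensors and optimization steps would never touch the GPU copies.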