torch.Tensor — PyTorch 1.11.0 documentation
pytorch.org › docs › stable. 1. Sometimes referred to as binary16: uses 1 sign, 5 exponent, and 10 significand bits. Useful when precision is important at the expense of range. 2. Sometimes referred to as Brain Floating Point: uses 1 sign, 8 exponent, and 7 significand bits. Useful when range is important, since it has the same number of exponent bits as float32.
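A minimal sketch of the trade-off described above, using the standard torch.finfo API to inspect the range and resolution of the two 16-bit formats (the specific tensor values are illustrative only):

import torch

# binary16 (torch.float16): 10 significand bits -> finer resolution, but max is only ~65504
print(torch.finfo(torch.float16))

# bfloat16 (torch.bfloat16): 8 exponent bits, same as float32 -> max ~3.39e38, coarser resolution
print(torch.finfo(torch.bfloat16))

# A value that overflows float16 but still fits in bfloat16
x = torch.tensor(1e5)
print(x.to(torch.float16))    # inf (exceeds float16's max of ~65504)
print(x.to(torch.bfloat16))   # finite, rounded to a nearby representable value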
PyTorch 30. Up/down-sampling function --interpolate - Zhihu Column
https://zhuanlan.zhihu.com/p/166323682 torch.nn.functional.interpolate(input, size=None, scale_factor=None, mode='nearest', align_corners=None): Down/up samples the input to either the given size or the given scale_factor. The algorithm used for interpolation is determined by mode. Currently temporal, spatial and volumetric sampling are supported, i.e. expected inputs are 3-D, 4-D or 5-D in shape.
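For context, a small usage sketch (the input shape, scale factor, and target size are arbitrary examples): for 2-D spatial sampling the expected input is 4-D, shaped (batch, channels, height, width).

import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)   # 4-D input (N, C, H, W) -> spatial modes such as 'nearest' or 'bilinear'

up = F.interpolate(x, scale_factor=2, mode='nearest')                        # -> (1, 3, 16, 16)
down = F.interpolate(x, size=(4, 4), mode='bilinear', align_corners=False)   # -> (1, 3, 4, 4)

print(up.shape, down.shape)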
torch.lerp — PyTorch 1.11.0 documentation
pytorch.org › docs › stable. torch.lerp(input, end, weight, *, out=None): Does a linear interpolation of two tensors start (given by input) and end based on a scalar or tensor weight and returns the resulting out tensor. $\text{out}_i = \text{start}_i + \text{weight}_i \times (\text{end}_i - \text{start}_i)$
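A short sketch of the formula above, with both a scalar and a per-element tensor weight (the values are arbitrary examples):

import torch

start = torch.arange(1., 5.)           # tensor([1., 2., 3., 4.])
end = torch.full((4,), 10.)            # tensor([10., 10., 10., 10.])

# Scalar weight: out_i = start_i + 0.5 * (end_i - start_i)
print(torch.lerp(start, end, 0.5))     # tensor([5.5000, 6.0000, 6.5000, 7.0000])

# Tensor weight: each element uses its own interpolation factor
w = torch.tensor([0.0, 0.25, 0.5, 1.0])
print(torch.lerp(start, end, w))       # tensor([ 1.0000,  4.0000,  6.5000, 10.0000])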