LogSoftmax — PyTorch 1.10.1 documentation
pytorch.org › generated › torch
class torch.nn.LogSoftmax(dim=None) [source] Applies the \log(\text{Softmax}(x)) function to an n-dimensional input Tensor. The LogSoftmax formulation can be simplified as:

\text{LogSoftmax}(x_i) = \log\left(\frac{\exp(x_i)}{\sum_j \exp(x_j)}\right)
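A minimal usage sketch of the module; the tensor shape and the dim=1 choice below are illustrative assumptions, not values from the documentation above:

    import torch
    import torch.nn as nn

    m = nn.LogSoftmax(dim=1)      # normalize along dimension 1 (the class axis here)
    x = torch.randn(2, 3)         # assumed batch of 2 samples with 3 classes
    out = m(x)                    # same shape as x, entries in (-inf, 0]
    print(out.exp().sum(dim=1))   # exp(out) rows sum to 1, matching Softmax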
torch.nn.functional.log_softmax — PyTorch 1.10.1 documentation
pytorch.org › torch
torch.nn.functional.log_softmax(input, dim=None, _stacklevel=3, dtype=None) [source] Applies a softmax followed by a logarithm. While mathematically equivalent to log(softmax(x)), doing these two operations separately is slower and numerically unstable. This function uses an alternative formulation to compute the output and gradient correctly.
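A small sketch of the stability point above, using an assumed extreme logit value to force the naive two-step computation to underflow:

    import torch
    import torch.nn.functional as F

    x = torch.tensor([[1000.0, 0.0]])            # assumed extreme logits
    naive = torch.log(torch.softmax(x, dim=1))   # softmax underflows, log yields -inf
    stable = F.log_softmax(x, dim=1)             # fused formulation stays finite
    print(naive)    # tensor([[0., -inf]])
    print(stable)   # tensor([[0., -1000.]])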
AdaptiveLogSoftmaxWithLoss — PyTorch 1.10.1 documentation
https://pytorch.org/docs/stable/generated/torch.nn...
class torch.nn.AdaptiveLogSoftmaxWithLoss(in_features, n_classes, cutoffs, div_value=4.0, head_bias=False, device=None, dtype=None) [source] Efficient softmax approximation as described in Efficient softmax approximation for GPUs by Edouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, and Hervé Jégou. Adaptive …
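A hedged construction sketch; in_features, n_classes, the cutoffs, and the batch size are illustrative assumptions, not values from the paper or the docs:

    import torch
    import torch.nn as nn

    asm = nn.AdaptiveLogSoftmaxWithLoss(
        in_features=64,
        n_classes=1000,
        cutoffs=[10, 100, 500],   # frequent classes in the head, rarer ones in tail clusters
    )
    hidden = torch.randn(8, 64)              # assumed batch of 8 hidden states
    target = torch.randint(0, 1000, (8,))    # assumed target class indices
    output, loss = asm(hidden, target)       # log-prob of each target class, mean NLL loss
    log_probs = asm.log_prob(hidden)         # full (8, 1000) log-probability matrix
    preds = asm.predict(hidden)              # most likely class per sample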
Softmax — PyTorch 1.10.1 documentation
pytorch.org › docs › stable
class torch.nn.Softmax(dim=None) [source] Applies the Softmax function to an n-dimensional input Tensor, rescaling it so that the elements of the n-dimensional output Tensor lie in the range [0, 1] and sum to 1. Softmax is defined as:

\text{Softmax}(x_i) = \frac{\exp(x_i)}{\sum_j \exp(x_j)}
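A minimal sketch mirroring the definition above; the shape and dim are assumptions:

    import torch
    import torch.nn as nn

    m = nn.Softmax(dim=1)
    x = torch.randn(2, 3)
    probs = m(x)              # every entry lies in [0, 1]
    print(probs.sum(dim=1))   # each row along dim=1 sums to 1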