Multimodal Regression — Beyond L1 and L2 Loss
https://ichi.pro/ko/dajung-modeu-hoegwi-l1-mich-l2-sonsil-eul-neom…
Note that L1 loss is no better: L2 loss assumes a Gaussian prior, while L1 loss assumes a Laplacian prior, which is also a unimodal distribution. Intuitively, smooth L1 loss (Huber loss), being a combination of L1 and L2 loss, likewise assumes a unimodal underlying distribution. It is generally a good idea to visualize the distribution of the regression target first, and …
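A minimal sketch of the three losses mentioned above, using PyTorch's functional API on toy tensors. The point is that all three penalties are unimodal: L1 corresponds to a Laplacian prior, L2 to a Gaussian prior, and smooth L1 (Huber) blends the two, so none of them can represent a multimodal regression target.

```python
import torch
import torch.nn.functional as F

# Toy predictions and targets (illustrative values, not from the article).
pred = torch.tensor([0.0, 0.5, 2.0])
target = torch.tensor([0.0, 1.0, 0.0])

l1 = F.l1_loss(pred, target)            # mean |pred - target|  (Laplacian prior)
l2 = F.mse_loss(pred, target)           # mean (pred - target)^2 (Gaussian prior)
huber = F.smooth_l1_loss(pred, target)  # quadratic near zero, linear far away

print(l1.item(), l2.item(), huber.item())
```

Plotting a histogram of the regression target before choosing among these, as the article suggests, shows immediately whether a single-mode penalty is appropriate.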
[Multi-scale series] The HRNet family: HRNet, HRNetV2, HRNetV2p …
https://zhuanlan.zhihu.com/p/359663844
There are related multi-scale networks for classification and segmentation [5, 8, 72, 78, 29, 73, 53, 54, 23, 80, 53, 51, 18]. Our work is partially inspired by some of them [54, 23, 80, 53], and there are clear differences making them not applicable to our problem.
MultiMarginLoss — PyTorch 1.10.1 documentation
pytorch.org › torch
Creates a criterion that optimizes a multi-class classification hinge loss (margin-based loss) between input $x$ (a 2D mini-batch Tensor) and output $y$ (a 1D tensor of target class indices, $0 \leq y \leq \text{x.size}(1)-1$):
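A short usage sketch of `nn.MultiMarginLoss` on a toy batch, following the constraint above: `x` is a 2D mini-batch of class scores and `y` holds target class indices in `[0, x.size(1) - 1]`. With the default margin of 1 and `p=1`, each sample contributes `sum_i max(0, margin - x[y] + x[i]) / x.size(1)` over the non-target classes.

```python
import torch
import torch.nn as nn

loss_fn = nn.MultiMarginLoss()  # defaults: p=1, margin=1.0

x = torch.tensor([[0.1, 0.2, 0.4, 0.8]])  # scores for 4 classes, batch of 1
y = torch.tensor([3])                      # target class index, 0 <= y <= 3

loss = loss_fn(x, y)
# Per the formula: (max(0, 1-0.8+0.1) + max(0, 1-0.8+0.2) + max(0, 1-0.8+0.4)) / 4
#                = (0.3 + 0.4 + 0.6) / 4 = 0.325
print(loss.item())
```

Raising the target score `x[0, 3]` relative to the others shrinks each hinge term and drives the loss toward zero, which is exactly the margin-based behavior the criterion optimizes.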